At the most recent NVIDIA GTC conference, the company unveiled what it described as the first single server rack capable of one quintillion floating point operations (FLOPS) per second, that is, one exaflop. This breakthrough is built on the latest GB200 NVL72 system, which incorporates NVIDIA's newest Blackwell graphics processing units (GPUs). A standard computer rack stands about 6 feet tall, a little more than 3 feet deep and slightly less than 2 feet wide.
Shrinking the exaflop: From Frontier to Blackwell
A few things struck me about the announcement. First, the world's first exaflop-capable computer was installed only a few years ago, in 2022, at Oak Ridge National Laboratory. For comparison, that machine, "Frontier," was built by HPE and is powered by AMD GPUs and CPUs, and it originally occupied 74 server racks. The new NVIDIA system has achieved roughly 73x greater performance density in just three years, equivalent to more than quadrupling performance density every year. This progress reflects remarkable advances in computing density, energy efficiency and architectural design.
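The annual rate implied by that density gain is simple to check. The sketch below takes the rack counts stated above at face value (74 racks for Frontier in 2022, one rack for the GB200 NVL72 three years later) and works out the compound yearly factor:

```python
# Back-of-the-envelope: how fast did performance density grow?
# Assumes the ~73-74x figure quoted in the article (74 racks -> 1 rack).
racks_2022 = 74   # Frontier's original footprint
racks_2025 = 1    # GB200 NVL72
years = 3

density_gain = racks_2022 / racks_2025       # ~74x overall
annual_factor = density_gain ** (1 / years)  # compound yearly growth

print(f"{density_gain:.0f}x overall, ~{annual_factor:.1f}x per year")
```

That works out to roughly a 4.2x improvement per year, which is why "tripling annually" would actually understate the pace.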
Second, it should be noted that while both systems hit the exascale milestone, they were built for different challenges: one optimized for speed, the other for precision. NVIDIA's exaflop specification is based on low-precision math, specifically 4-bit floating point operations, which is ideal for AI workloads including tasks such as training and running large language models (LLMs). These calculations prioritize speed over accuracy. By contrast, Frontier's exaflop rating was achieved using 64-bit double-precision math, the gold standard for scientific simulations where accuracy is critical.
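The precision tradeoff is easy to see in code. Python's standard library has no 4-bit float type, so the sketch below uses IEEE 754 half precision (16 bits, via the `struct` module's `'e'` format) purely as an illustration of how shrinking the format coarsens the result; NVIDIA's FP4 is far coarser still:

```python
import math
import struct

# Full 64-bit double precision: the format behind Frontier's rating.
pi64 = math.pi

# Round-trip pi through 16-bit half precision to show how
# low-precision formats lose significant digits.
pi16 = struct.unpack('e', struct.pack('e', math.pi))[0]

print(pi64)  # 3.141592653589793  (~16 significant digits)
print(pi16)  # 3.140625           (~3-4 significant digits)
```

For AI training and inference, that loss is usually an acceptable price for the much higher throughput per watt; for scientific simulation, it generally is not.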
We have come a long way (very quickly)
This level of progress seems almost unbelievable, especially when I recall the state of the art when my career in the computing industry began. My first professional job was as a programmer on a DEC KL 1090. This machine, part of DEC's PDP-10 series of timesharing mainframes, delivered 1.8 million instructions per second (MIPS). Aside from its CPU performance, the machine connected to cathode ray tube (CRT) displays via hardwired cables. There were no graphics capabilities, just light text on a dark background. And, of course, no Internet. Remote users connected over phone lines using modems running at up to 1,200 bits per second.

500 billion times more
While comparing MIPS to FLOPS gives a general sense of progress, it is important to remember that these metrics measure different computing workloads. MIPS reflects integer processing speed, which is useful for general-purpose computing, particularly in business applications. FLOPS measures floating-point performance, which is crucial for scientific workloads and the heavy number-crunching behind modern AI, such as the matrix math and linear algebra used to train and run machine learning (ML) models.
Although it is not a direct comparison, the sheer scale of the difference between MIPS then and FLOPS now offers a striking illustration of the rapid growth in computing performance. Using these as rough gauges of work performed, the new NVIDIA system is roughly 500 billion times more powerful than the DEC machine. A leap like that exemplifies the exponential growth of computing power over a single professional career, and it raises the question: If this much progress is possible in 40 years, what might the next five bring?
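The "500 billion times" figure follows directly from the two numbers already quoted, treating MIPS and FLOPS as loosely comparable units of work:

```python
# Rough sanity check of the 500-billion-fold claim.
# (MIPS and FLOPS measure different workloads; this is only a gauge.)
dec_kl1090_ops = 1.8e6   # DEC KL 1090: 1.8 million instructions/sec
gb200_nvl72_ops = 1e18   # GB200 NVL72: one exaflop (FP4)

ratio = gb200_nvl72_ops / dec_kl1090_ops
print(f"{ratio:.2e}")  # ~5.56e+11, i.e. roughly 500 billion
```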
For its part, NVIDIA offered some clues. At GTC, the company shared a roadmap projecting that its next full-generation system, based on the upcoming "Vera Rubin" architecture, will deliver 14x the performance of the Blackwell Ultra rack shipping this year, reaching somewhere between 14 and 15 exaflops in AI-optimized work within the next year or two.
Equally notable is efficiency. Achieving this level of performance in a single rack means less physical space per unit of work, fewer materials and lower energy use per operation, although the absolute power requirements of these systems remain enormous.
Does AI really need all this computational power?
While these performance gains are indeed impressive, the AI industry is now grappling with a fundamental question: How much computing power is really necessary, and at what cost? The race is on to build massive new AI data centers, driven by the growing demands of exascale computing and ever more capable AI models.
The most ambitious effort is the $500 billion Project Stargate, which envisions 20 data centers across the United States, each spanning half a million square feet. A wave of other hyperscale projects is either underway or in planning stages around the world, as companies and countries scramble to build the infrastructure to support the AI workloads of tomorrow.
Some analysts now worry that we may be overbuilding AI data center capacity. Concern intensified after the release of R1, the reasoning model from China's DeepSeek, which requires significantly less compute than many of its peers. Microsoft subsequently canceled leases with multiple data center providers, sparking speculation that it might be recalibrating its expectations for future infrastructure demand.
However, The Register suggested that this pullback may have more to do with some planned AI data centers lacking the power and cooling capacity needed for upcoming AI systems. Indeed, AI models are pushing the limits of what current infrastructure can support. MIT Technology Review reported that this may be the reason many data centers in China are struggling and failing, having been built to specifications that are not ideal for present needs, let alone those of the next few years.
Intelligence requires more FLOPS
Reasoning models perform most of their work at runtime through a process known as inference. These models power some of today's most advanced and resource-intensive applications, including deep research assistants and the emerging wave of agentic AI systems.
While DeepSeek-R1 initially spooked the industry into thinking that future AI might require less computing power, NVIDIA CEO Jensen Huang pushed back hard. Speaking to CNBC, he countered this perception: "It was the exact opposite conclusion that everybody had." He added that reasoning AI consumes 100x more computing than non-reasoning AI.
As AI continues to evolve from reasoning models to autonomous agents and beyond, demand for computing is likely to surge once again. The next breakthroughs may come not in language or vision, but in AI agent coordination, fusion simulation or even digital twins, each made possible by the kind of computing capacity we have just begun to see.
Seemingly on cue, OpenAI just announced $40 billion in new funding, the largest private technology funding round on record. The company said in a blog post that the funding "enables us to push the frontiers of AI research even further, scale our compute infrastructure and deliver increasingly powerful tools for the 500 million people who use ChatGPT every week."
Why is so much capital flowing into AI? The reasons range from competitiveness to national security. Although one particular factor stands out, as illustrated by a McKinsey headline: "AI could increase corporate profits by $4.4 trillion a year."
What comes next? It's anyone's guess
At their core, information systems are about abstracting complexity, whether through an emergency vehicle routing system once written in Fortran, a student reporting tool built in COBOL, or modern AI systems accelerating drug discovery. The goal has always been the same: to make greater sense of the world.
Now, with the arrival of powerful AI, we are crossing a threshold. For the first time, we may have the computing power and the intelligence to tackle problems that were once beyond human reach.
New York Times columnist Kevin Roose recently captured this moment well: "Every week, I meet engineers and entrepreneurs working on AI who tell me that change, big change, world-shaking change, the kind of transformation we've never seen before, is just around the corner." And that does not even count the breakthroughs arriving weekly.
In just the past few days, we have seen OpenAI's GPT-4o generate nearly perfect images from text, Google release what may be the most advanced reasoning model yet in Gemini 2.5 Pro, and Runway unveil a video model with shot-to-shot character and scene consistency, something VentureBeat notes most AI video generators have struggled with until now.
What comes next is truly anyone's guess. We do not know whether powerful AI will be a breakthrough or a breakdown, whether it will help solve fusion energy or unleash new biological risks. But with ever more FLOPS coming online over the next five years, one thing seems certain: Innovation will come fast, and with impact. It is also clear that as progress continues, we must talk about responsibility, regulation and restraint.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.