using 100,000 NVIDIA H100 liquid-cooled GPUs connected through a single RDMA network.
RDMA (Remote Direct Memory Access) allows data to be transferred directly from the memory of one computer to another without involving the operating system of either side. A single RDMA network provides high-throughput, low-latency communication, making it particularly well suited to large-scale parallel compute clusters.
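To make the idea concrete, here is a minimal sketch using the standard libibverbs C API (xAI's actual networking stack is not public, so this is only illustrative). It assumes the connection setup, namely creating the queue pair, moving it to the ready state, and exchanging the remote buffer address and access key out of band, has already happened; the remote_dest struct is a hypothetical stand-in for that exchanged information.

```c
/* Minimal sketch of a one-sided RDMA write with libibverbs.
 * Queue-pair setup and the out-of-band handshake are omitted;
 * a real program must do both before calling this. */
#include <infiniband/verbs.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical holder for info exchanged during connection setup. */
struct remote_dest { uint64_t addr; uint32_t rkey; };

int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
               void *local_buf, size_t len, struct remote_dest *dst)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,  /* local registered memory */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,              /* local memory key */
    };
    struct ibv_send_wr wr = {
        .opcode     = IBV_WR_RDMA_WRITE, /* one-sided write */
        .sg_list    = &sge,
        .num_sge    = 1,
        .send_flags = IBV_SEND_SIGNALED, /* ask for a completion */
    };
    wr.wr.rdma.remote_addr = dst->addr;  /* target address on the peer */
    wr.wr.rdma.rkey        = dst->rkey;  /* peer's memory access key */

    struct ibv_send_wr *bad = NULL;
    return ibv_post_send(qp, &wr, &bad); /* hand the request to the NIC */
}
```

The key point is the last call: ibv_post_send hands the work request to the network card, which then moves the bytes into the peer's memory by DMA, with no system call or CPU involvement on the receiving side.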
In terms of scale, xAI's Memphis supercomputer center has already become the world's number one in computing power, far surpassing the 25,000 A100 GPUs OpenAI used to train GPT-4, as well as Aurora (60,000 Intel GPUs) and Microsoft's Eagle (14,400 NVIDIA H100 GPUs), and even exceeding Frontier (37,888 AMD GPUs), previously the world's fastest supercomputer.
The H100 is a GPU that NVIDIA developed specifically for large language model and other AI workloads, with each chip costing around $30,000. This means the chips alone in xAI's new supercomputer center are worth about $3 billion.
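The arithmetic behind that figure is straightforward, taking the reported per-chip price at face value:

$$ 100{,}000\ \text{GPUs} \times \$30{,}000\ \text{per GPU} = \$3 \times 10^{9} = \$3\ \text{billion}. $$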
Previously, Musk's xAI had been relatively quiet, and Grok, the AI it released, was often criticized as not very usable. But as things stand, large model training is a game of computing power, and ultimately a game of energy. Musk seems unwilling to wait any longer and has simply thrown maximum resources at the problem.
He stated that an improved large model (likely Grok 3) will be completed by the end of this year, at which point it will be the world's most powerful AI.
In fact, NVIDIA has already announced the next-generation H200 chip, as well as the B100 and B200 GPUs built on the new Blackwell architecture. However, these more advanced chips won't ship until the end of this year, and tens of thousands of them can't be produced overnight. Perhaps in order to become the world's strongest before GPT-5 arrives, Musk is moving faster than usual this time.
According to Forbes, Musk only finalized the Memphis deal in March this year, after which construction of the supercomputer base began almost immediately. To speed things up, Musk borrowed 24,000 H100s from Oracle.
However, as mentioned earlier, current large model training ultimately comes down to energy. The US power grid is quite outdated and hasn't seen large-scale expansion in decades. The power consumption profile of AI training, in particular, is very different from residential and commercial loads: it produces sudden, ultra-high peaks that severely test the grid's maximum load. Under these conditions, few places are left that can squeeze out the power and water to support a supercomputer center.
According to estimates by the CEO of Memphis Light, Gas and Water, xAI's Memphis supercomputer cluster will draw up to 150 megawatts of power at its peak, equivalent to the consumption of 100,000 households.
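That comparison checks out: spread across 100,000 households, the peak load works out to

$$ \frac{150\ \text{MW}}{100{,}000\ \text{households}} = 1.5\ \text{kW per household}, $$

which is roughly in line with the average draw of a typical US home.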
Currently, 32,000 GPUs are online at the Memphis facility. Power supply construction is expected to be completed in the fourth quarter of this year, at which point the facility will run at full speed.
It's no wonder some people questioned whether Musk was lying: power requirements and a construction pace like these are truly hard to believe.
In addition to electricity, xAI's supercomputer cluster is expected to need at least 1 million gallons (about 3.79 million liters) of water per day for cooling.
According to Forbes, Memphis City Council member Pearl Walker said last week: "People are scared. They're worried about potential water issues and energy supply problems." She said the data center is currently expected to draw 4.92 million liters (about 1.3 million gallons) per day from Memphis's underground aquifer, the city's main water source; the city as a whole consumes about 568 million liters per day. Although xAI says this is only temporary, plans for a new greywater plant haven't been finalized yet. Memphis's utility department has also confirmed that Musk's supercomputer will be allowed to draw water from the underground aquifer until the treatment plant is built and operational.
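For scale, using the figures above, the projected draw is under one percent of the city's daily consumption:

$$ \frac{4.92\ \text{million L/day}}{568\ \text{million L/day}} \approx 0.87\%. $$

The concern is less the share than the source: until the greywater plant exists, that water comes out of the drinking-water aquifer.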
Besides Musk, OpenAI and Microsoft are also deploying even larger supercomputers. This supercomputer, named "Stargate," will contain millions of chips, at an estimated cost of $115 billion, and is planned to launch in 2028.
In April this year, an OpenAI training run reportedly overloaded Microsoft's power grid. According to Microsoft engineers, they were deploying a 100,000-H100 training cluster for GPT-6 at the time. Will Musk be the first person to get 100,000 H100s working together?