
100,000 Nvidia GPUs! Musk plans to train the "world's most powerful AI" by the end of the year

2024-07-23


Musk said that the world's most powerful AI will be trained by the end of this year.

On July 22, local time, Tesla CEO Elon Musk said on his social platform X that the xAI team, X team, NVIDIA and other supporting companies had started training on the "Memphis Supercluster" at 4:20 a.m. local time.

He said the "Memphis Supercluster" consists of 100,000 liquid-cooled H100 GPUs connected over a single RDMA fabric (RDMA, or remote direct memory access, lets servers transfer data directly between each other's memory, reducing network latency and CPU overhead), calling it "the most powerful AI training cluster in the world."

Musk added that the goal is to train "the most powerful artificial intelligence in the world by every metric" by December this year.

Previously, Musk had revealed that the cluster would be used to train xAI's third-generation large language model Grok-3.

In May this year, Musk revealed that his artificial intelligence company xAI plans to build a supercomputer, which he has called a "Gigafactory of Compute," expected to be four times the size of its strongest competitor's. The machine will use Nvidia H100 GPUs.

A year ago, xAI announced its founding, stating that the company's purpose is to "understand the true nature of the universe." On its official website, xAI said: "We are a company independent of X Corp, but will work closely with X, Tesla and other companies to achieve our mission."

In November 2023, xAI released its first large model, Grok-1.

This month, Musk announced that Grok-2 will be launched in August and will bring more advanced AI capabilities. He also revealed that Grok-3 will be trained using 100,000 Nvidia H100 chips and is expected to be released at the end of the year. He believes it will be "very special."

In May this year, xAI announced that it had raised $6 billion in Series B financing, with major investors including Andreessen Horowitz and Sequoia Capital. The round valued xAI at $18 billion pre-money and $24 billion post-money.

Amid the AI boom, computing power has become a battleground for technology giants. Meta revealed in January this year that it plans to deploy 350,000 Nvidia H100 GPUs by the end of the year, expanding its total computing power to the equivalent of 600,000 H100 GPUs. Microsoft and OpenAI plan to build a new supercomputer called Stargate that could cost as much as $100 billion and is scheduled to be fully built out by 2030.