Microsoft CEO Satya Nadella on Thursday tweeted a video of the first massive AI system his company has deployed – an AI “factory,” as Nvidia likes to call it. He promised it will be the “first of many” Nvidia AI factories rolled out across Microsoft’s global Azure data centers to run OpenAI workloads.
Each system is a cluster of more than 4,600 Nvidia GB300 rack computers equipped with the highly sought-after Blackwell Ultra GPU and connected via Nvidia’s ultra-fast networking technology, InfiniBand. (In addition to AI chips, Nvidia CEO Jensen Huang also had the foresight to corner the market on InfiniBand when his company acquired Mellanox for $6.9 billion in 2019.)
Microsoft promises it will deploy “hundreds of thousands of Blackwell Ultra GPUs” as it rolls out these systems globally. While the size of these systems is mind-blowing (and the company has shared plenty of additional technical details that hardware enthusiasts can pore over), the timing of the announcement is also noteworthy.
It comes right after OpenAI – Microsoft’s partner and well-documented frenemy – signed two high-profile data center deals with Nvidia and AMD. In 2025, OpenAI has amassed, by some estimates, $1 trillion in commitments to build its own data centers. And CEO Sam Altman said this week that more are coming.
Microsoft clearly wants the world to know that it already operates the data centers – more than 300 across 34 countries – and that it is “uniquely positioned” to “meet the needs of frontier AI today,” the company said. These monstrous AI systems are also capable of running the next generation of models with “hundreds of trillions of parameters,” it added.
We expect to hear more about how Microsoft is working to serve AI workloads later this month. Microsoft CTO Kevin Scott will speak at TechCrunch Disrupt, which will be held October 27–29 in San Francisco.