Microsoft CEO Satya Nadella on Thursday tweeted a video of his company's first deployed massive AI system, or AI "factory," as Nvidia likes to call them. He promised this is the "first of many" such Nvidia AI factories to be deployed across Microsoft Azure's global data centers to run OpenAI workloads.
Each system is a cluster of more than 4,600 Nvidia GB300 rack computers sporting the much-in-demand Blackwell Ultra GPU chip and connected via Nvidia's super-fast networking technology, InfiniBand. (Beyond AI chips, Nvidia CEO Jensen Huang also had the foresight to corner the market on InfiniBand when his company acquired Mellanox for $6.9 billion in 2019.)
Microsoft promises it will be deploying "hundreds of thousands of Blackwell Ultra GPUs" as it rolls these systems out globally. While the scale of the systems is eye-popping (and the company shared plenty more technical details for hardware enthusiasts to peruse), the timing of this announcement is also noteworthy.
It comes just after OpenAI, its partner and well-documented frenemy, inked two high-profile data center deals with Nvidia and AMD. In 2025, OpenAI has racked up, by some estimates, $1 trillion in commitments to build its own data centers. And CEO Sam Altman said this week that more were coming.
Microsoft clearly wants the world to know that it already has the data centers (more than 300 in 34 countries) and that they are "uniquely positioned" to "meet the demands of frontier AI today," the company said. These monster AI systems are also capable of running the next generation of models with "hundreds of trillions of parameters," it said.
We expect to hear more about how Microsoft is ramping up to serve AI workloads later this month. Microsoft CTO Kevin Scott will be speaking at TechCrunch Disrupt, which will be held October 27 to October 29 in San Francisco.