
In posting its second straight quarter of 34% RPO growth, Microsoft has obliterated recent fear-mongering about plunging AI demand and a resultant cratering of data center capacity and buildouts.
Even the most committed anti-growth doomsayers will have to skip a beat when they see these fiscal Q3 numbers from Microsoft for the quarter ended March 31:
- RPO: remaining performance obligation (contracted business not yet recognized as revenue) was up 34% to $315 billion, with $126 billion of that expected to be reported as revenue in the next 12 months;
- Azure “and other cloud services” revenue jumped 33% (Microsoft does not release a dollar figure for Azure), with 16 points of that growth coming from AI services;
- Total cloud revenue increased 20% to $42.4 billion, which for the sake of perspective roughly equals the combined quarterly revenues of Google Cloud and AWS; and
- Looking ahead, on the Q3 earnings call last week, Microsoft CFO Amy Hood said, “In Azure, we expect Q4 revenue growth to be between 34% and 35% in constant currency, driven by strong demand for our portfolio of services. In our non-AI services, we expect focused execution to continue driving healthy growth. In our AI services, while we continue to bring data center capacity online as planned, demand is growing a bit faster. Therefore, we now expect to have some AI capacity constraints beyond June.”
Why All the ‘Sky Is Falling!’ Hysteria?
As I mentioned in a recent analysis headlined “Data Center ‘Bust’ Is Giant Hallucination: Oracle, Google Cloud Accelerate while AWS, Microsoft Roll On,” the planning and execution of data center buildouts is probably one of the most complex business challenges any industry has to confront.
So, as with any massive and complex initiative designed to meet the needs of the future, some adjustments along the way are not only possible but essential. Yet, as both Microsoft and AWS have tinkered with their long-range plans in order to optimally align data center capacity with customer demand, some of the doomsayers were compelled — for whatever goofy reason — to equate this to the rise of “data center overcapacity” and a “glut” of data centers triggered by tumbling demand for AI.
And that’s ridiculous, as the investment levels from the hyperscalers indicate, and as is further underscored by the fact that not even $350 billion invested in a single year is enough to keep up with demand.
It’s as if a sports team traded a wide receiver for a future draft pick, and the sportswriters concluded, “Team vows to never pass the ball again — will keep ball on the ground for all eternity.”

Five Heavy-Duty Challenges
First, there’s the matter of having to invest staggering amounts of money: as I mention in that piece, the four hyperscalers — Microsoft, AWS, Google Cloud, and Oracle — are this year investing about $350 billion in these infrastructure facilities.
Second, the highly advanced stuff that goes into them changes and advances constantly, so the procurement and supply-chain issues give new meaning to “non-trivial.”
Third, customer requirements and expectations shift constantly as well — so the buildout game is nothing like a lather-rinse-repeat approach.
Fourth, these megafacilities require massive volumes of water, huge amounts of fail-proof power generation, and regulatory expertise beyond what most mortals can handle.
Fifth, and perhaps most challenging of all, Microsoft and AWS and Google Cloud and Oracle have to anticipate customer demand years in advance. As Microsoft CEO Satya Nadella described it on the earnings call:
“The key thing for us is to have our builds and lease be positioned for what is the workload growth of the future. That’s what you have to go and seek to. There’s a demand part to it. There is the shape of the workload part to it, and there is a location part to it. You don’t want to be upside-down on having one big data center in one region, when you have a global demand footprint. You don’t want to be upside-down when the shape of demand changes, because, after all, with essentially pre-training plus test time compute, that’s a big change in terms of how you think about even what is training. Forget inferencing.
Fundamentally, given all of that, and then every time there’s great Moore’s Law, but remember, this is a compounding S-curve, which is, there’s Moore’s Law, there’s system software, there’s model architecture changes, there’s the app server efficiency. Given all of that, we just want to make sure we’re building, accounting for the latest and greatest information we have on all of that.”
Microsoft CFO Hood, who’s one of the top executives at one of the most-valuable and most-influential companies the world has ever known, is not given to hyperbole and always tries to deliver fact-based commentary. And here’s how she described the fact that mighty Microsoft, despite its very best efforts — including the investment this year of at least $80 billion — will still not be able to keep pace with surging customer demand:
“We had hoped to be in balance by the end of Q4. We did see some increased demand, as you saw through the quarter. We are going to be a little short, still, a little tight as we exit the year, but are encouraged by that.”
Final Thought
Since laughter is a good thing, we should all look forward to getting some nice chuckles from The Sky Is Falling crowd as they attempt to bend, fold, spindle, and mutilate these strikingly unambiguous comments from Hood and Nadella into some tortured permutation of a looming “yeah, but AI demand is still going bust!”
Meanwhile, back on Planet Earth, the Cloud and AI Wars are getting ratcheted up to levels that will make the past few years look like ballroom-dancing lessons.
Lace ’em up tight!
