Quarter 1
Q2 2026 Earnings Call
And our first question comes from the line of Keith Weiss with Morgan Stanley. Please proceed. Thank you guys for taking the question. I'm looking at a Microsoft print where earnings are growing 24% year on year, which is a spectacular result. Great execution on your part. Top line growing well, margins expanding, and yet in after-hours trading the stock is still down. And I think one of the core issues that is weighing on investors is CapEx is growing faster than we expected and maybe Azure is growing a little bit slower than we expected. And I think that fundamentally comes down to a concern on the ROI on this CapEx spend over time. So I was hoping you guys could help us fill in some of that. How should we think about capacity expansion and what that can yield in terms of Azure growth going forward? More to the point, how should we think about the ROI on this investment as it comes to fruition? Thanks, guys. Thanks, Keith. And let me start, and Satya can add some broader comments, I'm sure. I think the first thing is, you really asked about a very direct correlation that I do think many investors are drawing, which is between the CapEx spend and seeing an Azure revenue number.
And, you know, we tried last quarter and I think, again, this quarter to talk more specifically about all the places that the CapEx spend, especially the short-lived CapEx spend across CPU and GPU, will show up. Sometimes I think it's probably better to think about the Azure guidance that we give as an allocated capacity guide for what we can deliver in Azure revenue. Because as we spend the capital and place GPUs specifically (it applies to CPUs, but GPUs more specifically), we're really making long-term decisions. And the first thing we're doing is solving for the increased usage and sales and the accelerating pace of M365 Copilot, as well as GitHub Copilot, our first-party apps. Then we make sure we're investing in the long-term nature of R&D and product innovation. And much of the acceleration that I think you've seen from us in products over the past bit is coming because we are allocating GPUs and capacity to many of the talented AI people we've been hiring over the past years. Then, where you end up is that the remainder goes toward serving the Azure capacity that continues to grow in terms of demand.
And a way to think about it, because I think I get asked this question sometimes, is if I had taken the GPUs that just came online in Q1 and Q2 and allocated them all to Azure, the KPI would have been over 40. And I think the most important thing to realize is that this is about investing in all the layers of the stack that benefit customers. And I think that's hopefully helpful in terms of thinking about capital growth: it shows up in every piece. It shows up in revenue growth across the business and shows up as OpEx growth as we invest in our people. Yeah, I think you, Amy, covered it. But basically, as an investor, I think when you think about our capital and you think about the GM profile of our portfolio, you should obviously think about Azure. But you should think about M365 Copilot and you should think about GitHub Copilot. You should think about Dragon Copilot, Security Copilot. All of those have a GM profile and lifetime value. I mean, if you think about it, acquiring an Azure customer is super important to us, but so is acquiring an M365 or a GitHub or a Dragon Copilot customer, which are all, by the way, incremental businesses and TAMs for us.
And so we don't want to maximize just one business of ours. We want to be able to allocate capacity while we're sort of supply constrained in a way that allows us to essentially build the best LTV portfolio. That's on one side. And the other one that Amy mentioned is also R&D. I mean, you've got to think about compute as also being R&D, and that's sort of the second element of it. And so we're using all of that, obviously, to optimize for the long term. Thanks, Keith. Operator, next question, please.
The next question comes from the line of Mark Moerdler with Bernstein Research. Please proceed. Thank you very much for taking my question, and congrats on the quarter. One of the other questions we believe investors want to understand is how to think about your line of sight from hardware CapEx investment to revenue and margins. You capitalize servers over six years, but the average duration of your RPO is two and a half years, up from two years last quarter. How do investors get comfortable that, since a lot of this CapEx is AI-centric, you'll be able to capture sufficient revenue over the six-year useful life of the hardware to deliver solid revenue and gross profit dollar growth, hopefully similar to the CPU revenue? Thank you. Thanks, Mark. Let me start at a high level, and Satya can add as well. I think when you think about average duration, what you're getting to, and we need to remember, is that average duration is a combination of a broad set of contract arrangements that we have. A lot of them, around things like M365 or the BizApps portfolio, are shorter dated, right? Three-year contracts, and so they have, quite frankly, a short duration.
The majority of what remains are Azure contracts that are longer duration, and you saw that this quarter when we saw the extension of that duration from around two years to two and a half. And the way to think about that is the majority of the capital that we're spending today and a lot of the GPUs that we're buying are already contracted for most of their useful life. And so a way to think about that is, you know, much of that risk that I think you're pointing to isn't there, right? Because they're already sold for the entirety of their useful life. And so part of it exists because you have this shorter-dated RPO because of some of the M365 stuff. If you look at the Azure-only RPO, it's a little bit more extended. A lot of that is CPU-based. It's not just GPU. And on the GPU contracts that we've talked about, including for some of our largest customers, those are sold for the entire useful life of the GPU. And so there's not the risk to which I think you may be referring. Hopefully that's helpful.
Yeah, and just one other thing I'd add, in addition to sort of what Amy mentioned, which is that it's already contracted for the useful life, is that we do use software to continuously run even the latest models on the fleet that is aging, if you will. So that's sort of what gives us that duration. And so, at the end of the day, that's why we even think about the aging of the fleet constantly. It's not about buying a whole lot of gear one year. It's about each year you ride Moore's Law, you add, you use software, and then you optimize across all of it. And Mark, maybe to state this in case it's not obvious: as you go through the useful life, you actually get more and more efficient at delivery. So where you've sold the entirety of its life, the margins actually improve with time. And so I think that may be a good reminder to people, as we see that, obviously, in the CPU fleet all the time. That's a great answer. I really appreciate it. Thank you. Thanks, Mark. Operator, next question, please.
The next question comes from the line of Brent Thill with Jefferies. Please proceed. Thanks, Amy. On the 45% of the backlog being related to OpenAI, I'm just curious if you can comment. There's obviously concern about the durability, and I know maybe there's not much you can say on this, but I think everyone's concerned about the exposure, and if you could maybe talk through your perspective and what both you and Satya are seeing. I think maybe I would have thought about the question quite differently, Brent. The first thing to focus on is the reason we talked about that number is because 55%, or roughly $350 billion, is related to the breadth of our portfolio, a breadth of customers across solutions, across Azure, across industries, across geographies. That is a significant RPO balance, larger than most peers, more diversified than most peers, and frankly, I think we have super high confidence in it. And when you think about that portion alone growing 28%, it's really impressive work on the breadth as well as the adoption curve that we're seeing, which I think is what I get asked about most frequently: it's grown by customer segment, by industry, and by geo. And so it's very consistent.
And so then if you're asking me about how I feel about OpenAI and the contract and the health, listen, it's a great partnership. We continue to be their provider of scale. We're excited to do that. We sit under one of the most successful businesses ever built, and we continue to feel quite good about that. It's allowed us to remain a leader in terms of what we're building and being on the cutting edge of app innovation. Thanks, Brent. Operator, next question, please.
The next question comes from the line of Karl Keirstead with UBS. Please proceed. Okay, thank you very much. Amy, regardless of how you allocate the capacity between first party and third party, can you comment qualitatively on the amount of capacity that's coming on? I think the one gigawatt added in the December quarter was extraordinary and hints that the capacity adds are accelerating. But I think a lot of investors have their eyes on Fairwater Atlanta and Fairwater Wisconsin, and would love some comments about the magnitude of the capacity adds, regardless of how they're allocated, in the coming quarters. Thank you. Yeah, Karl, I think we've said a couple of things. We're working as hard as we can to add capacity as quickly as we can. You've mentioned specific sites like Atlanta or Wisconsin. Those are multi-year deliveries, so I wouldn't focus necessarily on specific locations. The real thing we've got to do, and we're working incredibly hard at doing it, is adding capacity globally. A lot of that will be added in the United States. It's the locations you've mentioned.
But it also needs to be added across the globe to meet the customer demand that we're seeing and the increased usage. We'll continue to add both long-lived and short-lived assets. The way to think about that is we need to make sure we've got power and land and facilities available, and we'll continue to put GPUs and CPUs in them, when they're done, as quickly as we can. And then finally, we'll try to make sure we can get as efficient as we possibly can on the pace at which we do that and how we operate them, so that they can have the highest possible utility. And so I think it's not really about, you know, two places, Karl. I would definitely abstract away from that. Those are multi-year delivery timelines. But really we just need to get it done in every location where we're currently in a build or starting to do that. We're working as quickly as we can. Okay, got it. Thank you. Thanks, Karl. Operator, next question, please.
The next question comes from the line of Mark Murphy with JP Morgan. Please proceed. Thank you so much. Satya, the performance achievements of the Maia 200 accelerator, for instance, look quite remarkable, especially in comparison to TPUs and Trainium and Blackwell, which have just been around a lot longer. Can you put that accomplishment in perspective in terms of how much of a core competency you think silicon might become for Microsoft? And Amy, are there any ramifications worth mentioning there in terms of supporting your gross margin profile for inference costs going forward? Yeah, no, thanks for the question. So a couple of things. One is we've been at this in a variety of different forms for a long, long time in terms of building our own silicon. And so we're very, very thrilled about the progress with Maia 200. And, you know, especially when we think about running GPT-5.2 and the performance we're able to get in the GEMMs at FP4, it just proves the point that when you have a new workload, a new shape of a workload, you can start innovating end-to-end between the model and the silicon. And it's the entire system, not even just the silicon.
It's the way the networking works at rack scale, optimized with memory for this particular workload. And the other thing is we're obviously round-tripping and working very closely with our own superintelligence team with all of our models; as you can imagine, whatever we build will be all optimized for Maia. So we feel great about it. And I think the way to think about it all up is we're in such early innings. I mean, just look at the amount of silicon innovation and systems innovation even since December. I think the new thing is everybody's talking about low-latency inference, right? And so one of the things we want to make sure is we're not locked into any one thing. If anything, we have great partnerships with NVIDIA, with AMD. They're innovating. We're innovating. We want our fleet at any given point in time to have access to the best TCO. And it's not a one-generation game. I think a lot of folks just talk about who's ahead. Just remember, you have to be ahead for all time to come. And that means you really want to think about having a lot of the innovation that happens out there be in your fleet, so that your fleet is fundamentally advantaged at the TCO level.
So that's kind of how I look at it, which is we are excited about Maia, we're excited about Cobalt, we're excited about our DPU, our NICs. So we have a lot of systems capability. That means we can vertically integrate. But just because we can vertically integrate doesn't mean we only vertically integrate. And so we want to be able to have the flexibility here, and that's what you see us do. Thanks, Mark. Operator, next question, please.
The next question comes from the line of Brad Zelnick with Deutsche Bank. Please proceed. Great, thank you very much. Satya, we heard a lot about frontier transformations from Judson at Ignite, and we've seen customers realize breakthrough benefits when they adopt the Microsoft AI stack. Can you help frame for us the momentum in enterprises embarking on these journeys, and any expectation for how much their spend with Microsoft can expand in becoming frontier firms? Thanks. Yeah, thank you for that. So I think one of the things that we are seeing is the adoption across the three major suites of ours, right? So if you take M365, you take what's happening with security, and you take GitHub. In fact, it's fascinating. I mean, you know, these three things have effectively had compounding effects for our customers in the past; something like Entra as the identity system, or Defender as the protection system across all three, was sort of super helpful. But what now you're seeing is something like WorkIQ, right? So, I mean, just to give you a flavor for it: the most important database for any company that uses Microsoft today is the data underneath Microsoft 365.
And the reason is because it has all this tacit information, right? Who are your people? What are their relationships? What are the projects they're working on? What are their artifacts, their communications? So that's a super important asset for any business process, business workflow context. In fact, the scenario I even had in my remarks is that you can now take WorkIQ as an MCP server in a GitHub repo and say, hey, please look at my design meetings for the last month in Teams and tell me if my repo reflects it. I mean, that's a pretty high-level way to think about how what was happening previously, perhaps, with our tools business and our GitHub business is suddenly now transformative, right? That agent backplane is really transforming companies in some sense. That's, I think, the most magical thing, which is you deploy these things, and suddenly the agents are helping you coordinate and bring more leverage to your enterprises. Then on top of it, of course, there's the transformation, which is what businesses are doing.
How should we think about customer service? How should we think about marketing? How should we think about finance? How should we think about that and build our own agents? That's where all the services in Fabric and Foundry and, of course, the GitHub tooling are helping them, or even the low-code, no-code tools. I have some stats on how much that's being used. But one of the more exciting things for me is these new agent systems, M365 Copilot, GitHub Copilot, Security Copilot, all coming together to compound the benefits of all the data and all the deployment. That, I think, is probably the most transformative effect right now. Thank you. Very helpful. Thanks, Brad. Operator, we have time for one last question. And
the last question will come from the line of Raimo Lenschow with Barclays. Please proceed. Perfect. Thanks for letting me in. The last few quarters we talked, besides the GPU side, about CPU as well on the Azure side, and you had some operational changes at the beginning of January last year. Can you speak to what you saw there and maybe put it in a bigger picture in terms of clients realizing that their move to the cloud is important if they want to deliver proper AI? So what are we seeing in terms of cloud transactions? Thank you. I didn't quite catch that. Sorry, Raimo, were you asking about the CPU side, or can you just repeat the question, please? Yeah, sorry. So I was wondering about the CPU side of Azure, because we had some operational changes there, and we also hear from the field a lot that people are realizing they need to be in the cloud if they want to do proper AI, and whether that's driving momentum. Thank you. Yeah, I think I get it. So first of all, I had mentioned in my remarks that when you think about AI workloads, you shouldn't think of AI workloads as just AI accelerator compute, right? Because in some sense, take any agent.
The agent will then spawn, through tool use, maybe a container, which obviously runs on compute. In fact, whenever we think about even building out the fleet, we think in ratios. Or even for a training job, by the way: an AI training job requires a bunch of compute and a bunch of storage very close to the compute. And the same with inferencing as well. Inferencing with agent mode would require you to essentially provision a container, or computing resources, for the agent. So it's not that they don't need GPUs. They're running on GPUs, but they also need containers, which are compute and storage. So that's what's happening even in the new workload. The other thing you mentioned is the cloud migrations are still going on. In fact, one of the stats I had was our latest SQL Server growing as an IaaS service in Azure. And so that's one of the reasons why we have to think about our commercial cloud and keep it balanced with the rest of our AI cloud, because when clients bring their workloads and bring new workloads, they need all of these infrastructure elements in the region in which they're deploying. Okay, perfect. Thank you. Thanks, Raimo.
That wraps up the Q&A portion of today's earnings call. Thank you for joining us today, and we look forward to speaking with you all soon. Thank you all. Thank you. Thank you. This concludes today's conference. You may disconnect your lines
at this time, and we thank you for your participation.
Have a great night.
Quarter 2
Q1 2026 Earnings Call
And our first question comes from the line of Keith Weiss with Morgan Stanley. Please proceed. Excellent. Thank you guys for taking the question. And congratulations on another outstanding quarter. And if I'm looking at Microsoft, this is two quarters in a row where we're really seeing results that are well ahead of anybody's expectations when we were thinking about this company a year ago or five years ago. 111% commercial bookings growth was not on anybody's bingo card, if you will. Yet the stock is underperforming the broader market. And the question I have is kind of getting at the zeitgeist that I think is weighing on the stock, that something is about to change. And I think AGI is kind of a nomenclature, a shorthand for that. And it's something that's still included in your guys' OpenAI agreements.
So, Satya, when we think about AGI or we think about how application and computing architectures are changing, is there anything that you see on the horizon, whether it's AGI or something else, that could potentially change what appears to be a really strong positioning for Microsoft in the marketplace today, where that strength will perhaps weaken on a go-forward basis? Is there anything that you're worrying about in that evolution, and particularly the evolution of these generative AI models? Thank you, Keith, for the question. So here's how I'd say it. I think there are two parts. We feel very, very good about, I'd say, the new agreement that we now have with OpenAI, because I think it just creates more certainty around all of the IP relationship we have, even as it relates to this definition of AGI.
But beyond that, I think your question touches on something that's pretty important, which is how are these AI systems going to truly be deployed in the real world and make a real difference and make a return for both the customers who are deploying them and then obviously the providers of these systems? And I think the best way to characterize the situation is that even as the intelligence capability increases, let's even say exponentially, like model version over model version, the problem is it's always going to still be jagged, right? I think the term people use is jagged intelligence, or spiky intelligence, right? So you may even have a capability that's fantastic at a particular task, but it may not uniformly grow. So what is required is, in fact, these systems, whether it is GitHub Agent HQ or the M365 Copilot system. Don't think of this as a product. Think of it as a system that in some sense smooths out those jagged edges and really helps the capability. I mean, just to give you a flavor for it, right? So if I am in M365 Copilot, I can generate an Excel spreadsheet. The good news is now that Excel spreadsheet does understand OfficeJS, has the formulas in it.
It feels like, wow, it is a great spreadsheet created by a good modeler. The more interesting thing is I can go into agent mode in Excel and iterate on that model, and yet it will stay on rails. It won't go off rails. It will be able to do the iteration. Then I can even give it to the analyst agent, and then it will even make sense of it like a data analyst would of an Excel model. The reason I say all of that is because that's the type of construction that will be needed, even when the model is magical, all-powerful. I think we will be in this jagged intelligence phase for a long time. So one of the fundamental things is that these, whether it's GitHub, whether it's security, whether it's M365, the three main domains we're in, we feel very, very good about building as organizing layers for agents to help customers. And by the way, that's the same thing that we want to put into Foundry for our third-party customers. So that's kind of how people will build these multi-agent systems. So I feel actually pretty good about the progress in AI.
I don't think AGI, as defined at least by us in our contract, is going to be achieved anytime soon, but I do believe we can drive a lot of value for customers with advances in AI models by building these systems. So that's kind of the real question that needs to be well understood, and I feel very, very confident about our ability to make progress. Excellent. That's super helpful. Thanks, Keith. Operator, next question, please.
The next question comes from the line of Brent Thill with Jefferies. Please proceed. Thanks. Amy, on the bookings blowout, I guess many are, you know, somewhat concerned about concentration risk. And I think you noted a number of $100 million contracts. Not to go into a lot of detail, but can you just give us a sense of what you're seeing in that 51% RPO growth and 110-plus percent bookings growth that gives you confidence about the breadth and, you know, extent of some of these deals on a global basis? Thanks. Thanks, Brent. A couple things, to maybe take a step back on RPO. With a nearly $400 billion balance, we've been trying to help people understand sort of how to think about really the breadth of that. It covers numerous products. It covers customers of all sizes. And that's been a balance that we've been growing, obviously, at a good clip. But what people need to realize is it sits across multiple products because of the things Satya's talking about around creating systems and where we're investing.
And if you're going to have that type of balance, and then more importantly, if you have the weighted average duration be two years, it means that most of that is being consumed in relatively short order. People are not consuming, and I say this broadly, unless there's value. And I think this is why we keep coming back to: are we creating real-world value in our AI platforms, in our AI solutions and apps and systems? And so I think sort of the way to think about RPO is it's been building across a number of customers. We're thrilled to have OpenAI be a piece of that. We're learning a ton and building leading systems because of it that are being used at scale, and that benefits every other customer. And so it's why we've tried to give a little bit more color on that RPO balance, because I do understand that there have been a lot of concerns or questions about: is it long dated? Is it coming over a long period of time? And hopefully this is helpful for people to realize that these are contracts being signed by customers who intend to use it in relatively short order. And at that type of scale, I think that's pretty remarkable execution. Thank you. Thanks, Brent. Operator, next question, please.
The next question comes from the line of Mark Moerdler with Bernstein Research. Please proceed. Thank you very much for taking my question, and congratulations on the quarter. It's pretty amazing what you guys are doing. Satya and Amy, I'd like to ask you the number one question I receive, whether from investors or at AI conferences I attend. How much confidence do you have that software, or even the consumer internet business, can monetize all the investments we're seeing globally, or frankly, are we in a bubble? In fact, Amy, what would be the factors you'd be watching to assure that you're not overbuilding current demand and that demand will sustain? Thank you. Maybe I'll start, Satya, and then you could add. Let me talk a little bit about maybe connecting a couple of the dots, because with $400 billion of RPO that's sort of short-dated, as we talked about, our need to continue to build out the infrastructure is very high. And that's for booked business today.
That is not any new business we started trying to book on October 1st, right? And so the way to think about that, and you saw it this quarter in particular, and as we talked about for the remainder of '26: number one, we're pivoting increasingly, we talked about this, toward short-lived assets, both GPUs and CPUs. Again, we've talked about how all these workloads are using both, in terms of app building. Now, when that happens, short-lived asset purchases are generally done to match sort of the duration of the contracts or the duration of your expectation of those contracts. And so I sometimes think when people think about risk, they're not realizing that the lifetimes of these assets and the lifetimes of the contracts are very similar. And so when you think about the revenue and the bookings coming onto the balance sheet and the depreciation of short-lived assets, they're actually quite matched, Mark. And as you know, we've spent the past few years not actually being short GPUs and CPUs per se. We were short the space, or the power, to put them in, to use the language we use. So we spent a lot of time building out that infrastructure. Now we're continuing to do that, also using leases.
Those are very long-lived assets, as we've talked about, 15 to 20 years. And over that period of time, do I have confidence that we'll need to use all of that? It is very high. And so when I think about sort of balancing those things, seeing the pivot to short-lived GPUs and CPUs, seeing the pivot in terms of how those are being utilized, we are, and I've said this now for many quarters, short. I thought we were going to catch up. We are not. Demand is increasing. It is not increasing in just one place. It is increasing across many places. We're seeing usage increases in products. We are seeing new products launch that are getting increasing usage, and increasing usage very quickly. You know, when people see real value, they actually commit real usage. And I sometimes think this is where this cycle needs to be thought through completely: when you see these kinds of demand signals and we know we're behind, we do need to spend. But we're spending with a different amount of confidence in usage patterns and in bookings, and I feel very good about that.
I have said we are now likely to be short capacity to serve the most important things we need to do, which are Azure and our first-party applications. We need to invest in product R&D, and we're doing end-of-life replacements in the fleet. So we're going to spend to make sure that happens. It's about modernization. It's about high quality. It's about service delivery, and it's about meeting demand. And so I feel good about doing that, and I feel good that we've been able to do it so efficiently and with a growing book of business behind it. Yeah, the only thing I would add to what Amy captured is that if you sort of look out, there are two things that matter, I think, and that are critical in terms of how we think about our allocation of capital, and also our R&D. One is how efficient are our planet-scale token factories? Right? I mean, that's at the end of the day what you have to do. And in order to do that, you have to start with building out a very fungible fleet. It's not like we're building one data center in one region in the world that's mega scale. We're building it out across the globe for inference, for pre-training, for post-training, for RL, for data center, or what have you.
So therefore, the fungibility is super important. The second thing that we're also doing is continually modernizing the fleet. It's not like we buy one version of, say, NVIDIA and load up for all the gigawatts we have. Each year you buy, you ride Moore's Law, you continuously modernize and depreciate it. And that means you also use software to grow efficiency. I talked about, I think, a 30% improvement on serving up both GPT-4.1 and 5.0, right? That's software. And by the way, it's helpful on A100s, it's helpful on GB200s, and it'll be helpful on GB300s. That's the beauty of having the efficiency of the fleet. So keep improving utilization, keep improving the efficiency. That's what you do in the token factory. The other aspect, which Amy spoke to, is we have some of the best agent systems that matter in the high-value domains. It's in information work. That's the Copilot system. Coding. I mean, I should also say one of the things I like about Copilot is, I mean, Copilot ARPUs compared to M365 ARPUs. It's expansive. The same thing that happened between server and cloud.
Like we used to always say, well, is it zero sum? It turned out that the cloud was so much more expansive than the server market. The same thing is happening in AI, because first you could say, hey, our ARPUs are too low when it comes to M365, or you could say we have the opportunity with AI to be much more expansive. Same thing with tools, right? I mean, the tools business was not a leading business, whereas the coding business is going to be one of the most, you know, expansive AI systems. And so we feel very good about being in that category. Same thing with security. Same thing with health. And in consumer, one of the things is it's not just about ads. It's ads plus subscriptions. That also opens up opportunity for us. So when I look at the entirety of these high-value agent systems, and when we look at the efficiency and fungibility of our fleet, that's what gives us the confidence to invest both the capital and the R&D talent to go after this opportunity. That was pretty amazing. I really appreciate all the detail. Thanks, Mark. Operator, next question, please.
The next question comes from the line of Karl Keirstead with UBS. Please proceed. Okay, thank you. This one is for Amy. Amy, I certainly don't want to take you down too complex an accounting path with this question, but the investment in OpenAI that sits in other income at $4.1 billion is so large that I think the audience listening in could benefit from a little bit more color about what that is. It feels like it's so much larger than you were running through other income in prior quarters that it mustn't just be your share of the OpenAI losses. So could you just describe that, and what we can expect in subsequent quarters, and whether this signals any kind of accounting change? Thanks so much. The Q1 number was not impacted at all by the new agreement that was put in place. Let me first say that. Secondly, that increased loss was all due to our percentage of losses in OpenAI under the equity method. So just to be very clear, there is not anything there that is not the increased losses from OpenAI. Okay, understood. Thank you. Thanks, Karl. Operator, next question, please.
The next question comes from the line of Mark Murphy with J.P. Morgan. Please proceed. Thank you so much. So we seem to be entering a new era where the contractual commitments from a small number of AI natives are just incredibly large, not only in absolute terms, but sometimes relative to the size of the companies themselves. For instance, contracts worth hundreds of billions of dollars that are 20 times their current revenue scale. Philosophically, how do you evaluate the ability of those companies to follow through on these commitments? And how do you think about placing guardrails on customer concentration for any single entity? Yeah, maybe I'll start, and then Amy, you can add. I mean, it goes back a little bit, Mark, to what I said about building first the asset itself such that it's most fungible, and then recognizing the strength of our portfolio. We have a third-party business. We have a first-party business. We have the third-party business also spread between enterprise and digital natives. I always felt that we need a balance there, because it may start with digital natives. They're always going to be the early adopters. You always have the hit app of the generation.
And then essentially it spreads throughout. The enterprise adoption cycle is just starting. And so, therefore, over the arc of time, I think that third-party balance of customers will only increase. But it's great to have the hit first-party apps in the beginning, because you can build scale that you can then use broadly if it's fungible, and that's where the key is. You don't want to build for a digital native as if you were just doing hosting for them. You want to build for fungibility. That's where I think some of the decision-making of ours is probably getting better understood: what do we say yes to? What do we say no to? I think there was a lot of confusion. Hopefully by now anyone who's switched on will have figured this out. And so that's, I think, one thing we're doing on the third party. But the first party is probably where a lot of our leverage comes from. And it's not even about one hit app on our first party. Our portfolio, which I just walked through in the earlier answer, gives us, again, the confidence that between that mix, we will be able to use our fleet to the maximum.
And remember, these assets, especially the data centers and so on, are long-lived assets, right? There will be many refresh cycles for any one of these when it comes to the gear. So I feel that once you think about all those dimensions, the concentration risk gets, you know, mitigated by being thoughtful about how you really ensure the build is for the broad customer base. And maybe just to add another angle, Mark, to what Satya's already covered: when you think about concentration risk or delivering to any customer, you have to remember that we're talking about this very large flexible fleet that can be used for anyone and for any purpose, 1P, 3P, and including our commercial cloud, by the way, which I should be quite clear on; it is pretty flexible in every regard. You have to remember that the CPU and GPU and the storage gear doesn't come into play until the contracts start happening. And so you're right, some of these large contracts have delivery dates over time. So you get a lot of lead time in being able to say, oh, what's the status? So I think we're pretty thoughtful around what's always gone into our RPO balance and have been considerate of that.
That's always been taken into account when we publish the bookings number and publish the RPO balance. Thank you very much. Thanks, Mark. Operator, next question, please.
The next question comes from the line of Brad Zelnick with Deutsche Bank. Please proceed. Great. Thanks so much for taking the question, and I'll echo my congrats on an amazing start to the year. Amy, is there any way to quantify or frame the revenue impact of Azure being short on capacity? And while I appreciate the constraints you face are broad across the industry, is there risk of workloads going elsewhere, and how do you mitigate that? Yeah, Brad, it's a great question. You know, it's always hard to quantify precisely what would have been the revenue impact in the quarter. But I would offer a way to think about it: Azure probably does bear most of the revenue impact, because when you think about the real priorities that you have to fill first, it's obviously the increasing usage and adoption and sales we've seen of M365 Copilot and the usage of Copilot Chat, where we've seen very different patterns, which we're encouraged by. It's the adoption of security features. It's the GitHub momentum. And so when you're thinking about it, that is where, and it is a priority for us, we allocate resourcing first.
And so you're right to ask, you know, how do I think about that. We've worked very hard to try to mitigate it as best we can, but we have been short in Azure, and we've been clear on it. And I would say the other two priorities that I haven't mentioned maybe as much before are also just making sure our product teams and the AI talent that we've been able to hire into the company, really over the past year and a half, have access also to significant capacity, because we're seeing it make the product better in a loop that is adding great benefit into products people are using today for real-world work. And so we are making that a priority to make sure our research teams have that as well as our product engineering teams. And yes, it does impact Azure directly. That is the place where you see that prioritization. It's probably hard for me to give an exact number, but it is safe to say that the number could have been higher. Great. Thank you. Thanks, Brad. Operator, we have time for one last question.
The last question will come from the line of Kash Rangan with Goldman Sachs. Please proceed. Thank you very much, Amy. I just wanted to congratulate you. I think you said before that it is possible to accelerate Azure growth while getting efficient margins, and you've done it. Congrats on that. I have one for you, Satya. With respect to the elephant in the room, and being a little more direct in following up on Keith Weiss's question, there's talk that another hyperscaler came in and took away business that was rightfully Microsoft's. I'm sure that there is a different point of view here. I'm wondering if you could offer some perspective on your criteria. Is it about a certain volume of business that you wish to execute on Microsoft paper? Or is it something broader than that? I don't think maybe people fully appreciate the terminal value that Microsoft will have on its balance sheet at the end of these contracts, which I think is probably being underestimated, as you have a full stack and you've got multiple vectors to monetize, the databases, Foundry, and, to your point, you are a platform company, not just a hyperscaler.
Maybe that's what it is all about. Or maybe there's another story about you letting the other hyperscaler come in from nowhere and claim a big piece of that four- to five-year puzzle. Thank you so much once again. Really appreciate it. And congratulations. Well, thanks, Kash. I mean, for us, again, it just always goes back to, I think, the core principle, which is build a fleet that is fungible across the planet and works for all: third party and first party and research. So that's essentially what we have done. And so when some demand comes in shapes that don't fit that goal, where it's too concentrated, not just by customer, but by location, by type of SKU, right? I think Amy mentioned some very key things. When you think about the margin profile of a hyperscaler, you've got to remember there's the AI accelerator piece, but there's compute, there's storage. And so if all of the demand comes for just one meter, that's really not a long-term business we want to be in. That's true even for a third party. We have to balance it with all of our first-party stuff because that's, after all, a different margin stack for us.
And then we have to fund our own R&D and model capability, because in the long run, that's what's going to differentiate us. And so I look at all of those. We sort of use all of that to make sure we are saying yes to all the demand that we want. We say no to some of the demand that may be something we could serve, but it's not in our long-term interest. And so that's sort of the decision-making we've done. And we feel very, very good about the decisions. In some sense, each time we say no, the day after, I feel better. And just, Kash, I think this is our last call with you, and I just want to say thanks and congratulations. It's been a privilege to work with you, and best of luck. Let me add to that. Best of luck, Kash. Thanks. Thank you so much. Very kind of you. Thanks, Kash. That wraps up the Q&A portion of today's earnings call. Thank you for joining us today, and we look forward to speaking with all of you soon. Thank you all. Thank you. This concludes today's conference. You may disconnect your lines
at this time, and thank you for your participation.