Quarter 1
Q4 2025 Earnings Call
Executive Name (Title): Thank you very much, Jean.
Management: Thank you, Matt. We will now be conducting the question and answer session.
Analyst Aaron Rakers (Wells Fargo): Yeah, thanks for taking the question. Lisa, at your analyst day back in November, you seemed to kind of endorse the high $20 billion AI revenue expectation that was out there on the street for 2027. I know today you're reaffirming that. Can you talk a little bit about what you've seen as far as customer engagements, how those might have expanded? I think you've alluded to in the past multiple multi-gigawatt opportunities. Just double-click on what you've seen for the MI455 and Helios platform from a demand-shaping perspective as we look into the back half of the year.
Executive Name (Title): Yeah, sure, Aaron. Thanks for the question. First of all, the MI450 series development is going extremely well, so we're very happy with the progress we have. We're right on track for a second-half launch and beginning of production. As it relates to the shape of the ramp and the customer engagements, the customer engagements continue to proceed very well. We obviously have a very strong relationship with OpenAI, and we're planning that ramp starting in the second half of the year, going into 2027. That is on track. We're also working closely with a number of other customers who are very interested in ramping MI450 quickly, just given the strength of the product, and we see that across both inference and training. That is the opportunity we see in front of us. So we feel very good about data center growth overall for us in 2026, and certainly going into 2027 we've talked about tens of billions of dollars of data center AI revenue, and we feel very good about that. Thank you.
Analyst Tim Arcuri (UBS): Thanks a lot. Jean, I'm wondering if you can maybe give us a little bit of detail under the hood for the March guidance. I know you basically told us that embedded is going to be up a bit year over year. Client sounds like it's down seasonally, which I take to be maybe down 10. So can you give us a sense maybe of the other pieces? And then also, can you give us a sense of how data center GPU is going to ramp through the year? I know it's back-half weighted through the year, but I think people are thinking these are somewhere in the $14 billion range this year. That's what investors are thinking. I'm not asking you to endorse that. But if you can give us a little flavor for how the ramp will look for the year, that'd be great. Thanks.
Executive Name (Title): Hi, Tim. Thanks for your question. We're guiding one quarter at a time, but I can give you some color on our Q1 guide. First, sequentially we guided a decline of around 5%, but data center is actually going to be up. When you think about it, our server CPU business in a regular seasonal pattern would be down high single digits, but in our current guide we actually have CPU revenue up sequentially very nicely. On the data center GPU side, we also feel really good; GPU revenue, including China, will also be up. So it's a very nice guide for data center overall. On the client side, we do see a seasonal sequential decline. Embedded and gaming also have a seasonal decline. Maybe, Tim, if I just give you a little bit of full-year commentary: the important thing, as we look at the full year, is we're very bullish on the year. If you look at the key themes, we're seeing very strong growth in the data center, and that's across two growth vectors. We see server CPU growth actually very strong.
We've talked about the fact that CPUs are very important as AI continues to ramp, and we've seen the CPU order book continue to strengthen over the last few quarters and especially over the last 60 days. So we see that as a strong growth driver for us. As Jean said, we see server CPU growing from Q4 into Q1 in what is normally seasonally down, and that continues throughout the year. Then on the data center AI side, it's a very important year for us. It's really an inflection point. MI355 has done well; we were pleased with the performance in Q4, and we continue to ramp that in the first half of the year. But as we get into the second half of the year, the MI450 is really an inflection point for us. So that revenue will start in the third quarter, but it will ramp to significant volume in the fourth quarter as we get into 2027. So that gives you a little bit of what the data center ramp looks like throughout the year. Thank you.
Analyst Vivek Arya (Bank of America): Thank you. First, just a clarification on what you're assuming for your China MI308 sales beyond Q1. And then, Lisa, specific to 2026, can your data center revenues grow at your target 60%-plus growth rate? I realize that's a multi-year target, but do you think there are enough drivers, whether on the server CPU side or the GPU side, for you to grow at that target pace, even in 2026?
Executive Name (Title): Yeah, sure, Vivek. So let me talk a little bit about China first, because I think it's important for us to make sure that's clear. Look, we were pleased to have some MI308 sales in the fourth quarter. Those were under a license that was approved through work with the administration, and those orders were actually from very early in 2025. So we saw some revenue in Q4, and we're forecasting about $100 million of revenue in Q1. We are not forecasting any additional revenue from China, just because it's a very dynamic situation. Given that, we're still waiting: we've submitted licenses for the MI325, and we're continuing to work with customers and understand their demand. We thought it prudent not to forecast any additional revenue other than the $100 million we called out in the Q1 guide. Now, as it relates to overall data center, as I mentioned in the answer to Tim, we're very bullish about data center. I think we have a strong combination of drivers across our CPU franchise; the EPYC product line, both Turin and Genoa, continues to ramp well.
And in the second half of the year, we will be launching Venice, which we believe actually extends our leadership, and the MI450 ramp is also very significant in the second half of 2026. We're obviously not guiding specifically by segment, but the long-term target of, let's call it, greater than 60% is certainly possible in 2026. Thank you.
Analyst CJ Muse (Cantor): Yeah, good afternoon. Thanks for taking the question. I'm curious on the server CPU side of the house, and given the dramatic tightness, curious about your ability to source incremental capacity from TSMC and elsewhere, how long it will take for that to show up as wafers out, and how we should think about the implications for the growth trajectory throughout calendar '26. And as part of that, if you could speak to how we should be thinking about the inflection in pricing as well, that would be very helpful.
Executive Name (Title): Sure, CJ. So a couple of points about the server CPU market. First of all, we think the overall server CPU TAM is going to grow, let's call it, strong double digits in 2026, just given, as we said, the relationship between CPU demand and the overall AI ramp. So I think that's a positive. Relative to our ability to support that, we've been seeing this trend for the last couple of quarters, so we have increased our supply capability for server CPUs. That's one of the reasons we are able to increase our Q1 guide as it relates to the server business, and we see the ability to continue to grow that throughout the year. There's no question that demand continues to be strong, so we're working with our supply chain partners to increase supply as well. But from what we see today, I think the overall server situation is strong, and we are increasing supply to address that.
Management: Hey, CJ, do you have a follow-up question?
Analyst CJ Muse (Cantor): I do. Maybe for Jean, if you could touch on gross margins through the year, as you balance strengthening server CPU with perhaps greater GPU acceleration in the second half. Is there a framework that we should be working off of? Thanks so much.
Executive Name (Title): Yeah, thank you for the question. We are very pleased with our Q4 gross margin performance and the Q1 guide at 55%, which is actually up 130 basis points year over year. We continue to ramp our MI355 very significantly year over year, and I think we are benefiting from a very favorable product mix across all our businesses. In the data center, we're ramping our new-generation products, Turin and the MI355, which helps the gross margin. In client, we continue to move up the stack and are also gaining momentum in our commercial business; our client business gross margin has been improving nicely. In addition, we certainly see the recovery of our embedded business, which is also margin accretive. So all those tailwinds we are seeing, we continue to see in the next few quarters. And with the MI450 ramp, of course, in Q4 our gross margin will be driven largely by mix, and I think we will give you more color when we get there. But overall, we feel really good about our gross margin progression this year.
Analyst Joe Moore (Morgan Stanley): Great. Thank you. On the MI455 ramp, will 100% of the business be racks? Will there be kind of an eight-way server business around that architecture? And then is the revenue recognition when you ship to the rack vendor? Or is there something to understand about that? Thank you.
Executive Name (Title): Yes, Joe. So we do have multiple variants of the MI450 series, including an eight-way GPU form factor. But for 2026, I would say the vast majority of it is going to be rack-scale solutions. And yes, we will take revenue when we ship to the rack builder.
Analyst Joe Moore (Morgan Stanley): Okay, great. And then can you talk to any risks that you may have in terms of, you know, once you get silicon out, turning that into racks, any potential issues as you ramp that? I know your competitor had some last year and you said you learned from that. You know, is there anything you've done with kind of pre-building racks to sort of ensure you won't have those issues? Just any risk that we need to understand around that?
Executive Name (Title): Yeah, I mean, I think, Joe, the main thing is the development's going really well. We're right on track with the MI450 series as well as the Helios rack development. We've done a lot of testing already, both at the rack-scale level and at the silicon level. So far, so good. We are getting, let's call it, a lot of input from our customers on things to test, so that we can do a lot of testing in parallel. And our expectation is that we will be on track for our second-half launch. Thank you.
Analyst Stacy Rasgon (Bernstein Research): Hi guys. Thanks for taking my questions. First of all, Lisa, I just wanted to ask about OpEx. Every quarter you guys are guiding it up, and then it's coming in even higher, and then you're guiding it up again. And I understand, given the growth trajectory, that you need to invest. But how should we think about the ramp of that OpEx and that spending number, especially as the GPU revenue starts to inflect? Do we get leverage on that, or should we be expecting OpEx to grow even more materially as the AI revenue starts to ramp?
Executive Name (Title): Yeah, sure, Stacy. Thanks for the question. Look, in terms of OpEx, we're at a point where we have very high conviction in the roadmap that we have. So in 2025, as revenue increased, we did lean in on OpEx, and I think it was for all the right reasons. As we get into 2026 and we see some of the significant growth that we're expecting, we should absolutely see leverage. The way to think about it is we've always said in our long-term model that OpEx should grow slower than revenue, and we would expect that in 2026 as well, especially as we get into the second half of the year and see an inflection in the revenue. But at this point, if you look at our free cash flow generation and the overall revenue growth, I think the investment in OpEx is absolutely the right thing to do. Thank you.
Analyst Stacy Rasgon (Bernstein Research): For my follow-up, I actually have two sort of one-line answers I'm looking for. First, the $100 million in China revenue in Q1: does that also drop through at a zero cost basis like we had in Q4, and is that a margin headwind? And number two, I know you don't give us the AI number, but could you just give us the annual 2025 Instinct number now that we're through the year? How big was it?
Executive Name (Title): So, Stacy, let me answer your first question on the $100 million of revenue in Q1. The inventory reserve reversal in Q4, which was $360 million, was not only associated with Q4 China revenue but also covers the $100 million of revenue we expect to ship to China in Q1 with our MI308. So the Q1 gross margin guide is a very clean guide. And Stacy, for your second question, as you know, we don't guide at the business level, but to help you with your models: if you look at the Q4 data center AI number, even if you were to back out the China number, which was, let's call it, not a recurring number, you would still see growth from Q3 to Q4. So that should help you a little bit with your modeling. Thank you.
Analyst Joshua Buckalter (TD Cowen): Hey, guys. Thanks for taking my question. I want to ask about client. The segment beat pretty handily in the fourth quarter. I recognize you guys have been gaining share with Ryzen, but given what we've been seeing in the memory market, there's a lot of concern about inflationary costs and the potential for pull-ins. Were there any changes in your order patterns during the quarter? And maybe bigger picture, how are you thinking about client growth and the health of that market into 2026?
Executive Name (Title): Yeah, thanks for the question, Josh. The client market has performed extremely well for us throughout 2025, with very strong growth both in terms of ASPs mixing up the stack and in unit growth. Going into 2026, we are certainly watching the development of the business. The PC market is an important market, and based on everything we see today, we're probably seeing the PC TAM down a bit, just given some of the inflationary pressures of commodities pricing, including memory. The way we are modeling the year is, let's call it, the second half a bit sub-seasonal to the first half, just given everything that we see. Even in that environment, with the PC market down, we believe we can grow our PC business. Our focus areas are enterprise, where we're making very nice progress in 2025 and expect the same in 2026, and continuing to grow at the premium, higher end of the market. Thank you for the question.
Analyst Joshua Buckalter (TD Cowen): I want to ask about the Instinct family. We've seen your big GPU competitor make a deal with an SRAM-based spatial architecture provider, and OpenAI has reportedly been linked to one as well. Could you speak to the competitive implications of that? You've done well in inferencing, I think partly because of your leadership in HBM content, so I was wondering if you could address the pull, seemingly motivated by lower-latency inference, and how Instinct is positioned to serve this if you're indeed seeing it as well. Thank you.
Executive Name (Title): Yeah, I think, Josh, it's really the evolution that you might expect as the AI market matures. What we're seeing is that as inference ramps, the tokens per dollar, or the efficiency of the inference stack, becomes more and more important. As you know, with our chiplet architecture, we have a lot of ability to optimize across inference and training, and even across the different stages of inference as well. I view this very much as: as you go into the future, you'll see more workload-optimized products, and you can do that with GPUs as well as with other, more ASIC-like architectures. I think we have the full compute stack to do all of those things, and from that standpoint, we're going to continue to lean into inference, as we view that as a significant opportunity for us in addition to ramping our training capabilities. Thank you.
Analyst Ben Reitzes (Melius Research): Yeah, hey, thanks. Appreciate it. Hey, Lisa, I wanted to ask you about OpenAI. I'm sure a lot of the volatility out there is not lost on you. Is everything on track for the second half for starting the six gigawatts and the three-and-a-half-year timeline, as far as you know? And is there any other color that you'd like to give on that relationship? And then I have a follow-up. Thank you.
Executive Name (Title): Yeah, I mean, I think, Ben, what I would say is we're very much working in partnership with OpenAI as well as our CSP partners to deliver on the MI450 series and deliver on the ramp. The ramp is on schedule to start in the second half of the year. MI450 is doing great. Helios is doing well. We are in, let's call it, deep co-development across all of those parties. And as we look forward, we are optimistic about the MI450 ramp for OpenAI. But I also want to remind everyone that we have a broad set of customers that are very excited about the MI450 series, and so in addition to the work we are doing with OpenAI, there are a number of customers that we are working to ramp in that timeframe as well.
Analyst Ben Reitzes (Melius Research): All right, I appreciate that. And I wanted to shift to the server CPU and talk about x86 versus Arm. There's some view out there that x86 has a particular edge in agentic workloads. Big picture, do you agree with that? And what are you seeing from customers? In particular, your big competitor is obviously going to be selling an Arm CPU separately now in the second half. So if there's anything on that competitive dynamic versus Arm and what NVIDIA is doing, and your views on that, that'd be great to hear. Thanks.
Executive Name (Title): Yeah, Ben, what I would say about the CPU market is there is a great need for high-performance CPUs right now, and that goes toward agentic workloads: when you have these AI processes or AI agents spinning off a lot of work in an enterprise, they're actually driving a lot of traditional CPU tasks, and the vast majority of those are on x86 today. I think the beauty of EPYC is that we've done workload optimization, so we have the best cloud processor out there, we have the best enterprise processor, and we also have some lower-cost variants for storage and other elements. All of that comes into play as we think about the entirety of the AI infrastructure that needs to be put in place. I think CPUs are going to continue to be important as a piece of the AI infrastructure ramp, and that's one of the things we mentioned at our analyst day back in November: really this multi-year CPU cycle, and we continue to see that. We've optimized EPYC to satisfy all of those workloads, and we're going to continue to work with our customers to expand our EPYC footprint.
Analyst Tom O'Malley (Barclays): Hey, Lisa, how are you? I just wanted to ask: you mentioned memory earlier as a sticking point in terms of inflationary cost. Different customers do this in different ways. Different suppliers do this in different ways. But can you maybe talk about your procurement of memory, when that takes place, particularly on the HBM side? Is that something that gets done a year in advance, six months in advance? Different accelerator guys have talked about different timelines. We'd be curious to hear when you do the procurement.
Executive Name (Title): Yeah, given the lead times for things like HBM, wafers, and these parts of the supply chain, we're working closely with our suppliers over a multi-year timeframe on what we see in demand, how we ramp, and how we ensure that our development is very closely tied together. So I feel very good about our supply chain capabilities. We have been planning for this ramp: independent of the current market conditions, we've been planning for a significant ramp in both our CPU and our GPU businesses over the past couple of years. From that standpoint, I think we're well positioned to grow substantially in 2026. And now we're also doing multi-year agreements that extend beyond that, given the tightness of the supply chain.
Analyst Tom O'Malley (Barclays): Just as a follow-up, you've seen a variety of different things in the industry today in terms of system accelerators: KV-cache offload, more discrete ASIC-style compute, CPX. If you look at what your competitors are doing and you look at your first generation of system architecture coming out, maybe spend some time on: do you see yourself following in the footsteps of some of these different types of architectural changes? Do you think you'll go in a different direction? Anything on the evolution of your system-based architecture and the adjoining products and/or silicon within? Thank you.
Executive Name (Title): I think, Tom, what we have, with our chiplet architecture, is a very flexible architecture, and we also have a flexible platform architecture that allows us to really have different system solutions for the different requirements. We're very cognizant that there will be different solutions. I've often said there's no one-size-fits-all, and I'll say it again: there's no one-size-fits-all. That being the case, it's clear that the rack-scale architecture is very, very good for the highest-end applications, when you're talking about distributed inference and training. But we also see an opportunity with enterprise AI to use some of these other form factors, and so we're investing across that spectrum.
Analyst Ross Seymour (Deutsche Bank): Hi, thanks for taking my questions. My first question is back on the gross margin side of things. As you go from the MI300 to the 400 to the 500 eventually, do you see any changes in the gross margin through that period? In the past, you've talked about optimizing dollars more so than percentages, but just on the percentage side, do they go up, down, or is there volatility as you go from one to the next for any reason? Just wondering about the trajectory there.
Executive Name (Title): Ross, thank you for the question. At a very high level, with each generation we actually provide much more capability and more memory, and we help our customers more. So in general, gross margin should progress each generation when you offer more capabilities to your customers. But typically, at the beginning of each generation's ramp, it tends to be lower. When you get to scale, with yield improvement, testing improvement, and also overall performance improvement, you will see gross margin improving within each generation. So gross margin is somewhat dynamic, but over the longer term, you should expect each generation to have a higher gross margin.
Analyst Ross Seymour (Deutsche Bank): Thanks for that, Jean. And then one small segment of your business, but one that seems quite volatile and that you talked about a little further out than you usually do, is the gaming side of things. What is the magnitude down you're talking about this year? Because in 2025, you thought it was going to be flat, and it ended up growing 50%, which was a nice positive surprise. But now that you're talking about this year being down, but then the next-gen Xbox ramping in 2027, I just hoped to get some color on what you see as kind of the annual trajectory there.
Executive Name (Title): Yeah, so Lisa can add more. 2026 is actually the seventh year of the current product cycle, and typically, when you're at this stage of the cycle, revenue tends to come down. We do expect revenue on the semi-custom side to come down significantly, double digits, for 2026, as Lisa mentioned in her prepared remarks. For the next generation? Yeah, I think we'll certainly talk about that going forward, but as we ramp the new generation, you would expect a reversal of that.
Management: Thank you, everybody, for participating on the call.
Management: Operator, I think we can go ahead and close the call now. Thank you.
Management: Good evening. Thank you.
Management: And ladies and gentlemen, that does conclude the question and answer session, and that also concludes today's teleconference. You may disconnect your lines at this time, and have a great rest of the day.
Quarter 2
Q3 2025 Earnings Call
Analyst Vivek Arya (Bank of America Securities): Thank you for the question. I had a near-term and a medium-term question. For the near term, Lisa, I was hoping you could give us some sense of the CPU-GPU mix in Q3 and Q4, and just tactically, how are you managing this transition from the MI355 toward the MI400 in the second half of next year? Can you continue to grow in the first half of next year from these Q4 levels, or should we expect some kind of pause or digestion before customers get on board the MI400 series?
Executive Lisa (Title): Sure, Vivek, thanks for the question. So a couple of comments. We had a very strong Q3 for the data center business, with strong outperformance in both the server and the data center AI business, and a reminder that that was without any MI308 sales. The MI355 has ramped really nicely; we expected a sharp ramp into the third quarter, and that proceeded well. As I mentioned, we've also seen some strengthening of server CPU sales, and not just, let's call it, near term: our customers are giving us visibility into the next few quarters that they see elevated demand, which is positive. Going into the fourth quarter, again, strong data center performance, up double digits sequentially and up in both server and data center AI, on the strength of those businesses. And to your question, we're obviously not guiding into 2026 yet, but given what we see today, we see a very good demand environment into 2026. So we would expect MI355 to continue to ramp in the first half of 2026, and then, as we mentioned, the MI450 series comes online in the second half of 2026, and we would expect a sharper ramp of our data center AI business as we go into the second half of 2026.
Analyst Vivek Arya (Bank of America Securities): For my follow-up, there is some industry debate, Lisa, about OpenAI's ability to simultaneously engage with all the merchant and ASIC suppliers, just given the constraints around power, CapEx, their existing CSP partners, and so forth. So how are you thinking about that? What is your level of visibility in the initial engagement? And then, more importantly, how does it broaden out into '27? Is there a way one can model what the allocation would be? Or just how should we think about the level of visibility into this very important customer?
Executive Lisa (Title): Yeah, absolutely, Vivek. Look, we're obviously very excited about our relationship with OpenAI. It's a very significant relationship. Think about it as it's a pretty unique time for AI right now. There's just so much compute demand across all of the workloads. I think in our work with OpenAI, we are planning multiple quarters out, ensuring that the power is available, that the supply chain is available. The key point is the first gigawatt we will start deploying in the second half of 26, and that work is well underway. And we continue, just given where lead times are and things like that, we are planning very closely with OpenAI as well as the CSP partners to ensure that we're all prepared with Helios so that we can deploy the technology as we stated. So I think overall, we're working very closely together. I think we have good visibility into the MI450 ramp, and things are progressing very well.
Analyst Thomas O'Malley (Barclays): Good morning. Thanks for taking my question, and congrats on the good results. I had a first question on Helios. Obviously, with the announcement at OCP, customer interaction has to be growing. Could you talk about into next year what your view is on discrete sales versus system sales? When do you see that crossover kind of happening? And just what initial responses have been from customers after getting a better look at it at the show?
Executive Lisa (Title): Yeah, sure, Tom, thanks for the question. There's a lot of excitement around MI450 and Helios. The OCP reception was phenomenal; we had numerous customers, frankly, bringing their engineering teams to understand more about the system and more about how it's built. There's always been some discussion about just how complex these rack-scale systems are, and they certainly are, and we are very proud of the Helios design. I think it has all the features, functions, reliability, performance, and power performance that you would expect. The interest in MI450 and Helios has just expanded over the last number of weeks, certainly with some of the announcements that we've made with OpenAI and OCI, as well as the OCP show with Meta. Overall, from our perspective, things are going really well in both the development and the customer engagement there. In terms of rack-scale solutions, we would expect that the early customers for MI450 will really be around rack-scale solutions. We will have other form factors as well for the MI450 series, but there's a lot of interest in the full rack-scale solution.
Analyst Thomas O'Malley (Barclays): Super helpful. And then, as my follow-up, it's a broader question as well, similar to what Vivek asked. If you look at the power requirements that are out there for some of the early announcements into next year, they're pretty substantial, and then you also have component issues that you're seeing across interconnect and memory. Just from your perspective as an industry leader, where do you think the constraint will be? Will it come first with components not being available? Or do you think that data center footprint, in terms of infrastructure and/or power, is the gating factor for some of these deployments into next year, just as we really see some larger number of starts get deployed?
Executive Lisa (Title): Yeah, sure, Tom. I think what you're pointing out is what we as an industry have to do together. The entire ecosystem has to plan together, and that is exactly what we're doing. We're working with our customers on their power plans over the next, I would say, two years, from a silicon, memory, packaging, and component supply chain standpoint, and we're working with our supply chain partners to make sure all of that capacity is available. I can tell you from our visibility, we feel very good that we have a strong supply chain that is prepared to deliver these very significant growth rates and the large amount of compute that is out there. I think all of this is going to be tight; there is a desire to put on more compute, and we're working closely together. I will say that the ecosystem works very hard when there are these types of, let's call it, tightness out there, and so we also see things open up as we're working on getting more power, getting more supply, all of those things. The net-net is I think we are well positioned to grow significantly as we transition into the second half of '26 and into '27 with the MI450 and Helios.
Analyst Joshua Buchwalter (TD Cowen): Hey, guys. Thank you for taking my question. Actually, I wanted to start on the CPU side. You and your largest competitor in that space have talked about near-term strength supporting AI workloads on general-purpose servers from agentic AI. Maybe you could speak to the sustainability of these trends. They called out supply constraints. Are you seeing any of those in your supply chain? And are we in a period where we should think about the CPU business on the data center side as being aseasonal, or should we expect normal seasonality in the first half of next year?
Executive Lisa (Title): Sure, Josh. A couple of comments on the server CPU side. We've been watching this trend for the last couple of quarters, and we started seeing, let's call it, some positive signs in CPU demand a couple of quarters ago. What's happened as we've gone through 2025 is that we now see a broadening of that CPU demand. A number of our large hyperscale customers are now forecasting significant CPU builds into 2026. So from that standpoint, I think it's a positive demand environment, and it is because AI is requiring quite a bit of general-purpose compute, and that's great. It catches our cycle as we're ramping Turin. The Turin ramp has gone extremely fast, and we see good pull for that product as well as consistent, strong demand for our Genoa product line. Back to seasonality as we go into 2026, we expect the CPU demand environment to be, let's call it, positive. We'll guide more as we get to the end of the year, but I would expect a positive demand environment for CPUs as we see this demand continue.
I do feel like it's durable. It is not a short-term thing; I think it is a multi-quarter phenomenon, as we're seeing much more demand now that these AI workloads really have to do real work. And Josh, on the supply side, we have the supply to support our growth, especially in 2026, and we're prepared for the rest.
Got it. Thank you both. And for my follow-up, Lisa, in your prepared remarks you highlighted the progress you've made on ROCm 7. I know this has been an area of focus. Can you maybe spend a minute or two talking about where you feel you're at competitively with ROCm? How wide is the breadth of support you're able to offer to the developer community, and what areas do you still have work to do to close any potential competitive gap?
Executive Lisa (Title): Yeah, Josh, thanks for the question. Look, we've made great progress with ROCm. ROCm 7 is a significant step forward in terms of performance and all the frameworks that we support. It's been really important for us to get day-zero support for all the newest models and native support for all the newest frameworks. I would say most customers who are starting with AMD now have a very smooth experience as they bring their workloads onto AMD. There's obviously always more work to do. We're continuing to augment the libraries and the overall environment that we have, especially as we go to some of the newer workloads where you see training and inference really coming together with reinforcement learning. But overall, I think very strong progress with ROCm. And by the way, we're going to continue to invest in this area because it's so important to make our customers' development experience as smooth as we can.
Analyst C.J. Muse (Cantor Fitzgerald): Yeah, good afternoon. Thank you for taking the question. I guess first question: as you think about the MI355-to-MI400 transition and moving to full rack scale, is there a framework we should be thinking about for gross margins throughout calendar '26?
Executive Lisa (Title): Yes, C.J., thanks for the question. In general, as we've said in the past, for our data center GPU business the gross margins continue to improve as we ramp a new generation of products. Typically, at the beginning of the ramp you go through a transition period, and then the gross margin normalizes. We're not guiding 2026, but our priority in the data center GPU business is to really expand the top-line revenue growth and the gross margin dollars. And of course, at the same time, we'll continue to drive the gross margin percentage up, too.
Very helpful. And I guess maybe, Lisa, to probe your growth expectations through '26 and beyond, you talked about tens of billions of dollars in '27. Can you speak at a high level about how you're thinking about OpenAI and other large customers, and how we should be thinking about the breadth of your customer penetration throughout calendar '26 and '27? Any help on that would be super.
Executive Lisa (Title): Sure, C.J., and we'll certainly address this topic in more detail at our Analyst Day next week, but let me give you some higher-level points. Look, we're really excited about our roadmap, and we have seen great traction amongst the largest customers. The OpenAI relationship is extremely important to us, and it's great to be able to talk at the multi-gigawatt scale, because that really is what we believe we can deliver to the marketplace. But there are numerous other customers that we are in deep engagements with. We talked about OCI. We also announced a couple of significant systems with the Department of Energy, and we have many other engagements. So the way you should think about it is that there are multiple customers that we would expect to be, let's call it, very significant scale customers in the MI450 generation. That's the breadth of the customer engagement that we've built, and it's also how we're dimensioning the supply chain, to ensure that we can supply our OpenAI partnership as well as the numerous other partnerships that are well underway.
Analyst Stacy Rasgon (Bernstein Research): Hi, guys. Thanks for taking my questions. My first one: for data center in the quarter, what grew more year over year on a dollar or percentage basis, the servers or the GPUs?
Executive Lisa (Title): Yeah, Stacy, I think our commentary was that data center grew nicely year over year in both of the areas, both servers as well as data center AI.
Analyst Stacy Rasgon (Bernstein Research): Yeah, but could you, I mean, just directionally, which one grew more than the other? I'm not even asking for numbers. Just directionally?
Executive Lisa (Title): Directionally, they are similar, but the server is a little bit better. Server is a little bit better.
Analyst Stacy Rasgon (Bernstein Research): Okay. And then on the guidance, you said that servers, I mean, data center overall, up double digits. You said servers up strong double digits. What does that mean? Is that like more than 20%? How do I think about what you mean by strong double digits? Because, again, I'm trying to, I mean, for the GPUs for the year, you were saying roughly $6.5 billion or something last quarter. Do you think it's still in that range?
Executive Lisa (Title): Stacy, here's what we guided. We guided that sequentially data center will be up double digits, and we said that server will go up. At the same time, we also said that MI350 is also going to ramp. So I don't think what you just mentioned was what we guided.
Analyst Stacy Rasgon (Bernstein Research): Oh, okay. So, I mean, if you say servers are up strongly, does that mean they're up more than Instinct? Because you didn't really make that commentary on Instinct.
Executive Lisa (Title): No, look, Stacy, let me say it again. Sequentially, data center is up a double-digit percentage, and both server and data center AI are going to be up as well. From the standpoint of where they are, I think we're pleased with how both of them are performing. The strong double-digit percentage comment perhaps was applying to the year-over-year commentary.
Analyst Timothy Arcuri (UBS): Thanks a lot. Lisa, I know it's only been a month since you announced this deal with OpenAI, but can you give us maybe some anecdotes of how it has influenced your position in the market with other customers? Like, are you engaged with customers that you wouldn't have been engaged with if you hadn't done this deal? That's the first part of the question. And the second part relates to a prior question, which is that it looks like they could be something like half of your data center GPU revenue in the 2027-2028 timeframe. So how much risk, in your mind, is there around that single customer for you?
Executive Lisa (Title): Sure, Tim. So let me say a couple of things. First of all, the OpenAI deal has been in the works for quite some time. We're happy to be able to talk about it broadly and also talk about the scale of the deployments and the engagements being multi-year, multi-gigawatt. I think all those things were very positive. We've had a number of other engagements as well. If you ask specifically about the last month, I would say it's been a number of factors. I think the OpenAI deal was one of them. Being able to show the Helios rack in full force at Open Compute was also a very important milestone, because people could see the engineering and the capabilities of the Helios rack. And if you're asking whether we've seen an increase or an acceleration of interest, I think the answer is yes. Customers are broadly engaged, and perhaps broadly engaged at a higher scale, which is a good thing. Then, from the standpoint of customer concentration, a very key foundation for us in this business is to have a broad set of customers. We've always been engaged with a number of customers, and we're dimensioning the supply chain in such a way that we would have ample supply for multiple customers at similar scale as we go into the '27-'28 timeframe. That's certainly the goal.
Analyst Aaron Rakers (Wells Fargo): Yeah, thanks for taking the questions. I'm curious, on the server strength that you're seeing, if there's a way to unpack how we think about unit growth versus ASP expansion as we move through the Turin product cycle, and how do you guys think about that going forward?
Executive Lisa (Title): Yeah, so Aaron, on the server CPU side, Turin certainly has more content, so we see ASPs grow as Turin ramps. But I also mentioned in the prepared remarks that we're actually seeing a very good mix of Genoa still there. So Turin is ramping very quickly, but we're also seeing Genoa demand continue, as the hyperscalers are not able to move everything to the latest generation immediately. From our standpoint, it's broad-based CPU demand across a number of different workloads. A little bit of this is, let's call it, server refresh, but from our customer conversations it seems the workloads are broadly due to the fact that AI workloads are spawning more traditional compute, so more build-out is necessary. Going forward, one of the things we see is more of a desire for the latest generation. So, as happy as we are with how Turin is ramping, we're actually seeing strong pull on Venice and a lot of early engagement on Venice, which says a lot about the importance of general-purpose compute at this point in time.
Analyst Aaron Rakers (Wells Fargo): Yeah, thanks. As a quick follow-up, and not to steal the discussion from next week, but, Lisa, you've been very consistent about a $500 billion total AI silicon TAM opportunity, and we're obviously progressing above that. I'm curious, as we think about these large gigawatt-scale deployments, how you think about the updated view on that AI silicon TAM as we look forward?
Executive Lisa (Title): Well, Aaron, as you said, not to take too much away from what we're going to talk about next week. Look, we're going to give you a full picture of how we see the market next week, but suffice it to say, from everything that we see, we see the AI compute TAM just going up. We'll have some updated numbers for you, but the view is that whereas $500 billion sounded like a lot when we first talked about it, we think there is a larger opportunity for us over the next few years, and that's pretty exciting.
Analyst Antoine Chkaiban (New Street Research): Hi, thank you so much for taking my question. I'd like to ask whether the developing relationship with OpenAI could be a tailwind to the development of your software stack. Can you maybe tell us how the collaboration works in practice and whether the partnership has contributed to making ROCm more robust?
Executive Lisa (Title): Yeah, Antoine, thanks for the question. I think the answer is yes. All of our large customers contribute to, let's call it, a broadening and deepening of our software stack overall. The relationship with OpenAI is certainly one where the plan is to work deeply together on hardware as well as software, as well as systems and the future roadmap. From that standpoint, the work that we're doing together with them on Triton is certainly very valuable. But I will say, beyond OpenAI, the work that we do with all of our largest customers is super helpful in strengthening the software stack. And we have put significant new resources into not just the largest customers; we are also working with a broad set of AI-native companies who are actively developing on the ROCm stack. We get lots of feedback. I think we've made significant progress in the training and inference stack, and we're going to continue to double down and triple down in this area. So the more customers that use AMD, the more all of that goes to enhancing the ROCm stack. And we'll talk a little bit more about this next week, but we're also using AI to help us accelerate the rate and pace of some of the ROCm kernel development and the overall ecosystem.
Analyst Antoine Chkaiban (New Street Research): Thanks, Lisa. Maybe as a quick follow-up, could you tell us about the useful lives of GPUs? I know that most CSPs depreciate them over five or six years, but in your conversations with them, I'm wondering if you see or hear any early indication that, in practice, they may be planning to sweat those GPUs for longer than that.
Executive Lisa (Title): I think we have seen some early indications of that, Antoine. The key point is that there's clearly a desire to get on the latest and greatest GPUs when you're building new data center infrastructure. Certainly, when we look at MI355s, they're often going into new liquid-cooled facilities, and the MI450 series as well. But then we're also seeing the other trend, which is that there's just a need for more AI compute. From that standpoint, some of the older generations, MI300X, are still doing quite well in terms of where we see people deploying and using them, especially for inference. So I think you see a little bit of both.
Analyst Joe Moore (Morgan Stanley): Great, thank you. You mentioned MI308. I guess what's your posture there to the extent that, you know, if there is some relief that you're able to ship, do you have readiness to do that? Can you give us a sense for how much of a swing factor that could be?
Executive Lisa (Title): Sure, Joe. So, look, it's still a pretty dynamic situation with MI308, so that's the reason that we did not include any MI308 revenue in the Q4 guide. We have received some licenses for MI308, so we're appreciative of the administration supporting some licenses for MI308. We're still working with our customers on the demand environment and sort of what the overall opportunity is, and so we'll be able to update that more in the next couple of months.
Analyst Joe Moore (Morgan Stanley): Okay, but you do have product to support that market if it does open up, or are you going to have to start to rebuild inventory for that?
Executive Lisa (Title): We've had some work in process. I think we continue to have that work in process, but we'll have to see how the demand environment shapes up.
Analyst: Okay. Thank you very much.
Analyst Ross Seymour (Deutsche Bank): Thanks for squeezing me in. Lisa, this might take longer than the amount of time we have left before the top of the hour, but there have been so many of these multi-gigawatt announcements from OpenAI. How does AMD truly differentiate there? When you see that big customer signing deals with other GPU vendors and ASIC vendors, et cetera, how do you attack that market differently than those competitors to not only get the six gigawatts initially, but hopefully more after that?
Executive Lisa (Title): Sure, Ross. Well, look, what I see is an environment where the world needs more AI compute. From that standpoint, OpenAI has kind of led in the quest for more AI compute, but they're not alone. When you look across the large customers, there is really a demand for more AI compute over the next couple of years. We each have our advantages in terms of how we position our products. The MI450 series in particular is an extremely strong product and rack-scale solution. Overall, when we look at compute performance and memory performance, we think it's extremely well positioned for both inference as well as training. The key here is time to market. It's total cost of ownership. It's deep partnership, and thinking about not just the MI450 series but what happens after that. So we're deep in conversations on MI500 and beyond. We certainly think we're well positioned to not only participate, but participate in a very meaningful way across the demand environment here. And we have certainly learned a ton over the last couple of years with our AI roadmap. We've made significant inroads in terms of what the largest customers need from a workload standpoint. So I'm pretty optimistic about our ability to capture a significant piece of this market going forward.
Analyst Ross Seymour (Deutsche Bank): Great, and I guess as my follow-up, it'll be a direct follow-on to that. You did a unique structure by granting some warrants with this deal, and I know they vest according to a price that would be very accretive and make everybody happy. Do you think that was a relatively unique agreement, or given that the world needs more processing power, that AMD is open to somewhat similar, conceptually similar creative ways to address that demand over time with other equity vehicles, etc.?
Executive Lisa (Title): Sure, Ross. So I would say it was a unique agreement, from the standpoint that it's a unique time in AI. What we wanted, what we prioritized, was really deep partnership and multi-year, multi-generation, significant scale. And I think we got that. We got a structure with extremely aligned incentives. Everybody wins, right? We win, OpenAI wins, and our shareholders win and benefit from this, and all of that accrues to the overall roadmap. As we look forward, we have a lot of very interesting partnerships developing, whether with the largest AI users or sovereign AI opportunities. We look at each one of these as a unique opportunity where we're bringing the whole of AMD, both technically as well as all the rest of our capabilities, to the parties. So I would say OpenAI was pretty unique, but I would imagine there are lots of other opportunities for us to bring our capabilities into the ecosystem and participate in a significant way.
Operator: Ladies and gentlemen, that does conclude the question and answer session, and that also concludes today's teleconference. We thank you for your participation. You may disconnect your lines at this time.