Earnings Call Transcripts

Advanced Micro Devices, Inc. (AMD)

Q1 2026 Earnings Call — May 5, 2026

Analyst Joshua Buckwalter (TD Cowen): Hey, guys. Congrats on the results, and thanks for taking my question. I'm actually going to start with CPUs, which hasn't happened in a bit. You know, it hasn't been that long since you announced the $60 billion server CPU TAM for 2030 at the Analyst Day, and it's very quickly doubled. Agentic AI has obviously gotten a lot of attention in recent months, but it would be helpful to hear your thoughts on how this TAM is inflecting and changing so meaningfully in such a short amount of time. And maybe you could also speak to your confidence in hitting that greater than 50% share target from the analyst day, as your x86 competitor seems to be improving its supply, and also there seems to be more momentum on the merchant and custom ARM CPU side. Thank you.

Executive Name: Yeah, sure, Josh. Thanks for the question. So first of all, when we think about CPUs, we've always said that CPUs are a very critical part of data center infrastructure, and that's been where we've invested. And we saw the first signs of, let's call it AI demand, really pulling CPU demand last year, and that was the reason we updated the TAM to, let's call it the 18% CAGR or approximately $60 billion. And what we've seen is all of the things that we believed in terms of agentic AI and inferencing and all the CPU compute that is required is just happening, and it's happening at a much faster pace. So over the last few months, as we've talked to our customers and we've seen how AI adoption is really unfolding, we're seeing significantly more CPU demand from really every major cloud provider as well as enterprise customers. As AI adoption scales, you need more inferencing. As inferencing scales, you have more agents and agentic AI.

They all require CPUs for all of the orchestration and the data processing and these other tasks. So with that, we've looked at it both bottom-up, in terms of talking to customers and having them give us longer-term forecasts, as well as just doing some clear workload analysis. It's a very exciting TAM. I think it's exciting to see CPUs growing greater than 35% to over $120 billion. And then when you think about AMD in the context of that, CPUs are critical for so many tasks that you are seeing a lot more discussion about CPUs in the market. But we actually view it in three categories. There's general purpose, there's the head nodes that really support the AI accelerators, and then there are CPUs just for all of the agentic AI work. And to do all of this, our belief is you need a broad portfolio of CPUs, and that's really what we have been focused on: building not just one type but really a broader portfolio in terms of throughput optimized, power optimized, cost optimized, AI infrastructure optimized, as we've done in the Venice family.

So when you put all that together, we're very excited about the larger TAM, and we're also very happy with the traction that we're getting. We're clearly seeing significant share gain with our Turin portfolio, which has ramped very nicely. Venice is extremely well positioned, and we're working with customers right now beyond Venice and what we're doing in those architectures. So we feel really good about the market as well as our opportunity to grow to a greater than 50% share of that market.

Analyst Joshua Buckwalter (TD Cowen): Thank you for all the color there. I wanted to ask about the Instinct side. So in the press release, you mentioned that MI450 and Helios engagements are strengthening, with customer forecasts exceeding expectations and the pipeline growing. You certainly have the big public OpenAI and Meta deals. Was this comment referring to those engagements upsizing versus the announced initial deployments, or was it other customers, and maybe the increase on the MI450 timeline, or is it MI500 and beyond? Thank you.

Executive Name: Sure, Josh. So, we are very excited about MI450 and Helios. We're seeing significant customer interest in those products as well. So, you know, we have certainly talked about our large partnerships with OpenAI and Meta, and those are going really well. We appreciate the deep co-engineering that has gone on there. When we look at the totality of, let's call it based on our current visibility, how those forecasts are coming in with all of our customers, we are actually seeing it above the initial plans that we had for 2027. And I think the encouraging thing is we are seeing a breadth of customers who are now very interested in deploying the MI450 series at significant scale. And those are for both training and inference workloads, although the largest deployments are for inference. And based on all of that and the scale of new customer interest, we see a path to exceed our original target of a greater than 80% CAGR. And these are really in the 2027 timeframe, obviously. When we talk to customers, we're talking to them about MI355; there's a lot of good traction we're seeing there. MI450 and Helios, I think, for significant large-scale deployments. And then many customers are also very engaged with us on the MI500 series and all of the opportunities there. So we feel like we're making very, very good progress. And the key is that we're continuing to broaden and widen the scope of both customers as well as workloads.

Analyst Thomas O'Malley (Barclays): Hey, guys. Thanks for taking my question. Lisa, if I have your numbers correct here, in the March quarter it sounds like, you know, the server processor side of the CPU business grew over 50%. If you take it just at your word, it looks like maybe the data center GPU side actually grew in Q1. So I was curious around the cadence of this year; previously you had talked about a really back-half-weighted and then kind of more so Q4-weighted year. Could you talk about if that's changed at all? And then the second part of the question is, as you go into 2027, clearly you're pointing out a lot of upside from the larger customers and then kind of the ecosystem around them with new customers as well. But when you look at supply, that's a major issue in the ecosystem today. Could you talk about where you're concerned on supply, if you are, and then any gating factors as you look into next year, whether that be power, data center build-outs, et cetera, or do you feel really good about the ability to grow? Thank you very much.

Executive Name: Yeah, okay. A lot of pieces of that question, Tom, so let me try to get through it. So first of all, on the data center segment in Q1, the server business grew greater than 50% year over year, as we said in the prepared remarks. The data center AI business was actually down modestly because of the China transition. We had more China revenue, I'm sorry, sequentially more China revenue in Q4, and it was less in Q1. But as we go forward, I think we see strong growth in both segments. So we guided data center up sequentially double digits in Q2, and that's double digits in both server as well as data center AI. And on the progression as we go forward, first on the server CPU side, we talked about growing over 70% year over year in Q2, and that continuing into the second half of the year. And on the data center AI side, we will be ramping Helios in the second half of the year, so let's call it starting with initial volume in Q3, with a significant ramp in Q4, and then continuing to ramp in Q1. So that's kind of a little bit of the progression. And then to your questions about customers and supply. I think I answered the customer question in response to Josh.

I think we have very good visibility now into the deployments that are on track for 2027. And when I say good visibility, it's visibility down to which data centers the GPUs are going to be installed in. And that's necessary, just given all of the constraints out there. We feel that there is tightness in the supply chain. There's certainly tightness in sort of data center build-outs, but we are confident in our ability to supply to the levels of growth that we're talking about, and to exceed the levels of growth that we're talking about. And we're also working very closely with our customers and our partners to ensure that we have good visibility to data center power. And there is much more power that's coming online in 2027. And so with all those things in mind, I think, you know, again, there are lots of things to manage. It's a complex ramp, but we're very, very pleased with the progress on the ramp.

Analyst Ross Seymour (Deutsche Bank): Hi, thanks for letting me ask a couple questions. The first one is just on the EPYC competition. Lisa, you went through some of the statistics of you versus x86 and you versus ARM, but I wanted to dive a little bit deeper into that. How do you see AMD truly differentiating, especially when you see some of your competition signing up the same customers on the ARM side, and the x86 competition having more supply? So I just wanted to see if you could dig a little bit deeper into how you think the market share is going to trend over time.

Executive Name: Look, we're very engaged with every major hyperscaler in terms of understanding their needs on the CPU side. I think we have very much wanted to, let's call it, optimize our CPU roadmap for the various workloads. I think we were early to call this AI component of CPUs, and so we've been actually optimizing very closely with those customers. The way to think about this, Ross, is that you're going to need a broad portfolio of CPUs. Not all CPUs are the same. Frankly, you're going to need different CPUs for whether you're talking about general purpose operations, or you're talking about head nodes, or you're talking about agentic AI tasks. They're going to be optimized differently. And we thought through that, and we are absolutely optimizing across the various workloads. So from a competitive standpoint, we feel very good about where things are. And from a deep relationship with the customer set, I think we feel very good about that. So from our current standpoint, I think the depth of our roadmap just expands as we go forward. And you shouldn't think about it as, people are going to do one or the other. I think you are going to see people actually use x86 and ARM for many of the large hyperscalers. And even for those who are developing their own, they are still buying lots of CPUs in the merchant market for the reason that I just stated which is you need different CPUs for the different types of workloads. And there is very high demand at the moment.

Analyst Ross Seymour (Deutsche Bank): Thanks for that. I guess for my follow-up, maybe more for Jean on the gross margin side of things. It's nice to see the gross margin popping up in the second quarter guide, but I just wanted to get some trends longer term, maybe not specific numbers. How should we think about when Helios and the Instinct side really ramp in the fourth quarter and more so next year? I could see some offsets, with that carrying a below-corporate-average gross margin, but then everything that Lisa talked about with the EPYC side of things being significantly stronger might be more of an offset than it was in the past. So just walk us through the puts and takes of that, and maybe directionally where you think gross margin goes over the next year or two.

Executive Name: Yeah, Ross, thanks for the question. We are very pleased with how our gross margin is trending. It came in really strong in Q1. And also, as you mentioned, we guided Q2 higher at 56%. As we think about the second half, or quarter over quarter, as you know, there are some puts and takes, right? I would just say, from a tailwind perspective, we actually have multiple tailwinds that really are going to help our gross margin. First is the server CPU. Lisa talked about the server CPU being expected to grow more than 70% in Q2 and continue to be really strong in the second half. That really helps our gross margin. Secondly, in the second half, gaming actually is going to come down, and our client business actually continues to go up the stack. So from the client and gaming segments, the mix is actually going to be very helpful to gross margin as well. Embedded actually is very accretive to our gross margin, and its momentum is continuing in the second half. So we are really pleased with all the tailwinds we have today. On the other side, MI450 will start ramping in Q3 and then ramp significantly in Q4. That is below corporate average.

So there will be puts and takes in Q4 on the gross margin side. But when we sit here and look at all the positive trends we have to really offset some of the gross margin dilution from the MI450 side, we actually feel really good about the gross margin setup for 2026. And into next year, I think some of the tailwinds I talked about will actually continue. That's why we feel confident about continuing to drive gross margin. During our financial analyst day, we outlined a long-term gross margin in the range of 55% to 58%. We think in the first year we're making good progress there.

Analyst Timothy Arcuri (UBS): Thanks a lot. I wanted to ask about units versus ASP for server CPU. If I look at the June guidance, it sort of implies up 25% to 30% for server CPU. And, you know, Lisa, you had mentioned the second half of the year; it sort of implies that server CPU could grow like 70%, you know, maybe a little more, this year. And so I guess my question is, how much of that growth, either in June or for the year, is units versus pricing? Are these price increases sort of mostly captured in June, or is that also helping in the back half of the year?

Executive Name: Yeah, Tim, the way I would say it is, maybe let me bring you back to Q1 for a moment. So if you look at our significant growth in the server business, although we were up on a year-over-year basis for both ASPs and units, it was actually much more unit-driven. So we are shipping more CPUs, across not just the high-end Turin family, but we're actually shipping a lot of the Genoa, Zen 4 family as well. As we go forward, for Q2 and into the second half, we are guiding for a significant amount of growth. I think there's a little bit of ASP in there, but the way we're thinking about pricing, to be fair, is we are in a range where the supply chain is tight, and so there are some inflationary pressures; costs have gone up a bit, and we are sharing some of that with our customers. But we are also being very thoughtful in that we're playing for the long term, and that means our goal is to ship more units, and a lot more units. So from that standpoint, you should imagine that the majority of the growth is unit-driven, and the ASPs are just really to help cover some of the inflationary pressures.

Executive Name: And just to add to what Lisa said, our ASP is increasing because of the mix, where actually with each new generation, the core counts are increasing. That actually drives the ASP up.

Analyst Timothy Arcuri (UBS): Thanks a lot for that. And then I guess, Lisa, also, there's a lot of new architectures that are being used, from, you know, multi-tenancy all the way to low latency, and, you know, your competitor has talked about the low-latency part of the market being, you know, 20 plus, and they of course added to their portfolio there. Can you talk about how you see that part of the market? I mean, obviously you have enough business right now that you probably don't need to worry about that for now, but can you talk about that?

Executive Name: Yeah, sure. So look, I think what we're seeing is what we expected, in the sense that, you know, as AI adoption continues and the volumes continue to go up and the overall market goes up, you are going to see, let's call it, different compute architectures being used, because you want to get more cost optimization from that. So we expect that even in that situation, obviously, the vast majority of the TAM is still going to be, let's call it, data center GPUs as the primary accelerator. But you may choose to do optimization around inference, around low latency, around certain parts of the stack, whether it's decode versus pre-fill. I think that's very natural. The way we look at it is we're developing a full compute portfolio. So that's CPUs, that's GPUs, that's the ability to connect to all accelerators, as well as the ability to do customization for certain customers. And we've also talked about our semi-custom capabilities. And with all of those sort of compute capabilities in our tool chest, I think we will be able to address very effectively a large portion of this market, including the low-latency portion. So from our standpoint, this is kind of a natural evolution. Now, how fast it goes, in terms of what share of the TAM these things become, depends a bit on the technology. But we should expect that there will be different variants, and we're well prepared to address those different variants.

Analyst Vivek Arya (Bank of America): Thanks for taking my question. Lisa, do you think agentic CPU growth is incremental, or is it coming at the expense of GPUs conceptually? So if you're raising the server CPU TAM, are you also implicitly kind of raising the AI TAM? I'm just interested in your perspective on what you thought server CPU was as a percentage of the AI TAM before, and what it is now with this $120 billion number.

Executive Name: Sure, Vivek. So the way we're thinking about it is it's largely additive to the TAM. So you should think about it as: we need all of the accelerators to run these foundational models, and then as these agents do work, they spawn more CPU tasks. So I would say largely incremental. What we are seeing in these deployments is that the key is to make sure the ratio of CPUs to GPUs is the right ratio. So if you are installing a gigawatt of compute, the percentage of CPU as part of that gigawatt will increase. Some of the conversation in the industry has been about CPU-to-GPU ratios. It's very hard to call exactly, but we certainly see the movement from where, in the past, the CPU was primarily just a host node in a 1-to-4 or 1-to-8 configuration, now changing and getting closer to a 1-to-1 configuration. Or you can even imagine, if you get lots and lots of agents, that you could have more CPUs than GPUs. But all in all, to answer your question, I think it's largely additive to the TAM. And the key is that everyone is now planning and thinking about CPUs at the same time that they're thinking about their accelerator deployments, which is a good thing.

Analyst Vivek Arya (Bank of America): And for my follow-up, Lisa, we continue to see memory prices go up. I imagine that is both kind of a cost inflation for you, but perhaps an opportunity to take price as well. I'm curious, how is that dynamic playing out for AMD and especially for your customers? Because a greater part of their capex increase is really kind of this memory inflation tax that they have to pay. So how is this dynamic playing out for you and for your customers? And the part that I'm really interested in is that have you secured enough supply versus your other larger competitor who has disclosed a lot of prepayments and other things? So just how is this memory inflation dynamic playing out and are you kind of adequately supplied from that perspective?

Executive Name: Sure. So, Vivek, let me answer the second one first. I think from a supply standpoint, we are very happy with our partnerships with the memory vendors, and we have secured enough supply to certainly meet and exceed our targets. So it is a tight memory environment, let me be clear, but I think we have very deep partnerships with the memory providers. And then back to your comments on the inflationary pressures. I mean, look, this is something that everyone in the industry is working with. In a time of tight supply, we are seeing some cost increases on the memory side. I think we are all working through that. The way we're seeing it unfold in the market on the data center side is, you know, because of the, let's call it, demand for AI compute, people are largely focused on supply and ensuring that supply assurance is there.

The corollary of that, you know, the larger impact that we're watching is, you know, the impact on the consumer markets and, you know, as we said, in the prepared remarks, we are expecting that there could be some demand impact as a result of the memory price increases on things like the PC business in the second half of the year as well as the gaming business. So we're taking that into account in our overall model. And we continue to work closely with the memory providers as well as our customers to ensure that, you know, every time we ship a CPU or GPU that it's paired with the memory on the other side so that we don't have, you know, compute that is not being deployed.

Analyst Aaron Rakers (Wells Fargo): Yeah, thanks for taking the question, and congrats on the results. I want to stick on the topic of CPU to GPU. As we think about the chart that you had outlined at the analyst day, it was obviously broken out between traditional CPUs and then the AI bucket on top of that. Obviously, I think the new forecast has a lot to do with the AI CPU expansion. I'm just curious, when you're doing a CPU in an AI workload, is there structurally a different level of ASP tied to that kind of CPU, optimized for AI, relative to a general-purpose server CPU? Any kind of color or help on that would be useful.

Executive Name: Sure, Aaron. So let me start with the broader question regarding the way we think about the CPU TAM. Again, think about it as three categories. There is a traditional, let's call it general-purpose, CPU TAM that is increasing, but increasing at a low rate, maybe let's call it low double digits. Then you have your AI head node, which is connecting to accelerators, which is also growing, but it's smaller. And then the largest piece of the growth is this agentic AI piece, which we think is really stemming from all of the agentic processes. I don't have a number that I can tell you in terms of relative ASPs, because it really depends on the workload that is being run. What we see going forward is, as core counts increase, obviously we will see ASPs increase, and that's the direction that we're going in. But the main point is that the largest portion of the TAM increase is the agentic AI piece, the CPUs that are serving these agentic AI workloads.

Analyst Aaron Rakers (Wells Fargo): And as a quick follow-up, I'm curious, you know, how do you characterize the competitive landscape as we see, you know, some of the ARM introductions in the market? Just curious of your views on the competitive landscape and server CPUs. Thank you.

Executive Name: Yeah, Aaron, the best way to think about the server CPU landscape is, you know, again, number one, everyone is talking about CPUs. So that tells you how critical they are for the AI infrastructure, and I think that's a good thing. We feel like we're very well positioned. No question, ARM is a good architecture, and it has a place in the data center market. But we view those as more point products, relative to the broad portfolio of CPUs that AMD has built going forward, which you're going to need for all of these different workloads. And in the Venice timeframe we have added an AI-optimized CPU with Verano, in addition to our throughput-optimized and cost-optimized points. So from that standpoint, I think we're very competitive. We're continuing to innovate on architecture, and we're continuing to innovate on advanced packaging as well as all of the architectural pieces. So we feel very well positioned going forward. And the key is that the TAM is much, much larger than anybody thought, and so there's a lot of opportunity for different products to be successful in this area.

Analyst CJ Muse (Cantor Fitzgerald): Yeah, good afternoon. Thank you for taking the question. I guess for my first question, I was hoping you could speak a bit more about client for all of calendar '26. You talked about expected growth, but I would love to hear, you know, your thoughts around seasonality in the second half. And I'm assuming that you are repurposing certain logic tiles from client over to the data center, and I would love to better understand what the implications are for ASPs on the client side looking into the second half.

Executive Name: Sure. So, CJ, I think the client business has performed really well for us. If we look at Q1, it actually was a little bit stronger than what we expected. We are seeing some mix shifts in the client business. The mix shift that we're seeing is that the notebook business is actually growing, especially the premium portion, and we're making very good progress in the commercial PC arena with our AI PCs. We did see desktops a little bit softer, just given desktop is a more consumer-focused market, and so that market is more impacted by some of the memory pricing and the component price increases. When we look at the full year, our commentary is that we are planning for some demand impact in the second half due to the memory pricing. But even in that environment, what we're focused on is ensuring that we continue to make good progress on the commercial business and continuing to focus on the premium segments of the market. So we believe that we will continue to grow on a year-over-year basis for the client business compared to last year. And as it relates to ASPs, again, there are a little bit of puts and takes between notebook and desktop. But overall, I think we're feeling good about our opportunity to outperform the market in client going forward.

Analyst CJ Muse (Cantor Fitzgerald): That was perfect. Thank you. And then I guess a question on Instinct gross margins. You know, with compute essentially sold out, and obviously you're building a business, so, you know, one has to be, I guess, conservative on that front. But I would think that outside of kind of passing through HBM, given the very tight wafer environment, this would be a place where, you know, you could look to drive your Instinct margins closer to your corporate average. How are you thinking about that, you know, either today or, you know, in the coming one, two, three years?

Executive Name: Hi, CJ. You know, at this stage, we are really focused on driving top-line revenue growth in our Instinct family of products. I think on the gross margin side, you're absolutely right: the demand for compute is tremendous. We actually are very strategic in how we think about how we work with the customers. And, of course, different customers also have different gross margins. I think over time, once we start to ramp our revenue, we'll have a lot of opportunities to improve gross margin, both on the ASP side, but also, more importantly, on the cost side as we scale our business.

Analyst Stacy Rasgon (Bernstein Research): Hi, guys. Thanks for taking my questions. For the first one, I just wanted to make sure I have the near-term AI GPU trajectory correct. So I know you said it was down sequentially in Q1 because of China; you had like $390 million of China revenue in Q4. Did the AI business in Q1 actually grow sequentially? Because it doesn't feel like it, given the server outlook. And then I look at what's maybe suggested for Q2. Are you thinking GPUs and servers kind of grow at similar rates sequentially? Because that would probably put GPUs in Q2 below the overall level it was at in Q4, which seems low to me. I'm just trying to tie that out. Could you help me with that, please?

Executive Name: Yeah, Stacy, I appreciate the question. I think if you look at Q1, we did mention data center AI was down modestly sequentially, primarily due to lower China revenue in the quarter. On your second question regarding Q2, you're right: both data center AI and server will grow double digits in Q2.

Analyst Stacy Rasgon (Bernstein Research): Yeah, but you didn't answer my question. In Q1, did it grow sequentially ex the China step-down, I guess, is what I'm asking.

Executive Name: The China revenue for our business in Q1, it's not material. So I think I will repeat what I just said. Yeah, the China revenue in Q1 is not material.

Analyst Stacy Rasgon (Bernstein Research): Okay, so you don't want to answer. Okay. Second question, on OpEx spending. It sort of continues to blow past the targets. You kind of give an OpEx guide, and then it blows through it, and then you guide higher. So again...


Q4 2025 Earnings Call — February 3, 2026

Executive Name (Title): Thank you very much, Jean.

Operator: Thank you, Matt. We will now be conducting the question-and-answer session.

Analyst Aaron Rakers (Wells Fargo): Yeah, thanks for taking the question. Lisa, at your analyst day back in November, you seemed to kind of endorse the high-$20 billion AI revenue expectation that was out there on the Street for 2027, and I know today you're reaffirming that. Can you talk a little bit about what you've seen as far as customer engagements, and how those might have expanded? I think you've alluded in the past to multiple multi-gigawatt opportunities. Just double-click on what you've seen for the MI455 and Helios platform from a demand-shaping perspective as we look into the back half of the year.

Executive Name (Title): Yeah, sure, Aaron. Thanks for the question. So first of all, I think the MI450 series development is going extremely well. So we're very happy with the progress that we have. We're right on track for a second half launch and beginning of production. And as it relates to sort of the shape of the ramp and the customer engagements, I would say the customer engagements continue to proceed very well. We have obviously a very strong relationship with OpenAI, and we're planning that ramp starting in the second half of the year, going into 2027. That is on track. We're also working closely with a number of other customers who are very interested in ramping MI450 quickly, just given the strength of the product, and we see that across both inference and training, and that is the opportunity that we see in front of us. So we feel very good about, you know, sort of the data center growth overall for us in 2026, and then certainly going into 2027, you know, we've talked about, you know, tens of billions of dollars of data center AI revenue, and we feel very good about that. Thank you.

Analyst Tim Arcuri (UBS): Thanks a lot. Jean, I'm wondering if you can maybe give us a little bit of detail under the hood for the March guidance. I know you basically told us that embedded is going to be up a bit year over year. Client sounds like it's down seasonally, which I take to be maybe down 10%. So can you give us a sense maybe of the other pieces? And then also, can you give us a sense of how data center GPU is going to ramp through the year? I know it's a back-half-weighted year, but I think people are thinking it's somewhere in the $14 billion range this year. That's what investors are thinking; I'm not asking you to endorse that. But if you can give us a little flavor for sort of how the ramp will look for the year, that'd be great. Thanks.

Executive Name (Title): Hi, Tim. Thanks for your question. We're guiding one quarter at a time, but I can give you some color on our Q1 guide. First, sequentially, we guided a decline of around 5%, but data center is actually going to be up. And when you think about this, right, our CPU business in a regular seasonal pattern would be down high single digits, and in our current guide we actually guide CPU revenue up sequentially very nicely. Also, on the data center GPU side, we feel really good that GPU revenue, including China, will also be up. So it's a very nice guide for the data center overall. On the client side, we do see a seasonal sequential decline, and embedded and gaming also have a seasonal decline. Maybe, Tim, if I just give you a little bit on the full-year commentary. I think the important thing, as we look at the full year, is that we're very bullish on the year. If you look at the key themes, we're seeing very strong growth in the data center, and that's across two growth vectors. We see server CPU growth actually very strong.

We've talked about the fact that CPUs are very important as AI continues to ramp, and we've seen the CPU order book continue to strengthen as we go through the last few quarters and especially over the last 60 days. So we see that as a strong growth driver for us. As Jean said, we see server CPU growing from Q4 into Q1 in what normally is seasonally down. And that continues throughout the year. And then on the data center AI side, it's a very important year for us. It's really an inflection point. MI355 has done well and we were pleased with the performance in Q4, and we continue to ramp that in the first half of the year. But as we get into the second half of the year, the MI450 is really an inflection point for us. So that revenue will start in the third quarter, but it will ramp significant volume in the fourth quarter as we get into 2027. So that gives you a little bit of sort of what the data center ramp looks like throughout the year. Thank you, Lisa.

Analyst Vivek Arya (Bank of America): Thank you. First, just a clarification on what you're assuming for your China MI308 sales beyond Q1. And then, Lisa, specific to 2026, can your data center revenues grow at your target 60% plus growth rate? I realize that that's a multi-year target, but do you think that there are enough drivers, whether it's on the server CPU side or the GPU side, for you to grow at that target pace, even in 2026?

Executive Name (Title): Yeah, sure, Vivek. So let me talk a little bit about China first, because that's, I think, important for us to make sure that's clear. Look, we were pleased to have some MI308 sales in the fourth quarter. They were actually, you know, under a license that was approved through, you know, work with the administration, and those orders were actually from very early in 2025. And so we saw some revenue in Q4, and we're forecasting for about $100 million of revenue in Q1. We are not forecasting any additional revenue from China just because it's a very dynamic situation. So given that it's a dynamic situation, we're still waiting for – we've submitted licenses for the MI325 and we're continuing to work with customers and understanding sort of their customer demand. We thought it prudent not to forecast any additional revenue other than the $100 million that we called out in the Q1 guide. Now, as it relates to overall data center, as I mentioned in the answer to Tim, we're very bullish about data center. I think the combination of drivers that we have across our CPU franchise, I mean, the EPYC product line, both Turin and Genoa, continue to ramp well.

And in the second half of the year, we will be launching Venice, which we believe actually extends our leadership, and the MI450 ramp, which is also very significant in the second half of 2026. We're not obviously guiding specifically by segment, but the long-term target of, let's call it, greater than 60% is certainly possible in 2026. Thank you, Lisa.

Analyst CJ Muse (Cantor): Yeah, good afternoon. Thanks for taking the question. I'm curious on the server CPU side of the house, and given the dramatic tightness, curious, you know, your ability to source incremental capacity from TSMC and elsewhere, and I guess how long will it take for that to see wafers out, and how should we think about the implications for kind of the growth trajectory throughout all of calendar 26? And I guess as part of that, if you could speak to how we should be thinking about inflection and pricing as well, that would be very helpful.

Executive Name (Title): Sure, CJ. So a couple of points about the server CPU market. First of all, we think the overall server CPU TAM is going to grow, let's call it, strong double digits in 2026, just given, as we said, the relationship between CPU demand and the overall AI ramp. So I think that's a positive. Relative to our ability to support that, we've been seeing this trend for the last couple of quarters. So we have increased our supply capability for server CPUs. And that's one of the reasons we are able to increase our Q1 guide as it relates to the server business. And we see the ability to continue to grow that throughout the year. There's no question that the demand continues to be strong, and so we're working with our supply chain partners to increase supply as well. But from what we see today, I think the overall server situation is strong, and we are increasing supply to address that.

Management: Hey, CJ, do you have a follow-up question?

Analyst CJ Muse (Cantor): I do. Maybe for Jean, if you could kind of touch on gross margins through the year as you balance kind of strengthening server CPU with, you know, perhaps greater, you know, GPU accelerating in the second half. Is there kind of a framework that we should be working off of? Thanks so much.

Executive Name (Title): Yeah, thank you for the question. We are very pleased with our Q4 gross margin performance and the Q1 guide at 55%, which actually is 130 basis points up year-over-year. We continue to ramp our MI355 very significantly. I think we are benefiting from a very favorable product mix across all our businesses. If you think about the data center, we're ramping our new-generation products, Turin and the MI355, which helps the gross margin. In client, we continue to move up the stack and are also gaining momentum in our commercial business. Our client business gross margin has been improving nicely. In addition, certainly we see the recovery of our embedded business, which is also margin accretive. So all those tailwinds we are seeing, we continue to see in the next few quarters. And with the MI450 ramp, of course, in Q4, our gross margin will be driven largely by mix, and I think we will give you more color when we get there. But overall, we feel really good about our gross margin progression this year.

Analyst Joe Moore (Morgan Stanley): Great. Thank you. On the MI450 ramp, will 100% of the business be racks? Will there be kind of an eight-way server business around that architecture? And then is the revenue recognition when you ship to the rack vendor? Or is there something to understand about that? Thank you.

Executive Name (Title): Yes, Joe. So we do have multiple variants of the MI450 series, including an eight-way GPU form factor. But for 2026, I would say the vast majority of it is going to be, you know, rack scale solutions. And yes, we will take revenue when we ship to the rack, you know, builder.

Analyst Joe Moore (Morgan Stanley): Okay, great. And then can you talk to any risks that you may have in terms of, you know, once you get silicon out, turning that into racks, any potential issues as you ramp that? I know your competitor had some last year and you said you learned from that. You know, is there anything you've done with kind of pre-building racks to sort of ensure you won't have those issues? Just any risk that we need to understand around that?

Executive Name (Title): Yeah, I mean, I think, Joe, the main thing is the development's going really well. It is, we're right on track with the, you know, MI450 series as well as the Helios rack development. We've done a lot of testing already, both at the rack scale level as well as, you know, at the silicon level. So far, so good. We are getting, let's call it, a lot of input from our customers on, you know, things to test so that we can do a lot of testing in parallel. And, you know, our expectation is that we will be on track for our second half launch. Thank you.

Analyst Stacy Rasgon (Bernstein Research): Hi guys. Thanks for taking my questions. First of all, Lisa, I just wanted to ask about OpEx. Like every quarter you guys are guiding it up, and then it's coming in even higher, and then you're guiding it up again. And I understand, given the growth trajectory, that you need to invest. But how should we think about the ramp of that OpEx and that spending number, especially as the GPU revenue starts to inflect? Do we get leverage on that, or should we be expecting the OpEx to be growing even more materially as the AI revenue starts to ramp?

Executive Name (Title): Yeah, sure, Stacy. Thanks for the question. Look, I think in terms of OpEx, we're at a point where we have very high conviction in the roadmap that we have. And so in 2025, as the revenue increased, we did lean in on OpEx. And I think it was for all the right reasons. As we get into 2026 and as we see some of the significant growth that we're expecting, we should absolutely see leverage. And the way to think about it is we've always said in our long-term model that OpEx should grow slower than revenue, and we would expect that in, you know, 2026 as well, especially as we get into the second half of the year and we see, you know, inflection in the revenue. But, you know, at this point, I think, if you look at our free cash flow generation and the overall revenue growth, the investment in OpEx is absolutely the right thing to do. Thank you.

Analyst Stacy Rasgon (Bernstein Research): For my follow-up, I actually have two sort of one-line answers I'm looking for. Just first, the $100 million in China revenue in Q1, does that also drop through at a zero cost basis like we had in Q4, and is that a margin headwind? And number two, I know you don't give us the AI number, but could you just give us the annual 2025 Instinct number now that we're through the year? How big was it?

Executive Name (Title): So, Stacy, let me answer your first question on the $100 million revenue in Q1. Actually, the inventory reserve reversal in Q4, which was $360 million, was not only associated with the Q4 China revenue but also covers the $100 million of revenue we expect to ship to China in Q1 with our MI308. So the Q1 gross margin guide is a very clean guide. And Stacy, for your second question, as you know, we don't guide at the business level, but to help you with your models, if you look at the Q4 data center AI number, even if you were to back out the China number, which was, you know, let's call it not a recurring number, you would still see growth from Q3 to Q4. So that should help you a little bit with your modeling. Thank you.

Analyst Joshua Buckalter (TD Cowen): Hey, guys. Thanks for taking my question. I want to ask about clients. So the segment beat pretty handily in the fourth quarter. I recognize you guys have been gaining share with Ryzen, but I think given what we've been seeing in the memory market, there's a lot of concern about inflationary costs and the potential for pull-ins. Were there any changes in your order patterns during the quarter? And maybe bigger picture, how are you thinking about client growth and the health of that market into 2026?

Executive Name (Title): Yeah, thanks for the question, Josh. The client market has performed extremely well for us throughout 2025, with very strong growth for us, both in terms of ASP mixing up the stack as well as just unit growth. Going into 2026, we are certainly watching the development of the business. I think the PC market is an important market. Based on everything that we see today, we're probably seeing the PC TAM down a bit, just given some of the inflationary pressures of commodities pricing, including memory. The way we are modeling the year is, let's call it, second half a bit sub-seasonal to first half, just given everything that we see. Even in that environment, with the PC market down, we believe we can grow our PC business. Our focus areas are enterprise, which is a place where we made very nice progress in 2025, and we expect that in 2026, and just continuing to grow at the premium, higher end of the market. Thank you for the question.

Analyst Joshua Buckalter (TD Cowen): I want to ask about the Instinct family. So we've seen your big GPU competitor make a deal with an SRAM-based spatial architecture provider, and then OpenAI has reportedly been linked to one as well. Could you speak to the competitive implications of that? You've done well in inferencing, I think, partly because of your leadership in HBM content. So I was wondering if you could maybe address the pull, seemingly motivated by lower latency inference, and how Instinct is positioned to service this if you're indeed seeing it as well. Thank you.

Executive Name (Title): Yeah, I think, Josh, it's really, I think, the evolution that you might expect as the AI market matures. What we're seeing is, as inference ramps, really the tokens per dollar, or the efficiency of the inference stack, becomes more and more important. As you know, with our chiplet architecture, we have a lot of ability to optimize across inference and training, and even across sort of the different stages of inference as well. I think I view this very much as, as you go into the future, you'll see more workload-optimized products, and you can do that with GPUs as well as with other more ASIC-like architectures. I think we have the full compute stack to do all of those things, and from that standpoint, we're going to continue to lean into inference, as we view that as a significant opportunity for us in addition to ramping our training capabilities. Thank you.

Analyst Ben Reitzes (Melius Research): Yeah, hey, thanks. Appreciate it. Hey, Lisa, I wanted to ask you about OpenAI. You know, I'm sure a lot of the volatility, you know, out there is not lost on you. Is everything on track for the second half for starting the six gigawatts and the three and a half year timeline as far as you know? And is there any other color that you'd just like to give on that relationship? And then I have a follow-up. Thank you.

Executive Name (Title): Yeah, I mean, I think, Ben, what I would say is, you know, we're very, very much working in partnership with OpenAI as well as our CSP partners to deliver on MI450 series and deliver on the ramp. The ramp is on schedule to start in the second half of the year. MI450 is doing great. Helios is doing well. We are in, let's call it, deep co-development across all of those parties. And as we look forward, I think we are optimistic about the MI450 ramp for OpenAI. But I also want to remind everyone that we have a broad set of customers that are very excited about MI450 series. And so in addition to the work that we are doing with OpenAI, there are a number of customers that we are working to ramp in that timeframe as well.

Analyst Ben Reitzes (Melius Research): All right, I appreciate that. And I wanted to shift to the server CPU, and just talk about x86 versus ARM. You know, there's some view out there that x86 has a particular edge in agents. Big picture, you know, do you agree with that? And what are you seeing from customers? And in particular, you know, obviously, your big competitor is going to be selling an ARM CPU separately now in the second half. So if there's just anything on that competitive dynamic versus ARM and what NVIDIA is doing and your views on that, that'd be great to hear. Thanks.

Executive Name (Title): Yeah, Ben, what I would say about the CPU market is there is a great need for high-performance CPUs right now, and that goes towards agentic workloads, where when you have these AI processes or AI agents that are spinning off a lot of work in an enterprise, they're actually going to a lot of traditional CPU tasks, and the vast majority of them are on x86 today. I think the beauty of EPYC is that we've optimized. We've done workload optimization, so we have the best cloud processor out there. We have the best enterprise processor. We also have some lower-cost variants for storage and other elements. And I think all of that comes into play as we think about the entirety of the AI infrastructure that needs to be put in place. I think CPUs are going to continue to be as important as a piece of the AI infrastructure ramp, and that's one of the things that we mentioned at our analyst day back in November, really this multi-year CPU cycle, and we continue to see that. I think we've optimized EPYC to satisfy all of those workloads, and we're going to continue to work with our customers to expand our EPYC footprint.

Analyst Tom O'Malley (Barclays): Hey, Lisa, how are you? I just wanted to ask, you mentioned memory earlier as a sticking point in terms of inflationary costs. Different customers do this in different ways. Different suppliers do this in different ways. But can you maybe talk about your procurement of memory, when that takes place, particularly on the HBM side? Is that something that gets done a year in advance, six months in advance? Different accelerator guys have talked about different timelines. We'd be curious to kind of hear when you do the procurement.

Executive Name (Title): Yeah, I mean, given the lead times for things like HBM and wafers and these parts of the supply chain, I mean, we're working closely with our suppliers over a multi-year timeframe in terms of what we see in demand, how we ramp, how we ensure that our development is very closely tied together. So I feel very good about our supply chain capabilities. We have been planning for this ramp. So independent of the current market conditions, we've been planning for a significant ramp in our both CPU as well as our GPU business over the past, you know, couple of years. And so from that standpoint, I think we're well positioned to grow substantially in 2026. And now we're also doing, you know, multi-year agreements that, you know, extend beyond that given tightness of the supply chain.

Analyst Tom O'Malley (Barclays): Just as a follow-up, you've seen a variety of different things in the industry today in terms of system accelerators: KV cache offload, more discrete ASIC-style compute, CPX. If you look at what your competitors are doing and you look at your first generation of system architecture coming out, maybe spend some time on whether you see yourself following in the footsteps of some of these different types of architectural technology changes. Do you think that you'll go in a different direction? Anything just on the evolution of your system-based architecture and then the adjoining products and/or silicon within? Thank you.

Executive Name (Title): I think, Tom, what we have is the ability, with our very flexible chiplet architecture, and then we also have a flexible platform architecture, to really have different system solutions for the different requirements. I think we're very cognizant that there will be different solutions. I've often said there's no one-size-fits-all, and I'll say that again: there's no one-size-fits-all. But that being the case, it's clear that the rack scale architecture is very, very good for the highest-end applications when you're talking about distributed inference and training. But we also see an opportunity with enterprise AI to use some of these other form factors. And so we're investing across that spectrum.

Analyst Ross Seymour (Deutsche Bank): Hi, thanks for taking my questions. My first question is back on the gross margin side of things. As you go from the MI300 to the 400 to the 500 eventually, do you see any changes in the gross margin throughout that period? In the past, you've talked about optimizing dollars more so than percentages, but just on the percentage side, do they go up, down, or is there volatility as you go from one to the next for any reason? Just wondered on the trajectory there.

Executive Name (Title): Ross, thank you for the question. At a very high level, with each generation we actually provide much more capability, more memory, to help our customers more. So in general, the gross margin should progress each generation as you offer more capabilities to your customers. But typically, at the beginning of a generation's ramp, it tends to be lower. When you get to scale, with the yield improvement, the testing improvement, and also overall performance improvement, you will see gross margin improving within each generation. So it's a kind of dynamic gross margin, but in the longer term, you should expect each generation to have a higher gross margin.

Analyst Ross Seymour (Deutsche Bank): Thanks for that, Jean. And then one small segment of your business, but it seems quite volatile, and one you talked about a little further out than you usually do, is the gaming side of things. What is the magnitude down you're talking about this year? Because in 2025, you thought it was going to be flat, and it ended up growing 50%, which was a nice positive surprise. But now that you're talking about this year being down, but then the next-gen Xbox ramping in 2027, I just hoped to get some color on what you see as kind of the annual trajectory there.

Executive Name (Title): Yeah, so Lisa can add more. So 2026, actually, is the seventh year of the current product cycle. Typically, when you're at this stage of the cycle, revenue tends to come down. We do expect the revenue on the semi-custom side to come down significantly, double digits, for 2026, as Lisa mentioned in her prepared remarks. For the next generation? Yeah, I think we'll certainly talk about that going forward, but as we ramp the new generation, you would expect a reversal of that.

Management: Thank you, everybody, for participating on the call.

Management: Operator, I think we can go ahead and close the call now. Thank you.

Management: Good evening. Thank you.

Management: And ladies and gentlemen, that does conclude the question and answer session, and that also concludes today's teleconference. You may disconnect your lines at this time, and have a great rest of the day.