Q1 2026 Earnings Call — March 4, 2026
Analyst Blaine Curtis (Jefferies): Hey, good afternoon. Thanks for taking my question. First, just a clarification, Hawk, on the greater than $100 billion. I think you said AI chips, and I want to make sure I understand the split between the ASICs and networking, and how rack revenue fits in there. And then my question: I think the biggest overhang on the group here is that your AI revenue roughly doubled in the quarter, which is about the rate cloud CapEx is growing this year. Given the outlook you have for '27, you should be a share gainer. So I'm curious about your perspective on the pessimism among investors who think the hyperscalers need to get a return on that investment this year, next year, or the year after, and how you factor that into your outlook.
Executive Hawk (Title): Well, what we have seen over the last few months, and continue to see even more of, is really not so much about hyperscalers. Our customer set, Blaine, is limited to a few players out there. Some of them are hyperscalers, some of them are not, but they all have one thing in common: they are creating LLMs, productizing them, and building platforms, whether for enterprise consumption in code assistance or agentic AI, or for the consumer subscriptions we all know about. Those few prospects, many of whom are customers now, creating generative AI and agentic AI platforms, that's our customer.
And with respect to each of those customers, we are seeing stronger and stronger demand for compute capacity for training, which is something they need constantly. But what is very, very interesting, and surprising to us, is how much of the demand is for inference, in order to productize their latest LLMs and monetize them. That inference is driving a substantial amount of compute capacity, which is great for us, because all of these players, these five, six customers of ours, are on the path to creating their own custom accelerators, and beyond that, their own architectures for networking clusters of those custom accelerators. So I think demand keeps picking up, as we've heard in announcements over the past six months. Now, to clarify your first part, Blaine: when I say we forecast, we have a line of sight, that our revenue in '27 will be significantly in excess of $100 billion, I'm focusing on the fact that this is pretty much all chips. Whether they are XPUs, switch chips, or DSPs, it is silicon content we're talking about.
Analyst Harlan Sur (JP Morgan): Yeah, good afternoon. Thank you for taking my question, and congratulations to the team on the strong results. Hawk, there's been a lot of noise around CSPs and hyperscalers embarking on their own internal XPU and TPU design efforts, right? We call it COT, or customer-owned tooling. This is not a new dynamic with ASICs, right? I think the Broadcom team has been through this COT competitive dynamic before over the 30 years that you've been a leader in the ASIC industry, and very few of these COT initiatives have ever been successful. Now in AI, some of these COT initiatives are coming to market, but it looks like they're at least 2x less performant than your current-generation solutions and 2x less complex in terms of chip design complexity, packaging complexity, and IP. So maybe a quick two-part question. Hawk, one for you: given your visibility into next year, do you see these COT initiatives, these science projects, taking any meaningful TPU or XPU share from Broadcom? And then maybe the second quick question, for either you or Charlie: given that Broadcom's TPU and XPU programs are, from a performance, complexity, and IP perspective, 12 to 18 months ahead of any of these COT programs, how does the Broadcom team widen this gap further?
Executive Hawk (Title): Well, that's a great question. And it fits into why I purposely took the time in my opening remarks to say that when any hyperscaler or LLM developer tries to become entirely self-sufficient in creating what you call a customer-owned tooling, or COT, model, they face tremendous challenges. One is technology, technology as it relates to creating the silicon chips, and particularly the XPUs, that they need to do the computing, and that is needed to optimize and run training and inference on the workloads their LLMs produce. That technology comes from different dimensions. You need the best silicon design team around, you need really cutting-edge IP, very advanced packaging, and just as much, you need to understand how to network clusters of them together. We've been doing this for more than 20 years in silicon. And in this particular space today, generative AI, if you're an LLM player trying to do your own chip, you cannot afford to have a chip that is just good enough.
You need the best chips around, because you're competing against other LLM players, and most of all, you're also competing against Nvidia, who is by no means letting down their guard. They are producing better and better chips with every passing generation. So as an LLM player trying to establish your platform in the world, you have to create chips that are competitive with, if not better than, not just Nvidia's, but those of all the other LLM platform players you're competing against. And for that, we believe, and we see this firsthand, you really need a partner in silicon with the best technology, IP, and execution around. And very modestly, I would say we are by far way out there, and we will not see competition in COT for many years to come. It will come eventually, but we're still a long way off, because the race, as we see it, continues. And one thing I'll add that is particularly unique to us: when you create the silicon, you really have to get it up and running in production at high volume very quickly. Time to market. We are very, very experienced in doing that. Anybody can design a chip in a lab that works well.
Can you produce 100,000 of those chips quickly at yields that you can afford? And we don't see too many players in the world that can do that. Charlie?
Executive Charlie (Title): I think you covered it very well, Hawk. Thank you, Hawk.
Analyst Ross Seymour (Deutsche Bank): Hi, thanks for letting me ask a question. Hawk, in your script you leaned a little more into the networking differentiation than you have in the past. So I have a short-term and a longer-term question. The short-term one is, what's driving networking up to 40% of AI revenues? And the longer-term question is, is that percentage mix changing within that $100 billion-plus? What sort of leadership do you expect to maintain in that business, whether it's scale-out or scale-up? And is your leadership position there helping on the XPU side, since you can optimize across both the compute and the networking sides?
Executive Hawk (Title): Well, let's address the first part of that fairly complex question first, Ross. Yes, in networking, especially with the new generation of GPUs and XPUs coming out, we're running at 200 gigabit per lane in terms of bandwidth. And with the Tomahawk 6 that we introduced over six months ago, in fact closer to nine months ago, we're the only one out there. Our customers and the hyperscalers want to run their clusters with the best networking and the most bandwidth available, so we are seeing huge demand for the only 100-terabit-per-second switch out there. That's driving a lot of demand. Couple that with scale-out bandwidth on optical transceivers at 1.6 terabit; we are, again, the only player out there doing DSPs at 1.6 terabit. That combination is driving the growth of our networking components even faster than our XPUs are growing, which is already pretty remarkable. So that's what you're seeing. At some point I would think these things will settle down, though we're not slowing the pace, because, as I said, next year in '27 we'll launch the next-generation Tomahawk 7 at 2x the performance, and we'll probably again be by far the first out there, and we'll continue to sustain that momentum. But at the end of the day, to answer your question, yes, I expect AI networking components to range between roughly 33% and 40% of our total AI revenue in any quarter.
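For readers checking the bandwidth arithmetic above, here is a minimal sketch. It assumes the 102.4 Tbps aggregate capacity commonly quoted for this class of switch (rounded to "100 terabit" on the call) together with the 200-gigabit SerDes lanes and 1.6-terabit optics Hawk mentions; the derived lane and port counts are not figures from the call itself.

```python
# Back-of-the-envelope switch arithmetic (illustrative; the 102.4 Tbps
# capacity is an assumption based on common figures for this switch class,
# rounded to "100 terabit" on the call).
switch_capacity_tbps = 102.4      # aggregate switching bandwidth
serdes_rate_gbps = 200            # per-lane SerDes rate cited on the call
optical_port_gbps = 1600          # 1.6T optical transceiver / DSP rate

lanes = switch_capacity_tbps * 1000 / serdes_rate_gbps
ports_1p6t = switch_capacity_tbps * 1000 / optical_port_gbps

print(f"{lanes:.0f} lanes at {serdes_rate_gbps}G")   # 512 x 200G lanes
print(f"{ports_1p6t:.0f} ports at 1.6T")             # 64 x 1.6T optical ports
```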
Analyst CJ Muse (Cantor Fitzgerald): Yeah, good afternoon. Thank you for taking the question. I'm curious how you're thinking about the move to disaggregate pre-fill and decode in the GPU ecosystem, and the impact on custom silicon demand. Are you seeing any potential changes in the relative mix between GPUs and custom silicon?
Executive Hawk (Title): I'm not sure I fully understand your question, CJ. Could you clarify what you mean by disaggregate?
Analyst CJ Muse (Cantor Fitzgerald): Sure. You know, pushing workloads off to CPX for pre-fill and working off of Grok for decode, having that kind of disaggregated world. Does that put any pressure on demand for custom silicon versus going with a full GPU stack?
Executive Hawk (Title): Okay, I get what you mean. The word disaggregation kind of threw me off. In a way, what you're really asking is how the architecture of the AI accelerator, be it GPU or XPU, is evolving as workloads evolve. That's very much what we are seeing. The one-size-fits-all of a general-purpose GPU only gets you so far. You can keep going, because you can still run different workloads, like mixture of experts, even though you want to run mixture of experts with sparse computation to be really effective. You hear the term. A GPU is designed for dense matrix multiplication, so you do it with software kernels, but that's not as effective as if you had coded it into silicon and made those XPUs purposely designed to be much more performant for mixture-of-experts workloads, say. The same applies to inference. And what that drives toward is that the designs of XPUs become much more customized for the particular workloads of particular LLM customers of ours.
And the design starts to depart from the traditional standard GPU design, which is why, as we have always indicated, XPUs will eventually be more the choice, simply because they allow the flexibility to make designs that work with particular workloads, one for training, even, and one for inference. And as you say, one would perhaps be better at pre-fill and one better at post-training, or reinforcement learning, or test-time scaling. You can tweak your GPUs, sorry, your XPUs, Freudian slip, toward the particular kind of workload or LLM that you want. And we're seeing that roadmap in all five of our customers.
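To make the dense-versus-sparse distinction concrete, here is a minimal NumPy sketch of mixture-of-experts routing. The shapes, expert count, and top-k value are purely illustrative assumptions, not anything disclosed on the call; the point is simply that a workload routing each token to a few experts does far less arithmetic than one pushed densely through every expert, which is the kind of structure a purpose-built XPU can be designed around.

```python
# Illustrative sketch (not any specific chip's design): dense computation
# versus sparse mixture-of-experts (MoE) routing. All shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_tokens, d_model, d_ff = 16, 64, 256
n_experts, top_k = 8, 2

x = rng.standard_normal((n_tokens, d_model))
experts = rng.standard_normal((n_experts, d_model, d_ff))  # one FFN matrix per expert
router = rng.standard_normal((d_model, n_experts))

# Dense view: every token visits every expert (what a dense-matmul pipeline
# effectively pays for before masking).
dense_flops = n_tokens * n_experts * d_model * d_ff * 2

# Sparse MoE: route each token to its top-k experts only.
logits = x @ router
top_experts = np.argsort(logits, axis=1)[:, -top_k:]        # (n_tokens, top_k)
sparse_out = np.zeros((n_tokens, d_ff))
for e in range(n_experts):
    routed = np.any(top_experts == e, axis=1)                # tokens sent to expert e
    if routed.any():
        sparse_out[routed] += x[routed] @ experts[e]
sparse_flops = n_tokens * top_k * d_model * d_ff * 2

print(f"dense FLOPs:  {dense_flops:,}")
print(f"sparse FLOPs: {sparse_flops:,} (~{n_experts // top_k}x fewer)")
```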
Analyst Timothy Arcuri (UBS): Thanks a lot. I just had a question on the puts and takes on gross margin as you begin to ship these racks. Obviously it's going to pull the blended margin down, but I'm wondering if there are any guardrails you can give us. It seems like the racks are maybe 45%, 50% gross margin, so should we think about that pulling gross margin down roughly 500 basis points as these racks begin to ship? And as part of that, Hawk, is there some floor to the gross margin below which you wouldn't be willing to do more racks?
Executive Hawk (Title): I hate to tell you this, but you must be hallucinating a bit. Our gross margin is solidly at the number Kirsten reports, and it will not be affected by more and more AI products going out. We've gotten our yields and our costs to the point where the margin model we have in AI will be fairly consistent with the model we have in the rest of the semiconductor business.
Executive Kirsten (Title): I would agree with that. I think, on further study, relative even to the comments I made last quarter, the impact on our overall mix is actually not going to be substantial at all, so I wouldn't worry about it.
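For context on the arithmetic being debated in this exchange, a minimal sketch of how a blended gross margin is computed. Every input below is a placeholder: the 77% chip-only margin, the 10% rack mix, and the 48% rack margin are illustrative assumptions, not figures from the call, and management's view above is that the actual impact is not substantial.

```python
# Illustrative blended gross-margin arithmetic. All inputs are placeholders,
# not figures disclosed on the call.
def blended_margin(segments):
    """segments: iterable of (revenue_share, gross_margin); shares sum to 1."""
    return sum(share * margin for share, margin in segments)

baseline = blended_margin([(1.00, 0.77)])                  # hypothetical chip-only margin
with_racks = blended_margin([(0.90, 0.77), (0.10, 0.48)])  # hypothetical 10% rack mix

print(f"dilution: {(baseline - with_racks) * 10_000:.0f} bps")  # ~290 bps in this toy case
```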
Analyst Stacy Rasgon (Bernstein): Hi, guys. Thanks for taking my question. I don't know if this is for Hawk or Kirsten, but I wanted to dig in a little more into this substantially more than $100 billion next year. I'm trying to count up the gigawatts. I counted, I don't know, eight or nine. You have three from Anthropic and one from OpenAI, so that's four. You said Meta was multiple, so at least two; that gets you to six. Google, I figure, should be bigger than Meta, so at least three; that's nine. And then you've got a few others. I thought your content per gigawatt was, call it, in a $20 billion per gigawatt range. I guess what I'm asking is, is my math around the gigawatts you plan to ship in '27 correct? And how do I think about your content per gigawatt as that ships?
Executive Hawk (Title): Well, maybe it will be, quote unquote, substantially more than $100 billion. Stacy, you have a very interesting perspective, and I have to admire you for that. But you're right, you can look at it in gigawatts, which is the right way to look at it instead of dollars, given how we sell our chips. You have to realize that, depending on the LLM customer, our six customers now, sorry, not five, six, the chip dollars per gigawatt vary, sometimes quite dramatically. They do vary. But you're right, it's not far from the dollars you're talking about. And if you look at it by gigawatts in '27, we see it getting close to 10 gigawatts.
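A small sketch of the gigawatt arithmetic in this exchange. The per-customer gigawatt counts and the roughly $20 billion-per-gigawatt content figure are the analyst's estimates, not company guidance, and Hawk's caveat is that content per gigawatt varies by customer, so the scenarios below are illustrative only.

```python
# Illustrative gigawatt arithmetic. The per-customer counts and the $/GW
# scenarios are the analyst's estimates / hypothetical values, not guidance;
# Hawk notes dollars per gigawatt vary, sometimes dramatically, by customer.
analyst_gw_estimates = {"Anthropic": 3, "OpenAI": 1, "Meta": 2, "Google": 3}
total_gw = sum(analyst_gw_estimates.values())   # 9 GW, plus "a few others";
                                                # Hawk: "close to 10 gigawatts" in '27

def implied_chip_revenue_b(gigawatts, content_per_gw_usd_b):
    """Revenue implied by shipping `gigawatts` at a given chip content per GW."""
    return gigawatts * content_per_gw_usd_b

for dollars_per_gw in (10, 15, 20):             # hypothetical $/GW scenarios
    print(f"{total_gw} GW x ${dollars_per_gw}B/GW = "
          f"${implied_chip_revenue_b(total_gw, dollars_per_gw)}B")
```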
Analyst Ben Reitzes (Melius Research): Hey, thanks. Hawk, great to be speaking with you. I wanted to ask about your commentary on supply visibility for those four major components through 2028. A, how did you do it? You're probably the first one to go out through the 2028 timeframe. And secondly, after this astounding growth in 2027 for your AI business, do you have enough visibility to grow quite a bit in 2028 based on the supply you see in that commentary? Thanks a lot.
Executive Hawk (Title): The best answer is, yeah, you're right, we anticipated this sharp, accelerated growth. Nobody could anticipate the rate of growth it's showing, but we anticipated a large part of it, and for longer than six months. We were early in being able to lock up T-glass, the infamous T-glass you've all heard about. We were very early. We've locked up substrates. We have worked with our good partners on the rest of the components we talked about. So the answer to your question is, it's partly early anticipation and partly the fact that we have very good partners out there for these key components. What else can I say except yes.
Executive Charlie (Title): Yeah, maybe just a couple of quick additions. I think you covered that piece really well. I think, Ben, the other piece that's really important, as Hawk said, is that we build custom silicon for six customers, and we have very deep, strategic, multi-year engagements with them. Because of this custom capability, they share with us exactly what they anticipate over at least the next two to three years, sometimes four years. And that's exactly why we went and secured all the elements Hawk talked about. Securing this requires investments with these partners, sometimes developing not just more capacity, but the right technology and capacity for it, so we have to go secure it for multiple years. And you're right, we're probably the first one to secure that out to '28 or beyond. And can you grow in '28 with what we see in supply? Sorry to sneak that in. Yes.
Analyst Vivek Arya (Bank of America Securities): Thanks for taking my question. Hawk, I just wanted to clarify first: the Anthropic project you're doing, the $20 billion or so for a gigawatt this year, how much of that is chips and how much is racks? I just want to understand, when you say $100 billion in chips, whether there's a distinction between chips and your rack-scale projects, because that project alone is supposed to triple. And then my question: your AI business is transitioning from one large customer, where you had kind of an exclusive partnership, to multiple customers who are using multiple suppliers. So how do you get the visibility and confidence about how your share will progress at these multiple customers? Because it's a very fragmented industry.
Executive Hawk (Title): Vivek, you have to understand one thing. First, as Charlie laid out very nicely, we only have a very few customers, to be precise, six. For the volume we're driving, the revenue we're driving, we have just six, and until recently even fewer. And number two, you also have to understand, given the dollars each of them spends and the criticality of what they're embarking on, why I use this term: Meta has MTIA, that's their custom accelerator program, and to them, as to every one of my customers in this space, it's a strategic play. It's not optionality. To them, long-term, short-term, medium-term, it is strategic. Extremely strategic. They don't stop. And each of them is very clear about where they want to position this custom silicon within the trajectory of their LLM development and the trajectory of how they develop inference for productizing those LLMs. Into that part we have very clear visibility. Anything else, on GPUs, using neoclouds, using cloud businesses, is all transactional and optionality. So, as you correctly point out, it can seem very confusing.
Trust me, it is not confusing for us, nor for those customers of ours. They're very strategic, they're very targeted, and they know exactly what they're building and how much capacity they want to build each year. The only thing they think about is whether they can do it faster. Otherwise, it's very strategic and targeted on a projected roadmap. Anything else you see in the mix is pure, I'd call it, opportunistic for these guys, the optionality.
Analyst Vivek Arya (Bank of America Securities): And on the clarification, Hawk: Anthropic, racks versus chips?
Executive Hawk (Title): Thank you. I'd rather not answer that, but we're okay. As Kirsten said, we're good on our dollars and our margin.
Analyst Tom O'Malley (Barclays): Hey, guys, thanks for taking my questions. I have one for Hawk and one for Charlie. Hawk, I know you're very specific about what you put in the preamble, and you noted that customers are staying with direct attached copper through 400-gig SerDes. Is there any reason you're pointing that out in particular, especially as a leading pioneer in CPO? And then on Charlie's side, as you're adding more customers here, I would imagine customers who design ASICs with you are going to use scale-up Ethernet. Maybe talk about scale-up protocols and how you see Ethernet developing there as well.
Executive Hawk (Title): Okay, no, I'm just highlighting the fact that in networking, our technology very uniquely positions us to help our customers, and more than our customers, even customers using general-purpose GPUs, not just XPUs. If you are trying to create LLMs and you are building your own AI data centers, designing them, architecting them, you truly want larger and larger domains, or clusters, and you really want to connect XPUs to XPUs directly where you can. The best way to do that is direct attached copper. That's the lowest latency, lowest power, and lowest cost. So you want to keep doing that, especially in scale-up, as long as possible. In scale-out we're past that; we use optical, and that's fine. But I'm talking about scale-up, within a rack, within a cluster domain, where you really want to use direct attached copper as long as you can. And based on the technology Broadcom has, especially for connecting XPU to XPU, or even GPU to GPU, we can do it with copper, and we can push the envelope from 100G to 200G to even 400G. We have studies now running 400G that can drive the distance across a rack on copper. All I'm trying to say is that you don't need to go running after some bright shiny object called CPO, even though we are the leader in CPO. CPO will come in its time, not this year, maybe not next year, but in its time.
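As a rough illustration of why the per-lane rate matters for a copper scale-up domain, here is a sketch with entirely hypothetical cluster parameters; only the 100G, 200G, and 400G lane rates come from Hawk's remarks, and the XPU count and lanes per XPU are invented for illustration.

```python
# Hypothetical scale-up domain sketch. Only the 100G/200G/400G lane rates are
# from the call; the accelerator count and lanes-per-XPU are illustrative.
xpus_per_rack = 64        # hypothetical scale-up domain size
lanes_per_xpu = 72        # hypothetical copper scale-up lanes per accelerator

for lane_gbps in (100, 200, 400):
    per_xpu_tbps = lanes_per_xpu * lane_gbps / 1000
    rack_tbps = xpus_per_rack * per_xpu_tbps
    print(f"{lane_gbps}G lanes: {per_xpu_tbps:.1f} Tb/s per XPU, "
          f"{rack_tbps:.0f} Tb/s aggregate across the domain")
```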
Executive Charlie (Title): Yeah, well said, Hawk. On the question of Ethernet: with the debut of the cloud, Ethernet became the de facto standard in every cloud for the last two decades. Then, with the debut of back-end networks, as Hawk articulated, there was, two years ago, a big fight about what protocol should be used to achieve the latency and the scale necessary on scale-out. The industry at the time, 24 months ago, was not clear. We were clear, very clear actually, about what the answer should be. And again, because of the deep engagements with our partners, they made it very clear to all of us and to the industry, GPU or XPU, that Ethernet is the scale-out fabric of choice. Checkmark. Today, everyone is talking about scaling out with Ethernet. Now, when it comes to scale-up, yes, exactly like what happened a couple of years ago on scale-out, the question now is what's the right answer. And what we're hearing consistently, and what we're seeing, is that the right answer is Ethernet. As you know, last year we announced with multiple hyperscalers and many of our peers in the semiconductor industry that Ethernet scale-up is the right choice. That's what we believe will happen. Time will tell, but for a lot of the XPU designs we're doing, we're being asked to scale up through Ethernet, and we're happy to enable that.
Analyst Jim Schneider (Goldman Sachs): Good afternoon. Thanks for taking my question. Hawk, it was helpful to hear you discuss the progress of your other full custom XPU engagements outside of TPUs. As we look into next year, is it fair to assume those are mostly targeting inference applications, or not? And could you maybe speak qualitatively to the performance or cost advantages relative to GPUs that give those customers the ability to forecast at such a large scale? Thank you.
Executive Hawk (Title): Thanks. You know, most of our customers begin with inference, simply because that tends to be the easiest path to start on, not necessarily for any reason other than the fact that inference takes less compute. But then the question is, do you need these general-purpose, massive dense-matrix-multiplication GPUs when you can do the job more efficiently and effectively with custom inference silicon, XPUs, that do it better, or just as well, at much lower cost and lower power? That's where we find these customers starting. But they are now in training too, and many of our XPUs are used both in training and in inference. And by the way, they are interchangeable, just as a GPU can be used not just for training, which it is perhaps more perfectly suited to, but also for inference. What we're seeing is that our XPUs are used for both. And we're also seeing, very rapidly, for those customers who are more mature in the progression I talked about, in their journey toward full XPU adoption, that they will start to develop two chips each year simultaneously, one for training and one for inference, each specialized. Why? Because what we're seeing very clearly with these LLM players is that you do the training to achieve a higher level of intelligence, of smarts, for your LLM. Great, you get yourself a great LLM, state of the art or better. Now you've got to productize it, which means inference.
Well, you can't wait until then to decide, once you've got your model going as the best, because if you only decide then to do your inference productization, it will take you at least a year to productize, at which point somebody else will have created an LLM better than yours. So there's a leap of faith here: when you do training to create the next level of superintelligence in your LLM, you have to be investing simultaneously in inference, both in terms of the chip and in terms of the capacity. So our visibility keeps coming out better and better as these six customers get more mature in their progression toward better and better LLMs. So yes, that is the trend we are seeing. It's not happening with all six customers yet, but we are seeing a majority of them headed that way right now.
Analyst Joshua Buckhalter (TD Cowen): Hey, guys. Thanks for taking my question, and congrats on the results. I appreciate all the details on the expected deployments at specific customers. I was hoping you could reflect on how visibility has changed over the last one to two quarters to give you the confidence to provide more details. And then a specific one: you mentioned greater than a gigawatt for OpenAI in 2027. With that deal being for 10 gigawatts through 2029, that implies a pretty sharp inflection in 2028. Is that the right way to think about it, and was that always the plan?
Executive Hawk (Title): Yes. Well, as you've all seen, and as you all know, in this generative AI race we're in now, and I shouldn't use the word race, let's call it a progression among the few players we see here. I mean, it's a competition. Each is trying to create an LLM better than the others and more tailored for a specific purpose, be it enterprise, be it consumer, be it search. Each one is trying to create more and more. And all of that requires not just training, which is important for keeping your LLM models improving, but inference for the productization and monetization of your LLMs. And we are getting, probably because we've been engaged with some of them now for more than a couple of years, better and better visibility as they gain more and more confidence that the XPUs they are working on with us are achieving what they're aiming at. As they get the sense that this XPU silicon, together with the software and the algorithms they need, is what they need, our visibility gets better and better, as Charlie put it perfectly.
Because at the end of the day, we only have six guys to work with. And these six guys all, as I said, look at XPUs and AI in a very strategic manner. They don't think one generation at a time; they think multiple generations, multiple years. And in spite of all the hubris and noise out there about what's available, they think very long term about how they deploy the XPUs they develop with us, how they deploy them in achieving the better and better LLMs they want to create, and more than that, how they deploy them in monetization. So we are part of their strategic roadmap. We are not just optionality, as in, oh, shall I use a GPU, shall I use the cloud because I need to train for six months? No, this is more than that. The investments these guys are making are long-term, and it's great to be part of that long-term roadmap as opposed to a transactional one. And the noise, as I said in answering an earlier question, is that a lot of commentary mixes up short-term transactions with what is long-term strategic positioning of our business and our products. To sum it all up, I think our business in XPUs is a strategic, sustainable play with all six of the customers we have today.
Operator (Cherie): Thank you. That is all the time we have for Q&A today. I would now like to turn the call back over to GU for any closing remarks.
Executive GU (Title): Thank you, Cherie. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2026 after the close of market on Wednesday, June 3rd, 2026. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Cherie, you may end the call.
Operator (Cherie): This concludes today's program. Thank you all for participating. You may now disconnect.