Earnings Call Transcripts

Broadcom Inc.

AVGO
Quarter 1

Q1 2026 Earnings Call — March 4, 2026

Analyst Blayne Curtis (Jefferies): Hey, good afternoon. Thanks for taking my question. Just a clarification first, Hawk, on the greater than $100 billion. I think you said AI chips. I just want to make sure you're clarifying the difference between the ASICs and networking, and I didn't know how rack revenue fits in there. And then the question: I think the biggest overhang on the group here is that your AI revenue roughly doubled in the quarter, which is about what cloud capex is growing this year. Given the outlook that you have for 27, you should be a share gainer. So I'm just curious about your perspective on the pessimism among investors who think the hyperscalers need to get a return on investment this year or next year, or if not the year after, and how you factor that into your outlook.

Executive Hawk (Title): Well, what we have seen over the last few months, and continue to see even more of, is, and it's really not so much about hyperscalers, Blayne, our customers are limited to those few players out there. Some of them are hyperscalers, some of them are not, but they all have one thing in common, which is to create LLMs, productize them, and generate platforms, be it for enterprise consumption in code assistance or agentic AI, or be it for consumer subscription that we know about. Whatever it is, it is that set of few prospects, many of whom are customers now, who are creating this, whether it's generative AI or agentic AI, but creating a platform. That's our customer.

And with respect to each of those guys, we are seeing stronger and stronger demand for compute capacity. For training, which is something they do need constantly, but what is very, very interesting and surprising to us is very much for inference, in order to productize the latest LLMs they create and monetize them. And that inference is driving a substantial amount of compute capacity, which is great for us, because all these players, these five, six customers of ours, are on the path to creating their own custom accelerators, and beyond that, their own design architecture of networking clusters of those custom accelerators. So I think we're going to see demand keep picking up, as we've heard in announcements over the past six months. Now, to clarify your first part, Blayne: when I say we forecast, we have a line of sight, that our revenue in 27 will be significantly in excess of $100 billion, I'm focusing on the fact that these are pretty much all based on chips. Whether they are XPUs, whether they are switch chips, DSPs, this is silicon content we're talking about.

Analyst Harlan Sur (JP Morgan): Yeah, good afternoon. Thank you for taking my question, and congratulations to the team on the strong results. Hawk, you know, there's been a lot of noise around CSPs and hyperscalers embarking on their own internal XPU, TPU design efforts, right? We call it COT, or customer-owned tooling. This is not a new dynamic with ASICs, right? I think the Broadcom team has been through this COT competitive dynamic before over the 30 years, right, that you've been a leader in the ASIC industry, and very few of these COT initiatives have ever been successful. Now in AI, some of these COT initiatives are coming to the market, but it looks like they're at least 2x less performant than your current-generation solutions, and 2x less complex in terms of chip design complexity, packaging complexity, and IP. So maybe just a quick two-part question. Hawk, one for you: given your visibility into next year, do you see these COT science projects taking any meaningful TPU, XPU share from Broadcom? And then maybe the second quick question, for either you or Charlie: given that Broadcom's TPU, XPU programs are, from a performance, complexity, and IP perspective, 12 to 18 months ahead of any of these COT programs, how does the Broadcom team widen this gap further?

Executive Hawk (Title): Well, that's a great question. And it fits into why I purposely took the time in my opening remarks to say that when any hyperscaler or LLM developer tries to become entirely self-sufficient in creating what you call a customer-owned tooling, or COT, model, they face tremendous challenges. One is technology, as it relates to creating the silicon chips, particularly the XPUs, that they need to do the computing, and that are needed to optimize and run the training and inference on the workloads they produce, or the LLM. That technology we talked about comes from different dimensions. You need the best silicon design team around, you need really cutting-edge SerDes, very advanced packaging, and just as much, you need to understand how to network clusters of them together. We've been doing this for 20 years, more than 20 years, in silicon. And in this particular space today, in generative AI, if you're trying, as an LLM player, to do your own chip, you cannot afford to have a chip that is just good enough.

You need the best chips that are around, because you're competing against other LLM players and, most of all, you're also competing against Nvidia, who is by no means letting down their guard. They are producing better and better chips with every passing generation. So as an LLM trying to establish your platform in the world, you have to create chips that are competitive with, if not better than, those of not just Nvidia but all the other LLM platform players that you're competing against. And for that, you really need, in our belief, and we see that firsthand, a partner in silicon with the best technology, IP, and execution around. And very modestly, I would say we are by far way out there, and we will not see competition in COT for many years to come. It will come eventually, but we're still a long way off, because the race, which we see, continues. And one thing I'd add in there that is particularly unique to us: when you create the silicon, you really have to get it up and running in high volume in production very quickly, time to market. We are very, very experienced in doing that. Anybody can design a chip in a lab that works well.

Can you produce 100,000 of those chips quickly at yields that you can afford? And we don't see too many players in the world that can do that. Charlie?

Executive Charlie (Title): I think you covered it very well, Hawk. Thank you, Hawk.

Analyst Ross Seymour (Deutsche Bank): Hi, thanks for letting me ask a question. Hawk, in your script, you leaned a little bit more into the networking differentiation than you have in the past. So I guess a short-term and a longer-term question. The short-term one is, what's driving that up to 40% of the AI revenues? And the longer-term question is, is that percentage mix within that $100 billion plus changing now? What sort of leadership do you expect to maintain in that business, whether it's scale-out or scale-up? And is your leadership position there helping on your XPU side, as you can optimize across both the compute and the networking sides?

Executive Hawk (Title): Well, let's address the first part of that fairly complex question first, Ross. Yes, in networking, especially with the new generation of GPUs and XPUs that are coming out there, we're running at 200 gigabit in terms of bandwidth. And with the Tomahawk 6 that we introduced over six months ago, in fact closer to nine months ago, we're the only one out there. And our customers and the hyperscalers want to run with the best networking and with the most bandwidth out there for their clusters. So we are seeing huge demand for this 102.4-terabit-per-second switch, the only one of its kind out there. So that's driving a lot of demand. And couple that with running bandwidth on scale-out optical transceivers at 1.6 terabit. We are, again, the only player out there doing DSPs at 1.6 terabits. That combination is driving, I would say, the growth of our networking components even faster than our XPUs are growing, which is already pretty remarkable. So that's what you're seeing. But at some point, I would think these things will settle down, though we're not slowing down the pace, because as I said, next year in 27 we'll launch the next-generation Tomahawk 7, 2x the performance, and we'll probably be by far the first out there, and then we'll continue to sustain that momentum. But at the end of the day, to answer your question, yeah, I expect that as a composition of our total AI revenue in any quarter, we'll be ranging between probably 33% to 40% AI networking components.

Analyst CJ Muse (Cantor Fitzgerald): Yeah, good afternoon. Thank you for taking the question. I'm curious, you know, how are you thinking about the move to disaggregate pre-fill and decode in the GPU ecosystem, and the impact on custom silicon demand? Are you seeing any potential changes in sort of the relative mix between GPUs and custom silicon?

Executive Hawk (Title): I'm not sure I fully understand your question, CJ. Could you clarify what you mean by disaggregate?

Analyst CJ Muse (Cantor Fitzgerald): Sure, you know, pushing off workloads to CPX for pre-fill and working off of Grok for decode, and, you know, having that disaggregated kind of world. And does that put, you know, any pressure in terms of the demand for custom versus going with, you know, a full GPU stack?

Executive Hawk (Title): Okay, I get what you mean. The word disaggregation kind of threw me off. In a way, what you're really asking is how the architecture of the AI accelerator, be it GPU or XPU, is evolving as workloads start to evolve. That's what we are seeing very much in particular. The one-size-fits-all of a general-purpose GPU gets you only so far. You can still keep going, because you can still run different workloads, like mixture of experts, even though you want to run mixture of experts with sparse compute to be very effective, you hear the term, but a GPU is designed for dense matrix multiplication. So you do it with software kernels, but it's not as effective as if you had coded it in silicon and made those XPUs purposely designed to be much more performant for, say, mixture-of-experts workloads. The same applies to inference. And what that drives down to is you start to see designs of XPUs become much more customized for the particular workloads of particular LLM customers of ours.

And the design starts to depart from the traditional standard GPU design, which is why, as we have always indicated before, XPUs will eventually be more the choice, simply because they allow flexibility in making designs that work with particular workloads, one for training even, and one for inference. And as you say, one perhaps would be better at pre-fill, and one better at post-training or reinforcement learning or test-time scaling. You can tweak your GPUs, sorry, Freudian slip, your XPUs, toward a particular kind of workload or LLM that you want. And we're seeing that roadmap in all our five customers.

Analyst Timothy Arcuri (UBS): Thanks a lot. I had just a question on sort of the puts and takes on gross margin as you begin to ship these racks. I mean, obviously it's going to pull the blended margin down, but I'm wondering if there's any guardrails you can give us on this. It seems like the racks are maybe 45, 50% gross margin. So I guess, should we think about that pulling gross margin down like 500 basis points roughly as these racks begin to ship? And I guess, you know, part of that, Hawk, is there some like floor to the gross margin, you know, below which you wouldn't be willing to do, you know, more racks?
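As a back-of-the-envelope check on the dilution math embedded in the question: a blended gross margin is just a revenue-weighted average. The 45 to 50 percent rack margin, the corporate base margin, and the 80/20 revenue split below are illustrative assumptions taken from or consistent with the analyst's framing, not company figures, and Hawk and Kirsten push back on the premise in their answers.

```python
def blended_gross_margin(mixes_and_margins):
    """Weighted-average gross margin across revenue streams.

    mixes_and_margins: list of (revenue_share, gross_margin) tuples,
    where the revenue shares sum to 1.0.
    """
    return sum(share * margin for share, margin in mixes_and_margins)

# Illustrative only: assume a ~77% margin on 80% of revenue and the
# analyst's ~47.5% rack margin on the remaining 20% (all assumptions).
base = blended_gross_margin([(1.0, 0.77)])
with_racks = blended_gross_margin([(0.80, 0.77), (0.20, 0.475)])
dilution_bps = (base - with_racks) * 10_000
print(round(dilution_bps))  # ~590 bps under these assumed weights
```

Under these assumed weights, a 20% rack mix dilutes the blend by roughly 590 basis points, which is the scale of the "500 basis points roughly" the question floats; a smaller rack mix shrinks the effect proportionally.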

Executive Hawk (Title): I hate to tell you that you must be hallucinating a bit. Our gross margin is solidly at the number Kirsten reports. It will not be affected by more and more AI products going out. We've gotten our yields, we've gotten our costs, to the point where the model we have in AI will be fairly consistent with the model we have in the rest of the semiconductor business.

Executive Kirsten (Title): I would agree with that. I think, on further study, relative even to the comments that I made last quarter, the impact relative to our overall mix is actually not going to be substantial at all, so I wouldn't worry about it.

Analyst Stacy Rasgon (Bernstein): Hi, guys. Thanks for taking my question. I don't know if this is for Hawk or Kirsten, but I wanted to dig in a little more to this substantially more than $100 billion next year. I'm trying to just count up the gigawatts. I counted, I don't know, eight or nine. You have three from Anthropic, one from OpenAI, so that's four. You said Meta was multiple, so at least two. That gets you to six. Google, I figure, should be bigger than Meta, so at least three. You know, that's nine. And then you've got a few others. I thought that your content per gigawatt was sort of, you know, call it in a $20 billion per gigawatt range. I guess what I'm asking is, is my math around the gigawatts you plan to ship in 27 correct? And how do I think about your content per gigawatt as that ships?

Executive Hawk (Title): Um, maybe it will be, quote unquote, substantially more than a hundred billion. Stacy, you have a very interesting perspective, and I've got to admire you for that. But you're right, you can look at it in gigawatts, which is the right way to look at it instead of dollars, because that's how we sell our chips. So you have to realize that depending on the LLM customer, our six customers now, sorry, not five, six, the chip dollars per gigawatt vary, sometimes quite dramatically. It does vary. But you're right, it's not far from the dollars you're talking about. And if you look at it by gigawatt in 27, we are seeing it getting close to 10 gigawatts.
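Stacy's tally can be reproduced directly. The per-customer gigawatt counts and the $20 billion-per-gigawatt content figure are the analyst's estimates from the question; Hawk confirms only that the aggregate is "getting close to 10 gigawatts" and that dollars per gigawatt vary by customer:

```python
# Analyst's rough 2027 gigawatt estimates per customer (assumptions
# from the question, not company guidance).
gigawatts_2027 = {
    "Anthropic": 3,   # "three from Anthropic"
    "OpenAI": 1,      # "one from OpenAI"
    "Meta": 2,        # "multiple", so at least two
    "Google": 3,      # "should be bigger than Meta", so at least three
}
content_per_gw_billions = 20  # analyst's ~$20B-per-gigawatt estimate

total_gw = sum(gigawatts_2027.values())  # 9 GW before "a few others"
implied_revenue_billions = total_gw * content_per_gw_billions
print(f"{total_gw} GW -> ~${implied_revenue_billions}B implied chip content")
```

Nine gigawatts at roughly $20 billion each already implies on the order of $180 billion of chip content before the remaining customers, which is why the exact dollars-per-gigawatt figure, which Hawk says varies "sometimes quite dramatically," is the swing factor in the "substantially more than $100 billion" outlook.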

Analyst Ben Reitzes (Melius Research): Hey, thanks. Hawk, great to be speaking with you. I wanted to ask you about your commentary about supply visibility on those four major components through 2028. You know, A, how'd you do it? You're probably the first one to go out through the 2028 timeframe. And secondly, after this astounding growth in 2027 for your AI business, do you have enough visibility to grow quite a bit in 2028, based on the supply that you see in that commentary? Thanks a lot.

Executive Hawk (Title): The best answer is, yeah, you're right. We anticipated this sharp, accelerated growth. Now, nobody can anticipate the rate of growth it's showing, but we did anticipate a large part of it, I guess, for longer than six months. We were early in being able to lock up T-Glass, the infamous T-Glass you all heard about. We were very early. We've locked up substrates. We have worked with our good partners on the rest of the stuff we talked about. And so the answer to your question is, it's somewhat anticipating early, plus the fact that we have very good partners out there in these key components. What else can I say except yes.

Executive Charlie (Title): Yeah, just maybe a couple of quick ones. I think you covered that piece really well. I think, Ben, the other piece that's really important, as Hawk said, is that we build custom silicon for six customers. We have very deep, strategic, multi-year engagements with them. Because of this custom capability, they share with us exactly what they anticipate, at least over the next two to three years, sometimes four years. And so that's exactly why we went and secured all the elements Hawk talked about. And when we secure this, it requires investments with these partners, sometimes developing not just more capacity, but the right technology and capacity for that. So we have to go secure it for multiple years, and you're right, we're probably the first one to secure that up to 28 or beyond. And can you grow in 28 with what you see in supply? Sorry to sneak that in. Yes.

Analyst Vivek Arya (Bank of America Securities): Thanks for taking my question. Hawk, I just wanted to first clarify: the Anthropic project you're doing, the 20 billion or so for a gigawatt this year, how much of that is chips and how much of that is racks? I just wanted to understand, when you say 100 billion in chips, is there a distinction between chips versus your rack-scale projects? Because just that project is supposed to triple. And then my question is, you know, your AI business is transitioning from kind of one large customer, where you had kind of an exclusive partnership, to now multiple customers who are using multiple suppliers. So how do you get the visibility and the confidence about, you know, how your share will progress at these multiple customers? Because, you know, it's a very kind of fragmented industry.

Executive Hawk (Title): Vivek, you have to understand one thing. First, as Charlie put it very nicely, we only have very few customers, to be precise, six. For the volume we're driving, the revenue we're driving, we have just six. Prior to that, even fewer, recently. And number two, you also have to understand, with the dollars each of them spends and the criticality of what they're embarking on, why I throw out this term. Meta has MTIA. That's their custom accelerator program. To them, as to every one of my customers in this space, it's a strategic play. It's not optionality. To them, long-term, short-term, medium-term, it is strategic. Extremely strategic. They don't stop. And each of them is very clear about where they want to position this custom silicon within the trajectory of their LLM development and the trajectory of how they develop inference for productizing those LLMs. On that part we have very clear visibility. Anything else, on GPUs, using neoclouds, using cloud business, these are all transactional, optionality. So, as you point out very correctly, it seems very confusing.

Trust me, not for us, nor for those customers we have. They're very strategic, they're very targeted, and they know exactly what they're building up and how much capacity they want to build up each year. And the only thing they think about is whether they can do it faster. Otherwise, it's very strategic and targeted on a projected roadmap. Anything else you see in the mix is pure, I call it, opportunistic for these guys, the optionality. So it's very clear.

Analyst Vivek Arya (Bank of America Securities): And on the clarification, Hawk: Anthropic, racks versus chips?

Executive Hawk (Title): Thank you. I'd rather not answer that, but we're okay. As Kirsten said, we're good on our dollars and margin.

Analyst Tom O'Malley (Barclays): Hey, guys, thanks for taking my questions. I have one for Hawk and one for Charlie. So, Hawk, I know you're very specific about what you put in the preamble, and you noted that customers are staying at direct-attached copper through 400-gig SerDes. Is there any reason you're pointing that out in particular, especially as a leading pioneer in CPO? And then on Charlie's side, as you're adding more customers here, I would imagine customers designing ASICs with you are going to use scale-up Ethernet. Maybe talk about scale-up protocols and how you see Ethernet developing there as well.

Executive Hawk (Title): Okay, no. I'm just highlighting the fact that on networking, our technology is really very, very uniquely positioning us to help our customers, and more than our customers, even customers using general-purpose GPUs, not just XPUs. Which is that, you know, if you are running and trying to create LLMs, and creating your own AI data centers, designing them, architecting them, you truly want larger and larger domains, or clusters, and you really want to connect XPUs to XPUs directly where you can. And the best way to do that is to use direct-attached copper. That's the lowest latency, lowest power, and lowest cost. So you want to keep doing that, especially in scale-up, as long as possible. In scale-out we're past that, we use optical, and that's fine. But I'm talking about scale-up, in a rack, in a cluster domain, where you really want to use direct-attached copper as long as you can. And based on the technology that Broadcom has, especially in connecting XPU to XPU, or even GPU to GPU, we can do it with copper, and we can push the envelope from 100G to 200G and even to 400G. We have SerDes now running 400G that can drive the distances on a rack to run copper. All I'm trying to say is you don't need to go running into some bright shiny object called CPO, even as we are the leader in CPO. CPO will come in its time, not this year, maybe not next year, but in its time.

Executive Charlie (Title): Yeah, well said, Hawk. And on the question of Ethernet: with the debut of the cloud, Ethernet became the de facto standard in every cloud for the last two decades. If you look at the debut of the back-end networks, as Hawk articulated, there was, two years ago, a big fight about what protocol should be used to achieve the latency and the scale necessary on scale-out. And the industry at the time, 24 months ago, was not clear. We were clear. We were very clear, actually, about what the answer should be. And again, because of the deep engagements with our partners, they made it very clear to all of us and the industry, GPU or XPU, that Ethernet is the scale-out of choice. Checkmark. Today, everyone is talking about scaling out with Ethernet. Now, when it comes to scale-up, yes, exactly like what happened three, four years ago on scale-out, the question now is, what's the right answer for this? And what we're hearing consistently, and what we're seeing, is that the right answer is Ethernet. As you know, last year we announced with multiple hyperscalers and many of our peers in the semiconductor industry that Ethernet scale-up is the right choice. That's what we believe will happen. Time will tell, but a lot of the XPU designs we're doing, we're being asked to scale up through Ethernet, and we're happy to enable that.

Analyst Jim Schneider (Goldman Sachs): Good afternoon. Thanks for taking my question. Hawk, it was helpful to hear you discuss the progress of your other full-custom XPU engagements outside of TPUs. As we look into next year, is it fair to assume that those are mostly targeting inference applications, or not? And then could you maybe qualitatively speak to the performance or cost advantages relative to GPUs that give those customers the ability to forecast at such a large scale? Thank you.

Executive Hawk (Title): Thanks. You know, most of our customers begin with inference, simply because that tends to be the easiest path to start on, not necessarily for any reason other than the fact that when you do inference, it's less compute. But also, then, the question is, do you need these general-purpose, massive dense-matrix-multiplication GPUs when you can do it more efficiently and effectively with custom inference silicon, XPUs that do the job better, or just as well at much lower cost and lower power? And that's where we find these customers starting. But they are now in training, and many of our XPUs are used both in training as well as inference. And by the way, they are interchangeable, just as a GPU can be used not just for training, which it is perhaps more perfectly suited to, but also for inference. What we're seeing is our XPUs are used for both. And we're seeing that going on. But we're also seeing, very rapidly, for those customers who are much more mature in the progression I talked about, in their journey towards complete XPU adoption, that they will start to develop two chips each year simultaneously, one for training and one for inference, to be specialized. Why? Because what we're seeing very clearly for these LLM players is that you do the training to achieve a higher level of intelligence, smarts, for your LLM. So great, you get yourself a great LLM, state of the art or more. Now you've got to productize it, which means inference.

Well, you can't wait to decide at that point, once you've got your model going as the best, because if you decide then to do your inference productization, it'll take you at least a year to productize, at which time somebody else is going to create an LLM better than yours. So there's a leap of faith here: when you do training to create the next level of super intelligence in your LLM, you have to be investing simultaneously in inference, both in terms of the chip and the capacity. So our visibility is really coming out better and better as we find those six customers getting more mature in their progression towards better and better LLMs. So yeah, that is the trend we are seeing. It's not happening with all our six customers yet, but we are seeing a majority of them headed that way right now.

Analyst Joshua Buchalter (TD Cowen): Hey, guys. Thanks for taking my question, and congrats on the results. I appreciate all the details on the expectations for deployments at specific customers. I was hoping you could maybe reflect on how visibility has changed over the last one to two quarters that gave you the confidence to give us more details. And then on a specific one: you mentioned greater than a gigawatt for OpenAI in 2027. With that deal being for 10 gigawatts through 2029, that implies a pretty sharp inflection, I guess, in 2028. Is that the right way to think about it? And was that sort of always the plan?

Executive Hawk (Title): Yes. Well, as you've all seen, and you all know, in this generative AI race that we are in now, and I shouldn't use the word race, let's call it a progression among the few players we see here. I mean, it's a competition. Each is trying to create an LLM better than the others and more tailored for specific purposes, be they enterprise, be they consumer, be they search. Each one is trying to create more and more. And all of that requires not just training, which is important to keep improving your LLM models, but inference, for productization and monetization of your LLMs. And, probably call it the fact that we've been engaged with some of them now for more than a couple of years, we're getting better and better visibility as they gain more and more confidence that the XPUs they are working on with us are achieving what they're getting at. As they get the sense that the XPUs they are working on, with the software, with the algorithms they need, that this XPU silicon is what they need, it gets better and better. And as it gets better, we get more visibility, as Charlie put it perfectly.

Because at the end of the day, we only have six guys to work with. And these six guys all, as I said, look at XPUs and AI in a very strategic manner. They don't think one generation at a time. They think multiple generations, multiple years. And in spite of all the hubris and noise out there on what's available, they think very long-term on how they deploy the XPUs they develop with us, how they deploy them in achieving the better and better LLMs that they want to create, and more than that, how they deploy them in monetizing. So we are part of their strategic roadmap. We are not in just the optionality of, oh, shall I use a GPU, shall I use the cloud, because I need to train for six months. No, this is more than that. The investment these guys are making is long-term, and it's great to be part of that long-term roadmap as opposed to a transactional roadmap. And as I answered in an earlier question, there's a lot of noise that mixes up short-term transactions with what is long-term strategic positioning of our business and our product. And to sum it all up, I think our business in XPUs is a strategic, sustainable play for all the six customers we have today.

Operator (Cherie): Thank you. That is all the time we have for Q&A today. I would now like to turn the call back over to GU for any closing remarks.

Executive GU (Title): Thank you, Cherie. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2026 after the close of market on Wednesday, June 3rd, 2026. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Cherie, you may end the call.

Operator (Cherie): This concludes today's program. Thank you all for participating. You may now disconnect.

Quarter 2

Q4 2025 Earnings Call — December 11, 2025

Analyst Vivek Arya (Bank of America): Thank you. I just wanted to clarify, Hawk, you said $73 billion over 18 months for AI. That's roughly $50-ish billion-plus for fiscal 26 for AI. I just wanted to make sure I got that right. And then the main question, Hawk, is that there is sort of this emerging debate about customer-owned tooling, your ASIC customers potentially wanting to do more things on their own. How do you see your XPU content and share at your largest customer evolve over the next one or two years? Thank you.

Executive Hawk (Title): Well, to answer your first question, what we said is correct: as of now, we have $73 billion of backlog in place, of XPUs, switches, DSPs, and lasers for AI data centers, that we anticipate shipping over the next 18 months. And obviously, this is as of now. We fully expect more bookings to come in over that period of time, so don't take that 73 as the revenue we will ship over the next 18 months. I'm just saying we have that now, and the bookings have been accelerating. And frankly, we see those bookings not just in XPUs, but in switches, DSPs, all the other components that go into an AI data center. We have never seen bookings of the nature of what we have seen over the past three months, particularly with respect to Tomahawk 6 switches. This is one of the fastest-growing products in terms of deployment that we've ever seen of any switch product we've put out there. It is pretty interesting, and partly because it's the only one of its kind out there at 102.4 terabits per second. And that's the exact product needed to expand the clusters of the latest GPUs and XPUs out there. So that's great.

But as far as what is the future of the XPU, your broader question: my answer to you is don't follow what you hear out there as gospel. It's a trajectory. It's a multi-year journey. And many of the players, and there are not too many players doing LLMs, want to do their own thing, a custom AI accelerator, for very good reasons. You can put in hardware what, if you use a general-purpose GPU, you can only do in kernels and software. You can achieve so much better performance in a custom, purpose-designed, hardware-driven XPU. And we see that in the TPU, and we see that in all the accelerators we are doing for our other customers. Much, much better in areas of sparse compute, training, inference, reasoning, all that stuff. Now, will that mean that over time they all want to go do it themselves? Not necessarily. Because the technology in silicon keeps updating, keeps evolving, and if you are an LLM player, where do you put your resources in order to compete in this space? Especially when you have to compete, at the end of the day, against merchant GPUs that are not slowing down in their rate of evolution.

So I see this concept of customer-owned tooling as an overblown hypothesis, which, frankly, I don't think will happen. Thank you.

Analyst Ross Seymour (Deutsche Bank): Hi, thanks for the question. I wanted to go to something you touched on earlier about the TPUs going a little bit more to like a merchant go to market to other customers. Do you believe that's a substitution effect for customers who otherwise would have done ASICs with you, or do you think it's actually broadening the market? And so what are kind of the financial implications of that from your perspective?

Executive Hock (Title): That's a very good question, Ross. What we see right now, the most obvious move, is that for the people who use TPUs, the alternative is merchant GPUs. That's the most common thing that happens. Because to do that substitution for another custom chip is different. To make an investment in a custom accelerator is a multi-year journey; it's a strategic, directional thing. It's not necessarily a very transactional or short-term move. Moving from GPU to TPU is a transactional move. Going to an AI accelerator of your own is a long-term strategic move, and nothing will deter you from continuing to make that investment toward the end goal of successfully creating and deploying your own custom AI accelerator. So that's the motion we see. Thank you.

Analyst Harlan Sur (JP Morgan): Yeah, good afternoon. Thanks for taking my question, and congratulations on the strong results, guidance, and execution. I just want to verify this, right? Total AI backlog of $73 billion over the next six quarters, right? This is just a snapshot of your order book right now. But given your lead times, I think customers can and still will place orders for AI in quarters four, five, and six. So as time moves forward, that backlog number, for more shipments in the second half of 2026, will probably still go up, right? Is that the correct interpretation? And then, given the strong and growing backlog, the question is: does the team have three-nanometer and two-nanometer wafer supply, CoWoS substrate, and HBM supply commitments to support all of the demand in your order book? And I know one of the areas where you are trying to mitigate this is in advanced packaging, right? You're bringing up your Singapore facility. Can you just remind us what part of the advanced packaging process the team is focusing on with the Singapore facility? Thanks.

Executive Hock (Title): Well, to answer your first, simpler question, you're right. You can say that $73 billion is the backlog we have today to ship over the next six quarters. You might also say that, given our lead time, we expect more orders to come in and be absorbed into that backlog for shipments over the next six quarters. So taking that, we expect a minimum revenue, one way to look at it, of $73 billion over the next six quarters, but we do expect much more as more orders come in for shipments within the next six quarters. The lead time, depending on the particular product, can be anywhere from six months to a year. With respect to supply chain, what you're asking is about critical supply chain on silicon and packaging. Yeah, that's an interesting challenge that we have been addressing constantly and continue to address. And with the strength of the demand comes the need for more innovative, advanced packaging, because you're talking about multiple chips in creating every custom accelerator now. The packaging becomes a very interesting technical challenge.

And building our Singapore facility is really about partially insourcing that advanced packaging. We believe we have enough demand that we can literally insource, not just from the viewpoint of cost, but from the viewpoint of supply chain security and delivery. And we're building up a fairly substantial facility for advanced packaging in Singapore, as indicated, purely for that purpose, to address the advanced packaging side. Silicon-wise, we go back to the same precious source in Taiwan, TSMC. And so we keep going for more and more capacity in two nanometers and three nanometers. So far, we do not have that constraint. But again, time will tell as we progress and as our backlog builds up. Thank you, Harlan.

Analyst Blaine Curtis (Jefferies): Hey, good afternoon. Thanks for taking my question. I want to ask, with the original $10 billion deal, you talked about a rack sale. With the follow-on order as well as the fifth customer, can you maybe describe how you're going to deliver those? Is it an XPU or is it a rack? And then maybe you can walk us through the math and what the deliverable is. Obviously, Google uses its own networking, so I'm curious, too, would it be a copy of exactly what Google does, now that you can refer to it by name, or would you have your own networking in there as well? Thanks.

Executive Hock (Title): That's a very complicated question, Blaine. Let me tell you what it is. It's a system sale. How about that? It's a real system sale. We have so many components beyond XPUs, beyond custom accelerators, in any AI system used by hyperscalers, that we believe it begins to make sense to do it as a system sale and be fully responsible for the entire system, or rack, as you call it. I think people understand it better as a system. And so, on this customer number four, we are selling it as a system with our key components in it, and that's no different than selling a chip. We certify its final ability to run as part of the whole selling process. Okay, thanks.

Analyst Stacy Rasgon (Bernstein): Hi, guys. Thanks for taking my question. I wanted to touch on gross margins, and maybe it feeds a little bit into the prior question. So I understand why the AI business is somewhat dilutive to gross margins. We have the HBM pass-through, and then presumably the system sales will be more dilutive. And you've hinted at this in the past, but I was wondering if you could be a little more explicit. As this AI revenue starts to ramp, as we start to get system sales, how should we be thinking about that gross margin number, say, if we're looking out four quarters or six quarters? Is it low 70s? I mean, could it start with a six at the corporate level? And I guess I'm also wondering, I understand how that comes down, but what about the operating margins? Do you think you get enough operating leverage on the OpEx side to keep operating margins flat, or do they need to come down as well?

Executive Hock (Title): I'll let Kirsten give you the details, but let me first broadly, at a high level, explain it to you, Stacey. Good question. You don't see that impacting us right now, even though we have already started that process of shipping some systems. You don't see that in our numbers. But it will, and we have said that openly. The AI revenue has a lower gross margin than, obviously, the rest of our business, including software, of course. But we expect the rate of growth, as we do more and more AI revenue, to be so much that we get operating leverage on our operating spending, so that operating margin will still deliver dollars at a high level of growth from what it has been. So we expect operating leverage to benefit us at the operating margin level, even as gross margin starts to deteriorate.

Executive Kirsten (Title): I think Hock said that fairly. In the second half of the year, when we do start shipping more systems, the situation is straightforward. We'll be passing through more components that are not ours. So think of it similar to the XPUs, where we have memory on those XPUs and we're passing through those costs.

We'll be passing through more costs within the rack, and so those gross margins will be lower. However, overall, the way Hock said it, gross margin dollars will go up even as the gross margin percentage comes down. Operating margins benefit because we have leverage: operating margin dollars will go up, but the margin itself as a percentage of revenue will come down a bit. But we'll guide closer to the end of the year for that. Thank you, guys.

Analyst Jim Schneider (Goldman Sachs): Good afternoon. Thanks for taking my question. I was wondering if you might care to calibrate your expectations for AI revenue in fiscal 26 a little bit more closely. I believe you talked about acceleration in fiscal 26 off of the 65% growth rate you did in fiscal 25, and then you're guiding to 100% growth for Q1. So I'm just wondering if the Q1 is a good jumping off point for the growth rate you expect for the full year or something maybe a little bit less than that. And then maybe if you could separately clarify whether your $1 billion of orders for the fifth customer is indeed OpenAI, which you made a separate announcement about. Thank you.

Executive Hock (Title): Wow, there's a lot of questions here. Let me start off with '26. You know, our backlog is very dynamic these days, as I said, and it is continuing to ramp up. And you're right: six months ago, we originally said maybe year-on-year AI revenue would grow 60% to 70% in '26. For Q1 '26 today, we're saying it doubled. We're looking at it because all these orders keep coming in, and we give you a milestone of where we are today, which is $73 billion of backlog to be shipped over the next 18 months. And we do fully expect, as I answered in the earlier question, that $73 billion over the 18 months to keep growing. Now, it's a moving target, a moving number as we move in time, but it will grow. And it's hard for me to pinpoint what '26 is going to look like precisely, so I'd rather not give you full-year guidance. That's why we don't give it, but we do give it for Q1. Give it time; we'll give it for Q2. And you're right, the question to us is: is it an accelerating trend? My answer is it's likely to be an accelerating trend as we progress through '26. Hope that answers your question.

Analyst Ben Reitzes (Melius Research): Hey, guys. Thanks a lot. Hey, Hock. I wanted to ask, I'm not sure if the last caller asked something on it, but I didn't hear it in the answer: I wanted to ask about the OpenAI contract. It's supposed to start in the second half of the year and go through 2029 for 10 gigawatts. I'm going to assume that's the fifth-customer order there. And I was just wondering if you're still confident in that being a driver. Are there any obstacles to making that a major driver? And when do you expect that to contribute, and what is your confidence in it? Thanks so much, Hock.

Executive Hock (Title): You didn't hear that answer to the last caller, Jim's question, because I didn't answer it. I did not answer it, and I'm not answering it here either. It's a fifth customer, and it's a real customer, and it will grow. They are on their multi-year journey to their own XPUs, and let's leave it at that. As far as the OpenAI view that you have, we appreciate the fact that it is a multi-year journey that will run through '29, as our press release with OpenAI showed. The 10 gigawatts is more like '27, '28, '29, Ben, not '26. That was the OpenAI discussion. And I call it an agreement, an alignment of where we're headed with respect to a very respected and valued customer, OpenAI. But we do not expect much in '26. Ah, okay. Thanks for clarifying that. That's real interesting. Appreciate it.

Analyst CJ Muse (Cantor Fitzgerald): Yeah, good afternoon. Thank you for taking the question. I guess, Hock, I wanted to talk about custom silicon and maybe speak to how you expect content to grow for Broadcom generation to generation. And as part of that, your competitor announced CPX, essentially an accelerator-for-an-accelerator offering for massive context windows. I'm curious if you see a broadening opportunity for your existing five customers to have multiple XPU offerings. Thanks so much.

Executive Hock (Title): Thank you. Yeah, you hit it right on. The nice thing about a custom accelerator is you try not to do one-size-fits-all, and generationally, each of these five customers can create their own version of an XPU custom accelerator for training and for inference. Basically, it's almost two parallel tracks going on simultaneously for each of them. So we have plenty of versions to deal with; I don't need to create any more. We've got plenty of different content out there just on the basis of creating these custom accelerators. And by the way, when you do custom accelerators, you tend to put more hardware in that is unique and differentiated, versus trying to make it work in software and creating kernels in software. I know that's very tricky too, but think about the difference when you can create in hardware those sparse-core data routers versus the dense matrix multipliers, all in the same chip.

And that's just one example of what creating custom accelerators is letting us do. For that matter, there is variation in how much memory capacity or memory bandwidth, for the same customer, from chip to chip, just because even in inference you want to do more reasoning versus decoding versus something else like prefill. So you literally start to create different hardware for different aspects of how you want to train or run inference on your workloads. It's a very fascinating area, and we are seeing a lot of variations and multiple chips for each of our customers.

Analyst Harsh Kumar (Piper Sandler): Yeah, Hock and team, first of all, congratulations on some pretty stunning numbers. I've got an easy one and a more strategic one. The easy one: your guide in AI, Hock and Kirsten, is calling for almost $1.7 billion of sequential growth. I was curious if you could talk about the diversity of the growth among the three existing customers. Is it pretty well spread out, all of them growing, or is one driving much of the growth? And then, Hock, strategically, one of your competitors bought a photonic fabric company recently. I was curious about your take on that technology and whether you think it's disruptive or just gimmickry at this point in time.

Executive Hock (Title): I like the way you asked this question, because the way you addressed it to me is almost hesitant. Thank you; I appreciate that. But on your first part, yeah, we're driving growth, and it begins to feel like this thing never ends. And it's a real mixed bag of existing customers and existing XPUs. A big part of what we're seeing is XPUs. And that's not to play down the fact that, as I indicated in my remarks, the demand for switches, not just Tomahawk 6 but Tomahawk 5 switches, and the demand for our latest 1.6-terabit-per-second DSPs that enable optical interconnects, particularly for scale-out, is just very, very strong. And by extension, demand for the optical components, like lasers and PIN diodes, is just going nuts. All of that comes together. Now, all that is relatively lesser dollars compared to the XPUs, as you can probably guess. To give you a sense, let me look at it on a backlog basis: of the $73 billion of AI revenue backlog over the next 18 months I talked about, maybe $20 billion of it is everything else. The rest is XPUs. Hope that gives you a sense of what the mix is.

But that remainder is still $20 billion. That's not small by any means. So we value that. Now, to your next question, on silicon photonics as a means to create basically much better, more efficient, lower-power interconnects, not just in scale-out but hopefully scale-up: yeah, I could see a point in time in the future when silicon photonics matters as the only way to do it. We're not quite there yet, but we have the technology, and we continue to develop it. Each time, we developed it first for 400-gigabit bandwidth, going on to 800-gigabit bandwidth, and the market was not ready for it yet. And now we're doing it for 1.6-terabit bandwidth, to create silicon photonics switches and silicon photonics interconnects.

I'm not even sure it will get fully deployed, because, you know, our engineers and the peers we have out there will somehow try to find a way to still do scale-up within a rack in copper as long as possible, and scale-out in, you know, pluggable optics. The final straw is when you can't do it well in pluggable optics, and of course when you can't do it even in copper; then, you're right, you go to silicon photonics. And it will happen, and we're ready for it. Just saying, not anytime soon.

Analyst Carl Ackerman (BNP Paribas): Yes, thank you. Hock, could you speak to the supply chain resiliency and visibility you have with your key material suppliers, particularly CoWoS, as you support not only your existing customer programs but also the two new custom compute processors that you announced this quarter? I guess what I'm getting at is that you also happen to address a very large subset of the networking and compute AI supply chains. You talked about record backlogs. If you were to pinpoint some of the bottlenecks that you have, the areas you're aiming to address and mitigate, what would they be, and how do you see that ameliorating into '26? Thank you.

Executive Hock (Title): It's across the board, typically. I mean, we are very fortunate in some ways that we have the product technology and the operating business lines to create multiple key leading-edge components that enable today's state-of-the-art AI data centers. Our DSP, as I said earlier, is now at 1.6 terabits per second. That's the leading-edge connectivity chip, bandwidth-wise, for the top XPUs and even GPUs. And we intend to stay that way. And we have the lasers, the EMLs, VCSELs, CW lasers that go with it. So it's fortunate that we have all this and the key active components that go with it. We see demand very quickly and early, and we expand the capacity as we do the design to match it. This is a long answer to what I'm trying to get at, which is: of any of these data center suppliers of the system racks, not counting the power side and all that, the transformers and the gas turbines, which start to get beyond us, if you just look at the racks, the AI systems, we probably have a good handle on where the bottlenecks are, because sometimes we are part of the bottlenecks, which we then work to resolve. So we feel pretty good about that through 2026.

Analyst Christopher Rolland (Susquehanna): Hi. Thanks for the question. Just first a clarification and then my question. And sorry to come back to this issue, but if I understand you correctly, Hock, I think you were saying that OpenAI would be a general agreement, so it's not binding, maybe similar to the agreements with both NVIDIA and AMD. And then secondly, you talked about flat non-AI semiconductor revenue. Maybe, what's going on there? Is there still an inventory overhang, and what do we need to get that going again? Do you see growth eventually in that business? Thank you.

Executive Hock (Title): Well, on the non-AI semiconductors, we see broadband literally recovering very well. The others, we see stability; we don't see a sharp, sustainable recovery yet. So give it a couple more quarters. We don't see any further deterioration in demand; it's more, I think, AI sucking a lot of the oxygen out of enterprise spending elsewhere and hyperscaler spending elsewhere. We don't see it getting any worse, and we don't see it recovering very quickly, with the exception of broadband. That's a simple summary of non-AI. With respect to OpenAI, without diving into this, I'm just telling you what that 10-gigawatt announcement is all about. Separately, the journey with them on the custom accelerator progresses at a very advanced stage and will happen very, very quickly, and there will be a committed element to this whole thing. I won't say more, but what I was articulating earlier was the 10-gigawatt announcement. And that 10-gigawatt announcement is an agreement to be aligned on developing 10 gigawatts for OpenAI over the '27-to-'29 timeframe. That's it. That's different from the XPU program we're developing with them. I see. Thank you very much.

Analyst Joe Moore (Morgan Stanley): Great. Thank you very much. So if you have $21 billion of rack revenue in the second half of '26, do we stay at that run rate beyond that? Are you going to continue to sell racks, or does that type of business mix shift over time? I'm really just trying to figure out the percentage of your 18-month backlog that's actually full systems at this point.

Executive Hock (Title): Well, it's an interesting question. And that question basically comes down to how much compute capacity is needed by our customers over the period beyond 18 months. Your guess is probably as good as mine based on what we all know out there, which is really what it relates to. If they need more, then you see that continuing, even larger. If they don't need it, then probably it won't. But what we're trying to indicate is that's the demand we're seeing over that period of time right now.

Operator: I would now like to turn the call back over to GU for any closing remarks.

Executive GU (Title): Thank you, operator. This quarter, Broadcom will be presenting at the New Street Research Virtual AI Big Ideas Conference on Monday, December 15, 2025. Broadcom currently plans to report its earnings for the first quarter of fiscal year 2026 after close of market on Wednesday, March 4, 2026. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.

Operator: This concludes today's program. Thank you all for participating. You may now disconnect.