Q4 2026 Earnings Call — February 25, 2026
Vivek Arya (Bank of America Securities): Thanks for taking my question. I think you mentioned that you now have growth visibility into calendar 27 also, and I think your purchase commitments kind of reflect that confidence. But, Jensen, I'm curious. You know, when you look at your top cloud customers, cloud capex is close to $700 billion this year, and many investors are concerned that it will be harder for this level to grow into next year. And for several of them, their cash flow generation capability is also getting compressed. So I know you're very confident about your roadmap, right, and your purchase commitments and whatnot. But how confident are you about your customers' ability to continue to grow their CapEx? And if their CapEx doesn't grow, can NVIDIA still find a way to grow within that envelope? Thank you.
Jensen Huang (CEO): I am confident in their cash flow growing. And the reason for that is very simple. We have now seen the inflection of agentic AI and the usefulness of agents across the world and enterprises everywhere. You're seeing incredible compute demand because of it. In this new world of AI, compute is revenues. Without compute, there's no way to generate tokens. Without tokens, there's no way to grow revenues. So in this new world of AI, compute equals revenues. And at this point, with the productive use of Codex and Claude Code, the excitement around Claude Cowork, the incredible enthusiasm about OpenClaw and the enterprise versions of them, and all of the enterprise ISVs who are now building agentic systems on top of their tools platforms, I'm certain that we are at the inflection point. We've reached the inflection point, and we're generating tokens that are productive for customers and profitable for the cloud service providers. And so the simple way to think about it is that computing has changed. What used to be software running on a modest amount of computers, call it $300 or $400 billion worth of CapEx each year, has now gone into AI. And AI, in order to generate tokens, needs compute capacity. And that translates directly to growth, and that translates directly to revenues.
Joe Moore (Morgan Stanley): Great, thank you, and congratulations on the numbers. You talked about some of the strategic investments that you've made into Anthropic and potentially OpenAI, CoreWeave as well, but also partners like Intel, Nokia, and Synopsys. You're clearly at the center of everything. Can you talk about the role of those investments, and how do you view the balance sheet as a tool to grow NVIDIA's position in the ecosystem and participate in that growth?
Jensen Huang (CEO): As you know, fundamentally at the core of everything NVIDIA is our ecosystem. That's what everybody loves about our business, the richness of our ecosystem. Just about every startup in the world is working on NVIDIA's platform. We're in every cloud. We're in every on-prem data center. We're all over the world's edge and robotic systems. Thousands of AI natives are built on top of NVIDIA. We want to take the great opportunity that we have, at the beginning of this new computing era, this new computing platform shift, to put everybody on NVIDIA. Everything is already built on CUDA, so we're starting from a really terrific starting point. But as we build out the entire AI ecosystem, whether it's language AI or physical AI, or AI for physics or biology or robotics or manufacturing, we want all of these ecosystems to be built on top of NVIDIA. And this is such a wonderful opportunity for us to invest in the ecosystem across the entire stack. Our ecosystem is also richer today than it used to be.
We used to be largely a computing platform on GPUs, but now we're an AI computing infrastructure company, and we have computing platforms on, well, every aspect of that. Everything from computing to AI models to networking to our DPU has computing stacks on top of it. And as I mentioned before, whether it's enterprise or manufacturing, industrial or science or robotics, each one of these ecosystems has a different stack. And we want to make sure that we continue to invest in our ecosystem. So our investments are focused very squarely, strategically, on expanding and deepening our ecosystem reach.
Harlan Sur (JP Morgan): Good afternoon. Thanks for taking my question. Networking continues to rise as a percentage of your overall data center profile, right? Through fiscal 26, your networking revenues accelerated on a year-over-year basis every single quarter, right, with 3.6x year-over-year growth in Q4, as you guys mentioned, obviously on the strength of your scale-up and scale-out networking product portfolio. I seem to remember that in the first half of last year, the run rate on your Spectrum-X Ethernet switching platform was around $10 billion annualized. It looks like that may have stepped up to around $11 billion to $12 billion in the second half of last year. Vincent, looking at your order book, especially with Spectrum-XGS and the upcoming 102T Spectrum-6 switching platforms launching soon, where is the Spectrum runway trending now, and how do you foresee exiting this calendar year?
Vincent (Management): Yeah, you know, as you know, we see ourselves as an AI infrastructure company, and the AI computing infrastructure includes CPUs and GPUs, and we invented NVLink to scale up one computing node into a giant computing rack. We invented the idea of a rack-scale computer. We don't ship nodes of computers, we ship racks of computers. And that NVLink switch scale-up system is then scaled out using Spectrum-X and InfiniBand. We support both. And then further, we also scale across data centers using Spectrum-XGS. And so the way we think about networking is really as an extension of the platform. We offer everything openly so that people can decide to mix and match at different scales and, you know, however they would like to integrate it into their bespoke data centers. But in the final analysis, it's all one big part of our platform. And the invention of NVLink, again, really turbocharged our networking business. Every rack comes with nine NVLink switch trays, and each one of them has two switch chips in it. And in the future, they'll have more. And so the amount of switching that we do per rack is really quite incredible.
We're also now the largest networking company in the world. And if you look at Ethernet, we came into the Ethernet switching market only a couple of years ago. And I think that if we're not the largest Ethernet networking company in the world today, we surely will be soon. And so Spectrum-X Ethernet has been a home run for us. But, you know, we're open to however people want to do networking. Some people just really love the low latency and the scale-up capability of InfiniBand, and we will continue to support that, of course. And some people love to integrate their networking across their data center based on Ethernet. And we created an Ethernet capability that extends Ethernet with an AI way of processing in the data center. And we're incredibly good at that, and our Spectrum-X performance really shows it. You know, when you've built a $10 billion or $20 billion AI factory, a difference of 10%, and it could easily be 20%, in the effectiveness and the utilization of your network translates to real money. And so NVIDIA's networking business is really, really growing fast.
And I think it's just because we built the AI infrastructure so effectively that the AI infrastructure business is growing incredibly fast.
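[A minimal sketch of the network-effectiveness arithmetic above. The $10 billion factory cost comes from the remarks; the 15% uplift is an illustrative assumption, the midpoint of the 10% to 20% cited.]

```python
# Back-of-the-envelope: what a 10-20% network-driven utilization gain is
# worth on a large AI factory. The factory cost is taken from the remarks
# above; the uplift midpoint is an illustrative assumption.

factory_cost_usd = 10e9   # AI factory build cost (from the remarks)
network_uplift = 0.15     # assumed uplift, midpoint of the 10-20% cited

# The uplift is equivalent to this much additional capex doing useful work:
value_of_uplift = factory_cost_usd * network_uplift
print(f"Extra effective capacity: ${value_of_uplift / 1e9:.1f}B "
      f"on a ${factory_cost_usd / 1e9:.0f}B factory")
# -> Extra effective capacity: $1.5B on a $10B factory
```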
CJ Muse (Cantor Fitzgerald): Yeah, good afternoon. Thank you for taking the question. I guess with CPX for large context windows, and Groq likely adding a decode-specific solution, I'm curious how we should think about your future roadmap. Do you think about customized silicon, either by workload or by customer, as an increasing focus for NVIDIA, particularly helped by your move to a chiplet architecture? Thanks so much.
Jensen Huang (CEO): Everybody should want to push out chiplets as long as they can. And the reason for that is because every time you cross a chiplet boundary, you have to cross an interface. Every time you cross an interface, you add latency and you add power unnecessarily. We're not allergic to chiplets. We use chiplets already, but we try to use them only when we absolutely have no choice but to do so. And so if you look at the Grace Blackwell architecture and the Rubin architecture, we use two giant reticle-limited dies, and that reduces the amount of interface crossing. The chiplet tax shows up in the architectural effectiveness of our competitors. If you look at it, people call it our software advantage. But, you know, where software ends and architecture starts, it's kind of hard to tell. Our software is effective because our architecture is so good. And so the CUDA architecture is unquestionably more effective, more efficient, and delivers more performance per watt than any computing architecture out there. And it's because of the way we architect.
With respect to how we think about Groq and the low-latency decoder, I've got some great ideas that I'd like to share with you at GTC. But the simple idea is that our infrastructure is incredibly versatile because of CUDA, and we're going to continue to do that. All of our GPUs are architecturally compatible, which means that when I'm working on optimizing models today for Blackwell, all of that work and all of that dedication to optimizing software stacks and new models also benefits Hopper and also benefits Ampere. It's the reason why A100 continues to feel fresh and continues to stay performant years after we've deployed it into the world. Architectural compatibility allows us to do that. It allows us to invest enormously in software engineering and optimization, knowing that our entire installed base, in the cloud, on-prem, everywhere, across generations of GPU architectures, will all benefit. And so we'll continue to do that. It allows us to extend the useful life, and it allows us to have innovation, flexibility, and velocity, which translates to performance and, very importantly, performance per dollar and performance per watt for our customers.
And so what we'll do with Groq, you'll come and see at GTC, but what we'll do is extend our architecture with Groq as an accelerator, in very much the way that we extended NVIDIA's architecture with Mellanox.
Stacy Rasgon (Bernstein Research): Hi, guys. Thanks for taking my questions. Colette, I wanted to dig a little bit into the call for sequential growth through the year. So, I mean, you grew this quarter more than $10 billion sequentially in data center, and the guide seems to imply, you know, the bulk of the increase, another $10 billion sequential, is in data center. How do you see that as we go through the year, especially as Rubin ramps into the back half? Blackwell drove a pretty massive acceleration in sequential growth. Should we expect something similar as we get to Rubin? And then I was also just hoping you could comment on your expectations for gaming. I understand the memory issues and everything else. Do you think gaming can still grow year over year in fiscal 27, or will that be under more pressure given memory? So those two questions, please.
Colette Kress (CFO): Thanks, Stacy. Let me start with the revenue going forward. Again, we're trying to look at revenue quarter by quarter. As you think about the full year, we are absolutely going to still be selling and providing Blackwell, probably at the same time that we're also seeing Vera Rubin come to market. It's a great architecture that customers can stand up quickly, and we have already planned many different orders across the different customers to provide it. It's too early yet to determine the size of that beginning Vera Rubin ramp. It will start in the second half, and we'll work through it. But there's no confusion about the strong demand and the interest. We do expect pretty much every single customer to be purchasing Vera Rubin. The question is, how soon are we in market, and how soon are they able to stand that up in their data centers? That was your first part. The second part was focusing on gaming. As much as we would love to have additional supply, we do believe that for a couple of quarters it is going to be very tight. If things improve by the end of the year, there is an opportunity to think about what that means for year-over-year growth. But it's still too early for us to know at this time, and we'll get back to you as soon as we can.
Atif Malik (Citi): Thank you for taking my question. Jensen, I'm curious if you can touch on the importance of CUDA now that more of the investment dollars in AI are coming from inference workloads.
Jensen Huang (CEO): Without CUDA, we wouldn't know what to do with inference. The entire stack starts from TensorRT-LLM, which we introduced a few years ago and which is still the most performant inference stack in the world. Optimizing it for NVLink required us to discover and invent new parallelization algorithms that sit on top of CUDA to distribute the workload and the inferencing, to take advantage of the aggregate bandwidth across NVLink 72. NVLink 72 has enabled us to deliver, generationally, 50 times more performance per watt. It's just an incredible lead, and it's defensible. NVLink 72 is a great invention. It was hard to do. The creation of the switching technology, disaggregating the switches, building the system racks, all of that, you know, we did it all in plain sight, and everybody knew how hard it was for us to do. But the results are incredible. Performance per watt is 50 times; performance per dollar, 35 times. And so the leap in inference is incredible. It's really important to realize that inference equals revenues now for our customers.
Because agents are generating so many tokens and the results are so effective, when the agents are coding, they're off generating thousands, tens of thousands, hundreds of thousands of tokens, because they're running for, you know, minutes to hours. And these agentic systems are spawning off different agents working as a team. The number of tokens being generated has really, really gone exponential. And so we need to inference at a much higher speed. And when you're inferencing at a much higher speed and each one of those tokens is dollarized, it directly translates into revenues. And so inference performance equals revenues for our customers. For the data centers, inference tokens per watt translate directly to the revenues of the CSPs. And the reason for that is because everybody is power limited. No matter how many data centers you have, each data center, you know, 100 megawatts or one gigawatt, has power limits. So the architecture that has the best performance per watt wins, because each token is dollarized.
Tokens per watt translates to dollars per watt, which, in a gigawatt, translates directly to revenues. And so you could see that every CSP understands this now, every hyperscaler understands this: CapEx translates to compute. Compute with the right architecture translates to maximizing revenues, and compute equals revenues. Without investing in capacity today, without investing in compute, there cannot be revenue growth. And that, I think, everybody understands. Compute equals revenues. Choosing the right architecture is incredibly important. It's more than strategic now; it directly affects their earnings. And choosing the right architecture, the one with the best performance per watt, is literally everything.
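[A worked sketch of the tokens-per-watt-to-dollars chain described above. The call supplies only the logic; the power budget, energy efficiency, and token price below are all hypothetical assumptions.]

```python
# Why tokens per watt translates to dollars in a power-limited data center.
# Every number here is a hypothetical assumption for illustration.

site_power_w = 1e9       # a 1-gigawatt AI factory; power is the fixed budget
tokens_per_joule = 1.0   # assumed efficiency: tokens generated per watt-second
usd_per_mtoken = 1.00    # assumed blended price per million tokens
seconds_per_year = 365 * 24 * 3600

tokens_per_year = site_power_w * tokens_per_joule * seconds_per_year
revenue_per_year = tokens_per_year / 1e6 * usd_per_mtoken
print(f"Annual revenue at fixed power: ${revenue_per_year / 1e9:.0f}B")

# At the same power and price, doubling performance per watt doubles revenue,
# which is the sense in which performance per watt is "literally everything."
print(f"With 2x tokens per joule:      ${2 * revenue_per_year / 1e9:.0f}B")
```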
Ben Reitzes (Melius Research): Yeah, hey, thanks. First, let me say kudos on including the stock comp in non-GAAP. I think that's a great move, but that isn't my question. My question is around gross margins and the sustainability of the mid-70s long term. Should we read into the visibility on supply being available into calendar 27 that margins are sustainable until then? And then, Jensen, what about after that? Are there innovations in memory consumption you can unveil that make us feel better about the ability to keep margins at that level for a long time? Thanks.
Jensen Huang (CEO): The single most important lever of our gross margins is actually delivering generational leaps to our customers. That is the single most important thing. If we can deliver, generationally, performance per watt that dramatically exceeds what Moore's Law can do, and if we can deliver performance per dollar dramatically above the price of our systems, then we can continue to sustain our gross margins. That's the simple, most important concept. The reason why we're moving so fast is because, number one, the demand for tokens in the world, as a result of the inflection points that we've gone through, has now gone completely exponential. I think we're all seeing that, to the point where even our six-year-old GPUs in the cloud are completely consumed and the pricing is going up. And so we know that the amount of computation necessary for the modern way of doing software is growing exponentially. And so our strategy is to deliver an entire AI infrastructure every single year. This year, we introduced six new chips. Rubin, the next generation, will bring many new chips as well. And every single generation, we are committed to delivering many X-factors of performance per watt and performance per dollar. That pace and our ability to do extreme co-design allow us to deliver that value and that benefit to the customers. And that is the single most vital thing as it relates to our value delivery.
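[A stylized illustration of the margin logic above: if a new generation delivers far more performance per dollar than its price increase, the customer's cost per token falls even while the supplier holds a high margin. Every figure below is a hypothetical assumption, not guidance.]

```python
# Stylized gross-margin logic: a generational leap in tokens per dollar lets
# the customer's cost per token fall even as the system price (and margin)
# rises. All figures are hypothetical assumptions, not guidance.

SECONDS_PER_YEAR = 365 * 24 * 3600

def cost_per_mtoken(system_price_usd: float, tokens_per_sec: float,
                    life_years: float = 4.0) -> float:
    """Hardware cost per million tokens over an assumed useful life."""
    lifetime_mtokens = tokens_per_sec * life_years * SECONDS_PER_YEAR / 1e6
    return system_price_usd / lifetime_mtokens

old_gen = cost_per_mtoken(system_price_usd=2.0e6, tokens_per_sec=1.0e6)
new_gen = cost_per_mtoken(system_price_usd=3.0e6, tokens_per_sec=5.0e6)

# 5x the throughput at 1.5x the price: the customer's cost per token drops
# ~70%, leaving room for the supplier to sustain margins on the new system.
print(f"Old gen: ${old_gen:.4f}/Mtok, new gen: ${new_gen:.4f}/Mtok")
```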
Antoine Chkaiban (New Street Research): Hi, thanks a lot for taking my question. I'd like to ask about space data centers, which some of your customers are considering. How feasible do you think that is, and on what kind of horizon? What do the economics look like today, and how do you think that could evolve over time? Thank you.
Jensen Huang (CEO): Well, the economics are poor today, but they're going to improve over time. As you know, the way that space works is radically different than how it works down here. There's an abundance of energy; the solar panels are large, but there's plenty of space in space. As for heat dissipation, it's cold in space, but there's no airflow, and so the only way to dissipate heat is through radiation, and the radiators that you need to create are fairly large. Liquid cooling is obviously out of the question because it's heavy and freezes. And so the methods that we use here on Earth are a little different than the way we would do it in space. But there are many different computing problems that really want to be done in space. And NVIDIA already has the world's first GPU in space. Hopper is in space. And one of the best use cases of GPUs in space is imaging, to be able to image at extremely high resolutions using, of course, optics and artificial intelligence.
And to be able to do that computation, reprojection from different angles, up-res, noise reduction, to be able to image at very high resolutions, at extremely large scales, and very, very fast. It's hard to do that by sending petabytes and petabytes of imaging data back here to Earth and doing that work. It's easier just to do it out in space, and then ignore all of the data you've collected and processed until you see something interesting. And so artificial intelligence in space will have very good, very interesting applications.
Mark Lipacis (Evercore ISI): Hi, thanks for taking my question. I want to follow up on the comment you made in the prepared remarks about revenue diversification. I believe, Colette, you said that hyperscalers were over 50% of revenues, but growth was led by the rest of your data center customers. And, you know, as a clarification, I just want to make sure I understood that. Does that imply your non-hyperscale customers grew faster? And if so, can you help us understand what the non-hyperscalers are doing differently? Are they doing different things than the hyperscalers, or the same things at a different scale? And do you expect this trend to continue, to the point where non-hyperscalers become the larger part of your business? Thank you.
Colette Kress (CFO): Yes, let's see if we can help on this question. When you think about our top five, as we articulated, those are our CSPs, our hyperscalers, and right now they sit at about 50% of our total revenue. There is, therefore, a big, diverse set of all different other types of companies that we are working with. It goes through our AI model makers, it goes through our enterprises, it goes to supercomputing, it goes to our sovereigns. There are a lot of other different facets there. But you are correct, it's a very fast-growing area as well. We have a strong position with all of the different cloud providers on our platform, and now we also have an extreme diversity of different customers all the way across the world. And we will really benefit from that diversity and from being able to serve all of those parts. Let me see if Jensen wants to add a bit more.
Jensen Huang (CEO): Yeah, this is one of the advantages that we have with our ecosystem. I'll build on top of what Colette said. We're the only accelerated computing platform that is in every cloud, that's available through every single computer maker, available at the edge. We're now cultivating telecommunications. Obviously, the future radios will all be AI-driven radios, and the future wireless network will also be a computing platform. That is a foregone conclusion, but somebody has to go and invent the technologies to make that possible, and we created a platform called Aerial to go do that. We're in just about every single robot, every single self-driving car. CUDA gives us, on the one hand, the performance of specialized processors, with the tensor cores inside our GPUs, and, on the other hand, the flexibility to solve language problems, computer vision problems, robotics problems, biology problems, physics problems, just about all kinds of AI and all kinds of computational algorithms. And so the diversity of our customer base is one of the greatest strengths that we have.
The second thing, of course, is the ecosystem itself. Even if our processor were programmable, if we didn't cultivate our ecosystem, investing in our future ecosystem and continuing to enhance it, as we talked about earlier, it would be hard for us to grow beyond whatever design wins we capture in somebody else's ecosystem. Instead, we can grow and expand our ecosystem very naturally because of the platform that we created. And then lastly, one of the things that's really important is the partnerships that we have with OpenAI and Anthropic, with xAI, with Meta, and, of course, with just about every single open-source model in the world; there are one and a half million AI models on Hugging Face, and all of it runs on NVIDIA CUDA. Open source in totality probably represents the second largest model in the world: OpenAI is the largest, and second is probably the collection of all the open-source models. And so NVIDIA's ability to run all of that makes our platform super fungible, super easy to use, and really safe to invest in.
And so that creates the diversity of customers and the diversity of the platforms available in every single country because we support the whole world's ecosystem.
Aaron Rakers (Wells Fargo): Yeah, thanks for taking the question. I guess sticking with the idea of the platform and extreme co-design, some of the news over this last quarter has obviously been NVIDIA's ability, or push, to bring Vera CPUs to market on a standalone solution basis. So I guess, Jensen, I'm curious, what role does Vera play in this architecture evolution as we move forward? Is this being driven more by the proliferation or the heterogeneity of inference workloads? I'm just curious how you see that evolving for NVIDIA, particularly on a standalone CPU basis. Thank you.
Jensen Huang (CEO): Yeah, thanks. And I'll tell you some more about it at GTC. But at the highest level, we made fundamentally different architectural decisions about our CPUs compared to the rest of the world's CPUs. It's the only data center CPU that supports LPDDR5X. It is designed to be focused on very high data processing capability, and the reason for that is because most of the computing problems that we're interested in are data-driven, artificial intelligence being one. And the single-threaded performance, in ratio with memory bandwidth, is just off the charts. We made those architectural decisions because of the different phases of AI. Before you even do training, you have to do data processing. So you have data processing, then pre-training, and in post-training now, the AIs are learning how to use tools, and many of those tools run in CPU-only environments, or they run in CPU-plus-GPU-accelerated environments. And Vera was designed to be an excellent CPU for post-training. So the use cases across the entire pipeline of artificial intelligence include using a lot of CPUs. We love CPUs as well as GPUs. And when you accelerate the algorithms to the limit, as we have, Amdahl's law would suggest that you need really, really fast single-threaded CPUs. That's the reason why we built Grace to be extraordinarily great at single-threaded performance, and Vera is off the charts better than that.
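[A short sketch of the Amdahl's law point above: once the GPU-accelerable fraction of a pipeline is sped up to the limit, the serial, CPU-bound remainder dominates total runtime, so single-threaded CPU performance sets the ceiling. The workload split and speedup factors here are hypothetical.]

```python
# Amdahl's law: overall speedup when a fraction p of the work is accelerated.
# Once the parallel part saturates, only a faster serial (CPU) path helps.
# The fraction and speedup factors below are hypothetical.

def amdahl_speedup(p: float, parallel_speedup: float,
                   serial_speedup: float = 1.0) -> float:
    """Overall speedup for a workload whose parallel fraction p runs
    parallel_speedup times faster and whose serial remainder runs
    serial_speedup times faster."""
    return 1.0 / ((1.0 - p) / serial_speedup + p / parallel_speedup)

p = 0.95  # assume 95% of the pipeline is GPU-accelerable

# Accelerating only the parallel part approaches a hard ceiling of 1/(1-p) = 20x:
for s in (10, 100, 1000):
    print(f"GPU {s:>4}x, CPU 1x -> overall {amdahl_speedup(p, s):5.1f}x")

# Doubling single-threaded CPU speed nearly doubles the ceiling once GPUs saturate:
print(f"GPU 1000x, CPU 2x -> overall {amdahl_speedup(p, 1000, 2.0):5.1f}x")
```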
Tim Arcuri (UBS): Thanks a lot. Colette, I was wondering if you can talk about the deployment of capital. I know that you really jacked up the purchase commits, but it sounds like maybe you're over the hump on this, and you're going to probably generate about $100 billion in cash this year. And, you know, pretty much no matter how good the results have been, the stock hasn't really gone up much. So I think you probably feel like this is a pretty good price to be, you know, buying back a bunch of it here. So I was wondering if you can talk about that. The question being, why not put a big stake in the ground and just, you know, do a huge share repurchase here? Thanks.
Colette Kress (CFO): So thanks for the question. We look at our capital return very, very carefully. And we do believe that one of the most important things we can do is support the extraordinary ecosystem in front of us, everywhere from our suppliers, where we need to assure that we can have the supply that's needed and help them with capacity, all the way to the early developers of the AI solutions that will be on our platform. So we will continue to make strategic investments a very important part of our process. But of course, we are still repurchasing our stock, and we still have our dividend as well. And we will continue to find the right unique opportunities within the year for those different purchases.
Jim Schneider (Goldman Sachs): Thank you for taking my question. Jensen, you've previously outlined the potential to get to $3 trillion to $4 trillion of data center capex by 2030, which implies a potential acceleration in growth rates, which you've sort of guided to for at least this next quarter. The question is, what are some of the key application areas that you believe are most likely to drive that inflection? Is it physical AI, agentic, or something else? And do you still feel good about that $3 trillion to $4 trillion envelope? Thank you.
Jensen Huang (CEO): Yeah, let's back up and reason through it a few different ways. The first way is from first principles: the way that software is done in the future using AI is token-driven. Everybody talks about tokenomics and about data centers generating tokens; inference is about generating tokens. We were just talking about how NVIDIA's NVLink 72 enabled us to generate tokens at 50 times better performance per unit of energy than the previous generation. And so token generation is at the center of almost everything that relates to software in the future, and everything that relates to computing. If you look at the way we used computing in the past, however, the amount of computation demanded by software in the past is a tiny fraction of what is necessary in the future. And AI is here. AI is not going to go back. AI is only going to get better from here. So if you think about it, the world was investing about $300 billion to $400 billion a year in classical computing. Now AI is here, and the amount of computation necessary is a thousand times higher than the way we used to do computing. The computing demand is just a lot higher. And so if we continue to believe there's value in it, and we'll talk about that in a second, then the world will invest to produce that.