
Jensen Huang challenges "China's NVIDIA": I'm looking forward to your competition. You are world-class, but you have to work hard.

Friends of 36Kr, 2026-01-07 12:08
Lao Huang responds to everything.

On the evening of January 5th local time, Jensen Huang, wearing his iconic black leather jacket, took the stage at CES 2026 in Las Vegas. The NVIDIA chief officially launched the Rubin platform, composed of six brand-new chips, and announced that it would start supplying partners in the second half of 2026. Even more significant was the self-driving AI software codenamed "Alpamayo", which Jensen Huang called "the ChatGPT moment for robotics".

One day later, at a Q&A session on January 6th, Jensen Huang continued his "preaching tour", and the amount of information was overwhelming.

When talking about the market, he set extremely high expectations: "Since last October, there have been many new developments, which should raise our expectations for the $500 billion in data-center sales." He particularly emphasized an underestimated trend: open-source models now account for a quarter of all AI token generation. In his words, the world is undergoing a $10-trillion computing modernization upgrade, and the labor industry is worth as much as $100 trillion. "For the first time, technology is serving the entire global economy."

When it came to China, Jensen Huang's attitude was practical and candid. In response to repeated questions about H200 exports and the Chinese market, he simply replied: "I'm just looking forward to purchase orders. When the purchase orders come, they will imply everything else." Regarding Chinese competitors, he both acknowledged that "Chinese entrepreneurs, engineers, and AI researchers are among the best in the world" and warned: "I'm looking forward to your competition. You have to work hard."

In terms of technical details, Jensen Huang was very knowledgeable: plug-and-play NVLink switches, power-smoothing technology, end-to-end confidential computing, and an innovation that reduced assembly time from two hours to five minutes. He even predicted that NVIDIA might become "one of the world's largest CPU manufacturers" and "one of the largest storage companies".

Most interesting was the discussion about robots. When someone asked, "When can we get robots with human - level capabilities?", Jensen Huang's answer was concise and powerful: "Next year? Oh, it's this year." He didn't think robots would take people's jobs. Instead, he said, "Having robots will create jobs" because "we need more AI immigrants to help us".

Regarding the safety philosophy of self-driving, Jensen Huang also gave his answer: "It's best to never have a handover between humans and vehicles. So even if you don't promise L4, you should have L4 capabilities."

During his two-day CES trip, Jensen Huang presented not only NVIDIA's technology roadmap but also his grand vision for the AI era: from chips to the ecosystem, from data centers to robots, from the current hundreds of billions to the future trillions. In this multi-trillion-dollar technological revolution, NVIDIA clearly doesn't want to miss any battlefield.

About the AI Bubble

Question: In October 2025, you gave a shocking figure, saying that about $500 billion worth of data centers would be sold over the next four or five quarters. You've been saying in the past few days that demand is very strong. Should we expect a higher figure?

Jensen Huang: I hope not to update this figure regularly.

But what I can say is that since then, there have been many new developments, which should raise our expectations for this figure. Last year, we had some very exciting news. We've always been a good partner for model builders and AI builders. OpenAI, xAI, and Gemini have been running on NVIDIA for a long time. Last year, we announced that Anthropic would also run on NVIDIA in the future, which was big news.

One of the major surprises in the world in 2025 was the success of open-source models: DeepSeek R1, followed by Qwen, Nemotron, and Cosmos. All these models really took off, so that now one out of every four generated tokens comes from open-source models. I think this has been underestimated. This has greatly boosted the demand for NVIDIA and public clouds.

This also explains why the pricing of Hopper has actually increased in the cloud. All Hoppers are being consumed in the cloud, and now the spot prices are starting to rise.

This shows the global demand that is being generated.

In addition, it seems that we will re-enter the Chinese market. So the H200 will also contribute. Overall, I think we'll have a very good year.

Exports and China

Question: It's been almost a month since the Trump administration announced the approval of the export license for the H200 in China. When do you expect to ship these chips to Chinese customers? What's the probability of the H200 getting large orders in China?

Jensen Huang: We've already started the supply chain, and the H200 is moving through the production line. We're finalizing the last details of the license with the US government. That's the three-part process. After that, I think we'll do our best.

Ultimately, my expectation is that we'll learn everything through purchase orders.

There won't be any press releases or major announcements, just purchase orders. If the purchase orders come, it's because they're able to place them.

I think it's that simple. So I'm looking forward to the arrival of purchase orders.

Question: Last year, you achieved a major victory when Congress didn't pass the GAIN AI Act. But House Foreign Affairs Committee Chairman Brian Mast has a new bill that would empower Congress to disapprove any export licenses issued by the Commerce Department. Do you think Congress has the ability to block NVIDIA's export licenses for chips to China?

Jensen Huang: There's a good reason why export controls have been assigned to the Commerce Department. So I think it's enough to have one source in the government to enforce the law.

But ultimately, no matter what laws come up, we'll comply.

Question: Do you think the H200 is still competitive in the Chinese market? Because you yourself said that Huawei is such a strong competitor, and there are so many startups in China also developing alternative products.

Jensen Huang: The H200 is still competitive in the market. But it won't be competitive forever. So hopefully, we'll be able to release other competitive products in the future.

To keep the US competitive in the market, regulations also have to keep evolving. Static regulations don't make sense.

Currently, the H200 will be competitive. When it's no longer competitive, hopefully, we'll have something new. And over time, we'll continue to release (new products) to stay competitive in the market.

Question: There are many emerging AI chip players in China. Against this backdrop, looking ahead to 2026, how do you view the evolution of the competitive landscape? What do you think is NVIDIA's most defensible moat today?

Jensen Huang: I think the number of startups emerging in China is large, and many of them have gone public and are doing very, very well. This shows the vitality and capabilities of China's technology industry.

I think it's no exaggeration to say that the group of Chinese entrepreneurs, engineers, technologists, and AI researchers is among the world's top tier. It's safe to say that China's technology ecosystem is developing very rapidly. Engineers work very hard, and they're very entrepreneurial.

They have such smart ideas, so I'm fully confident that the Chinese technology market will continue to thrive and develop.

For us, to make contributions and offer something to the Chinese market, we'll have to compete, and we'll have to continue to advance our technology.

NVIDIA is innovating in AI at a scale that no one in the world can match today. We're the only company in the world that builds everything from CPUs to storage. We develop every software stack on top of it, and we also innovate at the model level and the infrastructure level.

We cooperate with almost every AI company in the world. Our marketing and channels carry the technology to partners in the end market. The end market spans many industries: manufacturing, through our partnership with Siemens; healthcare, through our partnership with Eli Lilly, the world's largest pharmaceutical company; automotive; and financial services.

NVIDIA is really deeply involved in all these different industries.

So I think this is an industry we should continue to lead, which is why we have to work so hard.

Our company is very good at this. We're innovating at an unprecedented speed, but we can't take anything for granted.

This industry is going to develop significantly. How big is this industry? The world has invested about $10 trillion in the chip field in the past 15 years. This $10 trillion is in the process of being completely modernized from classical computing to AI.

So, first, the $10 trillion has to be re-modernized. Currently, we've addressed hundreds of billions of that $10 trillion.

The second thing is software. For the first time, AI technology is not just a tool; it's also a workforce. As we mentioned before, in the future there will be humanoid robots and self-driving cars. There will be software-coding agents and chip-design agents that enhance the workforce.

The labor industry is a $100-trillion market. For the first time, technology is serving the world's overall economy. So it's reasonable to think that this will be an extremely large market. I'm not surprised that so many entrepreneurs want to come and compete.

My last words to them are: I'm looking forward to your competition. You have to work hard.

About Investment

Question: NVIDIA is currently sitting on a huge pile of cash. Can you talk about how you're thinking about allocating this capital going forward, such as acquisitions, hiring, and so on?

Jensen Huang: We invest in the ecosystem in several dimensions.

The first way we think about investment is to build things that the world can't build or that don't exist. Unless we build them, they won't come into being.

For example, NVLink is a good example. If we didn't build it, it wouldn't exist. For example, the Grace CPU has a unique architecture.

Now everyone understands its benefits. You can use it to store long-context memory, because HBM alone isn't large enough.

And we expect context memory to continue to grow. So the Vera memory in Grace, whose purpose no one understood at first, is now known to be usable as AI memory.

So we have to build our own CPUs. Our preference is to focus on investing in building things that the world can't or won't build.

The second way we invest is to invest in our ecosystem.

We look both upstream and downstream along our supply chain. If you look at NVIDIA's supply chain, it includes memory suppliers. We've allocated a large amount of capital to support our memory partners and system partners.

If you look at our supply chain, NVIDIA has made multi-billion-dollar commitments to the upstream supply chain. If we can't ensure that the downstream supply chain is also taken care of, what's the point of all these supplies coming in?

Our downstream supply chain is basically the most diverse, largest-scale distribution of any company on the planet. Every cloud service provider in the world, almost every country in the world, every computer manufacturer (HP, Dell, and other incredible partners), regional cloud service providers, and supercomputing centers are all part of it.

If you look at our downstream supply chain, sometimes we invest in companies that really excite us. This can open up new specific categories of customers that may be important to the ecosystem one day, like companies such as CoreWeave and Lambda. We might invest in them.

The way we think systematically is along the supply chain, both upstream and downstream. We consider technology, scale, and so on.

Another way is to invest in the ecosystem across the five layers of the AI cake. The first layer is land, power, and the enclosure. The second layer is chips. We might even cooperate with or invest in other chip companies; we've always cooperated with other chip companies. We cooperate with MediaTek, which is a great partnership. We cooperate with AWS and bring NVLink to them.

We might invest, cooperate, or even acquire some semiconductor companies.

Then there are the system and infrastructure aspects. This is an area with many rich opportunities. Above that are models, and above that are applications. You'll also see us investing across the entire stack.

What we're trying to do is nurture and accelerate AI development. Investment also builds closer working relationships and partnerships, so it's incredible to be able to invest in some of these startups. These are the most important companies in the future.

Question: Over the past few years, NVIDIA's performance has consistently exceeded and raised expectations. How do you handle the pressure of running the world's largest company, a company that many say should be even bigger than it is today, and be able to continue to exceed expectations?

Jensen Huang: First of all, I'm not doing it alone. I have an amazing team around me who help share the responsibility. There's no doubt that NVIDIA today has a huge influence in the world's technology industry, supply chain, and end - market. But along with that comes a huge responsibility. We take it very seriously.

One of the main things we can do is build the best technology. By doing so, we not only stay relevant ourselves and help all our partners continue to succeed, but also ensure that AI advances in a continuously scalable way.

If AI continues to become smarter, we'll be able to use it more effectively. This continues to scale. If we reduce costs, that continues to scale. If we improve energy efficiency, that continues to scale. All these are related to technology. This is our top-priority responsibility.

The second part is to ensure that we have a rich ecosystem of other companies that can benefit from this industrial revolution, which is why I'm always on stage with CEOs. Today, I'm on stage with Roland Busch, the CEO of Siemens.

We're cooperating with Siemens in different areas. They're the world's largest industrial-software company, present in almost every factory and every industry. We're working together to bring in AI and completely transform factory automation, from software acceleration to physical AI, AI physics, and the Omniverse digital twin.

The work we're doing there is really extensive.

Another part of our responsibility is to get everyone involved. We have partnerships with Snowflake, ServiceNow, Palantir, Cadence, Synopsys, Siemens, you name it. We want to make sure everyone is involved.

I think that if we don't go it alone, but instead advance the technology as part of this worldwide ecosystem network, the industry will be more resilient.

Question: With your investment in Groq, what can we expect? Will there be dedicated inference chips based on their LPU architecture?

Jensen Huang: What NVIDIA and Groq do is very, very different.

I don't expect anything there to replace what we're doing with Vera Rubin and the next-generation chips. As far as we know, there's no reasonable way to do things better than Vera Rubin.

However, we might be able to add their technology in an incremental way. More information might be revealed at the next GTC.

But I'm very excited about Groq joining NVIDIA. Their team and technology have come to us. There's still a company running their cloud business. I'll save that for next time.

About Storage

Question: A long time ago, Intel emerged with industry-standard processors and completely changed the way storage systems were designed and developed. Do you think NVIDIA will play a similar key role in completely changing the long-term design goals and architecture of storage?

Jensen Huang: I think over time, we'll have to play an increasingly important role because we're pushing the limits of computing in every dimension.

AI is a platform shift because it's a new platform, and new applications are being built on AI for the first time.

This is not only a technological shift but also a platform shift, and this platform shift is reinventing the entire computing stack. We've recognized and now understand that we're moving from classical computing running on CPUs to AI running on GPUs. We're reinventing the stack on top of it.

Computing is infrastructure, and computing also includes networking, which is why we developed Spectrum X. Our acquisition of Mellanox and the team that reinvented AI networking brought about a huge revolution. This has made us the world's largest networking company today.

AI workloads are so different from classical database processing and SQL processing. Therefore, it's obvious that storage will also undergo a revolution. The concepts of key-value caching and semantic, meaningful memory, and the way AI uses key-value caches, are obviously very different from the way IT systems use SQL queries.
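The access-pattern contrast Huang is drawing can be sketched in a few lines of Python. This is a minimal illustration of the general idea of a transformer-style key-value (KV) cache, not NVIDIA's design; the class name, shapes, and values are all hypothetical.

```python
# Minimal sketch of a transformer-style key-value (KV) cache.
# All names, shapes, and values are illustrative, not any vendor's design.

class KVCache:
    """Append-only per-layer cache: one (key, value) pair per generated token."""

    def __init__(self, num_layers):
        self.keys = [[] for _ in range(num_layers)]
        self.values = [[] for _ in range(num_layers)]

    def append(self, layer, k, v):
        # Inference never rewrites old entries; it only appends new ones,
        # then re-reads the whole history at every decoding step --
        # a streaming, bandwidth-bound pattern, unlike selective SQL lookups.
        self.keys[layer].append(k)
        self.values[layer].append(v)

    def context(self, layer):
        # Every decoding step reads the entire accumulated context.
        return self.keys[layer], self.values[layer]


cache = KVCache(num_layers=2)
for step in range(3):            # pretend we generate 3 tokens
    for layer in range(2):
        cache.append(layer, k=[0.1 * step], v=[0.2 * step])

ks, vs = cache.context(layer=0)
print(len(ks))  # one cached entry per generated token
```

The cache only ever grows with the context and is read in full on each step, which is why long contexts push this "AI memory" out of HBM and into larger, slower tiers, the dynamic Huang alludes to with Grace.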

So it's reasonable to think that we'll have to reinvent the storage system.