
Jensen Huang Sets the Tone at the Start of the Year: True AI Upgrades Rely on Industrialization

AI Deep Researcher · 2026-01-06 09:46
NVIDIA Unveils Blueprint for AI Industrialization at CES. Rubin Chip, Open-Source Toolchain, and Physical AI to Reshape Industries.

At the beginning of 2026, the most significant signal in the AI industry didn't come from a model press conference but from an industrial declaration by a chip CEO.

On January 5th, at CES in Las Vegas, NVIDIA CEO Jensen Huang opened his keynote with a single line:

Every layer of the computing industry needs to be rewritten.

Jensen Huang didn't make model upgrades the centerpiece. Instead, he emphasized that the real leap in AI relies not on single-point breakthroughs but on a whole set of industrial capabilities.

What are industrial capabilities?

It's not about presenting a more powerful demo; it's about making AI replicable, deployable, acceptable, and ultimately scalable.

At this press conference, NVIDIA demonstrated this complete industrial system:

Hardware layer: The Rubin platform is in full-scale production, with a 4-fold increase in training speed and a 10-fold reduction in cost.

Application layer: The standard path of Physical AI, from Cosmos simulation to Alpamayo autonomous driving, will be on the road in Q1 of 2026.

Ecosystem layer: A full-stack open-source toolchain, from models to data to tools, is all open to the industry.

Jensen Huang said that the "ChatGPT moment" in the robotics field is coming.

This is not just a metaphor but a new starting point for the entire AI industrialization.

Section 1 | Application Architecture: Agents Replace Code

In Jensen Huang's speech, there is a sentence worth noting:

You no longer write software; you train software.

This means that AI applications are no longer about bolting a model onto an existing program. Instead, building them shifts from writing code to teaching an agent how to do things.

In the past, applications were pre-written with a set of processes, pre-compiled, and deployed on devices to run.

Now, AI applications generate, understand, and respond in real time; every frame and every word is produced on the spot.

Behind this, three things have changed in the underlying logic:

From programming to training: Developers no longer tell the program what to do but train it to understand what to do.

From CPU to GPU: Computing in the AI era is no longer a task that general-purpose chips can support. It must rely on accelerated computing to support generation, understanding, and reasoning.

From invoking models to architecting agents: A single model is no longer sufficient; you need to build working agents that can invoke multiple models, break problems down, and use tools.

The programming method currently used inside NVIDIA is built on exactly this kind of architecture.

In his speech, he mentioned Cursor, an agentic coding tool that helps engineers write code: it receives a task, analyzes the intent, and invokes tools to complete the programming.

Jensen Huang called this overall structure the AI Blueprint:

It's not a single model or a single product but a general set of methods. On top of it you can build a customer-service assistant, a personal assistant, or even the controller for a home robot.
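To make the pattern concrete, here is a minimal sketch of what such an agent loop looks like in code. Nothing below comes from NVIDIA's Blueprint; the function names and the hard-coded plan are hypothetical stand-ins that only illustrate the receive-task, break-down, invoke-tools flow described above.

```python
# Hypothetical sketch of an agent loop: receive a task, plan it into steps,
# and route each step to a tool or model. Names are invented for illustration.

from typing import Callable, Dict, List, Tuple

# Registry of callable tools; in a real system these would wrap models,
# retrieval services, code interpreters, and so on.
TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup":    lambda query: f"facts about: {query}",
    "summarize": lambda text: f"summary of: {text[:40]}",
}

def plan(task: str) -> List[Tuple[str, str]]:
    """Break a task into (tool, argument) steps.
    A real agent would ask a language model to do this; here it is hard-coded."""
    return [("lookup", task), ("summarize", task)]

def run_agent(task: str) -> str:
    """Execute the plan step by step and assemble the result."""
    outputs = []
    for tool_name, argument in plan(task):
        outputs.append(TOOLS[tool_name](argument))
    return "\n".join(outputs)

if __name__ == "__main__":
    print(run_agent("prepare a briefing on Rubin's energy efficiency"))
```

The point of the sketch is the division of labor: the plan can change at run time, and new skills are added by registering new tools or models rather than rewriting the program.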

More importantly, this architecture can be replicated and customized.

Enterprises can teach it exclusive skills;

Engineers can insert their own data;

Each industry can build its own collaborative AI.

Therefore, the real leap in AI applications is no longer about swapping in a larger model but about changing the construction method at the source: from how to access models to how to rebuild the toolchain around AI.

This is the change in the foundation of AI applications: from a software architecture to an intelligent architecture.

Section 2 | Computing Infrastructure: Rubin Reduces Training Costs by 10 Times

When it comes to AI, many people think it mainly depends on how powerful the model is. But in Jensen Huang's eyes, what really determines whether AI can be put into use is how powerful and reliable the underlying "power plant" is.

This "power plant" is the Rubin AI platform he released this time.

This is not an ordinary hardware update but a complete overhaul of the computing method:

Collaborative design of six chips: Vera CPU, Rubin GPU, ConnectX-9 network card, BlueField-4 DPU, NVLink 6 switch, and Spectrum-X optical switch. Each one is redesigned from scratch;

Complete reconstruction of the physical structure: no cables, two pipes, and 5-minute assembly (previously 2 hours);

Revolutionary improvement in energy efficiency: Doubled performance, but cooled with 45°C hot water. Data centers don't even need to install chillers.

Why did NVIDIA go to such great lengths?

Because AI is experiencing a computing power crisis, which Jensen Huang calls "Token inflation":

The model scale grows 10 times every year (from 1 trillion parameters to 10 trillion parameters)

The generation volume of inference Tokens grows 5 times every year (reasoning models like o1 need to "think" instead of giving one-time answers)

The training volume continues to explode (pre-training + post-training + test-time scaling)

Meanwhile, the Token price drops by 10 times every year. What does this mean?

It means that for AI companies to remain competitive, they must:

Generate more Tokens at lower cost

Train next-generation models faster

Support more complex reasoning with more computing power

This is the core problem that Rubin aims to solve.
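Taken at face value, the numbers above imply a simple squeeze, sketched below. Only the year-over-year factors quoted in the keynote are used; combining them this way is an illustration, not a slide NVIDIA presented.

```python
# Back-of-the-envelope sketch of "Token inflation": price per token falls 10x per
# year while token volume grows 5x per year (the factors quoted above).

price_factor = 1 / 10    # price per token, next year vs this year
volume_factor = 5        # tokens generated, next year vs this year

revenue_factor = price_factor * volume_factor
print(f"revenue factor per year at constant unit cost: {revenue_factor:.2f}")  # 0.50

# To keep margins from collapsing, cost per token has to fall at least as fast as price:
required_cost_factor = price_factor
print(f"required cost-per-token factor: {required_cost_factor:.2f} (about 10x cheaper every year)")
```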

The data Jensen Huang presented on - site is clear:

Training speed: To train a 10-trillion-parameter model, Rubin needs only 1/4 the system size of Blackwell

Factory throughput: The performance per watt is 10 times that of Blackwell

Token cost: The generation cost is 1/10 that of Blackwell

Behind these numbers is a change in business logic.

A data center worth $50 billion and consuming 1 gigawatt of power can generate 10 times more revenue with Rubin than with Blackwell. It's not just a performance improvement; it's a step change in revenue per facility.
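The revenue claim follows from simple arithmetic: at a fixed power budget, tokens produced scale with performance per watt, and revenue scales with tokens at a given price. A rough sketch; only the 10x performance-per-watt figure comes from the keynote, the rest is illustrative.

```python
# Rough sketch: with power held at 1 GW, token output scales with performance per
# watt, and revenue scales with token output at a fixed price per token.

power_gw = 1.0
perf_per_watt_ratio = 10      # Rubin vs Blackwell, as quoted above

tokens_ratio = (power_gw * perf_per_watt_ratio) / (power_gw * 1)
revenue_ratio = tokens_ratio  # at a fixed price per token
print(f"revenue with Rubin vs Blackwell at the same 1 GW: {revenue_ratio:.0f}x")
```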

Rubin is already in full-scale production. It represents NVIDIA's ability to keep the performance curve climbing through extreme co-design, in an era when Moore's Law is no longer effective.

More importantly, such hardware is not customized for a single company but is a standard foundation prepared for the entire industry:

Cloud platforms can use it to train models;

Large enterprises can use it to develop AI products;

Start-ups can also rent it to access AI infrastructure.

The foundation of AI industrialization is not just writing a model and uploading it to the cloud; it is a computing "power plant" that can run continuously, at controllable cost, and at scale.

Rubin is the core engine of this "power plant".

Section 3 | Physical AI: The Industrialization Path of Robots

Many people think that robots are just exhibition items at technology fairs and are still far from our lives. But Jensen Huang made it clear this time: Robots are becoming the first batch of mass-produced products after AI industrialization.

He directly classified them into the Physical AI category.

What is Physical AI?

It's not just AI that can move and see; it's AI that understands how the physical world works: gravity, friction, inertia, and causal relationships, the kind of common sense humans learn from childhood.

But this kind of common sense is extremely difficult for AI. You can't just tell it that "balls roll" or "heavy vehicles are hard to stop". It has to learn from data.

The problem is that data in the real world is both scarce and expensive. Is it possible to let autonomous driving AI learn by crashing in the real world? Obviously not.

So what NVIDIA has spent eight years building is a complete Physical AI training system, in which three computers work in collaboration:

Training computer: Trains the AI model on GPUs

Inference computer: Runs the trained AI on the robot itself

Simulation computer: Rehearses repeatedly in a virtual world before anything runs in the real one

This third computer is the core breakthrough.

NVIDIA has created two key tools for this purpose:

Cosmos: A world model that can predict the physical consequences of an action. What it understands is not language but physical laws.

Omniverse: A physical simulation platform that realistically reproduces gravity, friction, materials, and lighting, allowing AI to practice billions of kilometers in the virtual world first.
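A minimal sketch of how these pieces fit together is below. The classes are hypothetical stand-ins, not the real Cosmos or Omniverse APIs; they only show the division of labor between the simulation computer, the training computer, and the on-board inference computer.

```python
# Hypothetical sketch of the simulate -> train -> deploy loop described above.
import random

class WorldModel:
    """Stand-in for a Cosmos-style world model: predicts what an action does to the world."""
    def predict(self, state: dict, action: str) -> dict:
        step = 1.0 if action == "keep_lane" else 0.3
        return {"position": state["position"] + step}

class Simulator:
    """Stand-in for an Omniverse-style simulator: rolls out whole episodes virtually."""
    def __init__(self, world_model: WorldModel):
        self.world_model = world_model

    def rollout(self, policy, steps: int = 20) -> float:
        state = {"position": 0.0}
        for _ in range(steps):
            state = self.world_model.predict(state, policy(state))
        return state["position"]  # placeholder score: distance covered without incident

def make_policy(threshold: float):
    """Trivially parameterized policy; a real one would be a neural network."""
    return lambda state: "slow_down" if state["position"] < threshold else "keep_lane"

def train(simulator: Simulator, trials: int = 50):
    """Training computer: search for a good policy entirely in simulation."""
    best_policy, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = make_policy(threshold=random.uniform(0.0, 10.0))
        score = simulator.rollout(candidate)
        if score > best_score:
            best_policy, best_score = candidate, score
    return best_policy  # only the trained policy ships to the robot's on-board inference computer

policy = train(Simulator(WorldModel()))
```

Random search stands in here for real policy optimization; the structural point is that all trial and error happens against the simulator, and only the finished policy ever touches the physical robot.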

The power of this methodology has been fully demonstrated in Alpamayo, NVIDIA's newly released autonomous driving AI and the world's first end-to-end autonomous driving system capable of reasoning.

Where is its breakthrough?

It doesn't just tell you "I'm going to turn left" but explains:

There is a pedestrian crossing ahead, so I need to slow down

There is a vehicle changing lanes in the left lane, so I choose to stay in the lane and adjust the speed

Why is this reasoning ability important?

Because the long-tail scenarios are effectively infinite. You can't collect training data for every country, every weather condition, and every unexpected situation. But if the AI can reason, it can break an unfamiliar scenario down into familiar sub-scenarios it has already been trained on, such as "pedestrian + slow down" and "lane change + avoidance".
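As a toy illustration of this decomposition, consider the sketch below. The scene labels and the response table are invented; the point is only that an unfamiliar combination still reduces to sub-scenarios the model has been trained on.

```python
# Toy illustration of decomposing an unfamiliar scene into familiar, trained
# sub-scenarios. The labels and the response table are invented for this example.

TRAINED_RESPONSES = {
    "pedestrian_crossing": "slow_down",
    "vehicle_changing_lanes": "hold_lane_and_adjust_speed",
}

def decompose(scene):
    """Keep only the sub-scenarios the model has already been trained on."""
    return [element for element in scene if element in TRAINED_RESPONSES]

def respond(scene):
    return [TRAINED_RESPONSES[sub] for sub in decompose(scene)]

# An unseen combination (fog + crossing + lane change) still maps onto known primitives.
print(respond(["dense_fog", "pedestrian_crossing", "vehicle_changing_lanes"]))
```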

The training data for Alpamayo comes from:

A large amount of human driving mileage

Billions of kilometers of virtual data generated by Cosmos

Finely labeled edge cases

Moreover, it adopts a dual-stack safety design:

Alpamayo is responsible for complex reasoning scenarios

The classic AV stack is responsible for backup (it takes over when Alpamayo is unsure; see the sketch below)
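The dual-stack arrangement can be pictured as a simple confidence gate. The function names and the threshold below are hypothetical; the article only states that the classic stack takes over when Alpamayo is unsure.

```python
# Hypothetical sketch of the dual-stack design: a reasoning planner drives when it
# is confident, and a classic AV stack takes over otherwise.

CONFIDENCE_THRESHOLD = 0.8

def reasoning_planner(scene: dict):
    """Stand-in for the reasoning stack: returns a plan and a confidence score."""
    return "slow_down_for_crossing", scene.get("familiarity", 0.0)

def classic_av_stack(scene: dict) -> str:
    """Stand-in for the conventional, rule-based fallback stack."""
    return "conservative_default_maneuver"

def choose_plan(scene: dict) -> str:
    plan, confidence = reasoning_planner(scene)
    if confidence >= CONFIDENCE_THRESHOLD:
        return plan
    return classic_av_stack(scene)  # fallback when the reasoning stack is unsure

print(choose_plan({"familiarity": 0.95}))  # reasoning stack handles the scene
print(choose_plan({"familiarity": 0.40}))  # classic stack takes over
```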

More importantly, this system will be on the road in the Mercedes-Benz CLA in Q1 of 2026, and Alpamayo has been open-sourced.

This shows that NVIDIA didn't just build an autonomous car; it validated a complete industrialization path for Physical AI:

Generate training data with Cosmos → Solve the problem of data scarcity

Conduct virtual rehearsals with Omniverse → Reduce the cost of trial and error

Use reasoning ability to handle long - tail scenarios → Break through the bottleneck of data coverage

This path is not only applicable to cars but also to all robots.

The Groot humanoid robot and the small Jetson robot that Jensen Huang demonstrated on-site were both trained in Omniverse. They will be deployed in warehouses, hospitals, hotels, and construction sites to take over some real-world tasks from humans.

Robots are not the last step of AI; they are the first batch of mass-produced physical products after AI industrialization.

Learning to adapt to the environment, understand physics, and reason is how AI moves from the screen into the real world.

Generate data with Cosmos, rehearse in simulation with Omniverse, and use reasoning to handle the unknown: this methodology is becoming the standard process for Physical AI.

Section 4 | Open-Source Strategy: What's NVIDIA's Game Plan?

The perception that AI has a high threshold and can only be played by large enterprises is about to be shattered.

Jensen Huang's stance this time is very clear:

We open - source the models, the data, and the toolchain because only in this way can each company build its own AI.

Why does NVIDIA do this?

This is an ecosystem war.

I. OpenAI's Closed-Source vs. NVIDIA's Open-Source

Let's first look at two different paths:

OpenAI's strategy:

  • Closed-source models with leading capabilities
  • You call my API and pay by Token
  • I control the model, and you control the application

NVIDIA's strategy:

  • Open-source models, tools, and data
  • You train on your own and use my chips
  • You control the model, and I control the infrastructure

Do you see the difference?

OpenAI wants to be the Microsoft of the AI era: selling software and services. NVIDIA wants to be the TSMC of the AI era: selling chips and computing power.

And open-source is the core weapon for NVIDIA to implement this strategy.

II. What are the Benefits of Open-Source for NVIDIA?

1. Expand the Market Scale

If AI can only rely on calling the API of large models, then only a few companies like OpenAI, Anthropic, and Google need to buy GPUs.

But if every industry and every enterprise wants to train its own model, then thousands of companies will need to buy GPUs.

The open-source toolchain lowers the threshold and activates the long-tail market.

2. Establish a De Facto Standard

What Jensen Huang released this time is not just a model but also:

  • The NeMo toolchain (for training language models)
  • Cosmos (world model)
  • Omniverse (physical simulation platform)
  • Blueprint (agent architecture)

When developers around the world use this set of tools to train AI, it becomes the de facto standard.

And this standard is deeply bound to NVIDIA's chips.

3. Lock in the Ecosystem

The partners Jensen Huang mentioned on - site are:

  • Palantir, ServiceNow, Snowflake (enterprise software)
  • Siemens, Cadence, Synopsys (industrial design)
  • Meta, Hugging Face, ElevenLabs (AI capabilities)

These companies are all using NVIDIA's toolchain to build their own AI products. Once they form a dependency, is it easy to switch to AMD or other chips?

The cost is huge.

III. What Does This Mean for the Industry?

1. The Competition in AI Shifts from Model Capability to Industrial Capability

Previously, the competition was about who had a more powerful model. Now, it's about:

  • Who can train a proprietary model faster
  • Who can deploy AI at a lower cost
  • Who can make AI land in more scenarios

All these require the support of chips, toolchains, and data.

2. Opportunities for Start - Ups

Previously, when developing AI applications, one could only call the API of large models, and the moat was very shallow.

Now, with the open-source toolchain, start-ups can:

  • Use the open-source model as the foundation
  • Train with industry data
  • Build proprietary AI capabilities

This means that AI entrepreneurship in vertical fields will explode.

3. The Roles of Cloud Providers Will Diverge

Previously, cloud providers only sold computing power. Now, they have to choose sides:

  • Either deeply integrate with OpenAI (like Microsoft Azure)
  • Or support the open - source ecosystem (like AWS, GCP)

NVIDIA's open - source strategy makes it easier for cloud providers to choose the latter.

IV. Jensen Huang is Playing a Big Game

At this press conference, Jensen Huang demonstrated not just products but a complete industrial layout:

  • First layer: Open-source models and toolchains to lower the threshold and activate the long-tail market
  • Second layer: Rubin chips and computing infrastructure to lock in the ecosystem