
NVIDIA's new autonomous driving model is here.

Friends of 36Kr | 2026-01-06 18:02
The autonomous driving model "can reason"

On Monday (January 5) Eastern Time, NVIDIA announced two of its latest technological advances at the 2026 Consumer Electronics Show (CES): humanoid robotics technology and a new series of autonomous vehicle models named Alpamayo.

According to the company's CEO, Jensen Huang, numerous companies, ranging from Boston Dynamics and Caterpillar to LG Electronics and German robotics firm NEURA Robotics, are leveraging NVIDIA's robotics technology to develop and power their various robots.

NVIDIA claims that physical Artificial Intelligence (AI) has the potential to revolutionize the $50-trillion manufacturing and logistics industries, and the company aims to be at the heart of this transformation. At this year's CES, NVIDIA unveiled a series of new AI models to assist in training robots to interact with the surrounding world, along with the hardware required to power their digital brains.

Autonomous Driving Models Can "Reason"

In addition to humanoid robots, NVIDIA prominently showcased the new Alpamayo series of autonomous vehicle models. According to the company, Alpamayo employs a Vision-Language-Action (VLA) model based on chain-of-thought reasoning, aiming to accelerate the development of the next generation of safe, reasoning-based autonomous vehicles (AVs).

This may sound complicated. Simply put, these models can recognize unusual driving situations that rarely arise in everyday driving and work out the correct way to proceed. For example, when approaching an intersection, the model can detect a malfunctioning traffic light, identify the problem, and attempt to determine the next course of action.
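To make the idea concrete, here is a minimal, purely illustrative Python sketch of what a reasoning-style driving policy produces: a step-by-step trace plus a driving action. The classes, rules, and the decide function are hypothetical stand-ins for illustration only, not Alpamayo's actual interface or logic.

```python
# Illustrative sketch only: a reasoning-style policy returns both a
# chain-of-thought trace and an action. All names here are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class DrivingAction:
    steering: float      # radians, positive = left
    acceleration: float  # m/s^2, negative = braking

@dataclass
class PolicyOutput:
    reasoning: List[str]  # step-by-step trace kept for interpretability
    action: DrivingAction

def decide(scene_description: str) -> PolicyOutput:
    """Toy stand-in for a reasoning model's decision on one rare scenario."""
    trace = []
    if "traffic light" in scene_description and "dark" in scene_description:
        trace.append("Signal appears unpowered; treat intersection as uncontrolled.")
        trace.append("Uncontrolled intersection implies stop-and-yield behavior.")
        trace.append("Slow down, stop at the line, proceed only when clear.")
        return PolicyOutput(trace, DrivingAction(steering=0.0, acceleration=-2.0))
    trace.append("Scene matches normal driving; maintain current speed.")
    return PolicyOutput(trace, DrivingAction(steering=0.0, acceleration=0.0))

if __name__ == "__main__":
    out = decide("approaching intersection, traffic light housing dark, cross traffic visible")
    for step in out.reasoning:
        print("-", step)
    print("action:", out.action)
```

The point of keeping the trace is interpretability: the same record of intermediate steps that guides the decision can later be inspected to explain it.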

Jensen Huang stated that the Alpamayo platform enables cars to "reason" in the real world. The first vehicle equipped with NVIDIA's technology will hit the roads in the United States in the first quarter.

NVIDIA explained in a statement:

Autonomous vehicles must operate safely across an extremely wide range of driving conditions. Rare and complex scenarios, often referred to as the "long tail," remain one of the greatest challenges for autonomous driving systems to handle safely. Traditional autonomous driving architectures separate perception from planning, which limits how well the system scales when new or abnormal situations arise. Although recent advances in end-to-end learning have achieved significant results, overcoming these long-tail edge cases requires models that can reason causally and act safely, especially when a situation falls outside their training experience.

The Alpamayo series introduces a reasoning-based VLA model, bringing human-like deliberation to the decision-making of autonomous vehicles. These systems can think through novel or rare scenarios step by step, improving both driving capability and interpretability, which are crucial for the trust and safety of intelligent vehicles, and they are supported by the NVIDIA Halos safety system.

Jensen Huang said, "The ChatGPT moment for physical AI has arrived – machines are starting to understand, reason, and act in the real world. Driverless taxis are among the first beneficiaries. Alpamayo brings reasoning capabilities to autonomous vehicles, enabling them to think about rare scenarios, drive safely in complex environments, and explain their driving decisions – which is the foundation for safe and scalable autonomous driving."

Meanwhile, NVIDIA will make the Alpamayo models freely available, allowing potential users to retrain the models themselves. NVIDIA stated that these models are designed as "large teacher models that developers can fine-tune and distill into the backbone of their complete (autonomous driving) stack."

In other words, Alpamayo's role is to help developers continuously improve their own autonomous vehicle technology.
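To illustrate the "teacher to student" workflow NVIDIA describes, here is a minimal PyTorch sketch of output distillation, in which a compact student network learns to imitate a frozen, larger teacher. The network architectures, feature dimensions, and mean-squared-error imitation loss are generic placeholders, not NVIDIA's actual models or training recipe.

```python
# Minimal distillation sketch: a small student policy regresses the outputs
# of a frozen, larger teacher. Architectures and loss are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPolicy(nn.Module):
    """Stand-in network mapping fused sensor features to a driving action."""
    def __init__(self, feat_dim: int = 256, action_dim: int = 2, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, action_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def distill_step(teacher: nn.Module, student: nn.Module,
                 feats: torch.Tensor, opt: torch.optim.Optimizer) -> float:
    """One distillation step: the student imitates the frozen teacher's actions."""
    with torch.no_grad():
        target = teacher(feats)        # teacher's predicted action (no gradients)
    pred = student(feats)
    loss = F.mse_loss(pred, target)    # imitation loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    teacher = TinyPolicy(hidden=512)   # pretend "large" teacher model
    student = TinyPolicy(hidden=64)    # compact student for the deployed stack
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    feats = torch.randn(32, 256)       # fake batch of fused sensor features
    print("loss:", distill_step(teacher, student, feats, opt))
```

In practice the student would be the backbone of a developer's own driving stack, trained on their fleet's data rather than random tensors, but the basic loop of freezing the teacher and fitting the student to its outputs is the same.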

Industry Support

NVIDIA said that companies and research groups including Lucid, Jaguar Land Rover, Uber, and Berkeley DeepDrive are interested in Alpamayo and hope to develop reasoning-based autonomous driving stacks to achieve Level 4 autonomous driving.

Kai Stepper, Vice President of Advanced Driver Assistance Systems and Autonomous Driving at Lucid Motors, said, "The shift towards physical AI highlights the growing demand for AI systems that can reason about real-world behavior (not just process data). Advanced simulation environments, rich datasets, and reasoning models are important elements in this evolutionary process."

Thomas Müller, Executive Director of Product Engineering at Jaguar Land Rover, said, "Open and transparent AI development is crucial for the responsible advancement of autonomous driving. By open-sourcing models like Alpamayo, NVIDIA is helping to accelerate innovation across the entire autonomous driving ecosystem, providing new tools for developers and researchers to safely handle complex real-world scenarios."

Sarfraz Maredia, Global Head of Autonomous Mobility and Delivery at Uber, said, "Addressing long-tail and unpredictable driving scenarios is one of the key challenges in autonomous driving. Alpamayo creates exciting new opportunities for the industry to accelerate physical AI, increase transparency, and enable safe Level 4 deployments."

Owen Chen, Senior Chief Analyst at S&P Global, said, "Alpamayo 1 enables vehicles to interpret complex environments, predict new situations, and make safe decisions, even in scenarios they've never encountered before. The open-source nature of the model accelerates innovation across the industry, allowing partners to adapt and improve the technology according to their unique needs."

Wei Zhan, Co-Director of Berkeley DeepDrive, said, "The release of the Alpamayo product portfolio is a significant leap for the research community. NVIDIA's decision to open-source it is transformative, as the access and capabilities it provides will enable us to train on an unprecedented scale – providing us with the flexibility and resources needed to bring autonomous driving to the mainstream."

This article is from the WeChat official account "Science and Innovation Daily," written by Huang Junzhi and published by 36Kr with permission.