Jensen Huang Talks with 10 Open-Source AI Leaders: Future Computing Power Will Tilt Towards Post-Training, and OpenClaw Unleashes New Imagination for Modern Computers
At GTC 2026, Jensen Huang invited a group of guests who rarely sit at the same table: Harrison Chase from LangChain, Michael Truell from Cursor, Misha Laskin from Reflection AI, Aravind Srinivas from Perplexity, Mira Murati from Thinking Machines Lab, Arthur Mensch from Mistral, Daniel Nadler from OpenEvidence, Hanna Hajishirzi from AI2, Robin Rombach from Black Forest Labs, and Anjney Midha from AMP.
The topic of this roundtable was Open Models. But after listening to the full 80 minutes, it is clear the guests were discussing far more than just "which is better, open-source models or closed-source models".
Jensen Huang set the tone at the outset. The outside world, he said, has spent a great deal of time discussing cutting-edge closed-source models and closed-source labs. They are undoubtedly very important: "In many respects, they have laid the foundation for the entire industry." But beyond that, AI is far more diverse. There is no single answer to how models should be created, how they should be integrated into applications, or how they should be deployed across different industries.
He made a striking judgment: taken together, open models have already become the second-largest model group in the world, and across different industries and applications they are very likely to eventually become the largest.
In Jensen Huang's view, the future is not about A or B, not about one model defeating another. It is about combining models into systems. This was also the clearest and most consistent thread running through the entire roundtable.
Although the conversation was titled "Open Models", the guests quickly pushed the topic deeper. The model is no longer the sole protagonist. What is truly taking shape is a new system composed of models, tools, connectors, agents, control planes, and corporate governance. In that sense, the roundtable reads less like a debate over open-source models and more like a collective attempt to define the industrial structure of AI's next stage.
1. How did Jensen Huang "set the topic" for this roundtable?
Jensen Huang's opening remarks were not long, but they were rich in information.
He started with the familiar narrative. Of course, cutting-edge labs like OpenAI, Anthropic, Gemini, and xAI are important, and closed-source frontier models are important too; they can fairly be called the primary driving force of the entire industry. Then he immediately pushed the issue a step further: if AI is understood only as the most powerful models built by a few labs, our picture of the industry is far too narrow.
He said that models are a technology, just as transistors are a technology; they are not the final product. An open model is a technology, while ChatGPT is a product. That statement drew a boundary around the entire roundtable: the guests were not discussing the chatbot market, but how the industry will develop once models serve as the technological foundation.
Today's world certainly needs proprietary AI products and companies that sell models directly as products. But the industry also needs a larger ecosystem in which different industries and companies can use models as raw technology to build their own products, systems, and services.
So when Jensen Huang posed the first question, "What misunderstandings do people have about large-model companies like OpenAI and about other companies in the ecosystem?", it was no longer a simple open-source vs. closed-source question. It was a deeper one: beyond the two familiar roles of "the most powerful model company" and "the application company", what new roles will emerge in the AI software stack?
2. The model is no longer everything. What is truly taking shape is the "system".
If we were to extract the most frequently repeated judgment from this roundtable, it would be: AI is no longer just about models. It is evolving into a system.
Michael Truell, co-founder and CEO of Cursor, said that people used to think there were only two types of companies in the AI software layer: those that build very large general-purpose foundation models and expose their capabilities through APIs, and those that build application products on top of those models.
In his view, however, a third type of company is emerging and growing rapidly: one that uses the best model APIs on the market while also doing substantial work of its own at the model and agent levels.
Truell's point is clear: the future software stack will not be as simple as "underlying model + upper-layer application". As agents grow more complex, the ability to organize different models, tools, and execution processes becomes a new core competency.
He noted that at first, AI was just about "calling a model". Then tool calls were added. In the next year or two, genuinely new kinds of agents will emerge, able to handle complex tasks that take hours or even days, much like a colleague would. At that level of complexity, a single model may not be the optimal solution: different models have different strengths, so the system will split a complex task and hand the pieces to different models.
He said that in the future there will be a large number of composite agents. They may not rely on a single most powerful model; instead, through orchestration, the whole system becomes "smarter than any single model".
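To make the routing idea concrete, here is a minimal Python sketch of a composite agent that splits a task and sends each piece to a different model. The model names, the ROUTES table, and the call_model() helper are hypothetical placeholders, not anything the panelists described; a real system would let a planner model produce the decomposition and call actual inference APIs.

# A minimal sketch of a composite agent: decompose a task and route each
# sub-task to whichever model is assumed to be strongest for it.
# The model names, ROUTES table, and call_model() helper are hypothetical.

from dataclasses import dataclass

@dataclass
class SubTask:
    kind: str      # e.g. "plan", "code", "review"
    prompt: str

# Hypothetical routing table: which model handles which kind of sub-task.
ROUTES = {
    "plan":   "frontier-closed-model",   # assumed strong at reasoning and orchestration
    "code":   "open-coding-model",       # assumed cheaper and token-efficient
    "review": "open-small-model",        # assumed fast enough for sanity checks
}

def call_model(model: str, prompt: str) -> str:
    # Stand-in for an actual API or local inference call.
    return f"[{model}] response to: {prompt[:40]}..."

def run_composite_agent(task: str) -> list[str]:
    # A real system would let a planner model produce this decomposition;
    # it is hard-coded here to keep the sketch self-contained.
    subtasks = [
        SubTask("plan",   f"Break down the task: {task}"),
        SubTask("code",   f"Implement the steps for: {task}"),
        SubTask("review", f"Check the result of: {task}"),
    ]
    return [call_model(ROUTES[st.kind], st.prompt) for st in subtasks]

if __name__ == "__main__":
    for step in run_composite_agent("migrate the billing service to the new API"):
        print(step)

The point of the sketch is the shape: the intelligence of the overall system lives as much in the routing as in any single model.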
Aravind Srinivas, the CEO of Perplexity, put it more directly: AI is not a model. It is a system, a computer.
He took his company's Perplexity Computer as an example. What they want to do, he said, is organize everything AI can do, such as coding, writing, and multimodal generation, into an orchestration system. They connect various tools, models, file-system connectors, and multi-cloud resources, so users only need to hand over a task without worrying about "which model is good at what".
Aravind used a very vivid metaphor: sub-agents are like musicians, models are just musical instruments. The real work that AI does for you is the symphony.
In his description, open and closed are not mutually exclusive. Open models often perform better on token efficiency and cost-effectiveness; closed-source models may be better at orchestration, reasoning, and tool calls. Eventually, models will become more like tools themselves, components of the overall system alongside file systems and connectors.
This was a representative shift across the roundtable. No one reduced open versus closed models to a moral stance or a battle of ideologies. The closer a guest was to actual products and agent operations, the more specific they became: open models have their value, closed-source models have theirs, and a real system often incorporates both.
Harrison Chase, the CEO of LangChain, captured this idea in a new term: harness engineering.
From what I have seen around the field in recent days, I suspect this term will become a buzzword in the period ahead.
So-called harness engineering covers essentially everything around the model: how it connects to tools, when to call which prompts, which sub-agents to use, and which models to assign to different sub-agents. Even closed-source labs are doing the same thing. Take Claude Code as an example: people will of course praise the model itself for being very powerful, but what really makes it useful is the harness built around the model.
When Harrison and his team talk with developers, more and more of the discussion is not about "which model to use" but about "how to build a perfect harness for a specific environment".
This responds directly to a common, dismissive judgment heard across the AI field over the past year: "that product is just a wrapper". At this roundtable, almost no one treated "wrapper" as a low-value label. Instead, people increasingly recognize that what truly turns models into productivity is often the previously underestimated layer around them: context management, tool access, routing, memory, workflow, permissions, and execution strategy.
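As a rough illustration of what such a harness touches, here is a minimal, hypothetical configuration written as a Python dictionary. None of these keys or names come from LangChain, Claude Code, or any real product; they simply enumerate the kinds of decisions harness engineering makes around a model.

# A hypothetical harness configuration: every key and name below is invented
# for illustration and does not describe any real product's API.
HARNESS = {
    "system_prompt": "You are a careful coding assistant. Prefer small diffs.",
    "tools": {
        "read_file":  {"timeout_s": 5},
        "run_tests":  {"timeout_s": 120, "sandbox": True},
        "send_email": {"requires_approval": True},   # permissions and governance live here too
    },
    "sub_agents": {
        "planner":  {"model": "frontier-closed-model", "max_steps": 10},
        "executor": {"model": "open-coding-model",     "max_steps": 50},
    },
    "routing": {
        # When control passes between sub-agents.
        "on_new_task":      "planner",
        "on_approved_plan": "executor",
    },
    "memory": {"strategy": "summarize_after_tokens", "threshold": 30_000},
}

Every entry here is exactly the kind of "wrapper" work the panel argued carries much of the real product value.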
Misha Laskin, the CEO of Reflection AI, added another perspective. He said that there are two common misunderstandings about model companies.
The first misunderstanding is thinking that a model company only "makes a model". In fact, when you buy a commercial model, you are buying the entire stack, from chips, orchestration, and software through inference to the product. The value of openness is that others can optimize that system end to end along the whole chain.
The second misunderstanding is thinking that open models are inherently inferior to frontier models. Laskin believes this is a feature of the current stage, not a fundamental law. In his view there is no insurmountable boundary between open and closed models: models are "knowledge infrastructure", and "knowledge infrastructure naturally tends to be open".
Mira Murati, founder of Thinking Machines Lab and formerly CTO of OpenAI, places "openness" within a broader innovation process.
Progress is very rapid now, she said. Everything is on an exponential curve, and the pace is highly compressed. There is too much to learn for a few large labs to do it all alone, and many smart people lack not the ability but the opportunity to access knowledge and tools. That is why openness will matter so much here, not only for the models themselves but also for the infrastructure, data, and research insights.
She specifically noted that many people treat openness as a zero-sum choice, as if the more open you are, the worse it is for commercialization. She disagrees, and cited an early decision by her team: opening a post-training API so that more researchers could continue post-training on open models.
From these responses, the significance of open models is clearly no longer just the single question of "whether the weights are open". It is about who can participate in post-training, who can take part in architectural evolution, who can bring the model into their own system, and who can redefine it for their own domain.
3. From generative AI to reasoning and then to agents, why did the inflection point occur in the past two years?
Jensen Huang summarized AI's recent evolution into three stages: generative AI, reasoning, and agentic systems.
He also made a very notable judgment. In the past, people focused on pre-training and discussed abilities such as "memory", "generalization", and "basic knowledge". In his view, however, the main consumption of computing power in the future may not stay concentrated in pre-training but will shift more toward post-training. Basic knowledge is only the starting point; post-training is how models acquire skills and become deployable systems, and it will become increasingly critical.
This judgment matched the shared feeling of several guests on stage. The most obvious change in the past year is not that models know more facts but that they have genuinely started to "do things".
On this question, the guests located the inflection point differently.
Misha Laskin traced the timeline back further. What really made him switch from theoretical physics to AI, he said, was not language models but AlphaGo: it was the first time he saw a large-scale "super-intelligent agent". In his eyes, the most important thing about AlphaGo is that it never stops learning; the problem becomes an economic one, namely how much computing power you are willing to invest to make it 10 times stronger. Now RL has started to work on language models, and in his view, solving basic scientific problems may likewise gradually become a matter of computing power and economics.
This perspective differs from views that understand agents only through today's LLMs. Laskin's point is that the concept of agents did not appear out of thin air in the language-model era; it is part of a longer evolutionary thread. What is new in this round is that language models and RL have finally started to converge, pushing agents from a few special cases into a much wider range of knowledge work.
Michael Truell gave a very practical answer: the most important economic story about AI last year was that coding really started to work.
He believes the coding market still has plenty of room, both in customer value and in its technological ceiling. The capabilities proven to work in coding last year will start migrating to other fields this year, and he specifically noted that personal productivity agents are growing very rapidly.
Harrison Chase also recognized coding's central position. Many of the agents we see today, he said, are essentially similar to coding agents, because many of a coding agent's capabilities are naturally general-purpose: it can write code, and that code can in turn send emails, call services, and operate workflows.
He mentioned that until about a year ago, the simple agent algorithm of "an LLM continuously calling tools in a loop" was not actually very useful. Then it became feasible, partly because the models themselves got much stronger, and partly because people gradually figured out which tools should be wired into this harness.
That observation matters, because it shows the inflection point for agents was not just about "models suddenly becoming smarter". It was about model capability, tool interfaces, and systems engineering lining up in the same period.
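The "LLM calling tools in a loop" algorithm Harrison Chase described is small enough to sketch in a few lines of Python. The llm_step() stub and the TOOLS table below are hypothetical stand-ins for a real model call and real tool integrations; the loop structure is the part that matters.

# A minimal sketch of "an LLM calling tools in a loop". llm_step() and TOOLS
# are hypothetical stand-ins for a real model call and real tool integrations.

def llm_step(history: list[dict]) -> dict:
    # Stand-in for a real model call. A real implementation would return either
    # {"type": "tool", "tool": name, "args": ...} or {"type": "final", "content": ...}.
    return {"type": "final", "content": "done"}

TOOLS = {
    "search":   lambda query: f"results for {query}",
    "run_code": lambda src: "stdout: ...",
}

def agent_loop(task: str, max_steps: int = 20) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = llm_step(history)
        if step["type"] == "final":
            return step["content"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[step["tool"]](step["args"])
        history.append({"role": "tool", "content": result})
    return "stopped: step budget exhausted"

What changed over the past year, on this account, is not the loop itself but the quality of the model behind llm_step() and the usefulness of what sits in TOOLS.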
Aravind Srinivas attributed the turning point specifically to tool capability. From o1 and especially o3 onward, he said, these models have become very good at "navigating tools". At the same time, the entire ecosystem is organizing itself around tools, MCPs, and connectors. By the end of last year, models had become clearly better at working with files and the CLI.
His judgment is that coding's success will spread to a broader world because once a model is good at operating the console, it suddenly gains the ability to enter almost all knowledge work.
The keywords here are not coding itself but the CLI, files, tools, and other interfaces. Much of the work in the software world is organized for humans through these abstract interfaces; once a model learns to use the same interfaces, it is as if it has obtained a "passport" into that work.