Five AI heavyweights jointly put forward six judgments.
At this week's Milken Institute Conference, five key figures, covering every layer of the AI supply chain, shared a stage.
Their discussion ranged widely: from AI bottlenecks to space data centers, from agents to physical AI, and on to a more fundamental question:
Could the entire technological architecture supporting AI development today be wrong from the very beginning?
These five individuals are:
ASML CEO Christophe Fouquet
ASML is the Dutch company with a near-monopoly on extreme ultraviolet (EUV) lithography machines. Without this equipment, manufacturing modern advanced chips is all but impossible.
Google Cloud COO Francis deSouza
He oversees one of the largest AI infrastructure investments in Google's history.
Applied Intuition co-founder and CEO Qasar Younis
This company is valued at approximately $15 billion and focuses on physical AI. It started with autonomous driving simulation and has now entered fields such as defense.
Perplexity Chief Commercial Officer Dmitry Shevelenko
Perplexity was originally an AI search company and is now evolving towards AI Agents and digital employees.
Logical Intelligence founder Eve Bodnia
A former quantum physicist, she is now building a startup that challenges the large-model architecture the AI industry generally relies on.
Earlier this year, Yann LeCun, Meta's former chief AI scientist, joined the company as founding chair of its technical research committee.
Here are some of the core viewpoints from the session. Enjoy!
The bottlenecks of AI are real
The AI boom is hitting very real physical limitations. These limitations are closer to the bottom of the industrial chain than many people imagine.
ASML CEO Fouquet was the first to clearly state this.
He said that global chip manufacturing is experiencing a huge acceleration.
Even so, he firmly believes the entire market will remain supply-constrained for the next two to three, or even three to five, years.
What does this mean?
That is to say, cloud giants like Google, Microsoft, Amazon, and Meta may not be able to get enough chips even if they are willing to spend money to buy them.
It's not a question of money; the supply chain simply can't keep up.
Google Cloud's deSouza used data to illustrate how strong the demand is.
He mentioned that Google Cloud's revenue last quarter exceeded $20 billion, up 63% year on year.
More striking still, its order backlog (revenue signed but not yet delivered) nearly doubled in a single quarter, rising from $250 billion to $460 billion.
He said, "The demand is real."
For Applied Intuition's Younis, the real bottleneck is not just chips.
His company focuses on autonomous systems in the real world, such as autonomous driving, drones, mining equipment, and defense vehicles.
For them, the biggest bottleneck is data.
This kind of data can't simply be synthesized in an office. Machines must be put out into the real world to run, make mistakes, and accumulate experience.
He said that you must obtain data from the real world.
His judgment is that for a long time, relying solely on synthetic simulation data cannot fully train models that can operate reliably in the real world.
The energy problem is also real
If chips are the first bottleneck, then energy is the second major problem that follows.
Google Cloud's deSouza confirmed that Google is indeed seriously researching space data centers.
The reason is straightforward: in space, more abundant energy can be obtained, especially solar energy.
But this is not easy.
Space is a vacuum environment without air convection, which means it will be very difficult to dissipate heat from data centers.
On Earth, data centers shed heat through air- and liquid-cooling systems;
in space, heat can only be released by radiation, a slower process that is far harder to engineer.
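The radiation constraint can be made concrete with the Stefan-Boltzmann law. A rough back-of-envelope sketch, in which the heat load, radiator temperature, and emissivity are all assumed values for illustration, not figures from the panel:

```python
# Back-of-envelope estimate of the radiator area a space data center
# would need, using the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# All inputs below are illustrative assumptions, not figures from the panel.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9      # assumed emissivity of the radiator surface
T_RADIATOR = 300.0    # assumed radiator temperature in kelvin (~27 C)
HEAT_LOAD_W = 1e6     # assumed 1 MW of waste heat to reject

# Power radiated per square meter (deep space is ~3 K, so it is negligible).
flux = EMISSIVITY * SIGMA * T_RADIATOR**4   # ~413 W per square meter

area_m2 = HEAT_LOAD_W / flux
print(f"Radiator area for 1 MW: {area_m2:.0f} m^2")
```

Under these assumptions a single megawatt of waste heat already requires on the order of 2,400 square meters of radiator, which is why radiative cooling is the hard engineering problem here; on Earth the same load is handled by far more compact air or liquid loops.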
However, Google still regards it as a direction worthy of serious exploration.
deSouza also put forward another key point: efficiency.
He believes that Google's advantage lies in its ability to collaboratively design the entire AI technology stack, from TPU chips, to models, and then to AI Agents.
The payoff is a significant increase in the useful computation produced per kilowatt-hour of electricity.
He said that running Gemini on TPU is more energy - efficient than any other configuration.
Because the chip design team knows what kind of computing power the model will need before the model is released, it can optimize in advance.
ASML's Fouquet later expressed a similar view.
He said that nothing comes without a price.
The current AI industry is in a very peculiar stage:
For strategic reasons, everyone is willing to invest huge amounts of capital. But more computing power means more energy, and energy definitely has a cost.
Maybe AI needs a different kind of intelligence architecture
While most AI companies are still discussing the scale, architecture, and inference efficiency of large - language models, Eve Bodnia of Logical Intelligence is taking a completely different route.
Her company focuses on what are called energy-based models, or EBMs for short.
These differ from today's mainstream large language models.
An LLM's core mechanism is predicting the next word, or token.
An EBM, by contrast, is closer to learning the rules behind the data.
Bodnia believes that this way is closer to how the human brain works.
She said, "Language is just the user interface between my brain and your brain. The real reasoning itself does not depend on any language."
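The contrast between the two paradigms can be caricatured in a few lines of code. This is a deliberately toy sketch: the candidate answers and the "rule" (parts must sum to 10) are invented for illustration and do not describe Logical Intelligence's actual models.

```python
# Toy contrast between next-token prediction and an energy-based view.
# Everything here is illustrative; it does not describe any real model.

# Autoregressive view: pick the most probable next token.
def next_token(probs: dict) -> str:
    return max(probs, key=probs.get)

# Energy-based view: score whole candidate answers with a scalar energy;
# lower energy means "more compatible with the learned rules of the data".
def energy(candidate: tuple) -> float:
    target_sum = 10  # hypothetical rule the model is assumed to have learned
    return (sum(candidate) - target_sum) ** 2

print(next_token({"cat": 0.6, "dog": 0.3, "car": 0.1}))  # prints "cat"

candidates = [(3, 9), (4, 4), (5, 5), (9, 2)]
best = min(candidates, key=energy)
print(best)  # prints (5, 5), the only candidate satisfying the rule
```

The point of the toy: the autoregressive view ranks local continuations, while the energy view evaluates whole candidates against an implicit rule, which is closer to the "understanding the rules behind the data" framing above.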
She also mentioned that their largest model has only 200 million parameters; by contrast, today's leading large language models often have hundreds of billions.
But she claims that this model can run thousands of times faster.
More importantly, it can update its knowledge as the data changes, rather than needing to be retrained from scratch every time.
She believes that in fields such as chip design and robotics, the system needs to understand physical rules, not just language patterns.
In these scenarios, EBMs may be more natural than large language models.
She gave an example: when you drive a car, you are not looking for patterns in a certain language. You are observing the surrounding environment, understanding the rules of the world, and then making decisions.
This view is worth noting.
Because the AI industry is asking an increasingly serious question: is simply continuing to scale up large language models really enough?
Agent permission and trust issues
Perplexity's Shevelenko mainly talked about the changes in Perplexity.
It started as an AI search product, but now it is evolving into a kind of digital employee.
Perplexity's new product, Perplexity Computer, is no longer just a tool used by knowledge workers; it is more like an employee that knowledge workers can direct.
He said, "When you wake up every morning, there are a hundred employees in your team. How will you make use of them?"
This idea is very attractive, but it immediately brings up a question: How to control these AI employees?
His answer: permissions must be fine-grained enough.
Enterprise administrators should be able not only to specify which connectors and tools AI agents can access, but also to define whether each permission is read-only or read-write.
This difference is very important.
Because once an Agent enters the company's system, it may not only view information, but also modify information, submit content, and trigger processes.
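A minimal sketch of the kind of fine-grained permission model described here. The connector names and class layout are hypothetical and are not Perplexity's actual API:

```python
# Sketch of a per-agent connector permission model (hypothetical design).
from dataclasses import dataclass, field
from enum import Enum

class Access(Enum):
    READ_ONLY = "read_only"
    READ_WRITE = "read_write"

@dataclass
class AgentPolicy:
    """Permissions for one agent, keyed by connector name."""
    permissions: dict = field(default_factory=dict)

    def grant(self, connector: str, access: Access) -> None:
        self.permissions[connector] = access

    def can_read(self, connector: str) -> bool:
        # Any granted access level allows viewing.
        return connector in self.permissions

    def can_write(self, connector: str) -> bool:
        # Only an explicit read-write grant allows modification.
        return self.permissions.get(connector) == Access.READ_WRITE

# An admin lets the agent view the CRM but only modify the calendar.
policy = AgentPolicy()
policy.grant("crm", Access.READ_ONLY)
policy.grant("calendar", Access.READ_WRITE)

print(policy.can_write("crm"))       # False: view only, no modifications
print(policy.can_write("calendar"))  # True
print(policy.can_read("email"))      # False: connector never granted at all
```

The read/write split is the crux: a read-only grant caps the blast radius of a misbehaving agent at information exposure, while read-write lets it change state and trigger processes.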
When the Comet agent in Perplexity Computer performs operations on behalf of users, it first shows its plan and requests approval.
Shevelenko admitted that some users may find this process troublesome, but he believes this step is very necessary.
He also mentioned that after joining the board of the investment bank Lazard, he came to better understand the conservatism of corporate CISOs, the chief information security officers.
For a company with a 180-year history built entirely on client trust, security and control are not optional.
Physical AI is related to sovereignty
Applied Intuition's Younis put forward a more geopolitical view:
The relationship between physical AI and national sovereignty is more complex than that of pure digital AI.
The Internet was first spread globally as a U.S. technology.
Many countries did not strongly resist it at the beginning. The real backlash often occurred at the application layer, such as when services like Uber and DoorDash began to affect the local offline economy.
But physical AI is different.
Autonomous vehicles, defense drones, mining equipment, and agricultural machines all exist directly in the real world.
They will move within a country's territory, collect data, and perform tasks, and the government cannot ignore them.
This will bring many questions: Is it safe? Who owns the data? Who really controls the system?
Younis said that almost every country will put forward similar demands:
We don't want an intelligent system controlled by a foreign country to exist in our territory in a physical form.
He also said that, today, fewer countries can actually deploy robotaxis than possess nuclear weapons.
ASML's Fouquet talked about China from another angle.
He believes that China's progress in upper-layer AI applications and models is real.
The release of DeepSeek earlier this year did scare some people in the industry.
But China's bottleneck lies at a more fundamental level.
Without EUV lithography machines, Chinese chip manufacturers will have difficulty manufacturing the most advanced semiconductors.
If the models run on less advanced hardware, the disadvantage compounds over time, however good the software is.
He said, "In the United States today, you have data, computing power, chips, and talent. China has done well in the upper layer of the technology stack, but still lacks some key elements at the bottom."
Does AI affect the critical thinking of the next generation?
Near the end of the discussion, someone at the scene asked an uncomfortable but important question:
Will the AI era affect the critical thinking ability of the next generation?
The answers from several guests were relatively optimistic.
This is not surprising, since all of their careers are bets on AI.
Google Cloud's deSouza said that more powerful tools may help humans solve some major problems that could not be solved in the past, such as neurological diseases, greenhouse gas removal, and the long - delayed upgrade of power grid infrastructure.
He said, AI should lead humanity to the next stage of creativity.
Perplexity's Shevelenko gave a more realistic answer.
He admitted that entry-level jobs may be disappearing. On the other hand, the threshold for an individual to accomplish something independently has never been lower than it is today.
He said that for those who have Perplexity Computer, the real limitation may no longer be resources, but your own curiosity and initiative.
Applied Intuition's Younis distinguished between knowledge work and physical labor.
He mentioned that the average age of American farmers has reached 58, and there has been a long - standing labor shortage in industries such as mining, long - distance freight, and agriculture, and the problem is getting worse.
This is not just because the wages are not high enough, but because many people simply don't want to do these jobs.
In these fields, physical AI may not be replacing those who are willing to work, but rather filling a labor gap that already exists and will continue to grow.
Overall, this dialogue reveals several very important trends:
AI is not just a software issue; it has become an issue related to chips, energy, data, the supply chain, and national sovereignty.
There is now growing suspicion in the industry that simply continuing to scale up large language models may not be the only answer.
Energy models, physical AI, embodied intelligence, and Agent permission systems all point to a new stage.
This article is from the WeChat official account "World Model Workshop", author: World Model Workshop. Republished by 36Kr with authorization.