
Google AI Chief: Beyond Chips, the US-China AI Race Hinges on Energy

Friends of 36Kr · 2025-07-08 08:51
Solar power stations could be deployed on the moon or in space to supply energy for AI computing.

On July 8th, news broke that Omar Shams, head of Google's agent business, recently appeared on the podcast "Manifold," where he was interviewed by Steve Hsu, a professor of theoretical physics at Michigan State University and founder of the large-model application developer Superfocus.ai. Shams previously founded the AI startup Mutable, which was later acquired by Google.

During the conversation, Shams offered views on key issues such as AI computing-power bottlenecks, the deployment of agent applications, talent competition, and shifts in the industry's structure. The core points of the conversation follow:

1. Two major bottlenecks in AI development: chips and energy. Shams pointed out that although chips are important, energy supply is the key constraint on AI's long-term development.

2. The U.S. power grid struggles to support AI's energy consumption. U.S. grid expansion is slow, while China's annual new power generation capacity now exceeds the combined capacity of the UK and France, revealing a gap in energy capability.

3. To break through the Earth's energy limit, Shams proposed deploying solar power plants on the moon or in space to provide computing power and energy for AI.

4. Shams said that although the growth of model performance follows a logarithmic law, there will be a "leap" at a specific scale, and the academic community needs to establish a new theory to understand this "phase transition" phenomenon.

5. AI agents are reshaping the structure of software development. AI tools are automating multi-step programming tasks, marginalizing junior engineers, and making teams more dependent on technical leads.

6. "Tacit knowledge" determines the success or failure of AI projects. Shams emphasized that in the AI field, what really determines the success or failure of a project is the intuition, experience, and judgment that are difficult to quantify. Although this kind of "tacit knowledge" is difficult to impart, it is the core competitiveness of top AI talents.

7. Shams is optimistic about AGI's future and reminds young people that knowledge alone is not enough to keep up with AI's evolution; only through practice and hands-on work can they seize the initiative.

The following are the latest highlights shared by Shams:

Is the U.S. power grid holding AI back? Build solar power plants on the moon to power AI

Question: I heard that the founding of OpenAI is related to DeepMind's sale to Google. Back then, Elon Musk and PayPal co-founder Luke Nosek hid in a closet at a party to call DeepMind founder Demis Hassabis, hoping to stop Google from acquiring DeepMind. They said they were willing to match Google's $600 million offer, but Hassabis flatly refused, saying, "Even if you raise enough money, you can't provide the computing resources that Google can." Later, worried that Google would monopolize AGI technology, Musk backed the creation of OpenAI. Do you know about this?

Shams: I haven't heard this story. But now I work at Alphabet, so there are some things I can't say too much about. Generally speaking, there are indeed many things worth discussing in this AI competition. The AI industry does face two major bottlenecks: chips and energy supply. After all, without sufficient power support, even the most powerful algorithms won't run.

Question: When it comes to the Sino-U.S. AI competition, these two problems surface. In chips, it's a battle between NVIDIA and Huawei, and the gap in power is even greater. It's very difficult to expand the supply capacity of the U.S. power grid, while China's power production is growing at an astonishing rate: its annual new power generation is equivalent to the annual generation of the entire UK or France, a level the U.S. would take seven years to add. China's power growth rate is now twice that of the U.S. So how to solve the energy supply problem will be key to AI's future development. How can the power gap be filled?

Shams: To be honest, it's basically impossible to upgrade the U.S. power grid; the various regulations make it impossible to speed up. I've even been thinking about moving power plants to space or the moon. It sounds like a fantasy, but Eric Schmidt, the former CEO of Google, is already taking action: Relativity Space, which he invested in, is researching this. He wants to move data centers to space, where the energy supply is not as constrained as on Earth.

Question: Will the energy come from solar panels or an orbiting reactor in space?

Shams: The energy will probably come mainly from solar rather than nuclear power. Nuclear energy is strictly restricted internationally, and an accident during a rocket launch could have extremely serious consequences, so it isn't suitable for use in space.

Question: Don't you think that to collect so much energy, you may need to deploy solar panels covering an area of 1 square kilometer or even 10 square kilometers in space?

Shams: Yes, it really is a crazy idea. I've done some calculations: to reach a power of one gigawatt, it may really require about 1 square kilometer of solar panels, or even more. My intuition tells me that sending them to space will take enormous resources. So it's indeed a huge challenge.

Moreover, solar panels can't be deployed in low Earth orbit. As I calculated before, astronomers would likely strongly oppose a 10-square-kilometer array there. So the most ideal deployment location is somewhere like a Lagrange point.

A so-called Lagrange point is a special position in a two-body system (such as the Sun and Earth) where a small object can maintain a stable position relative to both celestial bodies. Fortunately, the solar system offers several Lagrange points suitable for deploying solar panels.
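Shams's back-of-the-envelope figure can be sanity-checked with rough arithmetic. The sketch below assumes the solar constant above Earth's atmosphere (about 1361 W/m²) and a panel efficiency of roughly 20%; both are illustrative assumptions, not figures from the interview:

```python
# Rough check of the "1 GW needs ~1 km^2 of panels, or even more" estimate.
# Assumed values: solar constant in space ~1361 W/m^2, ~20% panel efficiency.

SOLAR_CONSTANT_W_PER_M2 = 1361.0   # irradiance above Earth's atmosphere
PANEL_EFFICIENCY = 0.20            # assumed conversion efficiency

def area_for_power(target_watts: float) -> float:
    """Panel area in km^2 needed to produce target_watts continuously."""
    area_m2 = target_watts / (SOLAR_CONSTANT_W_PER_M2 * PANEL_EFFICIENCY)
    return area_m2 / 1e6  # m^2 -> km^2

one_gigawatt = 1e9
print(f"{area_for_power(one_gigawatt):.1f} km^2")  # prints "3.7 km^2"
```

At these assumed values the answer comes out to a few square kilometers, which is consistent with Shams's "about 1 square kilometer, or even more."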

The unsung hero behind the AI programming revolution: he entered the game earlier than Copilot but remains little-known

Question: You founded the Mutable company and served as its founder and CEO for three years. This company mainly develops AI programming tools, right?

Shams: Yes, that's right. I founded the company in November 2021. We were one of the pioneers in AI development tools; Copilot started at almost the same time. The industry is now developing very fast: companies like Cursor that build AI development tools have already reached annual revenues of over $100 million, and many others have followed quickly.

Question: I know that Mutable was ahead of the curve on many concepts that are now very common. I remember that Andrej Karpathy, the "AI god", recently gave a keynote speech and talked about some concepts. Although he didn't mention you, I think these ideas were first proposed by you, including how to understand software in a certain context or how to generate better documentation from a company's codebase. I think you did a lot of interesting things at Mutable. Would you like to talk about them?

Shams: Indeed, many ideas were first proposed at Mutable and may have influenced today's products considerably. I've seen many open-source codebases; you can get up to speed by continuously learning and accumulating experience, but it's always a bit slow. So I thought: why not let AI help? Why not let AI write a Wikipedia-style article explaining the code? I came up with the name Auto Wiki. We built the project using recursive summarization to explain the code, and after launching in January 2024, it became very popular.

The most interesting technical part is actually what Karpathy mentioned in his speech: Auto Wiki became a very useful context-filling tool, because large language models (LLMs) benefit a great deal from it. In fact, I think we can train LLMs in an "anthropomorphic" way, since their training data basically comes from human data and experience.

So having these code-summarization functions is very helpful for LLMs, not only for retrieval (as in retrieval-augmented generation, RAG) but also for generation, especially during inference.
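The recursive summarization Shams describes can be sketched roughly as a tree of summaries folded upward until one top-level summary remains. In this sketch, `summarize` is a hypothetical stand-in for an LLM call (here it just truncates text so the example runs), and the chunking and fan-in are assumptions, not Mutable's actual pipeline:

```python
# Sketch of recursive summarization over a codebase. The summarize()
# primitive is a stub standing in for an LLM summarization call.

def summarize(text: str) -> str:
    # Hypothetical stand-in for an LLM call: truncate so the sketch runs.
    return text[:120]

def summarize_tree(chunks: list[str], fan_in: int = 4) -> str:
    """Summarize each leaf chunk, then recursively fold groups of
    summaries into higher-level summaries until one remains."""
    summaries = [summarize(c) for c in chunks]
    while len(summaries) > 1:
        summaries = [
            summarize("\n".join(summaries[i:i + fan_in]))
            for i in range(0, len(summaries), fan_in)
        ]
    return summaries[0]

files = ["def add(a, b): return a + b", "def mul(a, b): return a * b"]
doc = summarize_tree(files)  # one Wiki-style summary of the whole codebase
```

The appeal of the tree shape is that each summarization call sees only a bounded amount of text, so arbitrarily large codebases fit within a model's context window.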

Question: During the construction of Auto Wiki, did you need to manually correct some problems to further generate code?

Shams: We have a function that allows users to modify the generated content, but this function isn't widely used. In fact, you don't need to do that.

Indeed, AI-generated content sometimes contains so-called "hallucinations," but I think there are already techniques that deal with this problem effectively. Even with hallucinations, having Auto Wiki is still much better than not having it, especially for lightweight problems.

So, in this fully automated process, the model will first browse the entire codebase, understand it, and generate continuously updated documentation.

From a certain perspective, this is actually like reasoning: the model first generates content, and then when doing other tasks, it will refer to the previous reasoning results to deepen its understanding of the code and further generate content.

From Llama to AGI: Zuckerberg spent $100 million not on programmers but on "future seers"

Question: Why is Mark Zuckerberg willing to spend $100 million to recruit a person? What exactly does he see in this person? Can certain people really make a huge difference to a company?

Shams: Although I can't speak on Mark's behalf, and I'm not sure whether the $100-million figure is accurate, there are indeed reports that he poached several top talents from OpenAI. When it comes to talent, I think a company's success or failure often depends on team configuration and each person's role.

But from a certain perspective, a team is like the structure of an airplane: a powerful engine alone isn't enough, because without wings the plane can't fly. Likewise, relying solely on one genius won't work. There must be a reason Zuckerberg is willing to pay top talent so handsomely. This phenomenon is common among entrepreneurs: even founders with strong technical ability will fail if they lack communication and team-coordination skills, because investors usually don't understand the technology and mostly rely on intuition and feel when deciding.

Question: But does Zuckerberg also rely on intuition when building his superintelligence team?

Shams: I won't venture to comment on that, but I must admit Zuckerberg is a very outstanding founder. As for his decision-making, I think it's a very bold gamble; only a founder-CEO with super-voting rights like him would dare make it. After all, Meta has very abundant cash flow, and compared with some other money-burning projects, investing in AGI (artificial general intelligence) is a relatively wise choice. It's too early to judge; we can wait and see the results.

Question: If I had his resources, I'd also think: why not build the strongest team? I'm not questioning Zuckerberg's strategic decision, but I'm curious: is spending $100 million to recruit the so-called best talents really the optimal strategy? On the surface it seems reasonable, since only a limited number of people really know the field. But the opposite view is also valid: such talents aren't actually scarce.

Shams: There's indeed a subtle contradiction here. If there aren't any real "technical secrets" in the industry, then why pay sky-high prices for talent? My personal understanding is that what companies are buying isn't specific technologies but those "complex experiences" or "tacit knowledge".

The value brought by these talents is more reflected in the judgment and intuition they've accumulated in actual work, which can help the company avoid some common mistakes and take fewer detours. For example, Zuckerberg may have learned lessons from Meta's Llama project.

Developing AI is like building an airplane. Even if you master all the theories, you still need someone to tell you "which screw to tighten first". After all, the arrival of the AGI era is just around the corner. He'd rather pay more than miss this opportunity. You can understand it this way: Even if he spends a lot, Meta can afford it, and the potential return may be huge.

30% of programmers will be unemployed within two years, and the employment logic of enterprises has changed

Question: If someone tells you, "I see a video on social media every day saying that an agent can do everything for me, but the people I know actually haven't gained much value from agents", how would you answer? Where are agents actually useful now, and where is it just hype?

Shams: I think this field is developing very fast, but many advancements still need time to spread. Although economist Tyler Cowen once said that AGI is similar to electrification and will take 100 years to penetrate the economy, I don't fully agree with this view.

I think the speed may be faster than he imagined. Indeed, there are many regulatory obstacles, and many people need time to change their concepts and habits. But in my opinion, the penetration speed of AGI will be much faster than that of traditional technological revolutions.

Many classical physicists never accepted quantum mechanics in their lifetimes; it only became common sense after they died. A similar cognitive shift is happening in AI. Some traditional engineers still don't believe in AI's capabilities, which I find hard to understand.

Take the projects I've been involved in as an example. Tools like Cursor and GitHub Copilot have greatly changed the way programmers work. Now, even for startups, the standard for software quality has been significantly raised. Low-quality code can no longer easily pass review, and this pressure has pushed the entire industry forward.

In the legal field, AI companies like Harvey have already started to generate considerable profits. Progress in other industries may be slower, but in white-collar work, the introduction of AI assistants has become an inevitable trend. I'm not sure about this trend's specific impact on the job market, but I'm sure work processes will change greatly: these AI assistants will either assist humans or directly replace some jobs.

Question: There are reports that graduates majoring in computer science and software engineering in 2025 will face a relatively sluggish job market. The number of recruitment opportunities has decreased, and the increase in the employment rate is also very small. How much of this situation is due to the productivity improvement driven by AI?

Shams: It's hard to judge accurately, but I think the main reason is that technology companies are shrinking their recruitment scale.

A few years ago, the industry really entered a stage of crazy recruitment: almost anyone who knew a little programming could get an offer, but that bubble was obviously unsustainable. Even after the wave of layoffs, many companies didn't cut as deeply as they needed to, partly to preserve employee morale. As a result, many companies are now in the after-effects stage of over-hiring.

But from a more fundamental perspective, the disconnect between computer-science education and AI development is also a big problem. Most university courses still focus on traditional content such as discrete mathematics and algorithm theory, neglecting practical software development skills. This leaves many fresh graduates lacking engineering practice, and it's why I rarely hired fresh graduates in the past: they usually couldn't contribute much to the company.

Of course, there are exceptions. I once hired a 19-year-old high-school student from Princeton (who hadn't gone to college). He showed amazing ability through robotics projects. This shows that if you can demonstrate your abilities and complete projects, in some cases academic credentials are no longer that important.

Accelerators like Y Combinator care more about whether you can demonstrate practical ability and complete tasks independently. I think the ability to execute will matter more and more in the future.

Question: You think the reduction in software engineering positions is the result of multiple factors. On one hand, technology companies are shrinking after over-hiring in the post-pandemic era, and the high-interest-rate environment has exacerbated this trend. On the other hand, AI tools have indeed improved productivity. Is that right?

Shams: I think the impact of AI can't be ignored. Now, many tasks of junior engineers can be replaced by AI. The job demand is shifting towards team leaders (TL) or technical leads (TLM) who need to manage AI agents.

The problem now is that enterprises may no longer need so many junior engineers. After all, training new hires is often a net loss in the short term; previously they were hired mainly as a talent reserve.

In the initial stage, hiring new employees may have some negative impacts and