
Anthropic Co-founder Mann: Superintelligence Could Emerge as Early as 2028

Friends of 36Kr · 2025-07-24 13:11
Anthropic's co-founder says AI development is accelerating, with superintelligence expected to emerge by 2028. He emphasized safety evaluations and economic tests.

Recently, Benjamin Mann, a co-founder of Anthropic, shared his views on the future of AI and its profound impact on human society and employment on "Lenny's Podcast".

Benjamin Mann was a founding architect of OpenAI's GPT-3. In the conversation, Mann discussed several important topics: his motivation for leaving OpenAI to found Anthropic, his prediction of when AGI will emerge, and his case for using an economic Turing test as the standard for measuring whether AGI has been achieved. He also explored why the Scaling Law has accelerated rather than slowed, why AI safety and alignment are so important, and what the biggest bottlenecks in current AI research are.

The following are the key points shared by Mann in the interview:

  1. Although some believe AI has hit a bottleneck, Anthropic thinks technological progress is accelerating. The model iteration cycle has shortened from yearly to quarterly or even monthly. The Scaling Law remains valid; it only needs to shift from traditional pre-training to the application of reinforcement learning.
  2. Unlike traditional reinforcement learning from human feedback (RLHF), Anthropic advocates reinforcement learning from AI feedback (RLAIF), which lets AI improve itself, reduces human intervention, and scales better, though it also carries risks and challenges.
  3. Anthropic has established a five-level AI safety assessment system. Its current model sits at the third level, with real but not severe risks; the fifth level could lead to human extinction.
  4. Anthropic defines AGI as an AI system that can pass the "economic Turing test" across multiple high-value job positions. Once achieved, it will profoundly affect global GDP, social structure, and the labor market.
  5. Mann believes that when AI can pass a "blind test" (employers cannot distinguish humans from machines) in 50% of high-paying jobs, that marks the birth of transformative AI, which will trigger a global GDP restructuring and social transformation.
  6. From an economic perspective, Mann said, unemployment will be a mix of skill mismatch and job disappearance. Twenty years from now, the breakthrough of the technological singularity may change capitalism even more.
  7. AI may eliminate 20% of white-collar jobs, but the greater impact will come from fundamental changes in the nature of work and social structure. In the future, work will no longer rely solely on human labor; it will be a collaboration with AI that greatly enhances productivity.
  8. AI can already write 95% of code automatically, but it cannot surpass human creative thinking. Creativity will be humans' last line of defense.
  9. Based on the expansion of computing power and the Scaling Law, Mann predicts that superintelligence may appear as early as 2028, though its social impact will be delayed and unevenly distributed.

The following is the essence of Mann's exclusive interview:

01 The Battle for AI Talent: $100 Million Signing Bonuses and Anthropic's Mission

Anthropic's co-founder stated confidently that the team is committed to its mission of using AI to benefit humanity and is not afraid of high-paying poaching.

Question: Recently, Mark Zuckerberg, the CEO of Meta, offered signing bonuses and compensation packages worth $100 million to recruit top AI talent, poaching on a large scale from major AI labs. Has this affected Anthropic?

Mann: The value of AI technology and the speed at which the industry is developing have indeed led to this level of competition. However, Anthropic has been relatively unaffected, because our team members generally have a strong sense of mission.

Even when faced with sky-high offers, they usually say, "We may earn more at Meta, but at Anthropic, we can directly influence the future of humanity and promote the use of AI technology for the benefit of society." For me, this is an easy choice. Of course, everyone's life situation is different, and their choices are understandable. But personally, I would never leave because of this.

Question: Is the $100 million signing bonus real? Have you encountered any specific cases?

Mann: I'm completely sure it's real. Imagine the value a top researcher could bring: if their work improves inference efficiency by 1%-10%, the resulting commercial value could far exceed that figure. Spread over four years of salary, $100 million is actually "cost-effective" compared to the value such talent creates.

We are in an era of unprecedented scale: capital expenditure across the industry is currently about $300 billion a year, and it nearly doubles annually. $100 million is a drop in the bucket. At that exponential rate, the figure may reach trillions within a few years, and today's sky-high offers will look "conservative" by then.
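The arithmetic behind this argument can be sketched in a few lines. The numbers below are illustrative assumptions: the $300 billion annual capex figure comes from the interview, while the 1% gain and four-year horizon are just the low end of the ranges Mann mentions, not a real valuation.

```python
# Back-of-envelope sketch of the argument above, with illustrative numbers:
# a researcher whose work shaves even 1% off industry-wide compute spend
# "pays for" a $100M package many times over during a four-year tenure.

annual_compute_spend = 300e9   # ~industry capex cited in the interview (USD/year)
efficiency_gain = 0.01         # 1% inference-efficiency improvement (low end)
years = 4                      # the four-year horizon of the pay package

savings = annual_compute_spend * efficiency_gain * years
print(f"${savings / 1e9:.0f}B saved vs a $0.1B pay package")  # → $12B saved vs a $0.1B pay package
```

Even this conservative sketch ignores the yearly doubling of spend Mann describes, which would push the savings far higher.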

02 Has AI Development Hit a Bottleneck? New Models Are Being Released More Frequently

Question: Some in the industry hold that "AI development has hit a bottleneck", arguing that the intelligence gains of new models are no longer significant. But you seem to think the Scaling Law is still valid?

Mann: Similar views surface every six months, but they have never come true. In fact, I think technological progress is accelerating: the model iteration cycle has shortened from yearly to quarterly or even monthly, thanks to breakthroughs in post-training techniques.

As Dario Amodei (co-founder and CEO of Anthropic) put it, it's like the time-dilation effect as you approach the speed of light. Our progress is growing non-linearly, and the perception of a slowdown is actually an illusion.

The Scaling Law is still valid, but it needs to shift from traditional pre-training to the large-scale application of reinforcement learning. This resembles the development path of the semiconductor industry: when transistor miniaturization approached its limit, the industry turned to pursuing compute scale at the data-center level. The key is to dynamically adjust the definition of the technical route.

Question: New models are being released more and more frequently, so when people compare each new version with the previous one, the progress feels less obvious. Does this mean people are overlooking the cognitive bias caused by accelerated iteration?

Mann: To be fair to those who think progress is slowing: on some specific tasks, we have indeed approached the ceiling of intelligence the task requires. For example, in extracting information from structured form documents, current models perform almost perfectly, reaching 100% of the required capability.

In fact, there is a telling chart on "Our World in Data", the University of Oxford's online platform: whenever a new benchmark is released, model performance usually "destroys" it within 6-12 months. So the real bottleneck may be: how do we design more challenging benchmarks? How do we set more ambitious task goals? Only then can we make better use of existing tools and more accurately evaluate the "fluctuation period" of intelligence we are experiencing.

03 When Will AGI Arrive? Creativity Is the Last Line of Defense for Humans

Question: You have a unique definition and understanding of AGI?

Mann: I think AGI is a contested term, so I now prefer the concept of "transformative AI". It focuses on whether AI can have a substantial transformative impact on the economy and society, rather than whether it has all-around human-level intelligence.

Specifically, I advocate the economic Turing test as the measuring standard: when an AI agent can perform a job and the employer doesn't care whether it is a machine or a human, it passes the test for that position.

We can borrow the construction method of the purchasing-power-parity index and select a representative "basket of occupations". When AI can pass the economic Turing test in 50% of the high-value positions in this basket (weighted by salary), it can be regarded as transformative AI.

Although the specific threshold can be discussed, once this standard is reached, it will have a profound impact on the global GDP, social structure, and employment market. Social systems have inertia, and changes are usually gradual. But when this critical point arrives, it means the beginning of a new era.
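Mann's salary-weighted threshold can be made concrete with a minimal sketch. The occupations, salaries, and pass/fail flags below are invented illustration data; only the weighting scheme and the 50% threshold come from the interview.

```python
# Minimal sketch of the salary-weighted "economic Turing test" criterion
# described above. All basket data here is hypothetical illustration.

def transformative_ai_reached(basket, threshold=0.5):
    """basket: list of (annual_salary, passes_test) tuples for a
    representative 'basket of occupations'. Returns True once the
    salary-weighted share of positions AI can fill meets the threshold."""
    total_salary = sum(salary for salary, _ in basket)
    passed_salary = sum(salary for salary, passes in basket if passes)
    return passed_salary / total_salary >= threshold

basket = [
    (250_000, True),   # e.g. a role AI agents can already perform
    (180_000, False),  # e.g. a role still requiring a human
    (120_000, True),
    (300_000, False),
]
# Salary-weighted pass share: (250k + 120k) / 850k ≈ 0.44, below 50%
print(transformative_ai_reached(basket))  # → False
```

Weighting by salary rather than counting jobs means AI passing in a few high-value occupations moves the needle more than passing in many low-paid ones, which matches the GDP-centric framing of "transformative AI".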

Question: Amodei predicted that AI may eliminate 20% of white-collar jobs. Do you think society currently underestimates AI's impact on the workplace?

Mann: From an economic perspective, unemployment divides into skill-mismatch unemployment and job-disappearance unemployment, and the two forms will be intertwined in the future. But look ahead 20 years: by then we will have passed the technological singularity, and capitalism may look very different from today.

Ideally, if we successfully develop safe and controllable superintelligence, as Amodei describes in "Machines of Loving Grace", there will be countless "digital geniuses" running in data centers, and fields such as technology and education will develop explosively.

In an abundant world where labor is almost free and expert intelligence is readily available, the concept of "work" itself will be redefined. Of course, there will inevitably be a painful transition period from the current situation to this ideal state. Since it is called a "singularity", it means that this turning point will be incredibly fast and difficult to predict.

Question: Many people feel that "my job hasn't changed much". Where does this cognitive bias come from?

Mann: This is partly due to human limitations in understanding exponential growth. When observing an exponential curve, the initial changes are barely noticeable. It's not until after the inflection point that explosive growth occurs, and then it's almost vertical growth.

I personally realized this inflection point was approaching when GPT-2 was released in 2019, but it wasn't until ChatGPT appeared that the general public really felt the transformation coming. So I don't expect large-scale social transformation in the short term; instead, I expect a skeptical reaction. I think this skepticism is reasonable, since it rests on the traditional notion of linear progress.

The most significant changes are currently in two areas. One is customer service: agent tools such as Intercom's Fin can independently resolve 82% of routine problems without human intervention. The other is software development: our Claude Code can automatically generate 95% of basic code. More precisely, engineers can now produce 10-20 times more code, and team efficiency has improved qualitatively.

The essence of this change is productivity reconstruction. Human employees can focus their energy on more complex and difficult situations, which might have been ignored five years ago due to the inability of human labor to handle them in a timely manner. But now, AI helps employees handle a large number of simple tasks, allowing them more time to focus on more challenging problems.

I think in the short term labor productivity will rise sharply, and each person will be able to do far more. As a hiring manager at a fast-growing company, I've never heard anyone say, "We don't need more people." That may be an optimistic sign, but society must prepare for the coming structural adjustments. Jobs with low skill requirements or limited room for growth will, I think, be largely replaced.

Question: Facing the potential risk of job replacement by AI, what specific advice do you have for ordinary people?

Mann: I want to say that even someone like me, at the core of the AI industry, is at risk of being replaced by technological change. This uncertainty is something everyone has to face. But the key lies in how we respond: the most important thing is to stay open and keep learning, boldly try new tools, and truly understand how to get the most value from them.

Take programming as an example. Many people use AI assistants only as smarter auto-completion tools. But we've found that the people who really get value from Claude Code are the ones willing to use AI on harder problems. If they don't succeed the first time, they adjust their approach and try again. Data shows that after 3-4 iterative attempts, the success rate of problem-solving rises significantly.

This principle actually applies to all fields. Our legal and financial teams were initially inexperienced in using AI tools, but now they can use AI to complete more tasks, and their efficiency has increased several times. We'll continue to optimize these tools to make them easier to use and reduce the complexity of operations. The key is to overcome the initial adaptation period and maintain patience and a spirit of exploration.

The key to using AI more efficiently lies in mastering the correct interaction methods. Specifically, you can first tell AI what methods you've tried but failed, then avoid repeating the same attempts and explore new solutions. This approach often yields better results.

This reminds me of a widely circulated view: "What really threatens you is not AI itself, but your peers who are better at using AI". From our practice, teams that are good at using AI tools can indeed create greater value. This also explains why our company is still expanding its recruitment scale.

During new-employee training, someone asked me directly, "Since AI is so powerful, why do you still hire us?" My answer: we are in a critical transitional period of technological development. In terms of the exponential curve, we are still on the relatively flat initial stretch, some distance from the real technological explosion. In this special period, the value of excellent talent is all the more prominent: they can help the company and the AI evolve together. That is the fundamental reason we keep hiring.

Question: In this era of rapid AI development, what abilities do you focus on cultivating in your children?

Mann: My two daughters are 1 year old and 3 years old respectively. My eldest daughter can already interact naturally with the Alexa Plus intelligent assistant. She asks it to play her favorite nursery rhymes or asks some simple questions. This daily interaction with AI has become a part of her life.

In terms of educational philosophy, I particularly agree with what their school advocates: cultivating curiosity, creativity, and self-learning ability.

Every day I receive a growth report from the school. For example, a message from the teacher today made me very happy: "Your daughter had a small argument with her peer today. Although she was very emotional, she tried to express her feelings in words." In my opinion, the cultivation of emotional management and communication skills is very important.

In this AI era, specific factual knowledge will become easier and easier to obtain, but the ability to think independently, the wisdom to solve problems creatively, and the kindness in one's heart are the core competitiveness that will never be replaced by AI.

Question: In this era of rapid AI development, what key role will "creativity" play?

Mann: Although the term "creativity" isn't often emphasized, it is precisely the most precious core competency for the future. AI can indeed handle repetitive tasks efficiently, but it cannot truly match the human creative thinking that breaks out of existing frameworks.

What we need to cultivate is the ability to propose more possibilities when AI gives a standard answer and to find new solutions when AI encounters a bottleneck.

Just as with Claude: it provides the tools, but the real creative spark always comes from the human brain. This creative thinking will be the key criterion separating ordinary people from top talent.

04 Why Start a New Venture? Safety Is Not OpenAI's Top Priority

Anthropic's co-founder: we founded Anthropic because OpenAI did not prioritize AI safety.

Question: Let's look back at the founding process of Anthropic. At the end of 2020, your team decided to leave OpenAI and start a new venture. What's the story behind this decision?

Mann: As a core R&D member of the GPT-2 and GPT-3 projects and an author of the papers, I was deeply involved in the whole process from research to commercialization, including helping raise $1 billion in financing and driving the deployment of GPT-3 on Microsoft Azure.

There was an obvious struggle among the safety, research, and business factions within OpenAI. Whenever I heard management rationalize this fractured state, I worried deeply and felt it was not the right path.