
Silicon Valley was stunned by Meta's $100 million annual pay offers. Anthropic's co-founder pushed back, saying the team's mission is worth more than gold and its people can't be poached at any price.

InfoQ (极客邦科技) · 2025-07-22 15:24
When AI can independently complete more than 50% of economic tasks and receive corresponding compensation, the real inflection point of AGI (Artificial General Intelligence) will arrive.

Over the past three weeks, Silicon Valley has witnessed an unprecedented "$100 million talent war." As soon as Meta Superintelligence Labs (MSL) was established, it began offering sky-high pay packages to core talent from top AI companies such as OpenAI, Anthropic, and Google DeepMind: first-year compensation packages exceeding $100 million, capped at $300 million over four years, all to compete for a small number of "superintelligence engineers" who can define the future of AI.

Behind this high-stakes bet on technical talent is the tech giants' frantic race to corner AI talent. Meta reportedly spared no expense to poach the head of OpenAI's perception team, with average per-person compensation exceeding $100 million and a total cost running into billions of dollars.

Yet amid this capital frenzy, Benjamin Mann, co-founder of Anthropic, issued a sober warning on "Lenny's Podcast": the exponential progress of AI will reshape the job market, and roughly 20% of jobs may be redefined or even disappear.

As one of the core architects of GPT-3, Ben witnessed OpenAI's early development firsthand. But over his insistence on AI safety, he chose to lead his team out of the company and founded Anthropic, with alignment as its core mission.

In 2020, Ben left OpenAI along with its safety team and went on to co-found Anthropic, which is now reported to be valued at over $100 billion.

Today he leads product engineering at Anthropic, devoting most of his time and energy to making AI helpful, harmless, and honest.

Ben's prediction about AI's impact on employment rests on a key concept, the "Economic Turing Test": when AI can independently complete more than 50% of economic tasks and receive the corresponding compensation, the real inflection point of AGI (Artificial General Intelligence) will have arrived, and that moment may come between 2027 and 2028.

Ben's outlook is not purely rosy: he says bluntly that the spread of AI will reshape or even eliminate roughly 20% of jobs, especially white-collar work such as programming and customer service. But he also stresses that the change need not be grim: through safety and alignment research, AI can become a collaborative partner to humans rather than a threat. Anthropic's AI assistant Claude bears this out: its much-loved "gentle personality" is a direct product of AI safety research.

Looking to the future, Ben's suggestions are both pragmatic and forward-looking: rather than relying on the traditional education model, he teaches his children three key skills for the challenges of the AI era. The conversation not only reveals where AI technology is heading but also offers a deep reflection on how humans can ride the technological wave.

The following is a translation of the podcast, edited by InfoQ without altering the original meaning:

The Battle for AI Talent

Lenny: Ben, I'm glad to chat with you. I'd like to start with something very timely, about what happened this week. The story in the news right now is that Zuckerberg is poaching talent from the top AI labs, offering $100 million signing bonuses to top AI researchers. I assume you're dealing with this, and I'm curious: what have you seen inside Anthropic? What's your take on Meta's strategy? And how do you think things will develop from here?

Ben: I think it's a product of the times. The technology we're developing is extremely valuable, our company is growing rapidly, and so are many other companies in this field.

But I think Anthropic is affected much less than other companies, because the employees here are deeply mission-oriented. They stay because they see a greater meaning in the work: at a company like Meta, the best-case scenario is that you make money, but at Anthropic we have the chance to genuinely shape the future of humanity and promote the shared prosperity of AI and humans. For me, this is not a difficult choice at all.

Of course, I understand that everyone's situation is different, and some people may face more complex considerations. If someone really accepts those huge offers, I won't hold it against them. But if it were me, I would definitely not choose to leave.

Lenny: Regarding the offer from Meta, do you think the $100 million signing bonus is real? Have you ever seen an offer this high?

Ben: I'm quite sure the offers are real. Just consider an individual's impact on a company's trajectory. Take us as an example: what we're selling is in enormous demand, so if one person can deliver a 1%, 5%, or even 10% efficiency improvement to our inference stack, the value created is staggering. From a business perspective, $100 million in compensation over four years is actually quite cheap compared to that value.

We're operating at an unprecedented scale, and things will only get crazier. If you project the exponential growth in company spending: capital expenditure currently roughly doubles every year, and the whole industry's investment in this area has probably reached around $300 billion. In that context, $100 million is a drop in the bucket. A few years from now, after several more doublings, we'll be talking about trillions of dollars, and the numbers may become hard to even grasp.
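To make the doubling arithmetic concrete, here is a small illustrative sketch (not a forecast), assuming the roughly $300 billion starting figure and the one-doubling-per-year pace Ben describes; the starting year is a placeholder assumption:

```python
# Illustrative only: project industry AI capital expenditure under the
# "roughly doubles every year" assumption, starting from ~$300B.
# The starting year (2025) is an assumed placeholder, not from the interview.
capex_billions = 300.0
for year in range(2025, 2030):
    if capex_billions >= 1000:
        print(f"{year}: ~${capex_billions / 1000:.1f} trillion")
    else:
        print(f"{year}: ~${capex_billions:.0f} billion")
    capex_billions *= 2  # one doubling per year
# Three doublings already push the figure above $2 trillion,
# the "trillions of dollars" scale mentioned above.
```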

Lenny: Along those lines, many people worry about the pace of AI progress. We're hitting bottlenecks in many areas, and it feels like new models don't bring as obvious a leap in intelligence as before. But I know you don't agree, and that you don't believe we've reached a plateau in scaling. Can you talk about what you've observed, and what key factors you think people are overlooking?

Ben: It's kind of funny, because this story pops up about every six months, but it never really comes true. I really wish people would install a "bullshit detector" that goes off when they see such arguments. In fact, I think progress has been accelerating. Just look at the cadence of model releases: it used to be once a year, but now, with advances in post-training, we see new models every month or every three months.

There's an interesting time-compression effect here. Our CEO Dario Amodei compares it to a journey approaching the speed of light: one day on the spaceship is equivalent to five days on Earth. We're accelerating, and the time-dilation effect is becoming more and more pronounced. That may be one reason people feel progress has slowed. But if you look closely at the scaling laws, they still hold. It's true that we need to move from simple pre-training to methods like reinforcement learning to keep the scaling going, but it's like the semiconductor industry: the question stopped being how many transistors fit on a single chip and became how many computing units an entire data center can deploy. You just adjust the definition a little and keep going.

Surprisingly, this is one of the few relationships in the world that keeps holding across many orders of magnitude. In physics, plenty of laws break down well before spanning 15 orders of magnitude, so this is genuinely remarkable.
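For readers unfamiliar with the term, the "scaling laws" Ben refers to are usually written as a power law relating model loss to training compute. The form below is the generic one from the public scaling-law literature, not Anthropic's internal numbers, and the constants are placeholders:

$$ L(C) \approx L_{\infty} + \left(\frac{C_0}{C}\right)^{\alpha} $$

Here $C$ is training compute, $L_{\infty}$ an irreducible loss floor, $C_0$ a fitted constant, and $\alpha$ a small positive exponent. On a log-log plot this traces a straight line, which is why the same relationship can keep holding across many orders of magnitude of compute, the property Ben highlights above.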

Lenny: So what you're saying is essentially: because new models are released more frequently, people compare the latest generation with the previous one and feel the progress isn't significant. But if you look at the long-term trend, compared with the old rhythm of one major breakthrough per year, progress is actually faster now; we just see more intermediate versions, which creates the illusion. Is that right?

Ben: I can understand how the people saying progress has slowed feel. On some specific tasks we have indeed hit an intelligence ceiling: on simple tasks like extracting information from structured documents, we may already be at essentially 100% accuracy. We see this pattern all the time: a new benchmark is released and is completely saturated within 6 to 12 months. So the real limitation may be that we need better benchmarks and more ambitious tools to fully reveal the intelligence of current AI systems.

Defining AGI and the Economic Turing Test

Lenny: This is actually a great transition for us to think about AGI in a more concrete way and accurately define what AGI means. Can you elaborate on the definition of AGI?

Ben: I think the term "AGI" carries too much baggage, so we don't use the word much internally anymore. I prefer the concept of "transformative AI": the key isn't whether it can do everything humans can do, but whether it can genuinely change the socioeconomic structure.

There's a very specific way to measure this, called the "Economic Turing Test" (I didn't come up with it, but I really like the idea). Imagine: if you hire an agent to work for a month or three months and later find out it's actually a machine rather than a human, then it has passed the Economic Turing Test for that position.

We can broaden this, the way economists use a "basket of goods" to measure purchasing power or inflation. Define a "basket of jobs": if an AI system can pass the Economic Turing Test for 50% of the money-weighted jobs in that basket, we've entered the era of transformative AI. The exact threshold isn't that important; the point is that once it's crossed, we should expect major changes in global GDP growth, social structure, the job market, and more.
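To make the "money-weighted basket of jobs" idea concrete, here is a minimal sketch in Python. The jobs, wage weights, and pass/fail flags are entirely hypothetical placeholders; the calculation simply checks whether the money-weighted share of passed jobs crosses the 50% threshold described above:

```python
# Illustrative sketch of the Economic Turing Test threshold described above.
# All jobs, wage weights (in $B of annual wages), and pass flags are made up.
basket = [
    # (job, annual wages paid across the economy in $B, AI passes the test?)
    ("customer support agent", 50.0, True),
    ("software engineer",      300.0, True),
    ("trial lawyer",           80.0, False),
    ("nurse",                  250.0, False),
]

total_wages = sum(wages for _, wages, _ in basket)
passed_wages = sum(wages for _, wages, passed in basket if passed)
money_weighted_share = passed_wages / total_wages

print(f"Money-weighted share passed: {money_weighted_share:.0%}")
print("Transformative AI threshold crossed:", money_weighted_share >= 0.5)
```

With these placeholder numbers the share comes out just over 51%, so the hypothetical system would clear the threshold; the point of the sketch is only to show that the measure weights jobs by the money flowing through them, not by headcount.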

Social institutions are sticky, and change usually happens slowly. But when these breakthroughs really arrive, you'll know: a brand-new era has begun.

Lenny: Your CEO Dario Amodei recently talked about AI taking over a large share of white-collar jobs, potentially pushing unemployment up toward 20%. I know you hold an even more pointed view on this: that the actual impact of AI on the workplace may be far greater than people currently perceive. Can you talk specifically about what the public is overlooking, both the impact that's coming and the impact that's already happening?

Ben: From an economic perspective, unemployment comes in several forms: in one, workers lack the new skills the economy demands; in another, certain jobs are simply replaced outright. I think the reality will be a combination of the two. But if we look 20 years into the future, by which point we'll be long past the technological singularity, I can hardly imagine capitalism surviving in its current form.

If we successfully develop a safe and reliable superintelligence, then, as Dario described in "Machines of Loving Grace," a single data center could house an entire nation of geniuses, which would greatly accelerate progress in science, technology, education, and mathematics. But it also means we'll live in a world of extreme material abundance, where labor is almost free and any professional service can be obtained instantly. By then, the concept of "work" itself may change fundamentally.

We're currently in a difficult transition period: People are still working, and the capitalist system is still functioning, but the world 20 years from now will be completely different. The reason the technological singularity is called a "singularity" is that once we cross this point, the speed and depth of change will be beyond our imagination.

In the long run, in a world of material abundance, work itself may not be that important. But the key is how to smoothly navigate this transition period.

Lenny: I'd like to focus on a few aspects. First, although the media is full of various headlines about AI, most ordinary people may not have really felt these changes personally or witnessed the actual impact. This leads many people to think: "Maybe AI is really changing something, but for me personally, my job seems to be going on as usual, and nothing seems to have changed."

In this situation, what actual changes brought about by AI that have already occurred do you think most people are unaware of or misunderstand? In other words, in which areas has AI quietly changed the real face of the job market outside the public eye?

Ben: I think part of it is that humans are just bad at reasoning about exponential growth. On a graph, an exponential curve looks almost flat at the beginning, until the inflection point suddenly arrives and the curve shoots upward. That's exactly the situation we're in. I personally realized in 2019, when GPT-2 was released, that "this is the path to AGI," but many people didn't really feel the change until ChatGPT appeared.

Let me list a few areas changing fast. In customer service, Fin, the agent built by our partner Intercom, now resolves 82% of customer issues automatically, with no human intervention. In software development, about 95% of the code on our Claude Code team is generated by AI, which means a small team can produce 10 to 20 times as much code. And in knowledge work, complex tickets that would once have been abandoned can now be worked through in depth.

In the short term, we'll see a significant increase in productivity. I've never heard a hiring manager at a growing company say "I don't want to hire anyone anymore". This is the optimistic side. But for jobs with lower skill requirements, even if the overall economy is doing well, a large number of people will still face the risk of unemployment. Society needs to plan ahead to deal with this.

How to Deal with the Uncertainty Brought by AI

Lenny: Okay, I'd like to keep exploring this topic, but more importantly, give listeners some practical advice. In this era of rapid change, how can people create an advantage for themselves? When they hear about where AI is heading, many people's first reaction is: "This doesn't sound good. I need to prepare in advance."

I know you can't have all the answers, but for those who want to plan ahead and ensure that their careers and lives aren't affected by the AI wave, do you have any specific suggestions?

Ben: Even as someone at the center of the AI revolution, I can't fully escape the disruption to the way I work. This isn't simply a matter of jobs being replaced; the entire paradigm of work is undergoing a fundamental transformation, and that transformation will eventually touch everyone, including me, Lenny, and everyone listening.

During this transition period, I think the two most crucial abilities to develop are the ambition to keep aiming high and the ability to learn new tools quickly. People who cling to old tools will eventually be left behind. Take programming as an example: many people are still used to code auto-completion and simple Q&A, but truly effective users ask AI to perform system-level refactoring and are willing to try 3 or 4 times until they get the result they want. A full rewrite succeeds far more often than patching up old code.

And this change isn't limited to technical departments. Our legal team uses AI to annotate contracts, and our finance team uses AI to run BigQuery data analyses. These are tasks that used to require specialists but can now be handled efficiently with AI tools. There's some fear to overcome at the start, but stepping outside your comfort zone brings breakthrough gains.

Here are three specific suggestions:

Use tools in depth: Don't just scratch the surface. Immerse yourself in them as if they were your daily work environment.

Set higher goals: Break through self - imposed limitations. AI may achieve what you thought was impossible.

Keep trying multiple times: After a first failure, rephrase the question or simply repeat the attempt; because of sampling randomness, a retry can succeed where the first try didn't (a minimal sketch follows below).
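As one hedged illustration of the "keep trying multiple times" advice, here is a minimal sketch using the Anthropic Python SDK. The model name, prompt, and acceptance check are placeholders and this is not an official recipe; it simply re-asks the same question a few times and keeps the first acceptable answer:

```python
# Minimal sketch of the "retry a few times" habit described above.
# Requires the `anthropic` package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

def looks_acceptable(text: str) -> bool:
    # Placeholder acceptance check; in practice this could run tests,
    # lint the output, or just be a human glancing at the result.
    return "def " in text

prompt = "Rewrite this function to stream results instead of buffering them: ..."

answer = None
for attempt in range(4):  # try up to 3-4 times, as suggested above
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    text = message.content[0].text
    if looks_acceptable(text):
        answer = text
        break  # sampling randomness: a later attempt can simply succeed

print(answer or "No acceptable answer after 4 attempts")
```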

As for the fear of being "replaced," a more accurate way to put it is that, in the short term, the threat to you isn't AI itself but the colleague who is better at using AI. At our company, AI has raised the team's productivity, yet hiring has never slowed down. Newcomers often wonder: "If we're going to be replaced, why are we still hiring?" The answer is simple: we're still in the early stage of exponential growth, and excellent people remain the core driving force of change. Over the next few years it won't be jobs disappearing so much as job content being rebuilt, and that's exactly why we need more talent.

Lenny: Let's look at it from a different angle. As a practitioner at the forefront of AI, you've seen its development trajectory and potential impact firsthand. For your own children, knowing all these trends, which abilities will you focus on cultivating to help them succeed in an AI-dominated future?

Ben: Yes, I have two daughters, one is 1 year old and the other is 3. I've noticed that my 3-year-old can interact with Alexa quite naturally, which has given me