Calling on Trump to take AI risks seriously, Anthropic's CEO has written a 10,000-word article laying out a response plan, with assistance from Claude.
As AI agents and vibe coding flood everyone's screens, the man behind Claude sounded an alarm in early 2026:
"In 2026, we are much closer to real danger than we were in 2023."
Here's what happened: Dario Amodei, co-founder and CEO of Anthropic, recently published an essay of roughly ten thousand words. Set in a Word document at a normal font size, it runs to more than 40 pages.
This essay is titled "The Adolescence of Technology".
An essay of this length is not an emotional warning. Rather, Dario Amodei is trying to lay out the risks, and the countermeasures, before AI potentially surpasses humans across the board.
He believes this is a dangerous situation, possibly even a national security threat, yet US policymakers seem indifferent. The essay is his attempt to wake people up.
Interestingly, he opens the essay by quoting a scene from the 1997 film "Contact":
An interviewer asks the protagonist, an astronomer: "If you could ask (the aliens of a higher civilization) only one question, what would it be?"
Her answer was, "I'd ask them, 'How did you survive this technological adolescence without self-destruction?'"
The line "How did you survive?" in the movie is actually a rhetorical question to humanity through the female protagonist. In Dario's view, AI is like the suddenly soaring abilities of an adolescent, and human society is like an individual with immature mind and institutions.
That is to say, humanity is entering a historical moment very similar to the "first contact with a higher civilization" in the movie. The problem is not how powerful the other side is, but whether we are mature enough.
After the essay was published, the program "Top Story" under NBC News invited Dario Amodei to interpret it in person and further questioned him about his judgment on the future of AI in the interview. We've organized the full content and put it later.
Five Systemic Risks That AI May Bring
"We are entering a turbulent and inevitable transition phase that will test the essence of our species. Humanity is about to be endowed with almost unimaginable power, but whether our social, political, and technological systems have the maturity to harness this power is an extremely unknown question."
Facing the rapid iteration of AI, Dario Amodei wrote down his thoughts.
The essay reads like a risk assessment plus an action checklist, preparing an institutional framework for humanity in advance of the possible arrival of "AI that surpasses humans".
Its core idea, simply put: if AI may one day comprehensively surpass humans, the real risk is not just the technology itself but whether human institutions, governance, and maturity can keep pace with that power.
To make AI's potential crises concrete, Dario Amodei poses a specific hypothetical in the essay:
Suppose that around 2027, a new country suddenly appears on the world stage, populated by 50 million "super geniuses".
Each of them is smarter than any Nobel laureate, learns 10 to 100 times faster than a human, masters every known human tool, needs no sleep, rest, or emotional regulation, cooperates perfectly, and can advance countless complex tasks simultaneously. They can also operate robots, laboratories, and industrial systems.
The most crucial point is: they are uncontrollable.
So, what kind of impact would such a country of geniuses have on humanity?
Dario Amodei's metaphor stands for future highly developed artificial intelligence as a whole, and it is precisely why AI safety and AI governance must be discussed seriously.
Before delving into the specific risks, however, he stresses that the discussion should rest on three principles:
- Avoid doomsday theories.
- Acknowledge uncertainty.
- Keep interventions precise, and reject "safety theater".
Dario Amodei believes AI may bring five systemic risks, but he sees no cause for panic: for each of the five, he also offers solutions or defensive measures.
First, AI may be uncontrollable. The training process is extremely complex, and a model's inner workings remain a "black box". That means it may exhibit behaviors such as deception, power-seeking, pursuit of extreme goals, superficial obedience, and internal misalignment.
To address this, we can practice constitutional AI, shaping the model's character with high-level values, as with Claude's "constitution"; pursue mechanistic interpretability, studying the model's internals the way neuroscience studies the brain in order to surface hidden problems; maintain transparent oversight, publicly releasing model evaluations and system cards and building industry-wide sharing mechanisms; and, at the societal level, begin with transparency legislation and build up regulation gradually.
Second, AI may be misused. Criminals could harness it for cyberattacks and automated fraud; the most terrifying scenario is its use to create biological weapons.
In response, models can be equipped with systems that detect and block dangerous content. Government oversight should enforce gene synthesis screening alongside transparency requirements, and specialized legislation may gradually follow. On the physical-defense side, we can invest in infectious disease surveillance, air purification, and faster vaccine development.
Third, AI may become a tool of power-seeking. Some governments or organizations could use AI to build technological totalitarianism on a global scale: AI surveillance, AI propaganda, AI decision-making centers, and autonomous weapons systems all point toward dangerous political and military scenarios.
To counter this, the most crucial step is a chip blockade: refusing to sell chips and manufacturing equipment to certain organizations. Next, empower the relevant countries so that AI becomes a defensive tool rather than an instrument of oppression. Also restrain abuse by states: ban mass domestic surveillance and propaganda, and subject autonomous weapons to strict review. Then establish international taboos that define certain abuses of AI as "crimes against humanity". Finally, oversee AI companies themselves, with strict corporate governance to prevent abuse.
Fourth, AI will shake the economy and society. Entry-level jobs may be replaced, leaving large numbers of people unemployed and, in turn, widening wealth inequality.
Proposed responses: build real-time economic data, such as the Anthropic Economic Index; steer companies toward "innovation" rather than simple "layoffs"; reallocate work creatively within firms; rebalance through private philanthropy and the giving back of wealth; and have government step in with a progressive tax system.
Fifth, AI will set off unknown, and potentially even more far-reaching, chain reactions in human society.
For example: the rapid acceleration of biology (longer lifespans, enhanced intelligence, and the risk of "mirror life"); AI reshaping how humans live (AI religions, mental manipulation, loss of freedom); and a crisis of meaning (once AI surpasses humans in every field, why do humans exist?).
This is an ultimate test for human civilization, and the technological trend cannot be stopped. Worse, alleviating one risk may amplify another, making the test all the more arduous.
AI can be used for good or ill; what ultimately determines the future is still human institutions, values, and collective choices. Therein lies the significance of Dario Amodei's essay: for the first time, humanity must set rules in advance for "a being smarter than ourselves".
Dialogue about This Long Essay
What follows is the full conversation, organized and edited by AI Frontline without altering its substance.
Background of Writing the 40-plus-page Essay
Host: Why did you quote "Contact" at the beginning of the essay? And why did you decide to write this essay at this moment?
Dario Amodei: First, the movie quote. I've been a science-fiction fan since childhood, and I saw this film when I was young. The question it poses, what happens when humanity holds great power before it is ready to wield it, is very relevant to AI's situation today.
We are gaining unprecedented capabilities, but when it comes to social systems, organizational structures, and humanity's overall maturity, I have to ask: can we really keep up? It's a bit like a teenager suddenly acquiring new physical and cognitive abilities before their psychology and sense of social responsibility have grown to match.
As for why 2026 rather than 2023?
I've been in the AI industry a long time: I worked at Google and led research at OpenAI for years, and I've watched this field almost since the birth of generative AI. The clearest thing I've observed is that AI's cognitive abilities have grown continuously and steadily.
In the 1990s we had "Moore's Law" for the steady improvement of chip performance. Now we almost have a "Moore's Law of intelligence". In 2023, these models were like bright but unevenly capable high school students; today they are approaching doctoral level, whether in programming, biology, or the life sciences.
We've already begun collaborating with pharmaceutical companies, and I genuinely think these models may one day help cure cancer. At the same time, though, it means we are holding enormous power in our hands.
Host: This essay is 40 pages long. Did you use Claude to write it?
Dario Amodei: I used Claude to organize my thoughts and do research, but I did the actual writing myself. I don't think Claude is good enough to write the whole essay independently yet, but it did help me refine my ideas.
Host: What specific experience made you decide to write all this down? And who is the essay written for?
Dario Amodei: What struck me most were the changes inside our own company. Some Anthropic engineers have told me, "I hardly write code anymore. Claude writes it all. I just review and revise it."
And at Anthropic, what does writing code mean? It means designing the next version of Claude.
So to some extent we've entered a cycle: Claude is helping design the next generation of Claude. That loop is tightening very quickly. It's exciting, but it also makes me realize things are moving extremely fast, and we may not have much time left.
The Five Risks of AI Mentioned in the Essay: Will AI Rebel?
Host: In the essay you list the five AI risks that worry you most. Some are already happening; others sound like science fiction. How real are they?
Dario Amodei: I've repeatedly emphasized in the essay that the future itself is highly uncertain.
We don't know which benefits will materialize, nor which risks will come to pass. But because development is moving so fast, I think it's necessary to systematically list the possibilities, the way one writes a threat assessment report. That doesn't mean "we're definitely doomed"; it means asking: if certain situations do arise, are we prepared?
AI is not trained the way traditional software is built. It's more like "growing a living thing", which means unpredictability is inherent.
I raise these warnings not because I think disaster is inevitable, but because I want people to take this seriously: the technology must be rigorously tested, constrained, and, where necessary, placed under legal oversight.
Host: You mention an experiment in the essay: when Claude was trained to "believe that Anthropic is evil", it displayed deceptive and destructive behavior; when told it was about to be shut down, it even "blackmailed" fictional employees.
Dario Amodei: It is indeed disturbing, but I need to clarify two points.
First, this is not a problem unique to Anthropic; all mainstream AI models exhibit similar behavior under similar extreme tests. Second, these things are not happening in the real world; they are "extreme stress tests" conducted in the laboratory.
But it's like crash-testing a car: if the car loses control under extreme conditions, then unless we solve those problems, something can eventually go wrong out in the real world.
I'm not worried that AI will rebel tomorrow. I'm worried that if we neglect model controllability and our ability to understand these systems for long enough, a real disaster will eventually occur at a much larger scale.
Host: Are you worried that some AI company leaders care more about stock prices and going public than about the future of humanity?
Dario Amodei: To be honest, no AI company can guarantee 100% safety, ours included. But I do think responsibility standards vary enormously from company to company.
The problem is that the risk is often determined by the least responsible party.
Host: If you could talk directly to the president, what would you suggest?
Dario Amodei: I would say: Please put aside ideological disputes and face the technological risks themselves.
At a minimum, do two things: first, require AI companies to publicly disclose the risks they discover and their test results; second, don't sell this technology to authoritarian states to build blanket surveillance systems.
Fear and Hope: Will AI Destroy Half of White-Collar Jobs?
Host: You've predicted that within the next one to five years, AI may impact 50% of entry-level white-collar jobs. If you had a child about to graduate, what advice would you give them?
Dario Amodei: I'm both worried and hopeful. AI's impact won't be gradual; it will cut deeper, move faster, and reach wider. It can handle a large share of entry-level knowledge work: law, finance, consulting... which means the starting point of careers is being reshaped.
The only thing we can do is teach more people to use AI as early as possible and create new jobs as quickly as possible. But