
AlphaEvolve delivers a stunning one-year report card, and AI self-improvement is no longer science fiction.

新智元 · 2026-05-08 20:29
AlphaEvolve's one-year report card is stunning: modifying chips, solving math problems, optimizing power grids. Jeff Dean said, "The TPU brain is designing the next-generation TPU body." "AI creating AI" is no longer a science-fiction concept but a closing engineering loop.

In the blink of an eye, AlphaEvolve has been out for a year.

Just now, Google quietly released an amazing annual report card.

AlphaEvolve has accomplished an astonishing amount in a single year:

It helped Terence Tao solve mathematical problems, redesigned the circuits for quantum chips, optimized power grid dispatching, accelerated drug screening, and even directly modified the silicon wafer design of the next-generation TPU.

All of this points to one thing: AlphaEvolve is no longer a laboratory toy.

In just one year, this Gemini-driven evolutionary coding agent has gone from a proof of concept in a paper to part of Google's core infrastructure.
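The core loop behind such an agent is simple to sketch. Below is a minimal, illustrative Python version of the generate-evaluate-select cycle: the "mutation" here is random noise standing in for an LLM proposing code edits, and every name and number is invented for the example, not Google's actual API.

```python
import random

# Toy stand-in for an evolutionary coding loop: a "candidate" is a vector of
# parameters, a mutation is a random tweak (standing in for an LLM-proposed
# code edit), and the evaluator is an automated fitness score.

def evaluate(candidate):
    # Hypothetical fitness: negative squared distance to a target vector.
    target = [3.0, -1.0, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate, scale=0.5):
    # In a real system the mutation is a model rewriting code; here it is noise.
    return [c + random.gauss(0, scale) for c in candidate]

def evolve(pop_size=20, generations=200):
    population = [[random.uniform(-5, 5) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        survivors = scored[: pop_size // 4]  # elitism: keep the best quarter
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=evaluate)

random.seed(0)
best = evolve()
print(best, evaluate(best))
```

Elitism (keeping the best candidates unmutated) guarantees the top score never regresses between generations, which is why these loops reliably climb even with purely random mutations.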

As a netizen commented: This kind of recursive self-improvement is really crazy!

Fighting side by side with the world's top minds

Let's start with the most eye-opening part.

In the field of genomics, AlphaEvolve optimized Google's DeepConsensus model, directly reducing the error rate of DNA sequencing variant detection by 30%.

Aaron Wenger, a senior director at PacBio, commented that this means researchers may uncover previously hidden disease-causing mutations: the AI-optimized algorithm could help humans find new life-saving clues.

In the field of quantum computing, AlphaEvolve designed a new quantum circuit scheme for Google's Willow quantum processor, with an error rate 10 times lower than that of traditional optimization methods.

Note: not 10% lower, but 10 times lower. This single leap turned a batch of previously infeasible molecular simulation experiments into reality.

But what really set the field abuzz is mathematics.

AlphaEvolve collaborated with Terence Tao to tackle a classic mathematical problem proposed by Erdős.

There is no need to introduce Terence Tao: a Fields Medalist, a mathematics professor at UCLA, and one of the most brilliant mathematicians alive.

His evaluation is as follows: Tools like AlphaEvolve are providing mathematicians with "very useful new capabilities", especially in optimization problems. It can quickly test whether there are counterexamples to potential inequalities and verify extreme value conjectures. "It greatly improves our intuition about problems and makes it easier for us to find rigorous proofs."
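Tao's point about counterexample hunting is easy to illustrate: a few lines of random search can falsify a plausible-looking inequality before anyone invests in a proof attempt. The inequality below is a made-up example for illustration, not one from the actual collaboration.

```python
import random

# Conjecture (deliberately false, for illustration):
#   for all positive x, y:  x**2 + y**2 >= 3 * x * y
# AM-GM only guarantees x**2 + y**2 >= 2*x*y, so the constant 3 should fail.

def holds(x, y):
    return x**2 + y**2 >= 3 * x * y

def find_counterexample(trials=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.uniform(0.1, 10), rng.uniform(0.1, 10)
        if not holds(x, y):
            return x, y
    return None

cex = find_counterexample()
print(cex)  # prints the first violating pair found, or None
```

Any pair with x close to y violates the conjecture, so random search finds a counterexample almost immediately; this is the kind of cheap falsification Tao describes as sharpening intuition before a rigorous proof is attempted.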

An AI system that makes one of the greatest mathematicians in history sincerely say "very useful": that in itself is a historic signal.

In addition, AlphaEvolve improved best-known solutions to instances of the Traveling Salesman Problem (TSP) and raised record lower bounds for Ramsey numbers.

These are classic problems in combinatorics that generations of mathematicians have wrestled with for decades. An AI coding agent, through evolutionary search, reached solutions human intuition had never found.
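For a flavor of what search on a TSP instance looks like, here is a textbook 2-opt hill climb on a small random instance; this is a standard local-search baseline, not AlphaEvolve's actual method.

```python
import math
import random

# 2-opt local search for TSP: repeatedly reverse a segment of the tour and
# keep the change whenever it shortens the total length.

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts, passes=50):
    n = len(tour)
    best = tour[:]
    for _ in range(passes):
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                # Reverse the segment best[i:j+1] and test the new tour.
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, pts) < tour_length(best, pts):
                    best, improved = cand, True
        if not improved:
            break  # local optimum reached
    return best

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
start = list(range(30))
opt = two_opt(start, pts)
print(tour_length(start, pts), "->", tour_length(opt, pts))
```

Even this simple loop substantially shortens a random tour; systems like AlphaEvolve search over the algorithms themselves rather than over single tours, which is how they can improve best-known results on hard instances.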

Engineering front: AI starts to optimize its own "body"

If scientific research breakthroughs can still be classified as "smart tools", what AlphaEvolve has done in Google's internal infrastructure can no longer be simply summarized by the word "tool".

The most amazing one: AlphaEvolve proposed a "counterintuitive" circuit design scheme.

How counterintuitive is this scheme?

Google's chip engineers' first reaction would probably be "this is wrong" - but after running the test, they found that it was not only correct, but also more efficient than human-designed ones.

So Google made a decision: It directly integrated this AI-designed circuit into the silicon wafer of the next-generation TPU.

Jeff Dean, Google's chief scientist, personally endorsed this.

His exact words were: "AlphaEvolve starts optimizing from the hardware at the very bottom of our AI technology stack. The circuit design it proposed is so counterintuitive yet so efficient that it was directly integrated into the silicon wafer of the next-generation TPU. This is the latest example of the TPU brain helping to design the body of the next-generation TPU."

Note the significance of this statement: The TPU is the hardware for training Gemini, Gemini is the brain driving AlphaEvolve, and now AlphaEvolve is designing the next-generation TPU.

Business front: From the laboratory to real money

Through Google Cloud, AlphaEvolve has been implemented in multiple industries.

The fintech company Klarna used it to optimize its largest transformer model, doubling training speed while improving model quality.

The logistics company FM Logistic used it to optimize Traveling Salesman Problem route planning, raising efficiency by 10.4% and cutting annual mileage by 15,000 kilometers.

The computational chemistry company Schrödinger used it to accelerate the training and inference of molecular force fields by roughly 4 times, compressing the drug-screening cycle from months to days.

When AlphaEvolve was released a year ago, the biggest question in the circle was: Is this just an amazing demo or a truly usable system?

The report card after one year answers this question: It is not only usable, but has also penetrated deep into Google's most core infrastructure, from the silicon wafer of the chip to the database kernel, from quantum computing to the production environments of commercial customers.

But the most crucial achievement of AlphaEvolve actually lies in none of the above.

Let's read Jeff Dean's words again: "The TPU brain is designing the body of the next-generation TPU."

Translated into more straightforward language: The chips for training AI are being redesigned by AI itself.

After the new chips are made, they will train stronger AIs, and stronger AIs will design better chips - this is a closed loop.

AI creating AI: Recursive self-improvement

On the same day AlphaEvolve released its report card, IEEE Spectrum, one of the most authoritative outlets in engineering and technology, published a long feature: "Recursive Self-Improvement Edges Closer in AI Labs."

The term "recursive self-improvement (RSI)" has basically only appeared in two scenarios in the past decade: in the warning reports of AI safety researchers and in science fiction novels.

IEEE Spectrum pulled it out of these two scenarios with a whole feature article and put it on the table of engineering reality.

What really made this report go viral was the prediction given by Jack Clark, the co-founder of Anthropic, around the same time: By the end of 2028, there is a more than 60% probability that an AI system will be able to train its own next generation completely autonomously.

He wrote in his newsletter Import AI No. 455 that he spent weeks reading hundreds of public data sources and finally came to this conclusion.

He admitted that he "wasn't sure if society was ready".

This wasn't just a casual comment on Twitter. Clark is the co-founder of Anthropic and one of the most influential public intellectuals in the field of AI safety and policy.

When a person like him admits that "early signs have appeared", it is a sign in itself.

Now, three clues are on the table.

Anthropic has acknowledged that Claude Code writes most of the company's code, and Dario Amodei has publicly said engineer productivity rose by 20% to 40%.

In other words, a large part of the code for creating Claude was written by Claude itself.

On Google's side, AlphaEvolve is designing the chips for training itself.

Looking at the academic community, the AI Scientist system published in Nature in March 2026 has been able to autonomously complete the entire process of "coming up with ideas - conducting experiments - writing papers - peer review".

When AI can participate in improving the next generation of AI, a company's moat is no longer the number of model parameters, data scale, or computing power reserve - but the speed of self-evolution.

Of course, the IEEE Spectrum report also presented the opposite view.

Nathan Lambert from the Allen Institute for AI proposed the concept of "lossy self-improvement" - as AI systems become more and more complex, the flywheel of self-improvement may slow down due to increased friction, rather than accelerating infinitely.

Jason Weston and Jakob Foerster, researchers at Meta, argued that compared with pure self-improvement, "human-AI co-improvement" is a more realistic and safer path.