
Google AI has just cracked the "alien" problem, breaking decade-long records. It devised its own algorithms, astonishing a Nobel laureate.

新智元 | 2026-03-16 08:11
Google DeepMind has pulled off another major move: AlphaEvolve autonomously writes algorithms, raising the lower bounds of five classic Ramsey numbers in one go and breaking decade-old mathematical records! Nobel laureate Hassabis and Turing Award laureate LeCun have both given their thumbs up: AI is completely changing the way mathematical breakthroughs are achieved!

Just now, Google DeepMind has demonstrated another remarkable feat.

The system AlphaEvolve they developed has achieved a breakthrough in extreme combinatorics, improving the lower bounds of five classic Ramsey numbers all at once!

Paper link: https://arxiv.org/pdf/2603.09172

What's even more surprising is that this wasn't achieved by manually designing five different algorithms, but through a unified "meta-algorithm."

Deriving these search algorithms is extremely difficult, and the previous best results had stood for at least a decade. This time, DeepMind let a large model write the algorithms autonomously, shattering decade-old mathematical records!

After the DeepMind researchers announced the good news, CEO Hassabis quickly retweeted to celebrate. He said that this achievement of AlphaEvolve is another major milestone for AI in the field of mathematics!

Even Turing Award winner LeCun actively congratulated the team.

How difficult is the Ramsey number problem? It has driven even the greatest mathematicians to despair.

Famous mathematician Paul Erdős, who mentored a young Terence Tao, had a well-known thought experiment: if aliens demanded the value of R(5, 5) within a year or they would destroy Earth, humanity should marshal all its mathematicians and computers to find it; but if they demanded R(6, 6), our best option would be to try to destroy the aliens first.

For decades, Ramsey numbers have been one of the most stubborn problems in combinatorial mathematics. Mathematicians usually have to design a specialized search algorithm from scratch to make progress on a specific Ramsey number.

But now, AlphaEvolve has broken through five of them!

And this is just the tip of the iceberg of AlphaEvolve's capabilities.

Previously, it had already broken the 56-year-old record in matrix multiplication, optimized the scheduling of Google's data centers, and discovered structural simplification schemes in AI chip design that human engineers had overlooked.

When a system capable of automatically discovering algorithms can also optimize the computing infrastructure for its own training, there's no doubt that we're witnessing the birth of a new species.

The Numbers That Even Aliens Can't Calculate

About a hundred years ago, British logician Frank Ramsey proved the following conclusion.

In a group of six people, regardless of their relationships, one can always find three people who know each other or three people who don't know each other.

This simple and intuitive example is the earliest prototype of Ramsey theory.

In graph theory, R(r, s) is defined as the smallest integer n such that any undirected graph with n vertices necessarily satisfies at least one of the following conditions:

  • There exists a complete subgraph (clique) on r vertices, or
  • There exists an independent set of s vertices

For example, R(3, 3) = 6.

This means that any graph with six vertices must contain a triangle or an independent set of three vertices (three mutually non-adjacent points).

Currently, for some small parameters, such as (3, s) for s ≤ 9 and (4, s) for s = 4, 5, these Ramsey numbers have been computed exactly.

But for a large number of parameters, there is still a huge gap between the upper and lower bounds.

How to obtain the lower bound?

If we can construct a graph on n vertices that contains no clique of r vertices and no independent set of s vertices, then R(r, s) ≥ n + 1.
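This construction-based argument is easy to check by machine. Below is a minimal sketch (plain Python; the helper names are my own, not from the paper) that verifies the 5-cycle contains neither a triangle nor an independent set of three vertices, which certifies R(3, 3) ≥ 6:

```python
from itertools import combinations

def has_clique(adj, size):
    """Check whether some `size` vertices are pairwise adjacent."""
    n = len(adj)
    return any(all(adj[u][v] for u, v in combinations(group, 2))
               for group in combinations(range(n), size))

def has_independent_set(adj, size):
    """Check whether some `size` vertices are pairwise non-adjacent."""
    n = len(adj)
    return any(all(not adj[u][v] for u, v in combinations(group, 2))
               for group in combinations(range(n), size))

def is_ramsey_witness(adj, r, s):
    """A valid witness on n vertices proves R(r, s) >= n + 1."""
    return not has_clique(adj, r) and not has_independent_set(adj, s)

# The 5-cycle C5: vertex i is joined to its neighbors i-1 and i+1 (mod 5).
n = 5
c5 = [[(abs(i - j) % n) in (1, n - 1) for j in range(n)] for i in range(n)]

print(is_ramsey_witness(c5, 3, 3))  # True: C5 certifies R(3, 3) >= 6
```

The same brute-force check works for any small witness graph; for record-sized graphs, the real bottleneck is finding the witness, not verifying it.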

Smashing Five Decade-Old Records in One Go

This time, three DeepMind researchers published a paper announcing that AlphaEvolve has raised the lower bounds of five classic Ramsey numbers all at once.

  • R(3, 13): 60 → 61 (previous record stood for 11 years)
  • R(3, 18): 99 → 100 (previous record stood for exactly 20 years)
  • R(4, 13): 138 → 139 (previous record stood for 11 years)
  • R(4, 14): 147 → 148 (previous record stood for 11 years)
  • R(4, 15): 158 → 159 (previous record stood for 6 years)

Although each bound was raised by only 1, in the field of Ramsey numbers an improvement of 1 is harder won than an order-of-magnitude improvement in many other areas of mathematics.

More importantly, all five breakthroughs came from the same system.

In addition to these five new records, AlphaEvolve also matched the lower bounds of all exactly-known Ramsey numbers and the current best lower bounds for many other values, covering 28 R(r, s) values in total.

AlphaEvolve

Calculating Algorithms, Not Just Problems

So how did AlphaEvolve achieve this?

Simply put, it is an intelligent agent that uses large language models to evolve code. What it searches for is the search algorithms themselves.

This is the most crucial point in understanding this work.

The traditional approach is:

Human experts manually design a search algorithm (such as simulated annealing or tabu search) based on mathematical intuition and domain knowledge, then let the computer run it. The quality of the result depends entirely on the designer's skill.

AlphaEvolve automates this process. Its workflow is as follows:

Step 1: Maintain an "algorithm population."

At the beginning, the population contains only a trivial baseline program (perhaps one that simply returns an empty graph).

Step 2: Mutate the code using an LLM.

Select a well-performing algorithm from the population and let a large language model (Gemini) "mutate" its code: modify the search strategy, change the initialization method, add new heuristic rules.

Step 3: Execute and score.

Run the mutated program and see how large a valid graph it can construct. If the graph beats the current best record, give it a high score; if the graph is nearly valid (only a few violations), give partial credit as well. This steers the search toward the boundary.

Step 4: Evolutionary iteration.

Put the high-scoring algorithms back into the population and continue to select, mutate, and evaluate. Repeat this process.

This is the so-called "meta-search": searching the space of algorithms rather than the space of graphs.

In other words, what AlphaEvolve ultimately produces is not a graph, but a piece of code that can find a good graph.
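The four steps above can be sketched in Python. Everything below is illustrative, not DeepMind's code: the real AlphaEvolve mutates actual program source with Gemini, while this toy stands the LLM in with a random perturbation of the inner solver's parameters, and it targets a tiny fixed instance (a valid (3, 3)-graph on 5 vertices) so that it runs in seconds.

```python
import math
import random
from itertools import combinations

def violations(edges, n, r, s):
    """Score helper: count r-cliques plus s-vertex independent sets."""
    bad = 0
    for group in combinations(range(n), r):
        if all(frozenset(p) in edges for p in combinations(group, 2)):
            bad += 1
    for group in combinations(range(n), s):
        if all(frozenset(p) not in edges for p in combinations(group, 2)):
            bad += 1
    return bad

def run_solver(params, n=5, r=3, s=3):
    """Inner search program: simulated annealing over graphs, driven by params."""
    rng = random.Random(params["seed"])
    edges = set()
    cur = best = violations(edges, n, r, s)
    for _ in range(params["steps"]):
        e = frozenset(rng.sample(range(n), 2))
        edges ^= {e}                          # flip one edge
        new = violations(edges, n, r, s)
        if new <= cur or rng.random() < math.exp((cur - new) / params["temp"]):
            cur = new
            best = min(best, cur)
        else:
            edges ^= {e}                      # reject the move
    return best  # fewest violations seen; 0 means a valid witness was found

def mutate(params, rng):
    """Stand-in for the LLM: perturb the solver instead of rewriting its code."""
    return {"temp": max(0.05, params["temp"] * rng.uniform(0.5, 1.5)),
            "steps": max(20, params["steps"] + rng.randrange(-50, 51)),
            "seed": rng.randrange(10**6)}

def meta_search(generations=15, pop_size=4, seed=0):
    """Outer loop: evolve the solvers themselves, keeping the best performers."""
    rng = random.Random(seed)
    population = [{"temp": 1.0, "steps": 100, "seed": i} for i in range(pop_size)]
    population.sort(key=run_solver)           # fewer violations is better
    for _ in range(generations):
        child = mutate(population[0], rng)    # mutate a top solver
        population[-1] = child                # replace the worst
        population.sort(key=run_solver)
    return run_solver(population[0])

print(meta_search())  # 0 means some evolved solver found a valid (3, 3)-graph
```

In the real system the score also rewards graphs that exceed the best known size; here the instance is fixed, so the score is simply the number of remaining violations (lower is better).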

Four Algorithm Families Invented by AI

The paper shows the specific search algorithms that AlphaEvolve discovered for 28 R(r, s) values. These algorithms have very different styles but can be grouped into four families.

Family 1: Random Start, Brute-Force Annealing

The most basic approach. Start from a random graph and use simulated annealing to gradually eliminate violating structures (triangles or large independent sets).

Smaller values such as R(3, 5) and R(3, 7) use this method.

It's simple but not sufficient for large-scale problems.
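As a proof of concept, the sketch below (my own toy code, not from the paper) applies exactly this recipe, a random start plus simulated annealing with restarts, to a deliberately tiny target: a triangle-free 7-vertex graph with no independent set of 4 vertices, which certifies R(3, 4) ≥ 8 (the true value is 9).

```python
import math
import random
from itertools import combinations

def count_bad(edges, n, r, s):
    """Violations = number of r-cliques plus number of s-independent-sets."""
    bad = 0
    for g in combinations(range(n), r):
        if all(frozenset(p) in edges for p in combinations(g, 2)):
            bad += 1
    for g in combinations(range(n), s):
        if all(frozenset(p) not in edges for p in combinations(g, 2)):
            bad += 1
    return bad

def anneal_ramsey(n, r, s, seed=0, restarts=100, steps=3000):
    """Random-restart simulated annealing: flip one edge at a time,
    cooling the temperature so late moves become nearly greedy."""
    rng = random.Random(seed)
    pairs = [frozenset(p) for p in combinations(range(n), 2)]
    for _ in range(restarts):
        edges = {e for e in pairs if rng.random() < 0.5}  # random start
        cur = count_bad(edges, n, r, s)
        for t in range(steps):
            if cur == 0:
                return edges  # found a witness: proves R(r, s) >= n + 1
            temp = max(0.05, 1.5 * (1 - t / steps))       # cooling schedule
            e = rng.choice(pairs)
            edges ^= {e}                                  # flip one edge
            new = count_bad(edges, n, r, s)
            if new <= cur or rng.random() < math.exp((cur - new) / temp):
                cur = new
            else:
                edges ^= {e}                              # undo the flip
        if cur == 0:
            return edges
    return None  # budget exhausted without finding a witness

# A 7-vertex witness certifying R(3, 4) >= 8 (None if the budget runs out).
witness = anneal_ramsey(7, 3, 4)
```

For record-sized instances the same idea needs far better move selection and scoring, which is precisely what AlphaEvolve's evolved algorithms supply.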

Family 2: Algebraic Structure Seeding

Use algebraic graphs with deep mathematical backgrounds, such as Paley graphs, quadratic residue graphs, and cubic residue graphs, as starting points, and then perform