Google DeepMind: 32 random numbers predict the Earth's next 15 days in 1 minute
The era of weather forecasting has truly changed.
The newly released WeatherNext 2 from Google DeepMind upgrades weather forecasting to an hourly, near-real-time level.
It runs eight times faster than its predecessor, and its resolution has been improved to the hourly level. That is to say, instead of the traditional "it will rain tomorrow afternoon", it can be as detailed as "light rain from 2-3 pm tomorrow, intensifying from 3-4 pm, and gradually stopping from 5-6 pm".
Interestingly, it doesn't just give you one version of the forecast. Instead, it can generate dozens or even hundreds of possible weather evolution scenarios from the same input.
On a TPU, it can finish in one minute what traditional supercomputers take several hours to do.
As a result, it beats the previous generation of WeatherNext on 99.9% of forecast variables and lead times. It can also pick out the areas affected by extreme weather, such as heat waves and heavy rain, earlier.
So why does weather forecasting need to be so detailed?
Turn the model into a mini-Earth
First of all, in reality, many industries are closely tied to the weather.
The energy system relies on it to balance loads; urban management depends on it to schedule manpower; agriculture uses it to pace its work; logistics and airlines make daily decisions based on it.
Moreover, the atmosphere is essentially one huge chaotic machine: any small disturbance can shift where clouds move or where rain falls a few days later.
The traditional approach is to run many forecasts from a large number of slightly different initial conditions and then extract the most likely evolution from thousands of results.
However, this method consumes an enormous amount of computing power.
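To get a feel for what that looks like, here is a toy sketch in Python (using the classic Lorenz-63 chaos model as a stand-in, not code from any operational forecasting system): perturb the starting state slightly, integrate many times, and see how far apart the members end up.

```python
# Toy illustration of an initial-condition ensemble: perturb the starting
# state slightly, integrate a chaotic system many times, and summarize the
# spread of outcomes. Real numerical weather prediction does this with full
# atmospheric models, which is what makes it so expensive.
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 system, a classic chaotic toy model."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

rng = np.random.default_rng(0)
base_state = np.array([1.0, 1.0, 1.0])
n_members, n_steps = 50, 2000

finals = []
for _ in range(n_members):
    state = base_state + 1e-3 * rng.standard_normal(3)  # tiny perturbation
    for _ in range(n_steps):
        state = lorenz63_step(state)
    finals.append(state)

finals = np.array(finals)
print("ensemble mean:", finals.mean(axis=0))
print("ensemble spread (std):", finals.std(axis=0))  # chaos amplifies tiny differences
```

Even perturbations of one part in a thousand leave the members in very different states after enough steps, which is exactly why so many runs are needed.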
The key to making WeatherNext 2 both fast and accurate is FGN (Functional Generative Networks), proposed by Google DeepMind.
The idea behind FGN is very different. It doesn't pile on more physical equations, nor does it simulate the weather directly. Instead, by injecting a slight but globally consistent random perturbation into the model itself, it turns the model into a changing mini-Earth.
More specifically, FGN feeds a small 32-dimensional random vector, that is, 32 random numbers, into each forecast. This vector passes through every layer of the model and modulates its internal state, so that the model naturally generates a complete future weather field.
One set of random numbers represents one future, and another set represents a different future.
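The mechanism can be pictured with a minimal toy sketch (the layer sizes, the scale-and-shift modulation, and the random weights below are illustrative assumptions, not the actual FGN architecture): one 32-dimensional noise vector modulates every hidden unit, so a single draw shapes the whole forecast consistently.

```python
# Illustrative toy only, not the FGN architecture: a single 32-dimensional
# noise vector is mapped to scale/shift parameters that modulate the hidden
# layer, so one draw of noise influences the entire output field coherently
# instead of adding independent noise at each grid point.
import numpy as np

rng = np.random.default_rng(1)

NOISE_DIM = 32      # the "32 random numbers"
HIDDEN = 64         # hypothetical hidden width
N_GRID = 1000       # stand-in for the millions of grid-point values

# Randomly initialized toy weights (a real model would be trained).
W_in = rng.standard_normal((N_GRID, HIDDEN)) * 0.01
W_hidden = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((HIDDEN, N_GRID)) * 0.01
W_noise = rng.standard_normal((NOISE_DIM, 2 * HIDDEN)) * 0.1

def forecast(current_state, noise):
    """Map today's state plus one noise vector to one sampled future state."""
    scale, shift = np.split(noise @ W_noise, 2)           # noise -> modulation
    h = np.tanh(current_state @ W_in)
    h = np.tanh((h * (1.0 + scale) + shift) @ W_hidden)   # noise touches every unit
    return h @ W_out

state_today = rng.standard_normal(N_GRID)
future_a = forecast(state_today, rng.standard_normal(NOISE_DIM))
future_b = forecast(state_today, rng.standard_normal(NOISE_DIM))
print("two noise draws give different futures:", not np.allclose(future_a, future_b))
```

Sampling a new 32-number vector for each run is how dozens or even hundreds of scenarios can be generated from the same input.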
This is because FGN makes the model itself a samplable random function, spreading low-dimensional noise into a globally consistent pattern of variation through its internal structure. During training, it only optimizes a per-grid-point error, the CRPS (continuous ranked probability score). But to reduce the errors at all points simultaneously, the model is forced to learn the structural regularities of the weather itself, and so it spontaneously produces high-dimensional spatial correlations.
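CRPS can be estimated at each grid point directly from ensemble samples. Below is the standard sample estimator as a minimal sketch (a generic formula, not DeepMind's training code):

```python
# Minimal sketch of the per-point CRPS estimate for an ensemble:
# CRPS ~= E|X - y| - 0.5 * E|X - X'|, where X, X' are ensemble samples
# and y is the observed value. Lower is better; it rewards ensembles that
# are both close to the truth and honestly spread out.
import numpy as np

def crps_ensemble(samples, observation):
    samples = np.asarray(samples, dtype=float)
    term_obs = np.mean(np.abs(samples - observation))
    term_pairs = np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term_obs - 0.5 * term_pairs

# A well-spread ensemble around the truth scores better (lower) than an
# overconfident ensemble that misses.
truth = 21.0
print(crps_ensemble([20.0, 20.5, 21.0, 21.5, 22.0], truth))   # ~0.2
print(crps_ensemble([24.0, 23.9, 24.1, 24.0, 24.05], truth))  # ~3.0
```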
That is why 32 numbers can ultimately generate a global weather state of up to 87 million dimensions that is both coherent and consistent with the physical structure of the atmosphere.
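As a rough sanity check on where a number of that size could come from (a back-of-the-envelope calculation assuming a 0.25° global grid with 6 surface variables and 6 atmospheric variables on 13 pressure levels, the setup DeepMind described for the earlier GenCast model; the article itself does not break the figure down):

```python
# Back-of-the-envelope only: under the assumed GenCast-style setup, the
# state size lands in the same ballpark as the ~87 million quoted above.
lat_points = 721        # 0.25-degree latitude grid, pole to pole
lon_points = 1440       # 0.25-degree longitude grid
surface_vars = 6        # assumed number of surface variables
atmos_vars = 6          # assumed number of atmospheric variables
pressure_levels = 13    # assumed number of pressure levels

fields = surface_vars + atmos_vars * pressure_levels     # 84 fields per grid point
print(f"{lat_points * lon_points * fields:,} values per forecast state")  # ~87 million
```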
This seemingly simple, even crude, method is also more accurate. FGN's overall performance surpasses GenCast, DeepMind's previously strongest model: lower prediction errors, better probabilistic scores, a more natural spatial structure, and better-coordinated relationships among the wind, temperature, and height fields. The width of its probability distribution is also more reasonable, neither over-contracting nor over-diverging.
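A standard way to check that an ensemble neither over-contracts nor over-diverges is the spread-skill diagnostic: compare the ensemble's spread with the error of its mean (a generic check, not a result from the paper):

```python
# Generic spread-skill check: for a well-calibrated ensemble, the spread
# should roughly match the RMSE of the ensemble mean. A ratio well below 1
# means over-contracted (overconfident); well above 1 means over-diverged.
import numpy as np

def spread_skill_ratio(ensemble, truth):
    """ensemble: (members, points); truth: (points,). A ratio near 1 is well calibrated."""
    ensemble = np.asarray(ensemble, dtype=float)
    rmse = np.sqrt(np.mean((ensemble.mean(axis=0) - truth) ** 2))
    spread = np.sqrt(np.mean(ensemble.var(axis=0, ddof=1)))
    return spread / rmse

rng = np.random.default_rng(2)
signal = rng.standard_normal(10_000)                   # predictable part of the weather
truth = signal + rng.standard_normal(10_000)           # truth = signal + unpredictable noise
members = signal + rng.standard_normal((50, 10_000))   # each member draws its own noise
print(f"spread/skill ratio: {spread_skill_ratio(members, truth):.2f}")  # close to 1.0
```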
For extreme weather, the gain in early warning is particularly clear. For typhoon tracks, for example, FGN reaches the same accuracy about 24 hours earlier than GenCast, which is crucial for emergency decision-making and traffic scheduling.
Moreover, on one TPU, it takes less than one minute to generate a 15-day global forecast, which is about eight times faster than before.
Of course, in real forecasts the FGN approach may occasionally produce slight artifacts in high-frequency variables.
But overall, FGN is stable, efficient, and practical enough.
Paper link: https://arxiv.org/abs/2506.10772
Reference link: https://x.com/GoogleDeepMind/status/1990435105408418253
This article is from the WeChat public account "QbitAI". Author: Wen Le. Republished by 36Kr with permission.