Farewell to the era of memorizing formulas. A Polish physicist unifies mathematics with an operator.
How many mathematical formulas have you memorized?
sin, cos, tan, ln, log, sqrt, exponents, power operations, hyperbolic functions...
From junior high school to university, these symbols have grown wildly in your notebooks like weeds. Each one has its own definition, its own graph, and its own set of properties.
You thought that's what math is like - the more you learn, the more there is, and the more complex it gets. There are endless formulas to memorize and endless problem sets to grind through.
But what if I told you that all of these things, without exception, are variations of the same formula?
In April 2026, Andrzej Odrzywołek, a physicist at Jagiellonian University in Poland, published a paper whose title translates as "Generating All Elementary Functions with a Single Binary Operator."
Paper link: https://arxiv.org/pdf/2603.21852
This operator looks like this: eml(x, y) = eˣ − ln(y).
That's it. Just this one line. Exponent minus logarithm.
Then he proved something striking: keep nesting this operator inside itself - like a set of Russian nesting dolls - and you can derive every button on a scientific calculator.
Trigonometric functions, logarithms, square roots, power operations, hyperbolic functions, and even π itself and the constant e can "grow" from it.
In theory, the dozens of buttons on a scientific calculator could be replaced by just one.
Why can one operator rule all functions?
Let's start with the simplest example.
To get the exponential function eˣ? Just set y to 1: eml(x,1)=eˣ−ln(1)=eˣ−0=eˣ.
It's that simple.
To get the constant e itself? Set both inputs to 1:
eml(1,1)=e¹−ln(1)=e−0=e
The constant e just pops out.
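These identities are easy to check numerically. Here is a minimal Python sketch - the name `eml` mirrors the paper's operator, but the script itself is just an illustration, not code from the paper:

```python
import math

def eml(x, y):
    """The paper's binary operator: e^x - ln(y)."""
    return math.exp(x) - math.log(y)

# Exponential function: set y = 1, so the ln term vanishes.
print(eml(2.0, 1.0))   # equals e^2
print(math.exp(2.0))

# The constant e: set both inputs to 1.
print(eml(1.0, 1.0))   # equals e
print(math.e)
```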
To get the logarithmic function ln(x)? This requires three levels of nesting:
First, use eml to construct an intermediate value, then feed it back into eml to let the exponent and logarithm cancel each other out. What remains in the end is ln(x)=eml(1,eml(eml(1,x),1)).
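The three-level nesting can be verified directly; a quick sketch (a numerical check, not code from the paper, with `eml` redefined so the snippet stands alone):

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

def ln_via_eml(x):
    # eml(1, x) = e - ln(x); feeding that into eml(., 1) gives e^(e - ln x);
    # the outer eml(1, .) turns it back into e - (e - ln x) = ln(x).
    return eml(1.0, eml(eml(1.0, x), 1.0))

for x in (0.5, 1.0, 2.0, 10.0):
    print(ln_via_eml(x), math.log(x))   # the two columns agree
```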
And to get π?
Five levels of nesting. It runs Euler's identity e^(iπ) = −1 in reverse: first construct −1, then take its complex logarithm, and π emerges.
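The paper's exact five-level eml expression is not reproduced here; the sketch below only checks the underlying identity it exploits, namely that the principal complex logarithm of −1 is iπ:

```python
import cmath

# Euler's identity e^{i*pi} = -1, run backwards: log(-1) = i*pi,
# so pi is the imaginary part of the principal complex logarithm of -1.
pi_candidate = cmath.log(-1).imag
print(pi_candidate)   # 3.141592653589793
```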
To get the imaginary unit i? Six levels.
To do addition x+y? This seemingly most basic operation actually requires five levels of nesting to express.
Since eml's underlying language is exponents and logarithms, addition first has to be translated into that "exponent-logarithm" dialect, as ln(eˣ·eʸ), and then assembled from nested eml calls.
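Again skipping the paper's exact five-level expression, here is a sketch of the translation step itself: compute x + y as ln(eˣ·eʸ), with the final logarithm expressed through eml's three-level nesting (the helper name `add_via_logs` is made up for illustration):

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

def add_via_logs(x, y):
    # Addition in the "exponent-logarithm dialect": x + y = ln(e^x * e^y).
    product = math.exp(x) * math.exp(y)
    # ln(z) via the three-level eml nesting: eml(1, eml(eml(1, z), 1)).
    return eml(1.0, eml(eml(1.0, product), 1.0))

print(add_via_logs(1.5, 2.25))   # approximately 3.75
```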
This sounds like a precise mathematical magic trick. But Andrzej Odrzywołek showed through exhaustive computational search that this is no isolated coincidence but systematic completeness.
Every elementary function can find its place on this "nesting tree."
The "NAND gate" in the mathematical world. Why did programmers react the most?
If you're a programmer, you must have heard of this classic fact: In theory, all the logic of modern computers only requires one type of logic gate - the NAND gate.
AND, OR, NOT, XOR... all Boolean logic operations can be implemented using different combinations of NAND gates.
The NAND gate is the "universal building block" in the digital world.
Essentially, an entire CPU is just an arrangement and combination of billions of NAND gates.
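For readers who want to see this concretely, here is the textbook construction of the other gates from NAND alone (a standard sketch, unrelated to the paper's own code):

```python
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Every other Boolean gate falls out of NAND compositions:
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustively verify all four gates against their truth tables.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
print("all gates recovered from NAND")
```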
The eml operator discovered by Andrzej Odrzywołek does something completely analogous - except its battlefield is not the discrete 0s and 1s, but the entire territory of continuous mathematics.
As The Register put it: in digital hardware, a single two-input gate suffices to implement all Boolean logic, and now continuous mathematics may have a similar primitive of its own.
Some people immediately thought of the Y combinator - another classic concept in computer science that "creates everything from nothing."
What's more interesting are the skeptical voices. Some people asked: Didn't the hypergeometric function already unify these functions? What are the boundaries of this discovery?
Some people also asked: Is the complexity of performing operations with the eml gate really lower than traditional methods?
These discussions precisely illustrate the value of this discovery - it's not answering an old question, but posing a new one: How small can the minimum axiom set of continuous mathematics be?
The counter-intuitive truth: the endpoint of mathematics is extreme minimalism
The mathematics education we received since childhood has given us a deep-rooted impression: mathematics is a building that keeps growing upwards, and each floor is more complex than the one below.
From addition, subtraction, multiplication, and division to equations, from equations to functions, from functions to calculus: knowledge keeps accumulating, and the stack of formulas to memorize only grows thicker.
But the discovery of the eml operator is like a scalpel, cutting through the outer wall of this building and revealing the steel framework inside - and that framework is incredibly thin.
The vast family of functions that have tormented countless students for three hundred years: trigonometric functions, inverse trigonometric functions, hyperbolic functions, logarithmic functions, power functions...
They seem to be independent of each other, each with its own formula and properties.
But now we know that they are all descendants of the same "super ancestor", all products of the single line eml(x, y) = eˣ − ln(y) folding into itself in different ways.
This doesn't mean that math has become simpler.
This means that we've finally realized: Math has never been that complex.
This is very similar to the development of physics.
Before Newton, people thought that motion in the heavens and motion on the ground followed two completely different sets of rules, until the law of universal gravitation unified them. Before Maxwell, electricity and magnetism were two completely unrelated phenomena, until his electromagnetic equations unified them. Now the world of elementary mathematical functions has its own "theory of everything."
Andrzej Odrzywołek, a physicist from Jagiellonian University in Poland
If you're willing to push your imagination a little further, this discovery will take you to some interesting places.
If the mathematical underlying logic of the universe can really be described by some kind of "source code," then this source code might be much shorter than anyone imagined.
The eml operator implies a possibility: The universe - no matter how you define this word - might be an extreme minimalist.
It didn't create a hundred tools to build the world. Instead, it just wrote a single function and let it self - fold and self - nest, eventually giving rise to trigonometric functions, logarithms, calculus, and even the entire mathematical language of the physical world.
This idea has a special meaning in the AI era.
Current large language models like GPT, Claude, and Gemini follow a "brute force works miracles" strategy: hundreds of billions of parameters, massive amounts of data, and huge computing power to exhaust every pattern of human knowledge.
This is a path from complexity to complexity. And Andrzej Odrzywołek's discovery points to the opposite path: From extreme minimalism to everything.
One operator, one constant, infinite nesting, and everything emerges.
It's hard not to wonder: if the underlying layer of continuous mathematics can be streamlined this far, could future AI architectures also admit some kind of "single-operator" reconstruction at their foundation? Is there an undiscovered computational primitive that could achieve with very few parameters what now requires hundreds of billions?
No one knows the answer. But this question itself might be more important than the answer.
Reference: https://arxiv.org/abs/2603.21852
This article is from the WeChat official account "New Intelligence Yuan". Editor: KingHZ. Republished by 36Kr with permission.