
The cyber fruit fly is alive. Is human "digital immortality" still far off?

有界UnKnown · 2026-03-12 20:25
The ghost is already in the machine.

If you were told right now that you could upload your consciousness to a computer and achieve cyber immortality, what would you choose?

This is not science fiction but a possible reality.

On March 7th, the American startup Eon Systems posted a demo on X that can fairly be called a cyber miracle: they transferred the real connectome of 125,000 neurons from an adult fruit fly into a computer. In the process, no code was written to program its behaviors, and no large dataset was used to train a model...

Yet, miraculously, this "digital fruit fly" "came to life" on screen. Not only can it move around and groom its antennae, it also searches for food, just like a real fruit fly.

▲ The First Multi-Behavior Brain Upload, Dr. Alex Wissner-Gross @X

In countless science fiction novels and movies, humans have fantasized many times about uploading their brains to achieve true cyber immortality.

And now, it has been realized.

Eon Systems used a 43-second video to bring an old dream in the tech circle back to everyone's attention - Whole-Brain Emulation (WBE for short).

This time, it may fundamentally challenge the existing path of building AGI through large models and give rise to a new form of intelligence that is closer to the essence of life.

The long-neglected Whole-Brain Emulation

Whole-Brain Emulation is more commonly known as "consciousness upload".

This concept first appeared in science fiction novels in the 1950s. Many great science fiction writers mentioned such settings in their works, such as Isaac Asimov, who invented the "Three Laws of Robotics", and Arthur C. Clarke, who wrote "2001: A Space Odyssey".

Later, John von Neumann discussed the possibility of machines simulating the human brain from a mathematical and computer perspective in "The Computer and the Brain", providing a theoretical basis for this idea.

However, this technological path remained in the realm of speculation until 2008, when Anders Sandberg and Nick Bostrom from the Future of Humanity Institute at the University of Oxford formally proposed the term and engineering framework of "Whole-Brain Emulation (WBE)" in "Whole Brain Emulation: A Roadmap", advancing this concept from science fiction imagination to an interdisciplinary scientific research direction with a clear technological path.

Put simply, if large models attempt to achieve AGI by simulating the process by which "intelligence" is generated, then whole-brain emulation attempts to replicate existing intelligence by reproducing the brain's structure.

This idea sounds feasible enough at first, but for nearly two decades it sat firmly on the back burner of scientific research.

The reason is simple: the technical difficulty required to achieve whole-brain emulation is extremely high.

Unlike large models, which can achieve quick results by piling up computing power and corpora, whole-brain emulation needs to reach the "critical point" simultaneously in four dimensions: nanoscale imaging, ultra-large-scale computing, biodynamics, and physical simulation.

▲Whole Brain Emulation: A Roadmap, the technological drivers for promoting the development of key technologies in Whole-Brain Emulation (WBE) 

For example, to achieve whole-brain emulation, the brain needs to be sliced at the nanoscale and scanned with an electron microscope; then the nerve fibers in the massive images need to be reconnected to build a complete three-dimensional neural connection map; next, the signals transmitted between synapses and the laws of neural activity need to be analyzed; finally, these structures and signals need to be simulated in a computer to reconstruct the operation process of the brain.
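The four stages described above can be sketched as a toy pipeline. Everything here (the function names, the graph format, the update rule) is invented purely for illustration; a real pipeline operates on petabyte-scale image stacks, not small dictionaries:

```python
# Toy sketch of the four-stage WBE pipeline: scan -> reconstruct -> annotate -> simulate.
# All names and data structures are invented for illustration only.

def scan(brain_volume):
    """Stage 1: nanoscale slicing + electron-microscope imaging -> raw image stack."""
    return [f"slice_{i}" for i in range(brain_volume)]

def reconstruct(image_stack):
    """Stage 2: trace nerve fibers across slices -> 3D connectivity map.
    A connectome is, at minimum, a directed graph: (pre, post) -> synapse count."""
    return {("n1", "n2"): 3, ("n2", "n3"): 1}

def annotate(connectome):
    """Stage 3: infer signal properties (e.g. excitatory vs. inhibitory) per connection."""
    return {edge: {"count": n, "sign": +1 if n > 1 else -1}
            for edge, n in connectome.items()}

def simulate(annotated, steps=10):
    """Stage 4: run the annotated graph as a simple dynamical system."""
    activity = {"n1": 1.0, "n2": 0.0, "n3": 0.0}
    for _ in range(steps):
        nxt = dict(activity)
        for (pre, post), props in annotated.items():
            nxt[post] += props["sign"] * props["count"] * 0.1 * activity[pre]
        activity = nxt
    return activity

result = simulate(annotate(reconstruct(scan(brain_volume=5))))
print(result)
```

The point of the sketch is the shape of the closed loop, not the contents: each stage consumes the previous stage's output, and only the last stage produces dynamics.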

However, around 2010, scanning 1 cubic millimeter of brain tissue (about the size of a fly's brain) often required multiple microscopes to work continuously for several years. Then, manually "tracing the lines" to connect neurons would consume hundreds of thousands of man-hours. Not to mention that further decoding of neural signals and dynamic simulation of these biological signals would be even more difficult.

But in recent years, with the leapfrog development of related technologies, these tasks that seemed impossible in the past are gradually becoming feasible.

For example, the maturity of multi-beam electron microscope technology has increased the scanning speed by hundreds of times, and the work cycle originally calculated in "years" can be compressed to "weeks". Another example is the rapid progress of computer vision algorithms represented by Google's FFN algorithm, which has greatly promoted the automatic segmentation and tracking of neurons.

In addition, advances in machine learning and the open-sourcing and optimization of high-performance physics engines such as MuJoCo were the keys to Eon Systems' breakthrough.

Specifically, Eon Systems' breakthrough is first based on the FlyWire project published in "Nature" in 2024. This is an extremely precise "map". Scientists accurately reconstructed about 125,000 neurons and more than 50 million synaptic connections in the fruit fly's brain through electron microscope scanning.

Then, the researchers used models to infer, with high accuracy, the "attributes" of each connection from the morphological characteristics of its synapses. Finally, with the open-sourcing and optimization of high-performance physics engines such as MuJoCo, the digital life form at last had a realistic "digital training ground".

The closed loop of perception, decision-making, and action could, for the first time, run end to end in a virtual environment. This is the core of Eon Systems' technological breakthrough: the researchers reconstructed the "soul" of an adult fruit fly almost 1:1 in the digital world.

Without any programmers teaching it how to walk, this digital fruit fly spontaneously started to walk, clean its antennae, and even showed a tendency to look for food. These complex behaviors are not the result of pre-written "programming" but emerge naturally from the real biological structure.

There are "thousands of mountains" between the fruit fly and the human brain

The success of the fruit fly's whole-brain emulation naturally raises a bigger question: if it works for the fruit fly, can it work for humans?

The clear answer is that it is theoretically possible, but not currently, and it may not be possible for a long time in the future.

The reason is not complicated: the gap between the fruit fly and humans is not simply a matter of "scaling up" but a huge engineering chasm.

Take the successful fruit fly experiment as an example. It has only about 125,000 neurons, while the human brain has about 86 billion neurons, nearly 700,000 times that of the fruit fly.
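The arithmetic behind the "nearly 700,000 times" figure:

```python
fly_neurons = 125_000               # fruit fly connectome (FlyWire, 2024)
human_neurons = 86_000_000_000      # commonly cited estimate for the human brain

ratio = human_neurons / fly_neurons
print(f"{ratio:,.0f}x")             # 688,000x — "nearly 700,000 times"
```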

Currently, whole-brain emulation is still in the stage of "moving from insects to mammals". Before this, the academic community had only completed the brain emulation of the nematode C. elegans, a roughly one-millimeter worm with only 302 neurons. The fruit fly has only just surpassed the nematode, and beyond that the technology hits a bottleneck.

The next larger experimental subject after the fruit fly is the mouse, which has about 70 million neurons, approaching the current technological limit. Therefore, it is still quite far from achieving a 1:1 whole-brain emulation of the much more complex human brain.

▲ Neurons of the Drosophila brain, image source: ChatGPT

The first reason for the bottleneck is the sharp increase in data volume.

As the number of neurons increases by an order of magnitude, the data that needs to be scanned, stored, and processed almost explodes.

The data volume of the whole fruit fly brain is about hundreds of terabytes. If the mouse brain is fully scanned, the raw image data may reach several petabytes, and the data scale of the human brain may approach 1 zettabyte.

Many people have no concept of 1 zettabyte. 1 zettabyte is equal to 1 billion terabytes. If 1 terabyte is equivalent to a truckload of sand, then 1 zettabyte can fill the Pacific Ocean. According to IDC's prediction, the total global data volume in 2025 will only be about 175 zettabytes.
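The scale comparison can be made concrete using the figures quoted above. The brain-data numbers are rough, order-of-magnitude values from the text, not measurements:

```python
# Units, with 1 TB as the base. "1 zettabyte is equal to 1 billion terabytes."
TB = 1
PB = 1_000 * TB
ZB = 1_000_000_000 * TB

fly_brain = 100 * TB            # "hundreds of terabytes" (rough)
mouse_brain = 5 * PB            # "several petabytes" (rough)
human_brain = 1 * ZB            # order-of-magnitude estimate from the text
global_data_2025 = 175 * ZB     # IDC forecast cited above

# One human brain scan vs. all data humanity will produce in 2025:
print(f"{human_brain / global_data_2025:.2%}")
```

A single brain at that resolution would be well over half a percent of the entire world's data output for a year, which is why storing it alone implies a dedicated data center.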

This means that merely observing and storing one human brain would require building a dedicated, top-scale data center.

Even if the data can be obtained, the second hurdle still lies ahead: how to make this "neural map" truly run.

The brain is not a static circuit diagram but a dynamic system that continuously conducts electrochemical activities. There are about 100 trillion synapses in the human brain, and they are constantly transmitting and regulating information every second.

If traditional von Neumann architecture computers are used to simulate these processes one by one, even today's most powerful supercomputers may take several days to simulate one second of human brain activity.

But the deeper problem is not just computing power, but our limited understanding of the brain itself. Connectomics can tell us "who is connected to whom", but it may not be able to explain what information these connections transmit, how they are regulated, and why specific cognitive and conscious states are produced.

One of the key reasons for the breakthrough in the fruit fly experiment is that the researchers made effective inferences about the functions of some neurotransmitters.

To put it bluntly, it was educated guessing, and it worked (a viable method at small data scales). But once it comes to the far more complex mammalian brain, this approach becomes much harder.

Because the brain relies not only on "wired connections" but also on a large amount of "wireless" chemical modulation. Neuromodulators such as dopamine and serotonin diffuse through the brain and influence neural activity over wide areas, and these mechanisms cannot be directly observed through electron microscope scanning.

This means that in the future, truly mature whole-brain emulation not only needs to replicate a connection map but also must understand the complex relationships between electrical signals, chemical modulations, and dynamic activities.

Even if these technical problems are finally solved, humans will still face another more difficult problem: ethics.

If one day the human brain is truly emulated 1:1, what exactly is this system? Is it just a highly realistic behavior simulator, or does it already have subjective experiences, emotions, and even self-awareness?

If it can feel pain, is turning off the simulator equivalent to "killing a person"? If it has memory and identity continuity, should it have legal status?

These problems are not just speculations in science fiction novels but real challenges that cannot be avoided once WBE approaches the human stage. By then, humans may need not only new technical standards but also a whole set of new digital ethics and legal frameworks.

Therefore, the significance of the whole-brain emulation of the fruit fly does not lie in "human whole-brain upload is just around the corner" but in the fact that it makes this technological route seem no longer completely illusory for the first time.

It proves that the brain of a complex organism can indeed be scanned, reconstructed, and run to a certain extent.

But from the fruit fly to humans, there is still the engineering checkpoint of the mouse, as well as several very real mountains: data scale, computing-power bottlenecks, chemical mechanisms, and ethical boundaries. For today's WBE, the fruit fly is an important milestone, but far from the end.

Today, whole-brain emulation is no longer science fiction

Since it is still far from truly achieving human-level intelligence, why are we discussing whole-brain emulation now? After all, some people predict that AGI may appear in the next few years.

The answer is that the Scaling Law also applies to whole-brain emulation to some extent.

The real significance of the fruit fly experiment is not just a stunning technological demonstration but the first proof that a complete technological path can work: scanning, reconstruction, simulation, and the emergence of embodied behaviors form a verifiable closed loop.

Once this path is proven feasible, the problem is no longer just a scientific imagination but gradually becomes an engineering problem: how to increase the scanning throughput, how to improve the physical simulation, and how to handle a larger data scale.

In other words, starting from the fruit fly, WBE has finally shifted from "is it possible" to "how to expand".

More importantly, WBE is not a simple extension of the large model route. It represents a completely different intelligence path in some key dimensions.

The most prominent difference is energy efficiency.

Many people have said before that the essence of the AI problem is an energy problem. The power consumption of a top AI graphics card like the NVIDIA H100 is close to 700 watts, and training or running a GPT-4 level model often requires thousands of GPUs to work simultaneously. Together with cooling and infrastructure, the overall power consumption is in the megawatt range, enough to support the electricity needs of a small town.

But for the same work, the human brain only needs about 20 watts of power to continuously complete perception, memory, reasoning, learning, and motor control. This is only equivalent to the power consumption of a dim light bulb or a router in standby mode.
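The power gap can be quantified with the figures above, assuming a hypothetical 2,000-GPU cluster (the text says only "thousands") and counting GPU power alone, before cooling and infrastructure:

```python
h100_watts = 700        # per-card power of an NVIDIA H100 (from the text)
cluster_gpus = 2_000    # ASSUMED cluster size, "thousands of GPUs"
brain_watts = 20        # human brain power draw (from the text)

cluster_watts = h100_watts * cluster_gpus   # GPU power only
print(cluster_watts / 1e6, "MW vs", brain_watts, "W ->",
      cluster_watts // brain_watts, "x")    # 1.4 MW vs 20 W -> 70,000x
```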

▲ Image source: Unsplash

The gap between the two is not just an efficiency issue but more like a generational difference in architecture.

Therefore, the success of the whole-brain emulation of the fruit fly means that another paradigm may be emerging in AI research: from the past "black-box imitation" that relies on massive data training to the "white-box restoration" that attempts to replicate biological structures.

If this direction continues to develop, its impact on the artificial intelligence industry will be profound.

First, it offers embodied intelligence an approach aimed more directly at the end game. A core problem with today's robotic systems is the lack of common sense and of a natural understanding of the physical world.

The perception and motor abilities formed by organisms through hundreds of millions of years of evolution are largely encoded in the neural connection structure. As long as the structure is restored accurately enough, machines may obtain flexibility closer to that of organisms.

Second, it verifies the hypothesis of "structure is intelligence" to some extent. Intelligence may not only be stacked up through massive data but may also be a structural result that can be calculated and replicated.

If this idea holds, then the entire AI architecture may be re-examined.

For example, is the current large model architecture centered on Transformer really the only way to higher-order intelligence? Will future computing systems gradually shift to designs closer to biological nervous systems, such as spiking neural networks, sparse connection structures, and event-driven computing?
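As a taste of that biological direction, here is a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of the spiking neural networks mentioned above. All parameter values are illustrative:

```python
# Minimal leaky integrate-and-fire neuron. The membrane potential leaks toward
# rest, integrates its input, and emits a discrete spike on crossing threshold.
def lif(input_current, steps=100, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt / tau * (-v + input_current)   # leak + integrate
        if v >= v_thresh:                      # threshold crossing -> spike event
            spikes.append(t)
            v = v_reset
    return spikes

# A constant suprathreshold current produces regular spiking; a weak current
# produces none. Downstream neurons only compute when a spike arrives, which is
# where the event-driven energy savings over dense matrix multiplies come from.
print(len(lif(1.5)), len(lif(0.5)))
```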

Therefore, the reason for paying attention to WBE today is not that it will replace large models tomorrow, nor that human whole-brain upload is just around the corner. The real reason is that it is gradually changing from a distant scientific fantasy path to a real route with a clear technology stack and phased results.

The large model represents a way of approaching intelligence through data, while WBE represents a way of approaching it through structure.