If you can't beat AI, should you join it? He dropped out of a doctorate at MIT to put human consciousness on a chip and create "digital life."
“I will leave the Massachusetts Institute of Technology and will not continue to pursue a doctorate. The development of artificial intelligence is progressing so rapidly that it is already difficult for humans to keep up.”
But there might still be a way out: I have found that digital humans are far more feasible than most people think. With the support of top researchers in artificial intelligence, this goal might be achievable in under 10 years, with an investment of 10 billion US dollars and 50,000 H100 GPUs.
The two paragraphs above were written by Isaak Freeman, until recently a doctoral candidate at the Massachusetts Institute of Technology. He believes that artificial intelligence is advancing so rapidly that humans can barely keep up. The evolution of the biological brain is constrained by the physics of carbon-based tissue (the speed of nerve-signal transmission, lifespan, storage capacity). If humans remain in carbon-based form, he argues, they will lose the intellectual competition with AI; migrating consciousness to a digital substrate is the only way to "expand human intelligence exponentially."
That is to say, since we cannot stop the rapid development of AI, we should use the computing power and tools that AI provides to transform ourselves into “digital form” and thus participate in this competition.
In the scientific community, this idea is not new. It also sparked intense debate in 2023 when the movie "The Wandering Earth 2" was released (in the film, Tu Hengyu, played by Andy Lau, illegally uploads the digital data of his dead daughter Ya Ya in order to give her a "complete life"). But Isaak says he does not regard it as mere science fiction: it could actually be realized, because the brain's existing structure of intelligence could be copied in full through high-resolution scanning.
To prove that he is not acting impulsively, he has made a rough calculation:
The computing power required to simulate the human brain may be much lower than people think: about 50,000 H100 GPUs would suffice. xAI currently has more than 200,000 chips of H100 class or higher. Under relatively pessimistic assumptions, simulating the human brain with today's high-resolution neuron models (e.g., the Hodgkin-Huxley model) and multi-state synapses would require about 600 exaFLOP/s of compute, 700 GB of memory per GPU, and 24 GB/s of interconnect bandwidth. These parameters are already within reach of today's supercomputing clusters.
If a simpler neuron model (e.g., the leaky integrate-and-fire model) turns out to suffice (this still needs empirical confirmation), the compute required to simulate a human brain could drop to about 2 to 3 petaFLOP/s, roughly the FP16 throughput of a single H100. In that case, memory capacity and interconnect bandwidth would likely be the tightest bottlenecks.
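The leaky integrate-and-fire model mentioned above is simple enough to write down in a few lines. The sketch below (all parameter values are illustrative assumptions, not figures from the study) integrates a single LIF neuron under constant input current; the handful of floating-point operations per timestep is what makes the few-petaFLOP/s whole-brain estimate plausible:

```python
def simulate_lif(i_input, t_ms=100.0, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Simulate a single leaky integrate-and-fire neuron.

    dv/dt = (-(v - v_rest) + r_m * i_input) / tau
    Voltages in mV, time in ms; all parameter values are illustrative.
    Returns the number of spikes emitted over t_ms milliseconds.
    """
    v = v_rest
    spikes = 0
    for _ in range(int(t_ms / dt)):
        # Forward-Euler step: only a few FLOPs per neuron per timestep.
        v += (-(v - v_rest) + r_m * i_input) * (dt / tau)
        if v >= v_thresh:      # threshold crossing: emit a spike, reset
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif(2.0))   # suprathreshold input: the neuron fires repeatedly
print(simulate_lif(1.0))   # subthreshold input: steady state stays below threshold
```

Multiplying those few operations per step by 86 billion neurons and a sub-millisecond timestep lands in the petaFLOP/s range the study cites, though (as the article notes) storing and communicating the synaptic state is the harder part.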
The core problem, however, is: Which neurons should be simulated? How should the parameters be set? How should the connection method be established?
Data acquisition is therefore the real bottleneck, and it comes with many difficulties: hundreds of next-generation microscopes running for years; automated pipelines for mass data acquisition and tissue staining; expansion microscopy with roughly 20-fold expansion plus complete molecular staining of more than 30 receptors, neurotransmitters, and neuropeptides; and X-ray microscopes capable of imaging an entire human brain within a year. In parallel, whole-brain functional imaging devices are needed that can record the complete brain activity of worms, fish, and other animals, in order to decipher the relationship between structure and function.
In addition, researchers need to develop prediction models from structure to function, correction models for the connectome, strict evaluation standards, and a complete research framework for validating the concept through animal simulations.
Encouragingly, the field is gradually taking shape: early nematode simulations (e.g., BAAIWorm); the completed fruit-fly connectome with 140,000 neurons; an incomplete fruit-fly simulation attempt that unexpectedly went viral on the X platform; large datasets generated by brain-computer interface research; the forthcoming zebrafish connectome; and new microscopes that can image at gigahertz speeds. Isaak believes all of these signs indicate that pioneering work toward the "digital human" is no longer just distant science fiction.
With the desire to make this field more accessible, Isaak wrote a detailed study before leaving MIT, in which he systematically outlines the entire path from nematode simulation to the digital human. He admits that this work is still imperfect and raw, but that he has put a lot of enthusiasm and effort into it.
Title of the study: From Worm to Human: Scaling Brain Emulation
Link to the study: https://pdf.isaak.net/scaling-emulations
This study carefully plans the roadmap for whole-brain simulation from nematodes (302 neurons) to humans (86 billion neurons), including connectomics costs, data bottlenecks, and technical approaches.
Image caption: The current state of electron microscopy connectomics.
The study shows that the realization of this grand vision is based on three core pillars: structure recording, function recording, and computer simulation. The first major hurdle for researchers is the basic structure recording.
To simulate the brain, one first has to know what it looks like. Today this relies mainly on electron microscopy (EM), which scales poorly: manual proofreading is extremely expensive. Proofreading the fruit-fly connectome, for example, cost 33 person-years. At current per-neuron reconstruction costs, scanning an entire human brain would require an astronomical amount of money.
To address this, the author proposes combining expansion microscopy (ExM) with protein barcodes and other new techniques. These preserve molecular-level information (e.g., ion channels, neurotransmitter receptors) while making neurites far easier to trace, which greatly improves the accuracy of AI-based automatic segmentation.
Image caption: Comparison of EM and ExM images
Naturally, knowing the static physical structure is not enough. The brain runs on electrical signals, so the dynamic firing of neurons must also be recorded. But mammalian brain tissue scatters light, which currently caps optical imaging at a depth of 1 to 2 millimeters below the surface.
Therefore, the author has found two natural “substitutes”: the naturally transparent zebrafish larvae and the tiny nematodes. In these organisms, humans can already perform whole-brain and single-neuron real-time function recording, which provides extremely important real data for establishing the relationship between “structure and function.”
How, then, does one turn a static wiring diagram into a dynamic simulation? The study shows that in early experiments on the fruit-fly visual system and on nematodes, even the simplest differential-equation models can reproduce surprisingly realistic biological behavior, provided they are constrained by accurate connectome data.
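The structure-to-function recipe can be illustrated with a toy example. In the sketch below, the three-neuron "connectome" and all parameters are invented for illustration; a signed connectivity matrix drives a simple firing-rate differential-equation model, the same idea the fruit-fly experiments apply at vastly larger scale:

```python
import math

def simulate_rates(w, external, t_ms=200.0, dt=0.5, tau=20.0):
    """Integrate a firing-rate network: dr/dt = (-r + sigmoid(W r + I)) / tau.

    w[i][j] is the signed synaptic weight from neuron j onto neuron i,
    playing the role of a (tiny, invented) connectome.
    """
    n = len(w)
    rates = [0.0] * n
    for _ in range(int(t_ms / dt)):
        drives = [sum(w[i][j] * rates[j] for j in range(n)) + external[i]
                  for i in range(n)]
        rates = [r + (-r + 1.0 / (1.0 + math.exp(-d))) * dt / tau
                 for r, d in zip(rates, drives)]
    return rates

# Toy "connectome": neuron 0 is driven externally and excites neuron 1,
# which in turn inhibits neuron 2.
w = [[0.0,  0.0, 0.0],
     [4.0,  0.0, 0.0],
     [0.0, -4.0, 0.0]]
rates = simulate_rates(w, external=[2.0, -2.0, 0.0])
```

The qualitative outcome (neurons 0 and 1 active, neuron 2 suppressed) follows directly from the wiring, which is the point: with an accurate connectome, even very simple dynamics can inherit realistic behavior from the structure.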
As already mentioned, raw compute (FLOP/s) is no longer the biggest obstacle. The real challenges are the "memory wall" and interconnect bandwidth: simulating tens of billions of neurons and an enormous synaptic network requires roughly 70 PB of memory and very fast communication between nodes. This is an architectural problem that today's compute-centric AI data centers have yet to solve.
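The memory figure can be reproduced with back-of-envelope arithmetic. In this sketch the synapse count and the bytes stored per synapse are rough illustrative assumptions, not the study's exact accounting:

```python
NEURONS = 86e9                 # neurons in a human brain
SYNAPSES_PER_NEURON = 8_000    # rough average (illustrative assumption)
BYTES_PER_SYNAPSE = 100        # state + parameters + indices (assumption)

synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = synapses * BYTES_PER_SYNAPSE
total_pb = total_bytes / 1e15

H100_MEMORY_GB = 80            # HBM capacity of a single H100
gpus_for_memory = total_bytes / (H100_MEMORY_GB * 1e9)

print(f"synapses: {synapses:.1e}")
print(f"memory: {total_pb:.0f} PB")   # on the order of the ~70 PB cited
print(f"H100s needed just to hold the state: {gpus_for_memory:,.0f}")
```

Under these assumptions, holding the synaptic state alone would take far more than 50,000 H100s' worth of HBM, which is exactly why memory capacity and interconnect, rather than FLOP/s, are the architectural sticking points.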
Finally: how can one prove that a human has really been "uploaded," rather than merely having built a machine that reads off lookup tables? The author proposes an "embedded Turing test": place the simulated brain in a virtual body and see whether it forages, learns, and avoids harm the way a real nematode or a real mouse does.
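One way to picture such an embedded test is to run the candidate brain and a reference behavior through the same virtual task and compare outcomes. The sketch below is purely schematic: the one-dimensional foraging world, the agent interface, and both policies are invented for illustration, not drawn from the study.

```python
import random

def foraging_success(policy, trials=1000, world_size=10, max_steps=40, seed=0):
    """Fraction of trials in which an agent starting at 0 reaches food at world_size.

    `policy` maps (position, world_size, rng) -> a step of -1 or +1; it stands in
    for either a brain emulation or a recorded real animal's behavior.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        pos = 0
        for _ in range(max_steps):
            pos += policy(pos, world_size, rng)
            if pos >= world_size:
                successes += 1
                break
    return successes / trials

def random_walker(pos, world_size, rng):
    return rng.choice([-1, 1])               # null model: no goal-directed behavior

def food_seeker(pos, world_size, rng):
    return 1 if rng.random() < 0.9 else -1   # mostly moves toward the food

# An emulation passing the embedded test should match the real animal's
# statistics on tasks like this, and both should clearly beat the null model.
print(foraging_success(food_seeker), foraging_success(random_walker))
```

A real version of the test would compare rich behavioral statistics (trajectories, learning curves, avoidance responses) rather than a single success rate, but the logic is the same: behavior in a shared virtual body is the yardstick.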
The author estimates that this cannot be a side project of a few laboratories; it requires a "big science" effort on the scale of the Human Genome Project or the Apollo Program, taking 10 to 25 years and 5 to 50 billion US dollars of investment.
This article is from the WeChat account “Machine Heart” (ID: almosthuman2014), editors: Zhang Qian, +0, published by 36Kr with permission.