
One Hundred Years of the Brain-Computer Interface

脑极体 · 2025-07-07 18:53
Looking back over the past century, the dialogue between the human brain and machines began quietly long ago.

“Can you imagine controlling a computer with your mind?”

What was once only a scene in science fiction novels is now becoming a reality.

In June 2025, Elon Musk stood at a Neuralink press conference and announced a series of advances: seven patients with implanted brain-computer interfaces could type, play games, and even control robotic arms with thought alone. Among them, Alex, who had been paralyzed for years, used brain signals to command a Tesla Optimus robotic arm to pour water for himself.

But how did all this become possible?

Looking back over the past century, the dialogue between the human brain and machines began quietly long ago. In 1924, when Hans Berger first captured brain waves, he could hardly have imagined that those weak electrical fluctuations would become a new language for human-machine interaction a century later.

From the accidental discovery in the laboratory to today's “telepathy,” this path has not been smooth.

1920 - 1970: Discovery and Research of Brain Waves

The origins of great technologies are often inconspicuous. Before the birth of computers and AI, the exploration of brain-computer interfaces began with a more fundamental question: what exactly is thought? A few pioneers speculated that it might have something to do with electricity.

In 1924, at the psychiatric hospital of the University of Jena in Germany.

Dr. Hans Berger stared at the teenager in the hospital bed and carefully adjusted the electrodes: two silver wires inserted under the patient's scalp, connected through saline-soaked metal plates to a bulky string galvanometer that recorded the brain's faint electrical fluctuations.

“Seventh attempt, voltage 12 microvolts...” he recorded in a low voice.

Over five years, Berger conducted countless experiments, accumulated more than 1,000 electroencephalogram recordings, and even experimented on himself and his son.

However, the scientific community at the time generally believed that the brain's operation was a purely biochemical process with no measurable electrical signals at all. Berger's persistence was dismissed as pseudoscientific obsession.

It wasn't until 1929 that his paper "On the Electroencephalogram of Man" was finally published. The regular signals it described, named alpha waves and beta waves, proved to the world for the first time that the brain's electrical activity changes with a person's mental state, and that human thought can be captured by instruments.

Even then, Berger's discovery was treated as heresy by the academic community, and he never received the recognition he deserved in his lifetime. But his persistence ultimately opened a new field for neuroscience.

However, what exactly can these weak and chaotic brain waves convey? And how can they be decoded?

To decipher the secrets of brain waves, scientists conducted decades of exploration on animals before implanting electrodes into the human brain.

In 1969, in Eberhard Fetz's laboratory, a monkey's brain neurons drove the rotation of a galvanometer needle for the first time: the animal learned to modulate the firing of single cortical neurons to move the needle and earn rewards. This was the first time in history that brain signals were directly converted into machine instructions. Fetz's experiment proved that the brain can learn to control external devices just as it controls its own limbs.

But the question is, can the human brain do it?

1970 - 2000: Birth of Brain-Computer Interface in the Laboratory

In 1973, Jacques Vidal, a researcher at the University of California, Los Angeles, first formally proposed the term “Brain-Computer Interface (BCI)” in his paper “Toward Direct Brain-Computer Communication.”

In the experiment, subjects wore EEG electrode caps and stared at flashing lights on a screen; the computer captured and recognized the specific visual evoked potentials (VEPs) their brains generated, and used them to steer a virtual cursor through a maze.

Although the process was slow, it proved for the first time that human intentions could, like Morse code, be converted directly into machine-readable instructions without passing through muscles and peripheral nerves.

However, BCI research at this stage remained largely theoretical, and no complete, workable technical system had yet taken shape. Moreover, recording EEG through the skull was like standing outside a stadium and trying to follow the game from the crowd's roar: the signals were noisy and blurred. It could hardly yet be called the birth of a technology.

To obtain clearer and more accurate signals, scientists realized that they might need to get closer to the brain, or even enter it.

In 1978, in New York.

Dr. William Dobelle implanted an array of 68 electrodes into the visual cortex of a blind volunteer. When the power was switched on, the patient saw a low-resolution dot-matrix image.

This was not real vision but phosphenes, flashes of light the brain perceives in response to electrical stimulation. Still, the experiment brought BCI into the clinical field.

In 1988, Lawrence Farwell and Emanuel Donchin developed the P300 speller, which let paralyzed patients select letters with their brain waves and achieve basic communication. For the first time in history, people who had completely lost the ability to move could communicate with the outside world using thought alone. The P300 speller thus became the first real application of brain-computer interfaces.
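The speller's selection logic is easy to sketch in code. The sketch below is a hypothetical simulation, not Farwell and Donchin's actual system: rows and columns of a 6 × 6 letter grid flash in random order, and the row and column whose flashes evoke the strongest averaged "P300 score" identify the attended letter. The scoring function here merely stands in for a real EEG classifier.

```python
import random
import numpy as np

# Hypothetical simulation of the P300 speller's selection logic.
GRID = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
        list("STUVWX"), list("YZ1234"), list("567890")]

def p300_score(flashed, target_row, target_col, kind):
    """Stand-in for an EEG classifier's P300 amplitude estimate:
    flashes that contain the attended letter score higher, plus noise."""
    hit = flashed == (target_row if kind == "row" else target_col)
    return (1.0 if hit else 0.0) + np.random.randn() * 0.5

def spell_one_letter(target_row, target_col, repetitions=10):
    row_scores, col_scores = np.zeros(6), np.zeros(6)
    for _ in range(repetitions):              # average over many flashes
        for r in random.sample(range(6), 6):  # rows flash in random order
            row_scores[r] += p300_score(r, target_row, target_col, "row")
        for c in random.sample(range(6), 6):  # columns flash in random order
            col_scores[c] += p300_score(c, target_row, target_col, "col")
    # The letter at the intersection of the best row and best column:
    return GRID[int(np.argmax(row_scores))][int(np.argmax(col_scores))]

print(spell_one_letter(target_row=2, target_col=3))  # usually prints "P"
```

With enough repetitions, the averaged P300 response stands out from the noise, which is why the real speller flashes each row and column many times per letter.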

After this series of clinical trials on the human brain, the basic principles of the brain-computer interface were established.

In 1999, the first International Brain-Computer Interface Conference was held. Scientists reached a consensus: brain-computer interfaces are not science fiction but serious science. From then on, the brain-computer interface was officially recognized by academia and industry as a professional research field.

What exactly is a brain-computer interface?

To understand how a brain-computer interface works, we first need to understand how the brain itself operates. All human thinking, behavior, and consciousness ultimately come down to the electrical activity of nerve cells in the brain. The brain is like a command center, sending electrical signals to the rest of the body through roughly 80-100 billion neurons. Each neuron connects to tens of thousands of others, forming a complex neural network. When you want to move your arm, the motor cortex generates specific neural electrical signals, which travel down the spinal cord and out through peripheral nerves to the arm muscles, triggering the movement.

The basic principle of a brain-computer interface is to establish a new information channel outside this natural nervous system. It does not rely on the peripheral nervous system and muscle tissues but directly creates a connection path between the brain and external devices, just like inserting a data cable into a computer's USB port to read data from the hard drive.

A complete brain-computer interface system usually includes four key stages: recording, decoding, control, and feedback. In the recording stage, devices such as electrodes are used to collect the neural activity signals of the brain; in the decoding stage, algorithms such as machine learning are used to analyze the recorded neural activities; in the control stage, the decoded information is converted into control instructions for external devices; in the feedback stage, information such as vision and touch generated after the device performs an action is fed back to the user, forming a closed loop.
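As a rough illustration of how these four stages fit together in software, here is a minimal sketch of the closed loop. Everything in it is invented for illustration (the simulated signals, the threshold decoder, the command table); a real system would use actual recording hardware and a trained decoder.

```python
import numpy as np

def record(n_channels=32, n_samples=256):
    """Stage 1 (recording): acquire one window of neural signals.
    Simulated here as multichannel noise."""
    return np.random.randn(n_channels, n_samples)

def decode(signals):
    """Stage 2 (decoding): translate raw signals into an intention.
    A real decoder would be a trained machine-learning model; this
    toy version thresholds the mean signal power."""
    return "move_right" if np.mean(signals ** 2) > 1.0 else "rest"

def control(intention):
    """Stage 3 (control): map the decoded intention to a device
    command, e.g. a cursor displacement."""
    return {"move_right": (1, 0), "rest": (0, 0)}[intention]

def feedback(command):
    """Stage 4 (feedback): show the outcome to the user (here, by
    printing), closing the loop so the brain can adapt."""
    print(f"cursor moved by {command}")

# A few passes through the record -> decode -> control -> feedback loop:
for _ in range(3):
    feedback(control(decode(record())))
```

The essential point is the loop itself: because the user perceives the outcome of each decoded command, the brain can adapt its activity over time, and decoding improves with training.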

However, early brain-computer interfaces were like old-fashioned computers: the electrodes were bulky and the response was slow. Scientists were like people standing outside a thick wall, vaguely hearing someone talking inside but catching only a few scattered words. Still at the stage of isolated breakthroughs, the field urgently needed systematic research and application.

2000 - 2019: Brain-Computer Interface Moves towards Clinical Application

Entering the 21st century, brain-computer interface technology saw explosive development and truly began to serve humanity.

In 2004, at Rhode Island Hospital in the United States, Matthew Nagle, a young man paralyzed in all four limbs by a spinal cord injury, became the first subject of the BrainGate brain-computer interface. A 4 mm × 4 mm electrode array was implanted into his motor cortex. About the size of a match head, the array carried roughly 100 needle-shaped electrodes and could simultaneously record the discharges of hundreds of nearby neurons. After several months of training with the system, he learned to control a computer cursor with his thoughts and became the first person to control a robotic arm through an invasive brain-computer interface.

At the opening ceremony of the 2014 FIFA World Cup in Brazil, a paraplegic young man wearing a mechanical exoskeleton kicked the opening ball with his thoughts. The brain-controlled exoskeleton, named Bra-Santos Dumont, was designed by Professor Miguel Nicolelis of Duke University. For the first time, it fed tactile, temperature, and force information back to the wearer while the brain controlled the exoskeleton's movements. At that moment, brain-computer interface technology stepped out of the clinic and into the global spotlight.

Meanwhile, non-invasive technology was also developing rapidly. In 2016, the team led by Professor Bin He at the University of Minnesota managed to control objects in three-dimensional space using scalp EEG alone, without implanting any electrodes: subjects steered robotic arms to grasp and place objects and controlled the flight of aircraft, bringing hope to millions of disabled people and patients with neurological diseases.

After the clinical breakthroughs of projects like BrainGate, the brain-computer interface entered a bottleneck period. Problems such as unstable electrodes, limited signal bandwidth, and complex surgery stood in the way of wide application.

Against this background, Neuralink emerged, and research in the same period branched into multiple technological routes developing in parallel.

Since 2019: Neuralink Breaks the Deadlock, Multiple Technological Routes Develop in Parallel

In July 2019, Elon Musk held a press conference and announced that Neuralink had made a major breakthrough in brain-computer interface technology.

Before Neuralink emerged, brain-computer interface (BCI) technology had long been limited by core pain points: tissue damage from rigid electrodes, inefficient signal collection, heavy surgical trauma, poor device stability, and difficult commercialization.

Neuralink developed a system in which a neurosurgical robot implants 96 flexible electrode "threads," each only 4-6 microns in diameter, into a 28-square-millimeter area of the brain. These threads adapt to the soft environment of brain tissue far better than traditional rigid silicon electrodes and cause less damage. Together they carry 3,072 electrode sites (32 per thread), implanted into the cerebral cortex with micron-level precision by the R1 surgical robot. The system decodes more powerfully and, being fully implanted, is invisible when worn.

Within just a few years, Neuralink's technology reached the clinic. In January 2024, Neuralink completed its first human implantation, enabling a paralyzed patient to control electronic devices with his thoughts.

As of June 2025, seven subjects worldwide (four with spinal cord injuries and three with amyotrophic lateral sclerosis) have received the N1 implant. Some users operate it for more than 60 hours a week, controlling robotic arms, playing video games, and even programming. Neuralink has also launched the "Blindsight" visual-restoration project, which aims to help blind people regain low-resolution vision by 2026.

However, the exploration of BCI is not limited to the invasive approach.

According to the different ways of obtaining brain signals, brain-computer interfaces are mainly divided into three categories:

The invasive brain-computer interface, represented by Neuralink, requires implanting electrodes directly into brain tissue through a craniotomy. This approach captures the most accurate and strongest neural signals, but it also causes the greatest trauma to the brain and carries the highest surgical risk. As implantation time extends, glial scar tissue may form around the electrodes, gradually attenuating the signals.

There is, however, a gentler route within invasive BCI. Synchron's Stentrode requires no craniotomy: it is threaded through the jugular vein into a blood vessel near the motor cortex and records the brain's electrical activity from inside the vessel. The trauma is minimal, lowering both surgical risk and the barrier to patient acceptance. But because the blood-vessel wall separates the electrodes from the neurons, the signal accuracy and bandwidth fall short of Neuralink's directly implanted electrodes.

Meanwhile, non-invasive technologies represented by EEG have not stagnated either. Non-invasive brain-computer interfaces collect EEG signals through electrode caps and similar devices worn on the scalp; they require no surgery and are very safe. Because the skull attenuates brain waves and outside interference is strong, the signals obtained this way are weak, and the ceiling of low signal resolution remains. Even so, thanks to AI, researchers can now extract more reliable intentions from noisy EEG signals, making this route well suited to commercial applications such as EEG games and attention monitoring.
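In very rough outline, such a decoding pipeline band-pass filters the scalp EEG to the frequency band of interest, extracts simple features such as per-channel band power, and trains a classifier on them. The sketch below uses simulated data and standard scientific-Python tools; it is a generic illustration of the approach, not any particular lab's method.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 250  # sampling rate in Hz (a typical value for EEG headsets)

def bandpass(x, lo=8.0, hi=30.0):
    """Keep the mu/beta band (8-30 Hz), where motor intentions show up."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def band_power_features(trials):
    """One log-variance (band power) feature per channel per trial."""
    return np.log(np.var(bandpass(trials), axis=-1))

# Simulated dataset: 200 trials x 8 channels x 2 s of EEG.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 8, 2 * fs))
y = rng.integers(0, 2, 200)           # 0 = rest, 1 = imagined movement
X_raw[y == 1, :4] *= 1.5              # inject a class difference in 4 channels

X = band_power_features(X_raw)
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```

Real systems add spatial filtering (for example, common spatial patterns), artifact rejection, and far more data, but this filter-features-classifier skeleton is the standard starting point for non-invasive decoding.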