
A comprehensive review of brain-computer interfaces: From science fiction to reality, who is leading this "mind-reading" revolution?

硅谷101 · 2026-01-09 16:21
Redefine the boundary between humans and machines

On January 1st, Elon Musk announced on X that Neuralink's brain-computer interface devices will enter mass production this year. To date, Neuralink has implanted its devices in 13 patients, who can now type, browse the web, and play games using only their thoughts.

Image source: X

The brain-computer interface, a technology that sounds like something out of "The Matrix", is now moving from the laboratory into reality. Neuralink is the best-known company in the field, but the field as a whole is heating up: veterans and newcomers alike are racing to catch up, non-invasive and minimally invasive projects are quietly emerging, and heavyweights including OpenAI CEO Sam Altman have entered the game.

What is certain is that the global brain-computer interface market is growing explosively. According to reports, the US market alone could reach $400 billion. And this may be just the beginning.

In this article, let's talk about where the brain-computer interface technology, which may change the future of humanity, stands now. Who is leading? Who might overtake on a curve? When will such technology be available to ordinary people?

01 The Principle of Brain-Computer Interface: From "Mind Reading" to "Controlling Everything"

The full English name of the brain-computer interface is Brain–Computer Interface, abbreviated as BCI. Simply put, a brain-computer interface establishes a direct communication channel between the human brain and external devices. It bypasses our traditional nerve-muscle-sensory system, allowing the brain to directly "talk" to machines.

For example, when we usually use a computer, we need to operate it by typing on the keyboard and moving the mouse with our fingers. But brain-computer interface technology allows you to skip this intermediate step and directly control the computer with "thoughts". This isn't mind reading; it's by capturing the signals emitted by the brain and then using algorithms to "translate" these signals into instructions that machines can understand.

1.1 Core Principle: Four Steps to Connect the Brain and Machines

Imagine: the 86 billion neurons in your brain are "talking" all the time, communicating through electrical signals. The fact that you can see this text and understand these concepts right now is, at bottom, your neurons firing. The working principle of the brain-computer interface is actually quite simple:

Step 1: Collect signals. Record neuronal activity through electrodes or ultrasound, like installing a high-precision monitor in the "group chat" of hundreds of millions of neurons in the brain.

Step 2: Decode signals. Use AI algorithms to translate these signals and work out what the brain wants to do. For example, when you want to move a finger, specific neurons in the motor cortex fire in a specific pattern. Once the AI learns to recognize that pattern, it knows what you intend.

Step 3: Output instructions. Send the decoded instructions to external devices: computer cursors, robotic arms, wheelchairs, even future humanoid robots.

Step 4: Feedback loop. The most advanced brain-computer interfaces also work in reverse: after the device performs an action, it sends a feedback signal back to the brain. For example, when a brain-computer interface controls a robotic hand to pick up a cup, the brain can "feel" the touch and weight, closing the loop completely.
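The four steps can be sketched as a toy loop in code. Everything here is illustrative and hypothetical: the "signals" are simulated noise, and the "decoder" is a trivial rule standing in for the trained models real systems use; no actual device or vendor API is being described.

```python
import random

def collect_signals(n_channels=8, n_samples=100, seed=0):
    """Step 1: sample raw voltages from electrode channels (simulated here)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(n_samples)]
            for _ in range(n_channels)]

def decode_intent(signals):
    """Step 2: translate neural activity into an intended command.
    Here: pick the channel with the highest mean power (a stand-in
    for a trained decoder)."""
    powers = [sum(v * v for v in ch) / len(ch) for ch in signals]
    channel = powers.index(max(powers))
    return "cursor_left" if channel % 2 == 0 else "cursor_right"

def actuate(command):
    """Step 3: send the decoded command to an external device."""
    return f"device executed: {command}"

def feedback(result):
    """Step 4: report back to close the loop (in real systems,
    electrical stimulation of sensory cortex)."""
    return f"feedback delivered for '{result}'"

cmd = decode_intent(collect_signals())   # steps 1-2
print(feedback(actuate(cmd)))            # steps 3-4
```

The point of the sketch is the shape of the pipeline, not the contents of any one box: each step is a replaceable module, which is why companies can compete on electrodes, decoders, and stimulation separately.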

1.2 Three Major Technical Routes: The Trade-off between Safety and Performance

We now know what a brain-computer interface can do; how to do it is the hard part, because it may mean drilling into the most vital and fragile organ a human has in order to implant a chip, and safety cannot be ignored. The field has therefore developed three major technical routes, each making a different trade-off between safety and performance.

The first route is non-invasive: the safest, but with the weakest signal. Such a device works like a "mind-reading hat": put it on and it works. It detects the weak electrical signals generated by brain activity through electrodes placed on the scalp.

The advantages: completely non-invasive, no surgery required; convenient, just put it on and go; and relatively inexpensive, with consumer-grade devices costing only a few hundred to a few thousand dollars. The disadvantages are just as obvious: the signal is very weak, like listening to music through a thick wall; accuracy is low, allowing only simple controls; and it is easily disturbed by hair, sweat, and external electromagnetic fields.

Many brain-computer interface products currently on the market adopt non-invasive solutions; their simplicity and ease of use suit consumer-grade scenarios. Their effectiveness, however, has been questioned by some professionals.

Jia Liu

Assistant Professor at the Harvard School of Engineering and Applied Sciences:

We need to respect the physics. The signal bandwidth of each neuron in the brain is roughly in the range of 300 to 3000 Hz, and more importantly, it is in that range that the neuron's action potential lives.

Our skull and the membranes on the surface of the brain are very good low-pass filters: essentially all signals above 40 Hz are filtered out. So if you work non-invasively, then from a physical standpoint you cannot obtain the signals of single neurons; you only get an averaged result.

Yeyang Ye

Co-founder and CTO of Axoft:

Our skull is a near-perfect insulator, so most of the electrical signals generated in our brain are blocked by it. If all the thoughts in the brain are a wonderful symphony, then our skull is the concert hall. Placing electrodes outside the skull and measuring the brain's electrical signals non-invasively is like listening to that symphony from outside the hall. No matter how wonderful the symphony, and no matter how advanced the audio equipment you set up outside, the concert hall stands in between, and the signal you finally hear is weak and muddled. This is the problem current non-invasive brain-computer interfaces face: they have never been able to obtain high-precision, high-bandwidth signals.
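Both guests are describing the same physics: skull and tissue behave like a low-pass filter that passes slow rhythms but suppresses the fast spike band. A rough numerical sketch, assuming a first-order filter with a 40 Hz cutoff (an illustration of the principle only, not a model of real tissue):

```python
import math

def lowpass_gain(freq_hz, cutoff_hz=40.0):
    """Magnitude response |H(f)| of a first-order low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (freq_hz / cutoff_hz) ** 2)

# Slow rhythms (e.g. a 10 Hz alpha wave) pass almost untouched...
print(round(lowpass_gain(10), 2))     # ~0.97

# ...while the 300-3000 Hz spike band is attenuated to a few percent.
print(round(lowpass_gain(300), 2))    # ~0.13
print(round(lowpass_gain(3000), 3))   # ~0.013
```

Under this (simplified) model, a scalp electrode keeps about 97% of a 10 Hz rhythm but only about 1-13% of the spike band, which is the quantitative version of "listening to a symphony from outside the concert hall".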

The second route, "semi-invasive", is a middle path. It requires a craniotomy, but the electrodes are only placed on the surface of the brain or outside the dura mater, without penetrating brain tissue, or they are delivered through blood vessels. The advantages: signal quality better than non-invasive methods, and risk somewhat lower than fully invasive ones. The disadvantages: relatively few channels, and performance short of fully invasive methods.

But this route sits in an awkward spot. As our guests told us, the craniotomy itself carries the highest risk; yet if you open the skull and then don't go deep to collect data, it's like buying a ticket to the symphony, entering the concert hall, and sitting in the last row.

Yeyang Ye

Co-founder and CTO of Axoft:

There are two major factions in invasive brain-computer interfaces on the market: one is called the surface brain-machine interface, the other the depth brain-machine interface. After the skull is opened and the brain exposed, you can either attach the electrodes to the surface of the brain to measure its electrical signals, or insert the electrodes into the brain to measure them. Attaching electrodes to the surface preserves the integrity of the brain's structure, but the drawback is that the brain's electrical signals actually originate in the depths.

So the concert-hall problem remains; it's just that now you're inside the hall, sitting in the last row. Which gets the better result: the last row with the best microphone, or the first row with a relatively poor one? That is what everyone is currently debating. Depth electrodes, by contrast, are placed right next to each neuron that generates the electrical signals, obtaining first-hand, most accurate, highest-throughput data and reconstructing the brain's thoughts most completely. These are the two schools of invasive brain-computer interfaces.

So after a craniotomy, how to collect data and how deep to go? This is the key point that the industry is currently actively exploring and seeking solutions for.

The third route: fully invasive brain-computer interfaces. As the name suggests, these devices penetrate the cerebral cortex directly, making "zero-distance contact" with neurons: tiny electrode needles are inserted into brain tissue to record the activity of single neurons.

The advantages are that the signal strength is high, like listening to high-definition stereo; the accuracy is extremely high, and complex controls can be achieved; the bandwidth is large, and more information can be transmitted. The disadvantages are that it requires a craniotomy, and the risk is high; long-term implantation of electrode needles may cause rejection reactions and even infections, and the electrodes may also degrade over time.

At the same time, there's also the problem of the implanted material.

Jia Liu

Assistant Professor at the Harvard School of Engineering and Applied Sciences:

The brain, especially a living brain, is a very soft tissue, like tofu, but all metal or silicon-based probes are very hard, like a steel knife. When you insert one into a brain that is constantly moving, the electrode cuts the brain at the micro scale like a steel knife, causing not only long-term mechanical damage but also drift of the electrode within the brain.

The result of this drift is that even if you can measure neuron signals, you cannot stably measure signals from the same neuron. It also provokes a large immune rejection response in the brain: over time, neurons die off at the implantation site while immune cells proliferate there, and the single-unit action potentials you could measure at the beginning gradually become undetectable. Patients with an implanted deep brain stimulator need the stimulation site changed every few months or a year to keep treatment effective, because the device likewise damages deep brain tissue.

To keep the tofu-soft human brain from being "repeatedly cut" by implanted electrodes and chips, the material must be soft. Jia Liu's team made a breakthrough on this front, and he founded Axoft together with Yeyang Ye. Their insight is that hard materials, made extremely thin, become soft.

Just as a thick block of metal can be rolled into foil and bent at will, and a thick plastic board can become soft plastic wrap, this insight quickly inspired the whole industry. Companies like Neuralink have adopted a similar idea, because thinner wires are softer and less likely to damage brain tissue.

However, new problems have also emerged: if the electrodes are too thin, they're easy to break, and they may not be able to be pulled out once broken; moreover, too thin materials can't accommodate too many electronic components. To increase the number of channels, more electrodes need to be