
A Comprehensive Review of Brain-Computer Interfaces: From Science Fiction to Reality, Who is Leading the Revolution of "Mind Reading"?

硅谷101 · 2026-01-15 10:27
Redefine the boundary between humans and machines

On January 1st, Elon Musk abruptly announced on X that Neuralink's brain-computer interface devices will enter mass production this year. To date, Neuralink has implanted its devices in 13 patients, who can type, browse the web, and play games with their "thoughts".

Image source: X

The brain-computer interface - a technology that sounds like something out of "The Matrix" - is now moving from the laboratory into reality. Neuralink may be the best-known company in the field, but the race is heating up: veterans and newcomers alike are closing in, non-invasive and minimally invasive projects are quietly emerging, and heavyweights including OpenAI CEO Sam Altman have entered the game.

What is certain is that the global brain-computer interface market is growing explosively. By some estimates, the U.S. market alone could reach $400 billion. And this may be just the beginning.

In this article, let's look at where brain-computer interface technology - which could change the future of humanity - stands today. Who is leading? Who might leapfrog the leaders? And when will the technology reach ordinary people?

01 The Principle of the Brain-Computer Interface: From "Mind Reading" to "Controlling Everything"

BCI is short for Brain-Computer Interface. Simply put, a brain-computer interface establishes a direct communication channel between the human brain and external devices. It bypasses the traditional nerve-muscle-sensory pathway, allowing the brain to "talk" to machines directly.

For example: when we normally use a computer, we type on a keyboard and move a mouse with our fingers. Brain-computer interface technology lets you skip that intermediate step and control the computer just by "thinking". This isn't mind reading; it works by capturing the signals the brain emits and using algorithms to "translate" them into instructions a machine can understand.

1.1 Core Principle: Four Steps to Connect the Brain and Machines

Imagine the 86 billion neurons in your brain, "talking" all the time - that is, communicating through electrical signals. The fact that you can see this text and understand these concepts is, at bottom, your neurons firing. The working principle of a brain-computer interface is actually quite simple:

Step 1: Collect signals. Record the activity of neurons through electrodes or ultrasound, like placing a high-precision listening device in the "group chat" of hundreds of millions of neurons in the brain.

Step 2: Decode signals. Use AI algorithms to translate these signals and work out what the brain wants to do. For example, when you want to move your finger, specific neurons in the motor cortex fire in a specific pattern. Once the AI learns to recognize that pattern, it knows your intent.

Step 3: Output instructions. Send the decoded instructions to external devices - computer cursors, robotic arms, wheelchairs, and even future humanoid robots.

Step 4: Feedback loop. The most advanced brain-computer interfaces also work in reverse: after the device performs an action, it sends a feedback signal back to the brain. For example, when a brain-computer interface controls a robotic hand to pick up a cup, the brain can "feel" the touch and the weight, closing the interactive loop.
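To make the four steps concrete, here is a minimal, purely illustrative sketch of a BCI decoding loop in Python. Everything in it (the `read_electrodes` source, the linear decoder, the cursor device) is a hypothetical stand-in for the real hardware and models - not any company's actual implementation.

```python
import numpy as np

N_CHANNELS = 64  # assumed electrode count for this toy example

# --- Step 1: collect signals -------------------------------------------
def read_electrodes() -> np.ndarray:
    """Stand-in for amplifier hardware: returns one frame of neural data."""
    return np.random.randn(N_CHANNELS)  # placeholder noise, not real data

# --- Step 2: decode signals ---------------------------------------------
# A toy linear decoder mapping 64 channel activations to a 2-D intent
# (cursor velocity). Real systems train this mapping on calibration data.
W = np.random.randn(2, N_CHANNELS) * 0.01  # decoder weights (untrained)

def decode(frame: np.ndarray) -> np.ndarray:
    return W @ frame  # (dx, dy) intended cursor movement

# --- Step 3: output instructions ----------------------------------------
cursor = np.zeros(2)

def move_cursor(velocity: np.ndarray) -> np.ndarray:
    global cursor
    cursor = cursor + velocity
    return cursor

# --- Step 4: feedback loop ------------------------------------------------
# In a closed-loop system the user sees or feels the result and adapts;
# here we simply run a few iterations of the loop.
for _ in range(5):
    frame = read_electrodes()         # 1. collect
    velocity = decode(frame)          # 2. decode
    position = move_cursor(velocity)  # 3. act
    print(f"cursor at {position}")    # 4. feedback to the user
```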

1.2 Three Major Technical Routes: The Trade-off Between Safety and Performance

We now know what a brain-computer interface can do; how to do it is the hard part. Implanting a chip can mean drilling into the most important and most fragile organ a human has, so safety is the first consideration. The field has accordingly developed three major technical routes, each making a different trade-off between safety and performance.

The first route is non-invasive: the safest, but with the weakest signal. These devices are like a "mind-reading hat" worn on the head, detecting the faint electrical signals of brain activity through electrodes placed on the scalp.

The advantages: completely non-invasive, no surgery required; convenient - just put it on and go; and relatively cheap, with consumer-grade devices costing a few hundred to a few thousand dollars. The disadvantages are just as obvious: the signal is very weak, like listening to music through a thick wall; accuracy is low, supporting only simple controls; and it is easily disturbed by hair, sweat, and external electromagnetic fields.

Many brain-computer interface products on the market today use non-invasive designs. Their simplicity and ease of use suit consumer scenarios, but some professionals question how effective they can be.

Jia Liu

Assistant Professor at the Harvard School of Engineering and Applied Sciences:

We need to respect physical facts. The signal of each neuron in the brain occupies a frequency band of roughly 300 to 3,000 hertz - and the part that matters most, the neuron's action potential, sits up toward 3,000 hertz.

The human skull and the membranes on the brain's surface are very good low-pass filters: essentially everything above about 40 hertz is filtered out. So with a non-invasive device, from a purely physical standpoint, you cannot get the signal of a single neuron - you only get an averaged result.
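Jia Liu's point can be checked numerically. The sketch below is my illustration, not from the interview: it builds a synthetic signal containing a slow 10 Hz rhythm plus a 1 kHz component standing in for the action-potential band, then applies a 40 Hz low-pass filter as a crude model of the skull. The slow wave survives; the spike-band energy essentially vanishes.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # one second of signal

slow_wave = np.sin(2 * np.pi * 10 * t)        # 10 Hz rhythm (EEG-like)
spike = 0.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz "action potential" band
signal = slow_wave + spike

# Crude skull model: 4th-order low-pass filter with a 40 Hz cutoff.
b, a = butter(4, 40, btype="low", fs=fs)
through_skull = filtfilt(b, a, signal)

def band_rms(x, lo, hi):
    """RMS spectral amplitude of x within a frequency band, via FFT."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sqrt(np.mean(np.abs(spectrum[mask]) ** 2))

print("10 Hz band, before vs after skull:",
      band_rms(signal, 5, 15), band_rms(through_skull, 5, 15))
print("spike band (300-3000 Hz), before vs after skull:",
      band_rms(signal, 300, 3000), band_rms(through_skull, 300, 3000))
```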

Ye Tianyang

Co-founder and CTO of Axoft:

Our skull is a nearly perfect insulator, so most of the electrical signals generated in the brain are blocked by it. If all the thoughts in the brain are a wonderful symphony, then our skull is the concert hall. Placing electrodes outside the skull and measuring the brain's electrical signals non-invasively is like listening to the symphony from outside the concert hall. No matter how wonderful the symphony, and no matter how advanced the recording equipment you set up outside, the hall stands in between, and the signal you finally capture is weak and muddled. This is the problem facing today's non-invasive brain-computer interfaces: they have never been able to obtain high-precision, high-bandwidth signals.

The second faction takes a middle route: "semi-invasive". It typically requires a craniotomy, but the electrodes sit only on the surface of the brain or outside the dura mater without penetrating brain tissue; alternatively, electrodes can be delivered through blood vessels. The advantage is signal quality better than non-invasive methods at slightly lower risk than fully invasive ones; the disadvantage is a relatively small number of channels and performance short of fully invasive systems.

This faction sits in an awkward position, though, because - as our guests told us - the craniotomy itself is the riskiest step. And after opening the skull, if you don't go deep to collect data, it's like buying a concert ticket, entering the hall, and then sitting in the last row.

Ye Tianyang

Co-founder and CTO of Axoft:

There are two major factions in invasive brain-computer interfaces on the market: one is the surface brain-machine interface, the other the depth brain-machine interface. After removing a piece of skull and exposing the brain, you can either attach electrodes to the brain's surface to measure electrical signals, or insert electrodes into the brain itself. Attaching electrodes to the surface preserves the integrity of the brain's structure, but the drawback is that the brain's electrical signals actually originate in the depths.

So the concert-hall problem remains - only now you're inside the hall, sitting in the last row. Which works better: the last row with the best microphone, or the first row with a mediocre one? That is what everyone is currently debating. Depth electrodes, by contrast, are placed right next to the neurons that generate the electrical signals, capturing the most direct, most accurate, highest-throughput signals and thereby reconstructing the brain's intent most fully. These are the two schools within invasive brain-computer interfaces.

So after a craniotomy, how should data be collected, and how deep should the electrodes go? These are the key questions the industry is actively exploring.

The third faction: fully invasive brain-computer interfaces. As the name suggests, these devices pierce the cerebral cortex directly, making "zero-distance contact" with neurons: tiny electrode needles are inserted into brain tissue to record the activity of single neurons.

The advantages: strong signals, like listening in high-definition stereo; extremely high precision, enabling complex control; and large bandwidth, carrying far more information. The disadvantages: it requires a high-risk craniotomy; long-term implantation of electrode needles may trigger rejection reactions or even infections; and the electrodes can degrade over time.
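To get a feel for what "large bandwidth" means here, a back-of-the-envelope calculation helps. The parameters below are illustrative assumptions, not any device's actual specification:

```python
# Back-of-the-envelope raw data rate for a fully invasive implant.
# All parameters are illustrative assumptions, not any device's spec.
channels = 1024          # electrode channels
sample_rate = 20_000     # samples per second per channel (Hz)
bits_per_sample = 10     # ADC resolution

bits_per_second = channels * sample_rate * bits_per_sample
print(f"{bits_per_second / 8 / 1e6:.1f} MB/s of raw neural data")
# ~25.6 MB/s - which is why on-implant spike detection and
# compression matter before anything is sent over a wireless link.
```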

At the same time, there's also the problem of the implanted materials.

Jia Liu

Assistant Professor at the Harvard School of Engineering and Applied Sciences:

The brain - especially a living brain - is very soft tissue, like tofu. But metal or silicon-based probes are very hard, like a steel knife. So when you insert one into a brain that is constantly moving, the electrode acts like a steel knife, cutting the tissue at the micro-scale. That not only causes long-term mechanical damage but also makes the electrode drift inside the brain.

The result of that drift is that even if you can still measure neuron signals, you cannot stably record from the same neuron. It also provokes a large immune rejection response in the brain: over time, neurons around the implant site undergo apoptosis while immune cells proliferate, and the single-unit action potentials you could measure at first gradually become undetectable. Patients with an implanted deep brain stimulator need the stimulation site changed every few months to a year for the treatment to remain effective, because the device likewise damages deep brain tissue.

To keep the tofu-soft human brain from being "repeatedly cut" by implanted electrodes and chips, the materials themselves must be soft. Jia Liu led his team to a breakthrough on this front and founded Axoft with Ye Tianyang. Their insight: make a hard material extremely thin, and it becomes soft.

It is the same reason a thick block of iron, rolled into foil, bends easily, and a rigid plastic board becomes pliable as plastic wrap. This discovery quickly inspired the entire industry - companies like Neuralink have adopted a similar idea, since thinner wires are softer and less likely to damage brain tissue.
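The mechanics behind "thin means soft" is standard plate theory (my addition, not from the article): the flexural rigidity D of a thin film grows with the cube of its thickness t, for a fixed Young's modulus E and Poisson's ratio ν. Making a probe ten times thinner therefore makes it roughly a thousand times more compliant, without changing the material at all:

```latex
D = \frac{E\,t^{3}}{12\,(1-\nu^{2})}
\qquad\Longrightarrow\qquad
\frac{D(t/10)}{D(t)} = \left(\frac{1}{10}\right)^{3} = 10^{-3}
```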

New problems emerged, however: electrodes that are too thin break easily, and once broken they can be hard to extract; and ultra-thin materials cannot host many electronic components, so raising the channel count means inserting more electrodes. That is why Neuralink needs its elaborate "sewing machine" surgical robot.