Can brain-computer interfaces become "cheat codes" for future humans, repairing paralysis and enhancing ordinary people?
In the second half of 2025, brain-computer interfaces were a recurring hot topic in Silicon Valley tech circles.
Mid-year, Neuralink, led by Elon Musk, raised another $650 million, bringing its valuation to $9 billion. Its target indications have expanded from brain-controlled devices for patients with severe motor dysfunction, such as amyotrophic lateral sclerosis (ALS) and spinal cord injury, to functional reconstruction for patients with aphasia and blindness.
In August, Sam Altman and his team founded the brain-computer interface startup Merge Labs, planning to raise $250 million in financing from OpenAI, with the aim of using ultrasound to read and modulate brain signals.
In November, another representative company, Synchron, received $200 million in Series D financing; Paradromics obtained FDA approval to initiate human clinical trials of brain-computer interfaces for speech function reconstruction.
Compared with their overseas counterparts, which enjoy star status and large financing rounds, most Chinese brain-computer interface startups are pragmatic and low-key. Under a strict medical-device regulatory framework, they leverage domestic clinical resources and medical strengths to steadily advance clinical trials and product optimization.
On the registration and market-access front, implantable systems developed by brain-computer interface companies such as BrainCo and Ladder Medical have entered the National Medical Products Administration's "green channel" for innovative medical device review, and are conducting human clinical trials to verify the systems' safety and effectiveness.
With the rapid development of artificial intelligence, "human-machine integration" has drawn growing attention. From treating incurable diseases and restoring patients' functions to enhancing human capabilities, what is the timeline for deploying brain-computer interfaces in different scenarios? Will this new technology serve as an information bridge between carbon-based and silicon-based life? And how can the single-cell datasets collected by flexible electrodes deep in the cerebral cortex raise the ceiling of brain-computer interface applications?
With these questions, 36Kr interviewed Li Xue and Zhao Zhengtao, researchers at the CAS Center for Excellence in Brain Science and Intelligence Technology and founders of Ladder Medical. In March this year, the invasive brain-computer interface system developed by Ladder Medical completed China's first prospective clinical trial at Huashan Hospital of Fudan University. In November, the product entered the National Medical Products Administration's "green channel", which is expected to shorten the cycle from clinical verification to market access.
When asked whether they would implant a brain-computer interface system themselves once the technology matures, both of these post-90s scientists answered yes.
The following is the edited dialogue between 36Kr and Li Xue, Zhao Zhengtao:
From Saving Lives to Consumer Healthcare in the Next 5-10 Years
36Kr: As a platform technology, where are the application boundaries of brain-computer interfaces?
Zhao Zhengtao: Currently, there are mainly three clear application scenarios for brain-computer interfaces.
Brain control category: outputting information from the brain. For example, extracting the motor intention of paralyzed patients or the language intention of aphasic patients, decoding it, and outputting it externally to achieve motor control or language expression.
Neural regulation category: writing information into the central or peripheral nervous system to regulate abnormal states such as brain disorders. For example, using deep brain stimulation (DBS) to treat Parkinson's disease and spinal cord stimulators to manage pain.
Sensory reconstruction category: Such as auditory and visual reconstruction. For patients who have lost the ability to obtain external information, the brain-computer interface can re-encode external information into electrical signals and input them into the brain to generate sensory perception.
We are working on brain-computer interface platform technology, which requires sensors for accurate reading and writing of information, as well as signal processing, wireless transmission, decoding and encoding technologies. As a platform company, we hope to maximize the value of brain-computer interfaces and cover different application scenarios.
36Kr: Currently, interest in brain-computer interfaces has expanded from the medical community to the technology and consumer worlds. How do you judge the timelines for deploying brain-computer interfaces in clinical treatment and in consumer scenarios, respectively?
Zhao Zhengtao: In the next three to five years, the clinical value of brain-computer interfaces in treating disease will certainly be verified, namely improving patients' quality of life and ability to work.
In the next five to ten years, the consumer healthcare side of brain-computer interfaces will begin to emerge. The interface is, at heart, a communication technology, an information bridge between humans and machines that can transform human-machine interaction and make it more efficient.
In the future, once brain-computer interfaces are coupled with rapidly developing AI, software, and hardware, it will become possible to control intelligent agent devices efficiently, even for complex tasks, by thought alone. At that point the brain can be integrated with all kinds of peripheral devices through the interface, and the era of "human-machine integration" will truly arrive, reaching millions of households.
36Kr: What specific indications does Ladder Medical's first "implantable wireless brain-computer interface system" target? What value do you hope to bring to patients?
Zhao Zhengtao: Currently, the indications of this product are mainly for patients with severe motor dysfunctions. This includes high paraplegia caused by spinal cord injuries, motor dysfunctions caused by ALS, and severe paralysis caused by severe strokes (such as brainstem strokes).
The purpose of the implantable brain-computer interface is to directly control external electronic devices and complex physical peripherals, such as robotic arms and robotic dogs, through brain thoughts.
We hope to improve the quality of life of these patients. From basic daily life functions, such as turning over and picking up a cup, to accessing the digital world, such as playing games, sending and receiving emails, and handling bank accounts.
At the same time, we also hope to help these patients restore their productivity. For example, people with an engineering design background can do 3D modeling, and patients with e-commerce experience can run online stores. This not only helps patients improve their quality of life but also restores their lost employment ability, making them feel needed by society.
36Kr: What is the product form and implantation method?
Li Xue: The core component at the front end of the brain-computer interface system is the ultra-flexible electrode we have developed over the past decade. The electrode wire is about 1% the thickness of a human hair, and its width is comparable to a hair's. The force generated when the electrode bends is similar to the interaction force between two cells, which counteracts electrode displacement when the body moves and lets us collect signals stably from the same position.
During implantation, part of the skull is thinned by about 5 millimeters to form a bone groove, and a coin-sized implant is embedded in it. Then, through a puncture hole in the skull, the electrode wire is implanted minimally invasively to a depth of about 5-8 millimeters below the cerebral cortex. The surgeon told us that, in a sense, this is not so much a craniotomy as a puncture procedure.
The ultra-flexible electrodes implanted in the cerebral cortex pick up fine single-cell neural activity with extremely low latency, on the order of tens of milliseconds. The recipient can hardly perceive this delay, since it also takes roughly 100 milliseconds for the brain to form an intention and the arm to execute it. With this implantable system, it is possible to control cross-platform electronic devices (mobile phones, computers, iPads) and physical peripherals (wheelchairs, robotic arms) by thought.
On December 4th, Ladder Medical released its second-generation high-throughput wireless invasive brain-computer interface system, with the number of electrode channels increased to 256. The application scenarios will expand from "motor control" to "language reconstruction".
The "Data Flywheel" from the Cerebral Cortex
36Kr: Ladder Medical attaches great importance to the brain single-cell data set. Why? Can this be understood as the key to the "data flywheel" of brain science?
Zhao Zhengtao: You are absolutely right. We have often described the "inverted pyramid" logic of brain-computer interfaces: the bottom layer is the neural interface, above it full-system development, then clinical accessibility, and at the top neuroscience's understanding of the brain. How well we understand the brain at the top determines how far we can develop and use it; that understanding is the ceiling.
The underlying interface system supports the possibility of obtaining clinical data, and these single-cell data are the key to improving the understanding of cognitive science. This understanding, in turn, can help us design better products and develop new application scenarios.
For the brain-computer interface platform, in the future, whoever masters a larger amount of brain science data will be closer to the right to define new application scenarios and technical paths.
Previously, the veteran American brain-computer interface project BrainGate implanted systems in forty to fifty patients. Even that limited data has supported a series of influential research results, such as thought typing and language decoding.
If the number of implanted patients increases from dozens to thousands or even tens of thousands, the accumulated data will bring great value to human brain science research.
36Kr: What role do AI and algorithms play in this? Will large brain data models be used?
Zhao Zhengtao: You are absolutely right. Once a large amount of data has accumulated for a single task in a single brain region, we can use that brain data to train a "base model", analogous to pretraining a large model. This base model can then be applied to new patients, significantly improving the initial performance of their decoders and their brain-control performance.
Neural network algorithms are well suited to parsing complex data. At present, however, our brain-control commands are relatively simple, such as the two-dimensional vector of a cursor or control signals for 3-5 degrees of freedom. So the algorithms we use now are also relatively simple; a recurrent neural network (RNN) with a bit over a hundred thousand parameters is almost sufficient.
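For illustration, the kind of simple RNN decoder described here can be sketched as follows: binned spike counts from the electrode channels go in, a two-dimensional cursor vector comes out. Everything in this sketch is a hypothetical stand-in: the channel count, hidden size, and random weights are illustrative only and do not reflect Ladder Medical's actual decoder, which would be trained on recorded neural data.

```python
import numpy as np

# Hypothetical sketch of a simple RNN cursor decoder: binned spike counts
# from N electrode channels in, a 2-D cursor vector (vx, vy) out.
# Sizes and random weights are illustrative; a real decoder is trained.
rng = np.random.default_rng(0)

N_CHANNELS = 128  # assumed electrode channel count
HIDDEN = 64       # assumed recurrent state size

W_in = rng.normal(0, 0.1, (HIDDEN, N_CHANNELS))
W_rec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (2, HIDDEN))

def decode(spike_bins):
    """spike_bins: (T, N_CHANNELS) spike counts -> (T, 2) cursor vectors."""
    h = np.zeros(HIDDEN)
    out = []
    for x in spike_bins:
        h = np.tanh(W_in @ x + W_rec @ h)  # vanilla RNN step
        out.append(W_out @ h)
    return np.array(out)

# One second of activity at 50 ms bins (20 steps) of Poisson "spike counts".
v = decode(rng.poisson(2.0, (20, N_CHANNELS)))
print(v.shape)  # (20, 2)
```

At these toy sizes the decoder has only about 12,000 parameters; with a few hundred hidden units and 256 channels the count lands in the low hundreds of thousands, consistent with the scale mentioned in the interview.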
In the future, to achieve higher-throughput and more complex brain control, the complexity and the number of parameters of the neural network will also increase accordingly. Personally, I think that at this stage, the development process and speed of artificial intelligence in this area fully meet the needs of brain-computer interfaces.
AI is also highly valuable on the application side. In language decoding, for example, we don't need to decode complete words. As long as we can decode dozens of classified "morphemes" from brain activity, we can combine them with large language models, which predict from context, greatly helping users convey language information.
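The idea of combining a coarse neural classifier with a language model's context prediction can be sketched as simple Bayesian fusion. Everything below is hypothetical: a toy four-unit inventory and a hand-written bigram table stand in for the real morpheme set and the large language model.

```python
import numpy as np

# Toy "morpheme" inventory (hypothetical; real systems use dozens of units).
UNITS = ["ba", "da", "ga", "ma"]

# P(next unit | previous unit): a hand-written bigram table standing in
# for a large language model's context prediction.
LM_PRIOR = {
    "ba": np.array([0.10, 0.40, 0.40, 0.10]),
    "da": np.array([0.30, 0.10, 0.30, 0.30]),
    "ga": np.array([0.25, 0.25, 0.25, 0.25]),
    "ma": np.array([0.40, 0.20, 0.20, 0.20]),
}

def fuse(decoder_probs, prev_unit):
    """Re-weight the neural decoder's output with the language-model prior."""
    posterior = decoder_probs * LM_PRIOR[prev_unit]
    return posterior / posterior.sum()

# A noisy decode that only slightly favours "da"; context after "ba"
# sharpens the decision.
post = fuse(np.array([0.30, 0.35, 0.25, 0.10]), "ba")
print(UNITS[int(np.argmax(post))])  # prints: da
```

The design point is that the neural decoder only has to narrow the choice down; the language model supplies the rest from context, which is why decoding dozens of units can suffice.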
In the future, the relationship between brain-computer interfaces, humans, and artificial intelligence (embodied intelligent agents) will be a process of dynamic adjustment and mutual cooperation. Sometimes, we give a high-level instruction through brain signals, and the intelligent agent splits and executes it through its own intelligence; in specific scenarios, we can dynamically adjust and give it precise motion control instructions to achieve higher-level strategies.
The Brain-Computer Interface: An "Information Bridge" for Human-Machine Integration
36Kr: Judging from the performance of patients who have already received brain-computer interface implants, could humans acquire abilities beyond the ordinary through "brain control" in the future?
Li Xue: Initially, we chose to develop brain-computer interface technology with the intention of applying it in consumer-level scenarios. The reason we started with medical applications is that medical use is a necessary step. It can meet the actual needs of many people and also drive us to continuously refine various technologies.
In this process, we hope not only to restore patients' motor abilities and sensory perceptions but also to help them explore the boundaries of human control. In the past, we used our brains to control our limbs. In the future, with brain-computer interfaces to control peripheral devices, where are the boundaries? Can it exceed the current human level? I believe it definitely can.
For example, Neuralink previously released a small game to test the speed and accuracy of cursor control. Ordinary people reach 8-10 BPS (bits per second). After training, paralyzed patients with implanted brain-computer interfaces have achieved 9.5 BPS through the interaction between brain signals and the machine, faster than many ordinary people, which shows they can hold a comparative advantage in a specific dimension.
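For reference, one common way BCI cursor studies compute "achieved bitrate" for a target-selection task is the formula below. Whether Neuralink's game uses exactly this metric is an assumption, so treat this as a sketch of how a BPS figure is typically derived.

```python
import math

def bitrate_bps(n_targets, hits, misses, seconds):
    """One common achieved-bitrate metric for a target-selection task:
    B = log2(N - 1) * max(hits - misses, 0) / seconds."""
    if n_targets < 2 or seconds <= 0:
        raise ValueError("need at least 2 targets and a positive duration")
    return math.log2(n_targets - 1) * max(hits - misses, 0) / seconds

# Illustrative numbers only: 60 correct and 4 wrong selections on a
# 17-target grid over 30 seconds.
print(round(bitrate_bps(17, 60, 4, 30.0), 2))  # prints 7.47
```

Note that errors are penalized (misses subtract from hits), so the metric rewards sustained accurate control rather than rapid guessing.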
Let me add another idea. Currently, all device control has a "path dependence". For example, to move the computer cursor from the lower right corner to the upper left corner, we must push it with the mouse. But in fact, we can imagine this process in our brains. In the future, by decoding the spatial position information in the brain's hippocampus, it may be possible to achieve "imaginary teleportation" of the cursor. This is something impossible for normal people.
In addition, for complex peripherals such as intelligent robots and multi-degree-of-freedom robotic arms, control by joystick or speech is relatively coarse. But if we use the complex information encoded in brain neural activity to control a multi-degree-of-freedom robotic arm, it may become as flexible as a part of one's own body. A study previously published in a top-tier journal showed that patients with implanted brain-computer interface devices can, after training, control multi-degree-of-freedom robotic arms smoothly and freely.
36Kr: Large AI models are now evolving very quickly, and many experts worry about the future of human education, employment, and even existence. Do you think human-machine integration is an inevitable trend?
Li Xue: AI should be a tool that serves humans, with human intention at the core. However, we cannot rule out the possibility that AI will evolve self-awareness in the future.
To prevent humans from losing their dominant position and being marginalized, I think deep human-machine integration is a fairly inevitable trend. The key to integration is the "bridge", and the brain-computer interface plays that role: an information bridge connecting humans and machines.
If this bridge is missing as things develop, silicon-based life, represented by AI, and humans may go their separate ways. But if artificial intelligence can seamlessly become part of humans, or humans become part of artificial intelligence and evolve into a better collective, a harmonious coexistence with frictionless information exchange becomes possible.
Let me give a concrete example: autonomous driving. You can tell the car to go from point A to point B, and it will plan the route and avoid obstacles automatically. But at any moment, a human can intervene and take over the controls; that is controllable development. What we don't want is AI and intelligent agents getting out of control while humans lose their grip on physical things like the steering wheel and the brakes. Deep human-machine integration may be the better state.
36Kr: If the brain-computer interface technology is mature enough in the future, as founders, will you be the first to implant it?
Li Xue: (laughing) Some time ago, we were still discussing that we also wanted to implant one to see what it feels like.