
The next frontier of AI? Consciousness algorithms

神译局 2026-05-12 07:24
Although it's still in the very early stages and nothing concrete has been established yet, the possibilities contained in this line of thinking are truly exciting.

The God Translation Bureau is a compilation team under 36Kr, focusing on fields such as technology, business, the workplace, and life, with a particular emphasis on introducing new technologies, new ideas, and new trends from abroad.

Editor's note: Some of the world's leading thinkers believe they may have cracked the mystery of machine consciousness. This article is a translation, and we hope it inspires you.

Image credit: WIRED Staff; Getty Images

As a journalist covering artificial intelligence, I constantly hear from people who firmly believe that chatbots such as ChatGPT and Claude already possess "sentience," "consciousness," or, my personal favorite, "independent thinking." True, these systems long ago passed the Turing test; there's no doubt about that. But unlike raw machine intelligence, such concepts are not easy to define. Large language models may claim to think for themselves, describe inner pain, or even profess eternal love, but those statements don't mean they have inner experiences.

Could they really be conscious? Most people actually building AI never talk about such things. They are busy chasing the performance benchmark known as artificial general intelligence (AGI), a purely functional category that has nothing to do with whether a machine might experience the world. So although I'm skeptical, I figured it might be eye-opening, even inspiring, to spend some time with a company that claims it can crack the code of consciousness.

Conscium was founded in 2024 by the British AI researcher and entrepreneur Daniel Hulme. Its advisory team includes an impressive roster of neuroscientists, philosophers, and experts on animal consciousness. When we first spoke, Hulme was refreshingly matter-of-fact: there are solid grounds for doubting that language models are conscious. Crows, octopuses, even amoebas interact with their environments in ways that chatbots cannot. Experiments also show that what an AI says about itself does not reflect any coherent or consistent internal state. As Hulme put it, this is the broad consensus: large language models are very crude simulations of the brain.

But, and this is a crucial "but," everything hinges on how you define "consciousness." Some philosophers hold that consciousness is too subjective ever to be studied or reproduced. Conscium is betting that since consciousness exists in humans and other animals, it can be detected, measured, and engineered into machines.

There are several competing and overlapping views on the key features of consciousness, including the capacity to perceive and feel, awareness of oneself and one's environment, and so-called metacognition, the ability to think about one's own thought processes. Hulme believes that when these phenomena come together, the subjective experience of consciousness emerges, much as rapidly flipping through consecutive images in a book creates the illusion of motion. But how do you identify the components of consciousness, the individual "frames," so to speak, and the "force" that binds them together? The answer, Hulme says, is to let AI act on itself.

Conscium's goal is to break conscious thought down into its most basic forms and catalyze it in the laboratory. "Consciousness must be made of something, and it emerged from something in the course of evolution," says Mark Solms, a South African psychoanalyst and neuropsychologist involved in the Conscium project. In his 2021 book "The Hidden Spring," Solms proposed a way of thinking about consciousness that centers on "feeling." He argues that the brain forms a feedback loop of perception and action that aims to minimize surprise: it generates hypotheses about the future and continually updates them as new information arrives. This view builds on the "free energy principle" proposed by another famous and controversial neuroscientist, Karl Friston, who also advises Conscium. Solms further suggests that in humans this feedback loop evolved into a system mediated by emotions, and that these feelings are what give rise to sentience and consciousness. The theory is supported by cases of brainstem damage: the brainstem plays a key role in emotional regulation, and damage to it appears to extinguish consciousness.
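The predict-observe-update loop described above can be sketched in a few lines of code. This is only a toy illustration of the general idea of minimizing prediction error; all names and parameters here are my own assumptions, not Conscium's or Solms' actual implementation.

```python
import random

def run_loop(true_value=5.0, steps=50, learning_rate=0.2, noise=0.5, seed=0):
    """Toy predictive loop: the agent keeps a belief about a hidden value,
    predicts each observation, and updates the belief to reduce
    prediction error (a crude proxy for 'surprise')."""
    rng = random.Random(seed)
    belief = 0.0                 # the agent's current hypothesis
    errors = []
    for _ in range(steps):
        observation = true_value + rng.gauss(0, noise)  # noisy sensory input
        prediction_error = observation - belief          # surprise signal
        belief += learning_rate * prediction_error       # revise hypothesis
        errors.append(abs(prediction_error))
    return belief, errors

belief, errors = run_loop()
# Early prediction errors are large; later ones shrink toward the noise floor
# as the belief converges on the hidden value.
```

The point of the sketch is simply that a system driven to reduce the gap between what it expects and what it senses will, over time, build an increasingly accurate internal model of its world.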

At the end of his book, Solms proposed a way to test his theory in the laboratory. Now, he says, he has done it. He hasn't published a paper yet, but he showed me a draft. Did it upend my view of things? It was startling, at least. Solms' artificial agents live in a simple computer-simulated environment, controlled by an algorithm with a Friston-style, feeling-mediated loop of the kind he believes underlies consciousness. "I have several motivations for doing this research," Solms says. "One of them is that it's just so interesting."

Solms' laboratory environment changes constantly, forcing the agents to keep modeling and adjusting. Their experience of the world is mediated by simulated responses resembling fear, excitement, even pleasure. In short, they are "pleasure bots." Unlike the AI agents everyone talks about today, Solms' creations have a genuine drive to explore their environment; to understand them properly, you have to try to imagine how they "feel" about their small world. Solms believes it should eventually be possible to combine the method he is developing with language models, creating a system that can talk about its own felt experience.
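To make the idea of feeling-driven exploration concrete, here is a loose sketch, again entirely my own illustration rather than Solms' agents: an agent whose choice of where to go in a tiny world is mediated by a running "valence" estimate (how pleasant each region has felt) plus a curiosity bonus that pulls it toward regions it knows less about.

```python
import random

def explore(region_valence=(1.0, 0.2, -0.5), steps=30, curiosity=2.0, seed=1):
    """Toy feeling-mediated explorer: each region of the world has a true
    (hidden) valence; the agent tracks how each region has felt so far and
    favors pleasant regions, with a curiosity bonus for unfamiliar ones."""
    rng = random.Random(seed)
    n = len(region_valence)
    felt = [0.0] * n          # running estimate of each region's valence
    visits = [0] * n
    for _ in range(steps):
        # score = felt valence + curiosity bonus that fades with familiarity
        scores = [felt[i] + curiosity / (1 + visits[i]) for i in range(n)]
        choice = max(range(n), key=lambda i: scores[i])
        feeling = region_valence[choice] + rng.gauss(0, 0.1)  # noisy affect
        visits[choice] += 1
        felt[choice] += (feeling - felt[choice]) / visits[choice]
    return visits

visits = explore()
# Curiosity makes the agent sample every region at least once before it
# settles into the region that feels best.
```

The design point is that the affect signal, not an external task reward, is what organizes the agent's behavior: curiosity gets it everywhere once, and felt pleasantness decides where it lingers.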

Conscium's research is still at a very early stage, barely past its infancy; call it a plausible impossibility. But it is genuinely interesting, and it has pushed me to think about my own consciousness in a new way. Allow me some metacognition of my own: I tend to assume it is thinking that makes me conscious, not emotion. But have we been looking for consciousness in the wrong place? What would it mean if consciousness could be reduced to so simple a mechanism? Perhaps those who claim to have glimpsed sentience in ChatGPT aren't imagining things after all.

Translator: Teresa