The proposer of the Chinese Room has passed away. He once publicly "toyed with" Hinton on television, an encounter Hinton remembered for half a lifetime.
The philosopher John Searle has passed away at the age of 93.
You may not be familiar with his name, but you are surely no stranger to the thought experiment he proposed: the "Chinese Room".
This experiment, proposed in 1980, came to be mentioned in the same breath as the "Turing Test" and is regarded as a classic proposition in the philosophy of artificial intelligence.
It not only challenges the boundary of whether machines can "understand", but also forces people to rethink the nature of the mind.
More than four decades later, as large language models such as GPT take the stage, people are still asking: are they merely simulating understanding, or have they truly achieved it?
Outside academia, a TV recording from the 1970s remains vivid in the memory of Geoffrey Hinton, the father of deep learning -
half a century on, he still cannot forget the experience of "being toyed with by Searle".
So what exactly did Searle do to leave this pioneer of artificial intelligence brooding for most of his life?
Why was Chinese specifically chosen for the Chinese Room?
A Few Things about Searle
Before reviewing Searle's life, let's start with two anecdotes.
As mentioned at the beginning, during a TV recording in the 1970s, Searle publicly "toyed with" the young Geoffrey Hinton - this episode has haunted Hinton for half a century.
As late as 2022, in a live interview with Stephen Hanson, Hinton was still describing that program as an "extremely painful experience".
The incident started when Hinton and Searle were arranged to record a TV program together.
Before the recording, Hinton specifically asked Daniel Dennett whether he should participate.
(Daniel Dennett is a contemporary American philosopher and cognitive scientist, on a par with Searle but with completely opposite views. He emphasizes that consciousness and intelligence can be naturally explained within the framework of computation and evolution, and opposes the view that "the mind cannot be algorithmized". He is the author of Consciousness Explained.)
Dennett advised him "better not to go", but Hinton thought it would be okay as long as he made an agreement with Searle in advance not to talk about the "Chinese Room".
As soon as the program started, Searle raised the microphone and said:
Today we are going to have a conversation with the connectionist Geoffrey Hinton. Of course, he won't have any problems with the Chinese Room experiment.
This move broke the agreement outright. Hinton was stunned on the spot but could not object on air, and had to respond reluctantly.
Searle then launched a "philosophical interrogation":
If we replace each neuron in your brain with a chip, slowly we'll lose Hinton. He will just dis-ap-pear.
Hinton was completely speechless. The producer couldn't stand it and whispered to him, "You have to be more assertive!"
Hinton thought to himself: Oh my god, do I have to be even more assertive?
So, during that long recording, Hinton just silently looked out of the window, much like a certain well-known performing artist.
In the end, the two-hour recording was edited down to a one-hour program.
More than fifty years later, Hinton still clearly remembers the details of that recording: the ITV studio, the green curtain wall, the envelopes of cash... nothing is forgotten.
So while we can't blame Searle for the old man's back problems, he should bear some responsibility for Hinton's mental health.
To understand this awkward encounter, we have to go back to the root of the academic disagreement.
In his early years, Hinton, together with Rumelhart, McClelland and others, was known for parallel distributed processing (PDP). They argued that the mind is not a computer program performing rule-based operations on symbols, but a distributed network that represents knowledge through activation patterns across neurons - the position known as connectionism.
Searle, however, treated all artificial intelligence as a "symbol-manipulating system", drawing no distinction between symbolism and connectionism.
This forced Hinton to answer within Searle's semantic framework throughout the debate -
a debate doomed from the start to be asymmetric.
The second anecdote comes from the obituary in The New York Times.
Once, Searle learned that the brochure for an introductory philosophy course printed the photos of three philosophers: René Descartes, David Hume, and himself.
Searle took a look and casually said:
"Who are the other two?"
The remark is supremely arrogant, but very much in Searle's style.
It recalls his predecessor Ludwig Wittgenstein, who made his name at nearby Cambridge.
It is rumored that when Wittgenstein was introduced to Trinity College by Bertrand Russell, he also asked a legendary question:
"Who is Aristotle?"
Although Searle's unruly temperament was closer to Wittgenstein's, he chose the neighboring Oxford instead - selecting as his tutor the philosopher of language John Austin, then as famous as Wittgenstein.
The Guardian used an excellent metaphor to describe this teacher - student relationship:
Searle's cowboy-style straightforwardness was incompatible with Austin's pedantic, introverted British-aristocratic temperament.
This unorthodox, even morally controversial temperament became the undertone of Searle's life.
Searle's full name is John Rogers Searle. He was born on July 31, 1932, in Denver, USA.
At the age of 19, he won a Rhodes Scholarship and transferred from the University of Wisconsin-Madison to the University of Oxford in the UK.
After completing his degree and doctoral thesis under Austin's guidance, he joined the University of California, Berkeley in 1959 and taught there for sixty years.
He was known for his sharp and outspoken remarks:
Being a philosopher is like murder: every morning you wake up, face a brick wall, and bang your head against it until you break through.
Throughout his life, Searle fought against mainstream theories; his debates with Dennett and with Derrida were iconic scenes in the history of philosophy in the second half of the 20th century.
In 1980, he published the famous "Chinese Room" thought experiment, targeting strong artificial intelligence.
On the issue of consciousness, Searle was also very outspoken.
Taking science as his criterion, he allied with neuroscientists, argued that mental experience arises from brain function, and rejected all vague "spiritual" notions as contrary to "obvious physical facts".
For him, consciousness is just a product of neural firing. "Let the brain scientists study how it works."
Searle also frequently opposed postmodernists, who questioned whether humans can attain objective truth, holding that reality is always filtered through subjective experience.
Searle insisted that seeing things from one perspective does not mean failing to see the thing itself - just as seeing only the front of a sofa is still seeing the sofa.
However, Searle's later years were overshadowed.
In 2019, following multiple sexual harassment accusations that first surfaced in 2017, Berkeley stripped him of his title of emeritus professor, and the "Searle Center" named after him was closed soon after.
This incident caused a great stir, making this philosopher, once regarded as a "symbol of reason", the target of public criticism.
As a result, after Searle's death, the mainstream media obituaries for him were conspicuously late.
He died on September 16, but it was not until a month later that mainstream media reported it one after another.
Even so, his influence is still profound. The philosopher Edward Feser sighed:
Philosophers like Kripke, Putnam, Dennett, and Fodor have all made it into the obituaries of mainstream media such as The New York Times and The Guardian. But as far as I know (does anyone know otherwise?), there is still no such obituary for John Searle, although his reputation is no less than theirs. This is both absurd and unfair.
His life was both known for his sharpness and ended with controversy.
The "Chinese Room" - the thought experiment that silenced Hinton and made AI researchers ponder - may be Searle's most representative philosophical legacy.
Getting out of the Chinese Room
This room almost condenses all of Searle's philosophical stances and controversial spirit.
The Chinese Room is a famous thought experiment proposed by Searle in 1980, aiming to refute the claim of strong artificial intelligence.
The experiment imagines an English speaker who knows no Chinese locked in a closed room. Inside is a rule book, written in English, that specifies how to manipulate the Chinese characters passed in and produce Chinese responses.
Although the manipulation is flawless - so that people outside the room believe the person inside understands Chinese - the person inside in fact understands no Chinese at all.
Searle thus argues that programs can only mimic the form of intelligence (at the syntactic level) but cannot have real understanding ability (at the semantic level). He emphasizes that intelligence is not just program processing but also the ability to establish semantic connections between symbols and objects.
This argument questions the validity of the Turing Test. Searle believes that although a machine can show "intelligence" in behavior, it doesn't mean it truly understands the information.
In other words, the operation of a program is not equivalent to human understanding.
A computer is just a symbol-manipulating system: it shuffles symbols according to rules without knowing what those symbols stand for or what they mean. A program's operation is entirely syntactic, involving only formal structure.
Human understanding is different. It not only depends on symbols and syntax but also on the grasp of meaning.
Therefore, there is an essential difference between passing the Turing Test and truly understanding: the former stays at the syntactic level of formal symbols, while the latter concerns the semantic level carried by symbols.
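The purely syntactic manipulation Searle describes can be made concrete with a toy sketch (an illustration only; the rule book, phrases, and replies below are invented for the example, and a real conversational rule book would be astronomically larger):

```python
# A toy "Chinese Room": the rule book maps input symbol strings to
# output symbol strings. Nothing in the program represents what the
# symbols *mean* - the operation is purely syntactic.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "It's nice."
}

def chinese_room(symbols: str) -> str:
    # Follow the rules mechanically; unknown input gets a stock reply.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。 (fluent output, zero understanding)
```

The lookup succeeds or fails on the exact form of the input string; at no point does any part of the program connect a symbol to the thing it denotes, which is precisely the syntax/semantics gap Searle is pointing at.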
The influence of the Chinese Room persists to this day. When people face language models such as GPT, they often use it as an analogy: they are just stacks of statistical patterns, only "simulating understanding" rather than "having understanding".
It's like a person who doesn't know Chinese answering questions based on a rule book. The sentences are fluent but have no real meaning.
However, can AI really only process data without understanding the content?
This debate has never stopped. The core question may not be "Can machines understand?" but "What does understanding mean?"
As early as the 1980s, Margaret Boden pointed out in her article "Escaping from the Chinese Room":
The important question is not "When does a machine understand something?" (a question that misleadingly implies a definite breakpoint at which understanding begins), but "What must a machine - whether biological or not - be able to do in order to understand?"
- This shifts the focus from "whether machines understand" to "how understanding is produced", pulling the question out of a philosophical trap and back into tractable scientific research.
The AI pioneer John McCarthy likewise criticized Searle for conflating two levels: attributing the mental properties of the person doing the computing (Searle himself) to the process being computed (understanding Chinese).
In other words, of course Searle himself in the Chinese Room doesn't understand Chinese, but that doesn't mean the entire "Chinese Room system" can't understand Chinese - just like a single neuron doesn't understand language, but the brain can.
The psycholinguist Steven Pinker believes that Searle is just discussing the usage of the word "understanding" and not touching on observable scientific issues.
Today, Hinton has offered a new perspective on this debate.
In an interview, when responding to the Chinese Room, he said:
Large language models do "understand" language - although this understanding is achieved by simulating human cognition. These models assign features to words and analyze the interactions between these features, just like the way the human brain processes language.
In other words, the interaction of billions of features itself is a form of understanding. Perhaps this is the closest simulation of the human brain's language processing we have achieved so far.
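The "features assigned to words" that Hinton describes can be sketched in miniature (a toy illustration of the idea, not any actual model: the words, the 3-dimensional vectors, and the numbers are all made up, where real models use learned vectors with thousands of dimensions):

```python
import numpy as np

# Toy sketch: each word is assigned a feature vector, and "meaning"
# lives in how those vectors interact, not in any single symbol.
features = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "apple": np.array([0.1, 0.5, 0.9]),
}

def similarity(a: str, b: str) -> float:
    # Cosine similarity: one simple "interaction" between feature vectors.
    va, vb = features[a], features[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Relationships between words fall out of the geometry of the features.
print(similarity("king", "queen") > similarity("king", "apple"))  # -> True
```

On Hinton's view, scaling this up - billions of learned features interacting across many layers - is itself a form of understanding, rather than a lookup table over surface forms.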
Maybe, as Feynman said, "What I cannot create, I do not understand."
Only when we stop being obsessed with "what is understanding" and regenerate understanding through creation and construction can artificial intelligence possibly touch the essence of "understanding".
Finally, an off-topic note.
As for why he chose Chinese, Searle casually said in an interview later:
Choose a language I don't know at all, like Chinese, and then assume that someone has written a program to "understand Chinese".
This choice seems casual but actually has profound implications.
Netizens have put forward two convincing explanations:
Firstly, Searle's choice may reflect cultural stereotypes in the Western context.
In English, people often say "It's all Chinese to me", meaning "I don't understand it at all", similar to "It's all Greek to me".
The metaphorical power of the "Chinese Room" partly comes from this imagination of a language that can be operated but is difficult to understand.
Secondly, languages written in the Latin alphabet often share recognizable word forms and etymologies: even an English speaker who knows no French or German can guess part of the meaning.
The independent writing system of Chinese cuts off this possibility entirely, making the "non-understanding" more complete.
Perhaps for this reason, Searle's "Chinese Room" has become one of the most metaphorically powerful thought experiments in the history of philosophy -
A closed room reflects human confusion about "understanding".