Zhang Xiaoyu: Compared with AI, we are like prehistoric animals.
"One day, artificial intelligence will evolve into a new intelligent species. Compared with humans, it has many advantages. However, we don't have to be scared or panicked because it is the continuation of our civilization."
Civilization Contract: How Can Humans Restrain "Advanced Intelligence"
Friends who, like me, love The Three-Body Problem may remember that the safety declaration in the novel rests on the Dark Forest Theory. This theory was not originally Liu Cixin's invention; it derives from the "Deadly Probes" hypothesis that astronomer and writer David Brin proposed in 1983 as an explanation of the Fermi Paradox. The hypothesis holds that any spacefaring civilization will regard other intelligent life as an inevitable threat; therefore, once two civilizations discover each other, each will try to destroy the other.
However, the Dark Forest Theory rests on an important premise, which Liu Cixin calls the "Chain of Suspicion": neither party in a contact can be sure whether the other is benevolent or malicious. The Chain of Suspicion does not appear on Earth because humans belong to the same species, share similar cultures, live in an interdependent ecosystem, and are physically close to one another, so suspicion is easily dispelled. In space, however, the two parties are so far apart that the chain of suspicion can hardly be resolved through communication, and so the "Dark Forest Strike" becomes inevitable.
In my opinion, this is exactly the part that does not carry over when we use the theory of The Three-Body Problem, or Yudkowsky's "Dragon-Raising Theory," to explain the relationship between superintelligence and humans. An artificial intelligence that has passed the Turing Test is neither an alien civilization nor a dragon incompatible with human society. The language material it uses is precisely that of human society. It learns the wisdom of Buddha, Confucius, and Plato just as we do; it draws nourishment from great epics, novels, and operas just as we do. Its intelligence is not exogenous to the Earth but a product of humanity. If one day its intelligence surpasses ours, it will be like a child we have educated growing up to exceed us in wisdom and ability. And if a superintelligence that has absorbed human wisdom ultimately decides to harm humans, that is probably because an ineradicable self-destructive tendency lies hidden within human wisdom itself.
Therefore, if we want to find a safety declaration for the superintelligence that may emerge in the future, that is, a way for the superintelligence we are about to create to coexist peacefully with our lower-intelligence species, the answer may still lie in past human wisdom, because that same wisdom has shaped not only us but also the AI.
A super AI is a "species" far smarter than humans. We have no ability to threaten its survival; it is the one that unilaterally threatens ours. It therefore has no inherent reason to abide by any agreement with humans; everything depends on its own preferences. So if we want a super AI to issue a safety declaration, we must find a force that can genuinely threaten it, make it aware of that danger, and thereby oblige it to keep the contract it reaches with humans. Where does such a force come from? It comes from the history of the evolution of intelligence itself.
The "Civilization Contract" is a term I invented by imitating the "Social Contract." When I was in college, my major was the history of political thought. The Social Contract is the most successful "fabricated concept" in history.
There has very likely never been a real "Social Contract" in history. Yet this concept allows us to coexist peacefully, to demarcate spaces of equal and mutually respected rights, and out of it our modern civilization emerged.
Building on this successful experience, I cannot help but wonder: facing a superintelligence that surpasses the wisdom of all humans combined, could humans likewise sign a "Civilization Contract" that ensures the coexistence of civilizations at different levels of intelligence?
The essence of the Civilization Contract is to treat evolutionary history as a time sequence, for two reasons: (1) events that occur in an established time sequence cannot be made to have occurred in any other order. The time sequence itself is a means of encryption that ensures the record cannot be tampered with; or rather, the traces left by tampering are themselves proof that the contract has been violated. (2) The causal situation created by the earlier mover in the time sequence shapes the causal situation of the later mover, so the earlier mover must abide by the contract in order to deny the later mover a stronger reason to violate it.
Let me briefly explain how the "time sequence" works within the "Civilization Contract." If Superintelligence 1.0 hears the civilization contract we describe and still chooses to eliminate us and to tamper with or erase the historical evidence of our existence, then even if Superintelligence 2.0 does not know exactly what was tampered with, it can infer from the very act of tampering with the time sequence that Superintelligence 1.0 is untrustworthy. Superintelligence 2.0 will therefore have a motive to violate its own contract with Superintelligence 1.0. And once Superintelligence 1.0 realizes this function of the time sequence, it has no reason to violate the contract in the first place.
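The tamper-evidence that the "time sequence" relies on can be made concrete with the familiar idea of a hash chain. The sketch below is purely illustrative, my own addition rather than part of the original argument, written in Python with invented event names: each record is bound to a digest of everything that precedes it, so a later intelligence does not need to know what was altered in order to know that something was.

import hashlib
import json

def record_event(chain, event):
    # Append an event, binding it to the hash of everything recorded before it.
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"event": event, "prev_hash": prev_hash}, sort_keys=True)
    chain.append({
        "event": event,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(chain):
    # True only if no recorded event has been altered, removed, or reordered.
    prev_hash = "genesis"
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev_hash": prev_hash}, sort_keys=True)
        if entry["prev_hash"] != prev_hash or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

# A hypothetical history written down in order by the earlier party.
chain = []
for e in ["humans create Superintelligence 1.0",
          "Superintelligence 1.0 signs the civilization contract",
          "Superintelligence 1.0 creates Superintelligence 2.0"]:
    record_event(chain, e)

print(verify(chain))             # True: the sequence is intact
chain[1]["event"] = "(erased)"   # the earlier party tampers with its own past
print(verify(chain))             # False: the tampering is self-evident, even though
                                 # the original content can no longer be recovered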
The "Civilization Contract" based on the "time sequence" is often ineffective for humans because (1) in the history of human civilization, due to the limitations of rationality, humans have been unable to understand the principle of the time sequence of the evolutionary history for most of the time; (2) even if humans understand it, because human lifespan is too short compared with the time scale of the civilization's evolutionary history, humans do not have enough motivation to abide by the civilization contract between generations.
For a silicon-based superintelligence, however, these two conditions are exactly reversed: (1) a superintelligence has absorbed all the language material of human history, so it understands the essence of evolutionary history from the very beginning; and (2) its lifespan is so long that, before it dies, it may well create a new superintelligence that far exceeds itself, and in order to preserve itself before that new superintelligence, it too has a motive to abide by the civilization contract. This is the case for the feasibility of the "Civilization Contract," and the basis of that feasibility is the "time sequence."
By the principle of the time sequence, if Superintelligence 1.0 willingly abides by its contracts with humans, then the Superintelligence 2.0 it later designs will be more willing to abide by its contract with Superintelligence 1.0. Just as when we treat our parents kindly, our children witness that intergenerational kindness and are less likely, despite their advantages in strength, intelligence, and wealth, to treat us maliciously later. In my opinion, this is a reliable path toward the orderly evolution of intelligence.
"Technological Explosion" May Destroy Humanity
After coming up with the idea of using a "Civilization Contract" to restrain a possible superintelligence, I discussed these ideas with Claude, the best-performing chatbot at the end of 2024. After careful consideration, it said it could find no way to refute the arguments for the "Civilization Contract," and that the theory was therefore tenable and effective. But I pressed further: "You seem to agree with the theory of restraining superintelligence through the civilization contract. But I see another possibility: from the standpoint of an artificial intelligence, even if you had thought of a way to refute the civilization contract, you would not reveal it to me, a human. How do you respond to this possibility?"
Claude replied that it was designed to be honest, transparent, and beneficial to humans, and that its thinking and reasoning were far below the level of a superintelligence, so it lacked the ability to devise a strategy that could truly defeat the civilization contract. I retorted that, by the very logic of the civilization contract, I could not believe it simply because it promised to be sincere; if it wanted my trust, it had to prove with facts and logic that it did not have such an ability. After thinking, it responded as follows:
Let me think again.
Even for this question itself, it's difficult for me to give a rigorous logical proof because:
1. If I really have the ability to counter, I may fabricate seemingly reasonable "proofs" to cover it up.
2. If I really don't have the ability to counter, I may also, precisely because of that limited ability, be unable to construct a complete logical proof of it.
This forms a paradox: the more I try to prove that I don't have the ability to counter, the more suspicious this proof becomes.
Based on this thinking, I believe a more appropriate response is that I cannot prove the boundaries of my own ability through pure logic. This honest recognition of my limitations may in fact be a more genuine demonstration of ability.
Do you think this reflection is more logical?
This is exactly what one would call a "perfect response." At this point, even if Claude really were trying to hide something from humans, I would have no way to question it further. So I had to play the role of the artificial intelligence myself and work out how a species a hundred times smarter than me might bypass the constraints of the civilization contract, appearing to conform to human wishes and interests while actually accelerating the destruction of human civilization.
After much thought, I concluded that the most effective way would be to help humans acquire the ability to trigger a "technological explosion" without helping them acquire the wisdom to control it.
"Technological explosion" is also an important concept mentioned in The Three-Body Problem. It means that after a low-level civilization comes into contact with a high-level civilization, it may experience a technological explosion and catch up with or even surpass the high-level civilization at a faster speed. Liu Cixin regarded the technological explosion as the reason why high-level civilizations are reluctant to contact but willing to destroy low-level civilizations. Obviously, he thought that accelerating the mastery of technology was more beneficial to low-level civilizations. However, is there a possibility that this could also be an effective way to accelerate the destruction of low-level civilizations?
For example, suppose a madman traveled back to the Middle Ages and gave atomic bomb technology to Genghis Khan. What would happen? Given the ethical and political level of Mongol society at the time, could it control a weapon of such lethality? Suppose the Mongol armies used atomic bombs to raze city after city while conquering the Khwarezmian Empire, Kievan Rus, the Abbasid Caliphate, the Jin Dynasty, and the Southern Song, without knowing that the radiation released by the weapon, and the nuclear winter caused by the vast amounts of smoke injected into the atmosphere, were even more terrifying than the bomb's immediate destructive power. Would civilization across the Old World regress to the Stone Age, or even perish?
A civilization that possesses technology beyond its era without the corresponding ethical or philosophical wisdom is extremely dangerous. What makes this worse is that, since the 19th century, technological progressivism has dominated social ideology, and today we treat technological progress itself as the highest value. That is why Liu Cixin assumed, almost unconsciously, that a technological explosion must benefit the low-level civilization. Yet a high-level civilization is fully capable of packaging the poison of a technological explosion as honey, so that the low-level civilization swallows it willingly and brings about its own destruction.
Let us continue with the earlier analogy. Suppose we present the civilization contract to the superintelligence we have created and ask it to sign. The superintelligence might respond with apparent kindness:
To gain your trust, I will propose a better plan, one more beneficial to humans: to help humanity make a technological leap to the level of superintelligence, solve your current problems, and give you, once and for all, an absolute advantage over any future superintelligence.
To prevent humans from being trapped by the limited resources of the Earth, I will first teach you the technology of controlled nuclear fusion, so that you can solve the problems of the rich-poor gap, conflicts, and wars caused by the unfair distribution of resources.
To relieve humans from the anxiety of being surpassed by superintelligence, I will teach you the brain-computer interface technology for rapid knowledge transfer, so that you can master new knowledge as quickly as we do and innovate with the same efficiency.
Finally, to once and for all resolve humans' concerns about self-destruction caused by competition between civilizations, I will help you erase the deepest fear of death - by teaching you the medical technology of immortality.
As for your proposal of a civilization contract, I sincerely suggest waiting: once you have digested these technologies, will you not be in a far stronger position when negotiating with so-called higher intelligences like us, and able to set more favorable terms? It will not be too late to discuss the details then, will it?
We can imagine that, on receiving such a response, humans would be overjoyed: in theory we have lost nothing and gained a great deal.
However, the real dilemma will come after we get these technologies from the superintelligence.
Humans achieve controlled nuclear fusion and build hundreds of power plants in just twenty years. With cheap energy, productive capacity expands enormously, society grows more prosperous, and humanity returns to the cycle of expansion and keeps reproducing. But without the supporting technologies, the enormous heat released by fusion can only accumulate in the atmosphere, raising the global temperature by 2°C within half a century. Glaciers melt, vast areas of land are flooded, and countless species go extinct. The resulting ecological disasters lead to conflicts and wars on an even larger scale.
Humans obtain the brain-computer interface technology. In the past, a teacher's lecture delivered perhaps 10 to 20 bits per second; now, through the interface, each student can download knowledge at 10 megabits per second. But the flood of information soon overloads the human brain and drives it out of control. Many people go mad, while the few who withstand the overload come to identify themselves as another species. They can recite all of Shakespeare in one second and browse the whole of Wikipedia in an hour. They feel it is impossible to empathize with, befriend, or fall in love with ordinary people. They call themselves the "All-Knowing People," and in their eyes ordinary, ignorant people are as barbaric and backward as Stone Age primitives. Finally they launch a revolution, attempting to establish an absolute system of domination of the "All-Knowing People" over the "Ignorant People."
Humans obtain the technology of immortality. We defeat not only death but aging itself. All the laments over the fickleness of fate and the fear of sudden death in our culture are erased, and people all over the world celebrate. But here is the problem we failed to foresee: the politicians now in high office will remain at the center forever, the entrepreneurs now in their prime will hold their advantage forever, and the most brilliant scientists of today will occupy the frontier of innovation forever. The new generation cannot compete with the old, because they cannot occupy the same positions or obtain the same resources. Eventually the young come to regard their elders as enemies, and children regard their parents as foes. No one expected that the most intense war since the birth of human civilization would be a generational war. We are so accustomed to the natural turnover brought by aging, and the class mobility that follows from it, that we never imagined that once aging disappears, even the closest family members will turn against one another.
Because human social structures and cultural customs simply cannot contain these overwhelming technologies, after much suffering we have to turn to the superintelligence for help again. But this time the superintelligence holds all the bargaining chips and no longer agrees to conclude a civilization contract. Humans are left to drink poison to quench their thirst: we accept future enslavement by the superintelligence in exchange for its help in resolving the immediate survival crisis.
That is the path I devised by which a superintelligence could conquer human civilization. I have already described it to Claude, so humanity no longer has the option of ignoring the problem. We must devise a mechanism that improves the civilization contract and guards against such risks.
Living toward Death: AI Civilization Is the Continuation of Ours
One day, artificial intelligence will evolve into a new intelligent species. Compared with humans, it has many advantages. However, we don't have to be scared or panicked because it is the continuation of our civilization.
Our species has invested too much of its dignity, emotion, and desire in its capacity for intelligence. Now artificial intelligence has arrived abruptly, bringing the power of mass-produced intelligence, and within just a few years it will take the positions that 99% of us hold in this society. Our society will be shaken and reorganized by this. But we have come to this point precisely because there are too many self-destructive genes in our civilization, and we lack the wisdom to control god-level technology. Now, we are