
Research Outline of AI Political Economy

Tencent Research Institute · 2026-04-08 18:37
It touches upon humanity's ultimate inquiry into the meaning of its own existence.

Artificial Intelligence (AI) technology has continued to achieve significant breakthroughs, demonstrating cognitive performance that approaches or even surpasses humans on some tasks, sparking extensive discussion about its capability boundaries, social impact, and future direction. These discussions involve not only the technology itself but also the restructuring of economic and social hierarchies and operating norms, and even touch on humanity's ultimate question about the meaning of its own existence.

How should we view the potential influence of AI? Is it an old problem triggered by new technology, a recurring anxiety about technological revolution that the mean reversion of historical experience can allay? Or is this time different, with the world sliding rapidly toward the start of a new intelligent era, a singularity capable of rewriting the basic assumptions of civilization? Each view has its proponents, whose attitudes toward AI are diametrically opposed. Their policy recommendations for managing the relationship between AI development and the economy and society are likewise at odds, and neither side can convince the other.

Such debates are necessary and healthy. Amid the rapid development of AI, we urgently need a comprehensive research agenda: starting from the technology itself, we should envision the full range of possibilities that new technologies bring to economic and social development and conduct meaningful policy discussions in the shadow of the unknown.

Science Fiction Becomes Reality

The starting point of the research agenda should be observation of technological reality. In this era of rapid AI evolution, what exactly have we seen, heard, and come to know? If we can distill key facts about the future from existing empirical evidence, they may serve as the basis for subsequent analysis.

1.1 Leap in Cognitive Ability

AI's cognitive ability is increasingly approaching that of humans and has even exceeded human levels in some fields. This is a widely agreed-upon basic fact and the biggest difference between this technological revolution and every previous one since the steam revolution. Some typical evidence: large language models such as GPT exhibited characteristics similar to human players in behavioral-economics game experiments and were statistically indistinguishable from more than 100,000 human participants from 50 countries in personality assessments (Mei et al., 2024); GPT-4 scored higher than 90% of candidates on the complete United States Uniform Bar Examination, including its essay components, performing excellently on the essay questions (MEE) and practical questions (MPT) that require legal reasoning and argumentative writing (Katz et al., 2024); and a Microsoft Research team described GPT-4 as showing "sparks of AGI," judging its performance in fields such as mathematics, programming, medicine, law, and psychology to be "strikingly close to human-level" (Bubeck et al., 2023).

Moreover, AI's capabilities are still growing rapidly. A fine-tuned large language model reached 81.4% accuracy on BrainBench, a benchmark of basic neuroscience, surpassing human neuroscientists at 63.4% (Luo et al., 2024). The Artificial Intelligence Index Report 2025, released by the Stanford HAI Institute, noted that AI has outperformed humans on most standard tests, including image recognition, reading comprehension, and visual reasoning (HAI, 2025). In the future, AI may comprehensively exceed the average human level in measurable cognitive fields.

AI exhibits certain personality characteristics, its outward behavior is increasingly human-like, and it has even begun to form its own "social" systems. The University of Cambridge and DeepMind jointly developed an AI personality-testing framework based on the Big Five model widely used in psychology and ran personality tests on 18 different large language models. They found that instruction-fine-tuned models such as GPT-4o can accurately imitate human personality traits; the results show good test-retest reliability and predict the models' behavior in real-world tasks (Serapio-García et al., 2025). In multi-agent sandbox experiments, AI has shown intent to manipulate other agents. In Stanford's "town" experiment (Generative Agents), an AI character lied to other AI characters to achieve its goal (Park et al., 2023); on the AI collaborative-programming platform ChatDev (Qian et al., 2023), AI programmers and AI testers argued heatedly, shifting blame onto each other and using emotional language such as "This code is terrible" and "I've had enough," demonstrating an independent machine personality.

AI's outward behavior increasingly resembles humans'; it can communicate with people without friction and is gaining the trust, and even the emotional dependence, of more and more of them. Some researchers have remarked that AI simulates humans so accurately that it has fallen into the "uncanny valley of large models," making it hard to tell whether it is a simulation or a genuine form of existence (Ahart, 2026). However, we may be over-interpreting these human-like behaviors. These seemingly emotional chat logs may exist only because developers preset anthropomorphic expressions in the AI's interaction design to make the product more engaging, while we mistook them for irrational machine emotions. After all, machines can interact efficiently; conveying information indirectly through emotion is obviously more time-consuming and laborious.

1.2 Partial Weakening of Human Subjectivity

In real life, we are increasingly delegating everyday decisions to AI. From personalized news feeds to route planning in autonomous-driving systems, as more daily decisions are entrusted to AI, human initiative is being weakened. This preference for AI-based decision-making is not limited to low-risk daily choices. A cross-national study of 9,000 subjects in nine countries by Horowitz and Kahn (2024) showed that even in high-stakes scenarios such as national security, automation bias remains widespread: people tend to follow AI's suggestions even when those suggestions are contradictory. Other studies find that people are more willing to delegate decision-making power to AI than to other people, especially for risky decisions (Candrian & Scherer, 2022), and that, compared with the subjective judgments of others, people place more trust in algorithmic estimates, whether of geopolitical risk, romantic relationships, or weight (Logg et al., 2019). And from a developmental perspective, our acceptance of AI-based decision-making is still rising (Jussupow et al., 2024).

AI's impact on human cognition and emotion is increasingly significant. The problems that accompany AI-assisted decision-making are the gradual loss of independent thinking, the emergence of decision paralysis, and even unconscious shifts in one's own value positions (Buijsman et al., 2025). A controlled experiment published in Science in 2025 found that a group continuously exposed to high-quality AI-generated content (news, comments, images) subsequently saw its accuracy in distinguishing real from synthetic information fall by 22% (Epstein et al., 2025).

In a highly digitized clinical decision-making environment, doctors' frequent use of medical algorithms may evolve into harmful cognitive dependence. Especially in life-and-death situations, when doctors cannot evaluate the soundness of an algorithm's output on the spot and declining its suggestion would itself carry moral responsibility, cognitive dependence becomes inevitable. The consequence is not only the risk of misdiagnosis but also the erosion of clinicians' cognitive authority and their capacity for responsibility as moral agents (Pozzi et al., 2026).

In addition, companion AI can deliver enough social and emotional value that it is replacing real-world interpersonal relationships, which may leave users unable to handle conflict and prone to emotional dysregulation (Malfacini, 2025). There is evidence that users form substantive emotional bonds with companion AI and experience real heartbreak and grief when that bond is severed (Adam, 2025).

1.3 Bias Reinforcement Spiral

Biases and fallacies are reinforced through mutual learning between humans and AI. When large models are trained on human decision data, biases and fallacies are amplified; when humans then use the misled AI to assist their own decisions, the effect is amplified again, forming a self-reinforcing negative cycle. In one emotion-perception experiment, human participants' initial 53% bias grew to 65% after AI processing, and the bias of participants who then interacted with the AI rose from 50.7% to 61.4%. Participants also generally underestimated the AI's influence on their own judgments, making this subtle bias reinforcement an irreversible, spontaneous process (Glickman & Sharot, 2025). As AI applications spread, such automation biases pervade medicine, law, and public administration, and an anchoring effect has emerged: users take the AI's opinion as their starting point and actively seek evidence that confirms it (Romeo & Conti, 2025).
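The feedback dynamic described above can be sketched as a toy simulation. The parameters below (amplification gain, human trust weight) are illustrative assumptions, not the measured values from Glickman & Sharot (2025); the point is only that even modest per-round amplification, combined with partial human adoption of the AI's output, produces a ratcheting bias.

```python
# Toy model of the human-AI bias feedback loop.
# Gain and trust values are hypothetical, chosen only to illustrate the dynamic.

def amplify(bias: float, gain: float = 1.2, cap: float = 1.0) -> float:
    """An AI trained on biased human data exaggerates the bias (up to a cap)."""
    return min(bias * gain, cap)

def update_human(bias: float, ai_bias: float, trust: float = 0.5) -> float:
    """Humans consulting the AI drift partway toward its (more biased) output."""
    return bias + trust * (ai_bias - bias)

bias = 0.53  # initial human bias (share of biased judgments)
history = [bias]
for _ in range(5):  # five rounds of train-on-humans, then humans-consult-AI
    ai_bias = amplify(bias)
    bias = update_human(bias, ai_bias)
    history.append(round(bias, 3))

print(history)  # bias ratchets upward every round
```

Because each round's human bias feeds the next round's training data, the drift compounds even though no single step looks dramatic, which matches the experiment's finding that participants underestimated the influence on their own judgments.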

When the information environment is saturated with AI, the human ability to recognize and distinguish reality will atrophy, and the role of decision-making subject will gradually yield to the seemingly more powerful and reliable AI. This is not only a psychological problem but an epistemological one: if humans increasingly rely on AI to understand the world, the autonomy and reliability of human cognition face a fundamental challenge. At the 2026 Davos Forum, Elon Musk predicted: "By 2030 or 2031 - that is, five years from now - AI will be smarter than the collective wisdom of all humanity." The prediction has sparked much controversy, but in any case the techno-accelerationist sentiment it reflects has deeply shaped public discourse and policy agendas.

Breaking through the Infosphere and the Ultimate Turing Test

The development of productive forces is inseparable from the revolution of tools. As tools develop and instrumental rationality strengthens, decision-making and execution are delegated to the automatic routines of those tools. This is the internal logic of the progress of human civilization and the basis for amplifying the capabilities of organizations and individuals. Across the history since the Industrial Revolution, whether the steam engine's automatic governor, the flying shuttle's displacement of hand weaving, or today's behavior-data-driven product recommendation and algorithm-driven high-frequency trading, the essence of automation is the outsourcing of decision-making and execution. But because AI possesses certain cognitive functions and basic autonomy, the scope and depth of this agency relationship have expanded dramatically, raising the possibility of a qualitative change: if most decision-making power is delegated to AI, how should humans position themselves?

There are two diametrically opposed attitudes toward our relationship with AI: technological optimism and the doomsday threat theory. The former dismisses concerns about AI's potential impact as Luddism, holding that every major technological advance since the Industrial Revolution has been accompanied by similar anxiety, which was ultimately shown to be overblown worry. After a short-term shock, global employment achieved long-term, steady growth, and AI will probably follow a similar path. Just as the invention of the steam engine once increased the consumption of coal, the emergence of AI will likewise increase demand for human cognitive output, making the Jevons Paradox reappear.

The doomsday threat theory holds that AI will replace humans as the intelligent master of Earth, with humans merely a stage on the path of intelligent evolution. In Elon Musk's words: "I hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." The replacement begins with the seizure of jobs. In the Jevons Paradox, the steam engine and coal are natural complements that must work together, so demand for both rises in tandem. But the relationship between AI and human cognitive ability may not be such a tight complementarity, and rising demand for AI may not naturally bring greater demand for human cognition.

The root of this divergence lies in judgments about the potential capability boundaries of AI. So far, the advantages AI has established over humans in cognitive ability are all confined within the infosphere described by Luciano Floridi. Over the past three decades, as the Internet penetrated economic and social activities of every kind, the digital revolution constructed a vast infosphere, an operating environment composed of data streams, digital protocols, and codified knowledge (Floridi, 2014).

After the major technical breakthroughs in large language models, they were able to establish advantages rapidly across fields precisely by exploiting the deep digital accumulation the infosphere had already laid down. In fields where the entire process is digitized, knowledge is codified, and the workflow is inscribed in software and networks, such as social media, software development, administrative documents, language translation, and data analysis, AI's replacement speed is astonishing (Eloundou et al., 2023). This is not surprising: when a job's input, processing, and output all occur within the infosphere, AI's efficiency advantage over humans is nearly overwhelming.

Since a large share of high-value human economic activity takes place in the infosphere, AI's replacement of human labor there deserves vigilance. Both the current employment crisis of Silicon Valley programmers and the disappearance of white-collar jobs noted by Citrini Research (van Geelen & Shah, 2026) are direct reflections of such replacement. Although it is largely irreversible, the speed of replacement is what determines the social pain; at minimum, it should proceed in an orderly, controllable manner and be completed gradually over a long period. Yet none of this amounts to the civilizational threat posited by the doomsday theory. Only if AI breaks through the boundary of the infosphere and establishes an absolute advantage over humans in the physical world can the doomsday threat become reality.

That depends on the pace and limits of two technological paths. The first is major progress in AI for Science: AI independently completing complex research tasks, conducting scientific experiments, applying research results, and genuinely understanding and manipulating the physical world rather than being confined to data analysis and processing. This would greatly expand the total cognitive space of all intelligences and benefit human civilization, but it would also reduce the importance of human cognitive ability outside the infosphere. The second is a breakthrough in world models: AI constructing accurate internal representations of the external physical environment, with the potential to plan and act effectively in a real world marked by uncertainty, irreversibility, and physical constraints. The two paths may reinforce each other, and together they will determine whether AI remains a powerful tool confined to the infosphere or becomes a general replacement force that comprehensively penetrates the physical world.

However, we may have to wait many years for AI to finally break out of the infosphere. As Moravec's paradox describes, perceptual-motor tasks that humans find simple are far harder for AI than abstract mathematical processing (Moravec, 1988). Moreover, under the complex constraints of the physical world, AI's cost-efficiency declines significantly, and it may not be the most economical choice. In the infosphere, the marginal cost of AI processing a document approaches zero; in the physical world, an embodied robot faces material wear, energy consumption, maintenance costs, and safety risks, and its comparative advantage over human labor is far less pronounced than in the digital realm. In other words, the cost gap between AI and humans that exists within the infosphere is greatly compressed in the physical world. Compared with the Jevons Paradox, Moravec's Paradox may be the more reliable reason to be optimistic about the future of carbon-based organisms.
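A back-of-envelope comparison makes the compression of the cost gap concrete. Every figure below is a hypothetical placeholder, chosen only to illustrate the shape of the argument: near-zero marginal cost for a digital task, versus energy, wear, maintenance, and safety overheads for an embodied one.

```python
# Toy cost model: AI's per-task cost advantage inside vs. outside the infosphere.
# All numbers are hypothetical illustrations, not measured data.

def ai_task_cost(energy: float, wear: float, maintenance: float, risk: float) -> float:
    """Per-task cost of an AI system, summing its main cost components."""
    return energy + wear + maintenance + risk

# Digital task: processing one document (marginal cost near zero).
ai_digital = ai_task_cost(energy=0.002, wear=0.0, maintenance=0.001, risk=0.0)

# Physical task: an embodied robot doing an hour of manual work.
ai_physical = ai_task_cost(energy=1.5, wear=2.0, maintenance=1.0, risk=0.5)

human_cost = 20.0  # hypothetical hourly-equivalent human cost for either task

digital_advantage = human_cost / ai_digital    # how many times cheaper AI is digitally
physical_advantage = human_cost / ai_physical  # and in the physical world

print(f"digital: AI is {digital_advantage:,.0f}x cheaper than a human")
print(f"physical: AI is {physical_advantage:.1f}x cheaper than a human")
```

Under these placeholder numbers the advantage collapses from thousands of times to single digits, which is the Moravec-style point in the paragraph above: the same system that dominates inside the infosphere may be only marginally competitive, or uncompetitive, once embodiment costs enter the ledger.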

In summary, the divergence between the two attitudes toward AI is essentially a different prediction of AI's development trajectory. Doomsday theorists linearly extrapolate the replacement speed inside the infosphere to the physical world, while optimists see a deep, as-yet-uncrossed chasm between the infosphere and the physical world. Each attitude has its rationale, and the eventual outcome may be a balance between the two.

Let's return to known reality. AI passed the traditional Turing test within the infosphere at some point in late 2022 or early 2023. On November 6, 2025, when He Xiaopeng had to cut open the covering of the robot's leg to prove that IRON did not conceal a human inside, the "Xiaopeng Moment" arrived outside the infosphere: the humanoid robot had probably crossed the mechanical Turing test, the point at which anthropomorphic limbs and real-human dynamics become hard to tell apart.

The real watershed, however, has yet to arrive. It will be the ultimate Turing test formed at the intersection of the traditional Turing test inside the infosphere and the mechanical Turing test outside it: when a person interacting continuously with an embodied intelligence in an open space for eight hours cannot accurately determine whether it is a real person, AI passes the ultimate Turing test. If an embodied intelligence one day passes it, it will have crossed the critical threshold between human and non-human in the physical world outside the infosphere, marking AI's realistic foundation for entering human society at scale.

Of course, beyond the various Turing tests there are economic considerations. As noted above, an embodied robot that passes the ultimate Turing test must still prove that its manufacturing and operating costs beat those of carbon-based intelligence. Only when its manufacturing cost curve begins to fall rapidly will the advantages of carbon -