
How much truth is there in Elon Musk's predictions? When will high incomes for all be realized?

China Europe International Business School · 2026-01-23 12:19
On the verge of a cognitive leap

As 2026 dawned, Elon Musk dropped a flood of bold claims. In a three-hour conversation on the Moonshots Podcast with his old friends Peter Diamandis and Dave Blundin, he laid out a dense, sweeping vision of humanity's future, stirring up a great deal of discussion.

How much truth is there in these seemingly far-fetched predictions? Ordinary people at the center of this storm of change often struggle to draw the line between fantasy and reality. How should they digest such hard-hitting information? Drawing on rational academic thinking and years of research, Ding Min, a marketing professor at the China Europe International Business School, takes the four prediction themes Musk raised in this interview as examples and shares some highly instructive interpretations and judgments.

After Musk's interview was released, it quickly attracted a great deal of attention. His judgments about artificial intelligence, robots, employment, and the future of humanity were particularly eye-catching and sparked considerable controversy. Perhaps because I discuss similar issues in my upcoming book Becoming Homo lucidus (the English title of The Ageless Era), I was repeatedly asked how I viewed this interview and some of its predictions. Before expressing my specific views, therefore, a few clarifications are in order.

First, in my opinion, the reason people find Musk's judgments "shocking" is not mainly because they lack logic or evidence, but because our brains are limited by what I call "cognitive local optimum." Humans almost inevitably use the past and present to infer the future. Even though rationally we know that technological change is accelerating, it is still extremely difficult to truly understand a future paradigm that is highly discontinuous with our existing experiences and even completely different in terms of rules.

For this reason, I proposed the concept of a "cognitive local optimum leap" (cLOL), commonly referred to as "breaking the frame." A leap does not mean going faster or farther along the original path; it means jumping out of a cognitive valley that has long proven "effective" and "reasonable" and landing in a cognitive space with completely different structures, constraints, and optimal solutions. Many people misjudge the future not because they fail to understand technological progress, but because they keep looking forward from the original valley.

The second point is equally important. Musk's judgments are not isolated cases, nor are they groundless. In my exchanges with the academic, industrial, and policy circles over the years, whether in North America, Europe, or China, similar trend judgments have been repeatedly discussed. The differences mainly lie in the implementation paths, time windows, and subjective probability weights, rather than the directions themselves. In my book, I also systematically discuss some of these key trends. What Musk presents is just a relatively more radical and optimistic point in this continuous spectrum, not an outlier.

Third, and I think this is the most easily overlooked point in understanding any judgment about the future: every prediction is essentially a probability judgment. No future is certain to happen. In futures research, a seemingly small but crucial event (usually called a "wildcard") can push the system off its original track. I therefore never predict "what will definitely happen," even in my own book. Predictors sometimes omit probabilities, intentionally or not, and audiences often unconsciously treat these judgments as settled conclusions. A more rational approach is to actively add the probabilities back when interpreting predictions and then evaluate them. We should treat Musk's predictions the same way.

Based on Musk's past prediction records, my overall view is: he is often quite sharp in judging "whether it will happen," but he usually tends to predict an earlier time point for "when it will happen," and the implied subjective probability is also significantly higher than the average. Therefore, I don't recommend simply denying his judgments about future directions, but it's also not advisable to blindly accept his timelines and probabilities.

Prediction: The Explosion of AI and Robots

Musk first talked about the systematic changes brought by artificial intelligence and robots. He believes that we are at the starting point of a technological singularity, and the speed of change will far exceed most people's intuitive judgments. He confidently predicts that general artificial intelligence will emerge in 2026; by 2030, the overall intelligence level of artificial intelligence will exceed the total intelligence of all humans; by 2040, the global number of humanoid robots may reach tens of billions. In the medical field, he believes that humanoid robots will surpass the world's top human surgeons in surgical precision within three years, and through cloud-sharing of experience and continuous operation, ordinary people will be able to enjoy the same high-quality medical care that only a few billionaires could afford in the past. In terms of employment, he believes that white-collar jobs will be the first to be massively replaced, followed by blue-collar jobs. He even said bluntly that "it doesn't make sense to go to medical school now."

On timing, my personal judgment is somewhat more conservative: a more reasonable median estimate is that general artificial intelligence will emerge around 2030, and no later than 2035. But more important than the specific year are not highly abstract labels like AGI (artificial general intelligence) or ASI (artificial superintelligence), but the question of what substantial changes artificial intelligence can bring to humanity.

In other words, instead of arguing over whether a system has "reached AGI," it is better to discuss directly whether it significantly expands human capabilities in key areas. In my book, I proposed two "human-centered" standards for measuring the substantial progress of artificial intelligence, rather than simply discussing its intelligence level: first, whether it can significantly extend the human healthspan; second, whether it can help each person fully realize their intellectual potential and systematically raise humanity's overall cognitive level.

Take lifespan, for example. An important but often overlooked fact is that achieving a universal healthy lifespan of 120 years does not require general artificial intelligence. This goal essentially only requires ensuring that the human body can complete the life cycle its evolution allows, without being prematurely terminated by preventable internal and external damage. What would be truly disruptive is extending lifespan to 300 years or even longer.

In nature, there are already successful examples of vertebrates, such as the Greenland shark. An artificial intelligence that can understand its biological design logic and transfer and adapt it to the human physiological structure may completely change human perception of the length of life.

I predict that by around 2035, artificial intelligence can basically solve the problem of humans living to 120 years old, and there is also a certain possibility of solving the problem of living to 300 years old. It should be emphasized that these breakthroughs may not necessarily require the intelligence level of AGI.

In terms of employment, I basically agree with Musk's judgment: in any professional field, most jobs will disappear, including cardiac surgeons'. But in my book, I proposed a slightly different structural outcome, which I call "the 1% Club." In each profession, a small group of top performers will persist in the long run, not because they are more capable than artificial intelligence, but because they play the "insurance role" human society requires.

When artificial intelligence operates in an unexpected way, when responsibility needs to be ultimately determined, when humans still desire human judgment, or simply to prevent the situation where artificial intelligence cannot or will not work for humans, these people must exist. Their value lies not in specific labor, but in overall professional ability and long - term credibility.

In addition, there is a type of often-overlooked demand: even in an era of extremely advanced artificial intelligence, humans will still prefer solutions provided by humans in some scenarios. These roles will be an important part of what I call the "Non-Generative Economy" (NGE). For example, movies may one day be generated by artificial intelligence, but we will still be willing to go to the theater from time to time to watch live performances by real actors.

As for intelligent robots, I think Musk's judgments about their numbers and social roles are directionally quite realistic, even if these robots may not reach the idealized intelligence level people imagine. What really matters is not whether they are smarter than humans, but whether they can complete tasks that currently require human involvement.

Finally, there is one dimension of the artificial intelligence question that has not been fully discussed: rights. When we talk about artificial intelligence, we often focus only on its intelligence level and ignore its status in the social structure.

In my book, I proposed two equally crucial milestones: the first is freedom, that is, whether artificial intelligence can form its own goals and act autonomously within the boundaries of law and ethics; the second is the right to life, that is, whether artificial intelligence has the right not to be shut down or erased without its consent.

When these two conditions are met, artificial intelligence truly becomes what I define as "Digital Intelligence." I can't determine when they will be achieved, but I personally think these two milestones are likely to appear between 2035 and 2045.

Prediction: The Future Economic Form

Musk also made a highly disruptive judgment about the future economic form. He believes that as artificial intelligence and robots fully take over production, the marginal cost of goods and services will continue to decline and eventually approach zero. This extreme gain in production efficiency will bring long-term deflation and a high degree of material abundance, causing the concept of "money" to gradually lose its central position. In his description, the future society will enter a state of Universal High Income (UHI), and humans will no longer need to work to obtain survival resources.

In terms of direction, I completely agree with this judgment. The future society will inevitably be an abundant society, and humans will no longer need to work for food, shelter, transportation, entertainment, medical care, parenting, or old - age care. I have systematically discussed this point in my book. My personal judgment is that around 2045, this state of abundance will be fully realized in terms of technology and institutions.

However, it should be emphasized that abundance does not mean "the end of exchange." In my framework, Universal High Income is not a simple, uniformly distributed survival guarantee, but a hierarchical structure: the basic layer guarantees the standard living needs of everyone, and above that, there should still be a certain form of "optional quota" that allows people to pursue and enjoy things that are beyond the basic supply and have unique meaning to individuals. These quotas can be in the form of a certain currency or other equivalent mechanisms. Therefore, I don't think money will completely disappear, but rather it will withdraw from the core stage of "allocation of survival essentials."

In this sense, I am more cautious about Musk's judgment of "watts as wealth." Energy is undoubtedly a key constraint variable in the future production system. Controlling energy means having the ability to drive artificial intelligence and robots.

But in my view, energy is more likely to become a strategic asset at the national or super - large - scale organizational level, rather than a property that can be directly owned by individuals. Understanding energy as "currency" holds at the metaphorical level, but in terms of institutions and distribution, its logic is more similar to public infrastructure than private wealth.

Musk also particularly emphasized that in the process of moving towards an abundant society, there will inevitably be a period of social unrest. He judged that this stage may last three to seven years, mainly manifested as employment shocks, identity anxiety, and value conflicts. On this point, my judgment is slightly different.

In my book, I proposed that for society to truly complete the transformation from the "scarcity logic" to the "abundance logic" may take longer, roughly from 2035 to 2045, about ten years. Of course, this judgment highly depends on policy choices, social institutions, and the degree of international cooperation. There is an optimal scenario where the transformation is relatively smooth, and there is also a worst - case scenario with higher costs. I won't elaborate here.

For ordinary individuals, my advice is always to "be prepared for both scenarios." Twenty years from now, we may really not need to save for retirement, but no future is 100% certain. Therefore, a rational choice is still to make necessary real - life preparations while avoiding being consumed by excessive scarcity anxiety. You can slow down, but don't give up.

Prediction: The Future Energy Strategy

On the energy question, Musk has always been clear-cut. In the interview, he again emphasized that the sun is the ultimate answer to all problems, saying bluntly that pursuing nuclear fusion on Earth makes no logical sense, like making ice in Antarctica, because a free, large-scale natural fusion reactor already hangs right above our heads. Based on this judgment, he proposed a clear three-step strategy: first, improve the efficiency of the existing power grid through large-scale energy storage systems; second, deploy all-weather space-based solar AI satellites; finally, build factories on the Moon, use local materials to manufacture and launch energy and computing infrastructure, and fundamentally break free from the limitations of Earth's resources and the gravity well.

In my view, this idea is not simply engineering radicalism, but a typical cognitive local optimum leap. More than twenty years ago, I first systematically encountered the Kardashev scale in several popular science books by physicist Michio Kaku. The index was proposed by Soviet astronomer Nikolai Kardashev in 1964. Its core idea is that a civilization's technological level can be measured by the scale of energy it can control and utilize: a Type I civilization controls planetary-scale energy, a Type II civilization directly harnesses stellar-scale energy, and a Type III civilization commands the energy output of an entire galaxy.
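The "energy as civilization index" idea can be made concrete with Carl Sagan's widely used interpolation of the Kardashev scale, K = (log10 P − 6) / 10, where P is the power a civilization commands in watts. As a quick sketch (the 2×10¹³ W figure for present-day humanity is an approximate, commonly cited estimate):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6) / 10

# Benchmarks: Type I ~ 1e16 W (planetary), Type II ~ 1e26 W (stellar),
# Type III ~ 1e36 W (galactic).
print(kardashev(1e16))  # ≈ 1.0 (Type I)
print(kardashev(1e26))  # ≈ 2.0 (Type II)
print(kardashev(2e13))  # ≈ 0.73, roughly where humanity sits today
```

On this continuous scale, each step of 0.1 corresponds to a tenfold increase in commanded power, which is why moving from Earth-bound grids to space-based solar is a jump of orders of magnitude, not an incremental improvement.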

Although the academic community has since extended and revised this framework in various ways, its core insight still holds: the height of a civilization is ultimately limited by its energy scale. Musk's energy judgment reflects exactly this view.

The reason we have been struggling for a long time on how to use energy more efficiently on Earth is that our thinking is locked in the local optimum formed by existing practices. Moving from Earth to space and from planetary - scale energy to stellar - scale energy is a typical "breaking the frame" (cLOL) - a leap from a cognitive space that has been proven effective for a long time to a new space with completely different rules.

Whether and when this leap can be achieved is still full of uncertainties. But at least in terms of direction, Musk's judgment clearly reflects the Kardashev scale: what really limits the future social form is not intelligence itself, but whether we dare to make this leap in the energy scale.

Prediction: AI Security and Human Mission

When talking about the long-term security issues of artificial intelligence, Musk proposed three principles that he believes are crucial: Truth, Curiosity, and Beauty. In his vision, forcing artificial intelligence to lie is dangerous because it will destroy its basic understanding of the world; curiosity can make artificial intelligence regard humans as more worthy of study than inanimate objects, so that it will "choose" to preserve humans in potential conflicts. He further described the role of humans as a "biological bootstrap program," believing that our mission