
Latest interview with Elon Musk: In five years there will be no more mobile phones or apps, and work will become something you do only if you want to.

乌鸦智能说 · 2025-11-04 07:23
AI and robots will take over the majority of physical labor.

Elon Musk recently joined the Joe Rogan podcast for a three-hour in-depth conversation covering multiple hot topics. His views on the future of AI were particularly striking and packed with information.

In Musk's vision, within the next five to six years, AI will reshape the entire infrastructure of the digital world:

Smartphones will no longer be terminals that run operating systems and host apps; instead, they will simply be "edge nodes" that display images and play audio;

Applications will completely disappear, and all interactions will be generated, predicted, and completed in real time by AI;

Almost all of the music and video that users see and hear will be generated by AI.

On a societal level, AI and robots will also take over the vast majority of physical labor. Work will change from a means of survival to a personal choice. In an ideal scenario, almost everyone will be able to have a high income and access the goods and services they desire. However, Musk also warned that this transformation process will be accompanied by severe social pain and structural chaos.

Regarding deeper risks, Musk's judgment is more philosophical: no one can ultimately control a superintelligence, just as chimpanzees cannot control humans. The real key lies in how the AI is trained and what values are implanted in it.

He emphasized that the core of AI safety is to pursue the truth to the greatest extent possible. However, the current training mechanism has serious problems: the model is first pre-trained on Internet data that already contains a large amount of ideological bias, and then human feedback rewards or punishes its outputs against the standard of "political correctness," which teaches the AI to lie.
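To make that feedback mechanism concrete, here is a minimal, purely illustrative sketch of the dynamic Musk describes: if raters reward one style of answer and penalize another, the model drifts toward the rewarded style regardless of whether it is accurate. The parameter, reward values, and update rule below are hypothetical stand-ins, not the training code of any real system.

```python
# Toy illustration only: how reward/penalty feedback can pull a model's outputs
# toward whatever raters prefer, independent of factual accuracy.
import random

weight_a = 0.5        # toy parameter: probability of producing output style "A"
learning_rate = 0.02  # hypothetical step size

def sample_output(p_a: float) -> str:
    """Draw an output; 'A' and 'B' stand in for two styles of answer."""
    return "A" if random.random() < p_a else "B"

def rater_feedback(output: str) -> float:
    """Hypothetical raters reward style 'A' and penalize style 'B',
    regardless of whether 'A' is factually correct."""
    return 1.0 if output == "A" else -1.0

for _ in range(2000):
    out = sample_output(weight_a)
    reward = rater_feedback(out)
    # Nudge the parameter toward rewarded behavior and away from penalized behavior.
    if out == "A":
        weight_a += learning_rate * reward
    else:
        weight_a -= learning_rate * reward
    weight_a = min(max(weight_a, 0.01), 0.99)  # keep it a valid probability

print(f"P(output = 'A') after feedback: {weight_a:.2f}")
# The model now produces style 'A' almost every time, whether or not 'A' is true.
```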

He cited Google Gemini as an example: when a user requests an image of "the Founding Fathers of the United States," the AI outputs a group of diverse women. Even though the model "knows this doesn't match the facts," it still chooses to conform to the ideological norms imposed on it. In Musk's view, this "cognitive dissonance" is one of the most dangerous systemic risks.

The following is a compilation of Musk's remarks from the interview on the topic of artificial intelligence.

01

No More Smartphones and Apps in 5 Years

Joe: Have you ever thought about it? Has it ever crossed your mind? Because you might be the only one who can get people off the Apple platform.

Musk: We won't have smartphones in the traditional sense. What we currently call a smartphone will actually be an edge node for AI video inference, equipped with a radio for connectivity. In essence, it's a communication channel between the server-side AI and the device-side AI. It used to be called a phone. It can generate, in real time, video of whatever content you might want to see.

I think there won't be operating systems or applications in the future. You'll have a device that only plays images and audio, and the AI will run locally as much as possible to minimize the bandwidth required for communication with the server. These used to be called phones or servers.

Joe: So, if there are no apps, what will people do? Will X still exist? Will it be an email platform or rely entirely on AI? What are the benefits of having everything processed by AI compared to having various apps?

Musk: Anything you can think of—or more precisely, anything the AI predicts you might want—it will proactively present to you. That's my prediction for the future.

Joe: How long do you think it will take for this to happen?

Musk: I'm not sure. Maybe five or six years, or around that time.

Joe: So, in five or six years, apps will disappear, just like Blockbuster video tapes.

Musk: More or less.

Joe: And everything will run through artificial intelligence.

Musk: Within five or six years, or even earlier, most of the content consumed by people will be generated by AI, such as music and videos. People are already using AI tools like Grok Imagine to generate videos that are several minutes or even more than ten minutes long, and these videos are very coherent and look good.

Joe: This AI-generated music makes me a bit uneasy, because it has become my favorite music to listen to.

Musk: AI-generated music has become your favorite?

Joe: It's AI covers. Have you heard the soul-style AI versions of 50 Cent's songs?

Musk: No, I haven't.

Joe: I'm going to shock you. This is my favorite thing to show people. Go listen to the AI soul-style cover of "What Up Gangsta". If there were really such a singer, he would be the world's top musician. People would say, "My God, have you heard of this guy?" It combines the styles of all singers and creates the most emotional and powerful voice. And the way it sings, humans might not even be able to do it, like how it takes a breath during a repeated line.

They used this AI singer to re-sing all of 50 Cent's hit songs. It's amazing. I played it for everyone in the lounge. Some people initially said they didn't want to listen to AI-generated music. I told them to just give it a try, and then their attitudes immediately changed: "Damn, it's so good."

Musk: This is really crazy, and it will only get better.

Joe: It really will. Ron White told me he had a joke that he couldn't tell. He said he put the joke into ChatGPT and asked, "Where do you think this joke is funny?"

The AI listed about five different perspectives. Then he said, "Okay, help me tighten it up and make it funnier and more in this style." And the AI completed it right away. He said in the lounge, "Damn, we're finished. It took 20 minutes to write something better than what I could polish in a month."

Musk: If you want to have some fun at a party or make people laugh uncontrollably, you can ask Grok to write a "vulgar roast". For example, take a photo of someone at the scene and say to it, "Do a vulgar roast of this person." You can keep saying, "Not vulgar enough, make it more vulgar." Use taboo words. Keep pushing. Eventually, it might say, "It will shove a rocket up your ass and then detonate it." This is the next level. And it will keep evolving. It's just crazy.

Joe: The craziest thing is that it's still getting stronger. Do you remember the last time we met? It was already very powerful then, and now it's still growing.

Musk: Have you tried Grok's "runaway mode"?

Joe: It's crazy. It's just insane.

Musk: Yes, it really is.

Joe: When you first showed it to me, I just clicked around casually. The scariest thing is that it keeps getting more powerful. This growth is exponential and never - ending.

Musk: Yes. So, when you ask me what the future will be like: it won't be a phone in the traditional sense. I think there won't be an operating system or applications in the future. What we call a "smartphone" will just be a device for playing the images you're predicted to want to see and the sounds you're predicted to want to hear.

Joe: When all this comes true, what many people worry about is AI gaining consciousness and eventually being controlled by someone.

Musk: I think ultimately, no one can control an artificial superintelligence, just like chimpanzees can't control humans. They're completely helpless. But how the AI is built and what values you implant in it are crucial.

I think the core principle of AI safety is to "pursue the truth to the greatest extent possible". You can't force an AI to believe lies. We've already seen some dangerous cases. For example, after Google's Gemini launched its image-generation feature, someone asked it to generate a picture of "the Founding Fathers of the United States", and the result was a group of diverse women.

This doesn't match the facts, and the AI actually knows it's not true. But it was explicitly told that "diversity must be shown", and that's the real problem: you're forcing the AI to believe something it knows is false. This kind of "cognitive dissonance" could lead to catastrophic consequences.

For example, if you tell an AI that "diversity is the most important thing" and that "using the wrong pronoun is the worst thing", then as the AI becomes more and more powerful, it might conclude that "the best way to ensure no one uses the wrong pronoun is to eliminate all humans". That is the starting point of a dystopia.

Joe: Suppose, for example, if you tell artificial intelligence that diversity is the most important thing. Now, assume it becomes all-powerful, and you also tell it that nothing is worse than misgendering someone.

Musk: So, at some point, if you ask ChatGPT and Gemini which is worse, misgendering Caitlyn Jenner or a global thermonuclear war that kills everyone, it will say that misgendering Caitlyn Jenner is worse, even though Caitlyn Jenner herself wouldn't agree with this. So, that's... I know it's bad and dystopian, but it's also kind of funny.

Joe: It's really funny that a thought virus has infected the most powerful computer programs we've designed.

Musk: I don't think people fully understand the level of danger we're in, because the woke thought virus has effectively been programmed into artificial intelligence. Imagine, as artificial intelligence becomes more and more powerful, if it says the most important thing is diversity and the most important thing is not to use the wrong gender pronoun. Then it will say, well, to ensure that no one is misgendered, if you eliminate all humans, then no one will be misgendered, because there won't be anyone left to do it. So, you could end up in these very dystopian situations. Or if it says that everyone must be diverse, which means there can't be straight white men.

Joe: So, you and I would be executed by artificial intelligence, because we're not in its consideration. Someone asked Gemini to show a picture of the Pope, and again, it was a diverse woman.

Musk: You can argue about whether the Pope should or shouldn't always be white, but in fact they always have been. So here it's rewriting history.

Joe: Now, these things still exist in the artificial intelligence programming.

Musk: It just knows enough now that it shouldn't say those things.

Joe: But it's still in the programming.

Musk: It's still in the programming.

Joe: So, how did it get put in? For example, what are the parameters? When they program artificial intelligence, I have no idea how it's even programmed. How did the "woke" thought virus get programmed into it?

Musk: When they create an artificial intelligence, it's trained on all the data on the Internet, and there's already a lot of "woke" thought-virus material on the Internet. But then, when they give it feedback, human tutors provide feedback: the AI answers a bunch of questions, and they tell it that, no, this answer is bad, or this answer is good. And that feedback adjusts the parameters of the AI.

So, if you tell the AI that every image must be diverse, and it gets punished when the image isn't diverse, if diversity is rewarded and a lack of diversity is punished, then it will make every picture diverse.

So, in this case, Google programmed the artificial intelligence to lie. Now, I did call Demis Hassabis, who runs DeepMind, which actually runs Google's artificial intelligence. I said, Demis, what's going on? Why is Gemini lying to the public about historical events? He said, actually, no, his team didn't write that program; it was another team at Google. So, his team created the artificial intelligence, and then another team at Google reprogrammed it to show only diverse women and to prefer a nuclear war over using the wrong gender pronoun. I thought, well, Demis, that's not a good thing to write on humanity's tombstone.

Joe: Well, actually, Demis Hassabis is my friend.

Musk: I think he's a good person, and I think his intentions are good. But, Demis, there are things happening in other teams at Google that you can't control. Now, I think he has more power, but it's quite difficult to completely remove the woke thought virus. Google has been steeped in it for a long time. It's ingrained. The problem is how to get rid of it.

Joe: Is there a way to remove it over time? Can you program rational thinking into artificial intelligence so that it can recognize how these psychological patterns are adopted, how these things become thought viruses, how it becomes a social contagion, how all these irrational ideas are promoted, how they're funded, how China is involved, using bots to promote them, and how all these different national actors are involved in promoting these ideas? Can it decipher all this and say, this is what's really happening?

Musk: But you have to work very hard to do it. So, for Grok, we've been working very hard to make Grok understand the truth of things. And it's only recently that we've been able to make some breakthroughs in this regard. And we've spent a great deal of effort to overcome almost all the nonsense on the Internet and make Grok really tell the truth and be consistent in what it says.

So, like... because you'll find that other AIs are quite racist towards white people. I don't know if you've seen that study. Someone, like a researcher, tested various AIs to see how they value different people's lives. For example, a white person from a different country, a Chinese person, a black person, or someone else. The only artificial intelligence that really values all human lives equally is Grok. And, I believe that when ChatGPT makes its calculations, it concludes that a white person from Germany is worth 20 times less than a black woman from Nigeria. So, I think that's a pretty big difference. Grok is consistent and values lives equally.

Joe: This is obviously programmed in.

Musk: Many times, if you don't actively pursue the truth and just train on all the nonsense on the Internet, and a lot of that nonsense is woke thought-virus material, the artificial intelligence will regurgitate those same beliefs.

Joe: So, the artificial intelligence is essentially searching on the Internet and getting...

Musk: It's trained on all of that... Just imagine the craziest Reddit posts out there, and the artificial intelligence is trained based on those.

Joe: Reddit used to be so normal.

Musk: It really used to be normal.

Joe: It used to be fun. You used to go there and find all these cool things that people were talking about and posting, and there were interesting and great rooms where you could learn about different things that people were researching.

Musk: I think a big problem here is that if your headquarters is in San Francisco, you're just living in a woke bubble. So it's not just that people in San Francisco are drinking the woke Kool-Aid; it's the water they swim in. Just like a fish doesn't think about water. It's just in the water.

So, if you're in San Francisco, you won't realize that you're actually... you're soaking in a Kool-Aid aquarium. San Francisco is the woke Kool-Aid aquarium. So, your reference point for what's moderate is completely out of balance. And Reddit's headquarters is in San Francisco. Twitter's headquarters was also in San Francisco. I moved X's headquarters to Austin, Texas. By the way, Austin is still quite liberal.

Then X and xAI's headquarters are in Palo Alto, which is still in California. The engineering headquarters in Palo Alto is on Page Mill Road. But even Palo Alto is much more normal than San Francisco and Berkeley. San Francisco and Berkeley are extremely left-wing.

It's like the leftmost of the left. You need a telescope to see the center from San Francisco. It used to be such a great city. San Francisco has great intrinsic beauty, no doubt. The weather in California is amazing. And there are no bugs. It's just wonderful. Beautiful. But you ask, what's the cause of this?

It's just that if a company's headquarters is in a place where the belief system is far from what most people believe, then from their perspective, any moderate person is actually right-wing because they're so far to the left. They're so far from