
AI is not electricity; it's more like an elevator: Why is "not knowing what to use it for" the key signal?

AI Deep Researcher · 2025-12-16 08:17
The low usage rate of AI tools stems from a lack of daily usage scenarios, not a lack of technical capability.

On December 13, 2025, a16z's latest podcast, "AI Eats the World," was launched.

Technology trend analyst Benedict Evans (former a16z partner) and a16z partner Erik Torenberg started by asking: Is AI just another round of platform migration, or will it become a fundamental general-purpose technology that rewrites the world, like electricity and the Internet?

It sounds grand. But the real contradiction lies elsewhere: Tech giants are scrambling for computing power, adjusting teams, and building super-factories, while ordinary people can't even figure out what to do with AI every day.

This is not due to laziness or lack of technical knowledge.

Evans believes that it's not that AI isn't powerful enough; it's just that it hasn't become a daily tool. Just like an elevator, you have to step in and press a button for it to take you upstairs.

This conversation aims to answer three progressive questions:

Why have 900 million people registered but can't think of what to use it for?

Why is it so capable, but it hasn't become a daily tool?

Who will win the next entry point?

Finally, Erik asked a question that many people are avoiding: What needs to happen for AI to be truly considered more transformative than the Internet?

Section 1 | 900 million users, 95% can't think of what to do

Opening ChatGPT doesn't mean using AI.

An often-overlooked fact: The usage rate of AI tools lags far behind the registration rate.

ChatGPT already has 800-900 million weekly active users and is the most visited AI tool globally. But the underlying data is more brutal: Only 5% of users pay, and the proportion of daily active users is less than 15%. In other words, most people try it once and leave.

This is not because the product is unusable but because users can't think of what to use it for.

Evans said that the state of such users is typical: They know about it, have used it, and have an account, but they just don't think of using it this week or next week.

This is actually another form of the cold-start dilemma. The technology is ready, and users are there, but the usage scenarios aren't. In comparison, electricity, telecommunications, search, and social media have all gone through the transformation from novelty to addiction. For ChatGPT, retention stalls after the first experience, and the vast majority of people stop using it after trying.

This is not a problem with individual models. Claude, Gemini, and Grok all face similar bottlenecks.

The root cause is that most people lack not an account but a reason to use it; not an understanding of the features but concrete usage scenarios.

AI is already very capable, but usage habits haven't been established yet.

Section 2 | What AI lacks is not ability, but a menu bar

So, how can people start using it?

Every real technological leap doesn't start with increased capabilities but with a change in the interaction method.

Benedict Evans pointed out that today's AI is misjudged as a platform upgrade, but in fact, it's more like a reconstruction of the user interface. Just as Excel is to finance, Photoshop is to design, browsers are to the Internet, and the iPhone is to mobile... The real turning point isn't the technology itself but how people start using it.

However, general large models like ChatGPT, Claude, and Gemini still remain at the prompt-input-box stage: a generic chat interface offering everyone the same vague command entry. They look like products, but in fact they're closer to API wrappers, requiring users to work out from scratch what they want to do every time.

This doesn't mean AI is useless; it's just that no one has told users how to use it.

Evans mentioned a classic case:

"Back then, Microsoft Office was called the shell of Win32. Most functions were processed by the operating system in the background, and users only needed to click buttons on a simple interface to complete complex operations they didn't know or need to know about."

Today's AI hasn't reached this stage yet.

Evans pointed out: We say AI can do everything, but more accurately, AI requires you to know everything yourself.

This is why so many AI-native SaaS companies are doing the same thing: Packaging general AI into specific buttons and defining tasks for users. Evans said that these companies are just redoing some functions of Oracle, Excel, and Email.

Section 3 | Where did Claude lose? Not in ability, but in the entry point

If we only look at ability, many conclusions would be completely different.

In mainstream benchmarks, the differences between Claude, GPT, and Gemini are negligible. In some tasks, Claude is even more stable, more restrained, and more human-like. But the reality is that ChatGPT has almost become synonymous with AI, while almost no one uses Claude.

Why? Because models are becoming commodities, and distribution is the scarce resource.

When model capabilities are similar, users won't bother to compare and will just use the first one they see. This is the case with Firefox and Chrome. Firefox isn't bad, but Chrome has the entry point. Once the ability is sufficient, the entry point is more crucial.

This is why Claude has an excellent reputation in the developer community, but ordinary users hardly know about it. It's not that good models aren't needed, but people simply won't actively look for them.

ChatGPT's 800-900 million weekly active users may seem like a barrier, but in fact, it's just that people started using it first. Once the entry point changes, such as in search, browsers, or operating systems, users may quickly migrate.

In other words, ChatGPT's lead is just being the first to arrive, not truly being firmly established.

Evans' conclusion is that the moat of AI products isn't model parameters but the default entry point.

And this is the next crucial moment for AI.

Section 4 | The real moment: When you open it for the third time

But when will this crucial moment arrive? No one knows.

We know nothing about AI's capabilities and don't fully understand why it works so well. This isn't modesty; it's the truth.

So, you'll hear completely different predictions:

  1. Demis Hassabis says human-level ability hasn't been reached yet.
  2. Sam Altman says it can already be as good as a doctor.
  3. Andrej Karpathy says it may still take ten years.

This uncertainty is actually an opportunity: Users haven't developed fixed habits, and products are still in the trial-and-error stage. Whoever first finds a scenario that is easy to learn and worth using every day will occupy the entry point.

Some scenarios are already starting to emerge.

Evans mentioned some ongoing cases:

Software development: Some developers already spend three hours a day in AI tools, writing code, fixing bugs, and refactoring. They are the first to find a daily workflow.

Marketing: Large companies are starting to use AI to generate 300 advertising materials in batches and then manually select the best 30. This is much more efficient than having people create 30.

Enterprise-specific solutions: Accenture is helping large companies solve very specific business problems by embedding AI into existing processes.

What's more interesting is the retail scenario mentioned by Evans: Today, Amazon's recommendation system logic is: If you buy a light bulb, it recommends bubble wrap; if you buy bubble wrap, it recommends packing tape. But what AI can do is identify that a person is moving and recommend home insurance ads.

This is the real value of AI: Not doing existing tasks faster but discovering tasks you've never done but should do.

This leap from "associative recommendation" to "scenario recognition" is the beginning of AI rewriting daily life.
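The distinction between the two styles can be made concrete with a toy sketch. Everything here is invented for illustration (the product names, the scenario rules, and the threshold are assumptions, not anything Amazon or the podcast describes): associative recommendation is a static lookup keyed on the last item, while scenario recognition matches the whole basket against a hypothesized situation and recommends for that situation instead.

```python
# Toy contrast between the two recommendation styles described above.
# All products, scenarios, and rules are invented for illustration.

# Style 1: associative recommendation -- a static co-purchase lookup.
co_purchase = {
    "light bulb": "bubble wrap",
    "bubble wrap": "packing tape",
}

def associative_recommend(item):
    """Recommend whatever is most often bought alongside `item`."""
    return co_purchase.get(item)

# Style 2: scenario recognition -- infer what the basket *means*,
# then recommend for the inferred situation, not for the last item.
scenario_signals = {
    "moving house": {"bubble wrap", "packing tape", "cardboard boxes"},
}
scenario_offers = {
    "moving house": "home insurance",
}

def scenario_recommend(basket):
    """Match the whole basket against known scenarios."""
    for scenario, signals in scenario_signals.items():
        # Crude rule: two or more matching signals imply the scenario.
        if len(signals & set(basket)) >= 2:
            return scenario_offers[scenario]
    return None
```

With the basket `["light bulb", "bubble wrap", "packing tape"]`, the associative lookup can only chain to the next adjacent product, while the scenario matcher infers "moving house" and surfaces home insurance, something no co-purchase table would ever contain.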

And that turning point doesn't lie in a press conference or a benchmark ranking but on a Tuesday afternoon when an ordinary person opens the same function for the third or fourth time because it's truly useful.

Conclusion | The real door hasn't been opened yet

ChatGPT has 900 million users, and Claude and Gemini are also very capable. Models are improving every day.

But most people can't even think of what to use them for every day.

Benedict Evans' judgment is straightforward: Models will become commodities, and the entry point is the scarce resource. And the essence of the entry point isn't to make AI a more powerful tool but to hide it in daily systems.

Just as Google CEO Pichai said: Do less on tools and build more systems.

Because true success isn't about what AI can do but that users don't even realize they're using AI.

📮 Original link:

https://www.youtube.com/watch?v=RH9vJNxFKDA&t=2s

https://www.iheart.com/podcast/867-the-a16z-show-30965806/?utm_source=chatgpt.com

https://podcasts.apple.com/ro/podcast/ai-eats-the-world-benedict-evans-on-the-next-platform-shift/id842818711?i=1000741043656

https://www.contagious.com/en/article/news-and-views/analyst-benedict-evans-on-ai-in-2024?utm_source=chatgpt.com

This article is from the WeChat official account "AI Deep Researcher." Author: AI Deep Researcher. Published by 36Kr with authorization.