
From Sam Altman's perspective, where are the AI entrepreneurship opportunities?

王智远 · 2025-06-25 10:28
How can AI be used to drive a real "abundance revolution"?

How can entrepreneurs be more likely to succeed?

In his conversation with Y Combinator, Sam Altman shared some key insights. He said that drastic change always brings the most opportunities to new companies.

This sentence sounds simple, but there is a very profound logic behind it:

When an entire industry, or society as a whole, is going through drastic change, the old rules break down. Big players may turn sluggish and fail to react in time, which gives small teams and startups a better chance to enter the market quickly and do things no one has done before.

01

Where does the change manifest itself? First of all, there is a qualitative leap in AI technology.

Over the past few years, AI has not progressed linearly; it has advanced in exponential breakthroughs. Concepts like AGI (Artificial General Intelligence) and highly intelligent AI sounded like something out of science fiction a decade ago, but now they are gradually becoming reality.

OpenAI is a typical example.

When it was founded in 2015, many people thought its goal was too crazy. Who would believe that a machine could think and create like a human? But today, ChatGPT, built on the GPT series of models, has become the fifth most-visited website in the world, serving hundreds of millions of users every day.

What does this indicate?

The technological inflection point has arrived. Just like when the Internet first emerged, who could have imagined that giants like Amazon, Google, and Facebook would be born later? Today's AI is like the Internet back then.

Sam said that in a stable period, the market structure is fixed. Big companies have resources, channels, and brands, making it difficult for startups to break through. But it's different during a period of technological upheaval.

For example:

In the past, speech recognition, image recognition, and natural language processing were all fields monopolized by large companies. But now? With open-source models, cloud computing power, and powerful toolchains, a small team of just a few people can accomplish in a few weeks what used to take hundreds of people several years to do.

Moreover, the more disruptive the technology, the more likely it is to make traditional players "lose focus." They are either restricted by their existing businesses and dare not fully invest, or they react slowly and miss the window period.

At this time, startups are more likely to find a breakthrough. You can focus on a niche area and use the most cutting-edge technology to create the most imaginative products.

Another often overlooked trend is: Entrepreneurship itself has become easier than before.

Tools like GitHub Copilot, Midjourney, and Notion allow one person to do the work that used to require a team; you don't need to gather your team in one city or even one country. Global talent can be at your service.

The financing threshold has also been lowered. More and more angel investors, accelerators, and venture capital funds are willing to support early-stage projects, especially those in the AI field.

So Sam believes that not only is the technology itself changing, but the entire entrepreneurial ecosystem is becoming more friendly.

02

I've often asked myself: what does a startup team actually face at the very beginning?

Resources, products, financing? No. It's a bunch of uncertainties. No one knows if this thing will succeed, no one believes you can make it, and even you yourself often doubt: Have I taken the wrong path?

But it's precisely the uncertainties and doubts that determine whether a company can go the distance.

Sam Altman said:

In a stable period, the market structure is fixed. Big companies have resources, channels, and brands, making it difficult for startups to break through. But once there is a drastic technological change, the original rules are broken. At this time, whoever can find the direction in the chaos may stand out.

The question is how to find the direction? How to judge whether something is worth persisting in?

Sam talked about the early days of OpenAI.

When it was founded in 2015, many people thought its goal was too crazy. Who would believe that a machine could think and create like a human?

At that time, DeepMind was already far ahead, and AGI sounded more like a concept from science fiction. Sam said he had other work at the time, and they actually had many safer directions to choose from.

But they finally decided to plunge in. They spent a whole year going back and forth on whether to launch the project, and the decision was nearly a coin flip.

And sure enough, there were plenty of opposing voices, a thousand reasons telling you: stop, this can't succeed. One of the key ideas they later bet on, the "Scaling Law," was a case in point.

What is the "Scaling Law"?

In short, it's the observation that as you scale up a model's parameters, training data, and compute, its performance improves in a smooth, predictable way. The conventional wisdom at the time was the opposite: the larger the model, the harder it is to control and the higher the cost, so scaling simply isn't cost-effective. But Sam said he and his team weren't intimidated by those voices. They chose to persevere and to look for the real opportunity behind the doubt.
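The conversation doesn't spell out the math, but the commonly cited form from the scaling-law literature (Kaplan et al., 2020) is a simple power law: test loss falls smoothly and predictably as model size grows, with analogous laws for data and compute. A minimal sketch, for reference only:

```latex
% Power-law scaling of loss with model size (Kaplan et al., 2020).
% N is the number of model parameters; N_c and \alpha_N are empirically
% fitted constants from that paper, not figures from this conversation.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
\]
```

If the curve keeps holding as you scale, then "just make it bigger" stops being reckless and becomes a predictable bet, which is exactly the bet OpenAI made.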

He said something very interesting that has stuck with me: if half the people think what you're doing is right, you're probably just following the trend.

Truly valuable projects are often only recognized by a very small number of people. He calls it the "One Percent Rule": As long as a small group of smart people believe that what you're doing is meaningful, you've already won half the battle.

Sam said that this made him think of Elon Musk's evaluation of GPT-1 back then. After seeing the first-generation model, he directly sent an email saying, "This thing is garbage. It doesn't make any sense."

But look at where we are today.

We use ChatGPT and GPT-4 every day, and people are already discussing GPT-5. What was ridiculed back then has become reality.

So Sam later made a very important point with some emotion: "When many people tell you you're wrong, it's very difficult to stick to your beliefs. That kind of perseverance gets easier over time, but at the beginning it takes a great deal of courage and conviction."

Another point is that direction is more important than speed.

Many people in the early stages of entrepreneurship want to get going quickly, create products, attract users, and secure financing. But I've found that the companies that truly last and grow large are often willing to slow down in the early days to figure out one question: What exactly are we going to do?

Sam said that if what you're doing is the same as others, it's difficult to attract top talent. It's also difficult to make people truly believe in a mission.

But if you do something unique, even if no one is optimistic about it at first, you can attract those who truly identify with your ideas.

In 2015, OpenAI wasn't sure if it could achieve AGI, but precisely because this thing was unique enough and had enough potential, it was able to gather a group of people who were willing to persevere for a long time and overcome challenges together.

Finally, entrepreneurship has never been a straight line. It's a series of attempts, corrections, and new starts.

Many people give up after one failed entrepreneurial attempt. In fact, not every entrepreneurial project will succeed. Learning to persevere and keep working hard in such situations is very important.

So, in the face of so many uncertainties and challenges, how should startups determine their direction? There are four points:

Be brave enough to do what others dare not do; find real opportunities in the face of doubts; adhere to long-termism rather than short-term popularity; find those who truly identify with you and go forward together. These are the abilities that entrepreneurs should possess the most.

03

Since we know the future trends, then how can startup AI companies build their moats by leveraging these trends?

When it comes to the "moat," many people's first reaction is: technological leadership, data monopoly, strong financing ability...

But Sam Altman sees it differently. He believes a moat doesn't exist at the beginning; it's the result of gradual evolution. It comes from sticking to your direction, digging deep into user value, and continuously exploring what others haven't done.

Speaking of this, he talked about OpenAI back then.

In the first stage, when GPT-3 was released in 2020, many people saw it as just a cool language model, but OpenAI made it available through an API that developers could call directly.

At that time, there were no similar products on the market. This "first-mover advantage" was the initial moat.

And GPT-3 was unique in the market at that time. Even though its technology wasn't necessarily the deepest, it was the first to be widely used, building user awareness and an entry point into the ecosystem.
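To make "available through an API" concrete, here is a minimal sketch of what calling a hosted model looks like with OpenAI's current Python SDK. The model name and prompt are placeholders, and the original 2020 GPT-3 API used an older completions-style endpoint rather than this chat interface:

```python
# Minimal sketch: calling a hosted language model through OpenAI's Python SDK.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use any model your key has access to
    messages=[
        {"role": "user", "content": "In one sentence, why do APIs lower the barrier for startups?"}
    ],
)
print(response.choices[0].message.content)
```

The point of the example is the shape of the interaction: a startup doesn't need to train or host a model at all, only to send a request and build a product around the answer.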

In the second stage, OpenAI's moat began to be upgraded. At this time, it's not just about technology, but also user experience, brand awareness, and user habits.

For example, the "memory" function allows the AI to remember who you are, your preferences, and your work style; another example is the "connection" function, which enables it to automatically search for information online and call tools for you.

This is in-depth productization centered on user needs. It also illustrates a truth: the essence of the moat isn't that you're smarter than others, but that you understand users better than others.

The third stage came through open source and co-building an ecosystem.

Many people think of OpenAI as a closed-off big company, but in fact it chose to go open-source at many key points. For example, projects like GPT-2, Codex, and DALL·E were all publicly released.

Why do this? Sam said: Instead of fighting alone, it's better to let the entire community help you evolve.

Through open source, it attracted a large number of developers to participate; through the API and plugin system, it established a huge application ecosystem; through the concept of the "agent store," it's building a new economic system around AI. So there's no monopoly, only co-building.

Another point: Don't go for the five popular directions that everyone else is doing.

For example:

Building large-model training platforms, offering open-source model fine-tuning services, doing multi-modal interface encapsulation, building large models for vertical fields, or developing AI customer-service bots. These may seem popular, but they're highly competitive. Unless you can do it extremely well, you're likely to become cannon fodder on the battlefield.

Instead, you should find directions that others haven't noticed or dare not touch.

Such as how to make AI autonomously complete complex task chains? How to design a scalable "agent" system? How to make AI truly understand and remember user intentions? How to deploy AI in the physical world? These issues aren't attracting attention in the short term, but once you make a breakthrough, you can form a real barrier.
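To make the "agent" idea less abstract, here is a minimal, framework-free sketch of an agent loop that works through a task step by step. Everything in it (the `call_model` placeholder, the `TOOLS` table, the text protocol for actions) is hypothetical scaffolding for illustration, not a design described in the conversation:

```python
# Minimal, hypothetical sketch of an agent loop: decide -> act -> observe -> repeat.
# `call_model` stands in for any LLM call; `TOOLS` maps tool names to plain Python functions.
from typing import Callable, Dict


def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; here it finishes immediately so the sketch runs as-is."""
    return "FINISH: replace call_model with a real model to get useful behaviour"


TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(stub) search results for {query!r}",
    "calculator": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}


def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # Ask the model for the next step, given everything observed so far.
        decision = call_model("\n".join(history) + "\nNext action ('tool: input' or 'FINISH: answer')?")
        if decision.startswith("FINISH:"):
            return decision[len("FINISH:"):].strip()
        tool_name, _, tool_input = decision.partition(":")
        tool = TOOLS.get(tool_name.strip(), lambda _x: "unknown tool")
        observation = tool(tool_input.strip())
        history.append(f"ACTION: {decision}\nOBSERVATION: {observation}")
    return "Stopped: step limit reached without finishing."


if __name__ == "__main__":
    print(run_agent("Find the population of France and double it."))
```

The hard problems the questions above point at (memory, reliability over long task chains, acting in the physical world) all live outside this toy loop, which is exactly why they remain open opportunities.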

Sam also mentioned some specific technical strategies that can help startups build a differentiated moat:

One is the hybrid-model approach. Not all computation needs to be done in the cloud; the future trend is a combination of "lightweight models + powerful inference capabilities."

Another is the vision of a unified model. One of the goals of GPT-5 is a unified multi-modal architecture that no longer distinguishes between input forms such as text, images, and voice.

Cost control is also important: whoever can drive down inference-side costs further can win in the long run. Of course, product experience is part of the moat too: whoever can create a product that feels more "intelligent" and better "understands the user" will win.

So, there are four points:

First, the core of the moat isn't just the technology itself, but continuous exploration of user value; second, the opportunity for startups lies in being small and refined, fast and precise, daring to make mistakes, and continuously evolving.

Third, the key to a differentiated strategy is avoiding the "five popular directions"; fourth, OpenAI has gone through three stages: market uniqueness, brand and feature innovation, and open-source ecosystem co-building.

04

Since we've talked about how AI changes entrepreneurship, products, work styles, and even builds moats, we have to ask a question: What is technological progress for?

Is it to create smarter robots? Or to increase the company's valuation? No. What's truly important is to use technology to unlock greater possibilities and bring about real "abundance."

Sam Altman said:

If you want to make the world a better place, the most effective way is to find the right lever. And he believes that the two great technological levers of the future are AI and energy.

Why energy? Many people only see that AI models are becoming more and more powerful, but they ignore a very real problem behind it:

Computing power isn't free. OpenAI once estimated internally that a full-scale training run of GPT-4 consumes as much electricity as tens of thousands of households use in a month. That's not a small number.
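The article gives no exact figures, so here is only a back-of-envelope sanity check with assumed, illustrative numbers (both values below are assumptions, not data from OpenAI) showing why a claim of that scale is plausible:

```python
# Back-of-envelope sanity check using ASSUMED, illustrative numbers only (not OpenAI data).
assumed_training_energy_kwh = 50_000_000   # assumption: ~50 GWh for one large training run
assumed_household_kwh_per_month = 900      # assumption: ~900 kWh of electricity per household per month

household_months = assumed_training_energy_kwh / assumed_household_kwh_per_month
print(f"~{household_months:,.0f} household-months of electricity")
# -> roughly 56,000, i.e. "tens of thousands of households for a month", in line with the claim above
```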

So Sam said: If you really want AI to develop in the long term, you must solve the energy problem. In other words, without cheap and sustainable energy, AI can't expand infinitely; but the opposite is also true: AI can also be a key tool to promote breakthroughs in energy technology.

For example:

In nuclear fusion research, AI can help scientists simulate the reaction process faster; in the design of solar panels, AI can optimize the material structure and improve conversion efficiency; in power grid dispatching, AI can make energy distribution more intelligent and efficient.

This is a symbiotic relationship: Energy supports AI, and AI in turn promotes energy innovation.

Energy isn't something that ordinary people think about, but AI is. Many people worry that AI will make some people richer and others poorer, leading to an increase in the wealth gap.

Sam has a different view. He believes that our goal shouldn't be to redistribute the cake, but to make the cake bigger.

What does this mean? The real value of technology isn't to replace or deprive anyone, but to create unprecedented abundance.

Imagine if we can use AI + energy technology to achieve the following:

Make electricity as accessible as air; eliminate the scarcity of water, food, and housing; ensure that everyone has basic living guarantees; turn education, medical care, and transportation into public infrastructure. Wouldn't many of the "social contradictions" we face today automatically ease?

This is the so-called "exponential abundance": using technological levers to make resources almost infinite.

So, what will happen after resources become abundant? Sam put forward a very interesting view: In the future, humans won't need to rely so much on traditional jobs to survive.

This means:

Work will no longer be the only means of making a living. Creativity, interest, and cooperation will become the mainstream. Small teams can accomplish what used to require large companies. The cost of cooperation will be greatly reduced, the trust mechanism will be rebuilt, individual abilities will be magnified, and organizational forms will become more flexible.

You can understand this as a new social contract: Technology is responsible for providing basic guarantees, and humans focus on creation, exploration, and connection.

Many people think that technological change is only driven by entrepreneurs and capital, but Sam's view is very clear: The government plays an irreplaceable role in this.

For example, the return of manufacturing, the construction of energy infrastructure, and nuclear fusion research... These aren't things that small companies can accomplish independently.

What the government should do is encourage investment in hard technology and build data centers and clean-energy plants. It shouldn't impose one-size-fits-all bans, nor turn a blind eye, and it should avoid falling into the pessimistic thinking of "de-growth." Instead, it should promote "green growth."

So, instead of arguing about whether AI will destroy humanity, we should think about how to use it to promote a real "abundance revolution."

Reference link:

[1] Altman, S. The Future of OpenAI, ChatGPT's Origins, and Building AI Hardware [Video]. YouTube. Available at: https://www.youtube.com/watch?v=V979Wd1gmTU (accessed June 22, 2025).
