
Is your business model viable? You can't avoid these 6 questions.

红杉汇 (Sequoia China) · 2025-10-30 10:20
To entrepreneurs in the AI field: does the cake exist, and how should it be sliced?

Entrepreneurs in the AI field have to watch the technical metrics of large models on one hand and think through product form and business model on the other. For a company aiming at sustainable development, the latter may matter more than the former.

An article in Tsinghua Management Review proposes a "Cake Model" built around six questions that no AI company can avoid as it is founded and grows, helping entrepreneurs judge their business models more clearly.

Jordan Fisher, a research lead at Anthropic (and a serial entrepreneur who has worked on safety and model evaluation at OpenAI and DeepMind), made a similar point in a recent podcast: once a product reaches commercially viable scale, the founders of AI companies need to rethink several fundamental strategic questions.

Both sources are forward-looking and to the point on the business model and sustainable development of AI companies. We have compiled some of their key views in the hope that readers will find them useful.

Question 1: One's Own Value Space

"Cake Model":

Does the cake exist?

The issue of value space can be broken down into two sub-questions:

First, does the product create value? That is, does it meet user needs in some specific scenarios?

Second, does the value created by the product exist in the existing market or a new market? That is, does the product seize the existing market share or create new market demand?

Many large-model products are designed to help enterprises raise the efficiency of simple, repetitive work and save labor costs, which amounts to taking existing market share. Whether this kind of value can support a larger market still needs further discussion.

Jordan Fisher:

Is your company "building intelligence" or "renting intelligence"?

The biggest blind spot in current AI entrepreneurship is that too many companies merely "call" models instead of "building" their own intelligent capability. The real difference lies not in model parameters but in the learning loop: only when a team closes the loop across data, feedback, and user interaction does the intelligence truly belong to it.

"Rented intelligence allows you to act quickly, but cultivated intelligence enables you to grow steadily." The foundation of AI entrepreneurship is to have your own feedback loop.

Are you ready to become "social infrastructure"?

When your AI is used by millions of people, it is no longer just a product, but social infrastructure.

Once AI reaches into the base of society, it is no longer a neutral tool: model outputs will affect education, employment, and public opinion; misinformation may be amplified; algorithmic bias will reshape how social resources are distributed...

Therefore, founders must redefine their responsibilities and design the system with a "public-service mindset" rather than just pursuing profit.

Question 2: How to Realize the Value Space

"Cake Model":

Is the angle of cutting the cake correct?

A truly "sharp" large-model product finds the most appropriate angle to hit users' pain points precisely, making users genuinely willing to pay.

What does a truly "sharp" product form and business model look like? Take ChatGPT as an example. Its conversational form, the most intuitive and easy-to-understand product format, quickly made the public keenly aware of what generative large models can do and generated strong interest in, and confidence about, large models worldwide.

Only when a company's product form and business model match a genuine must-have scenario of its target users can it find a "sharp" product and business model, cut into the target value space from the right angle, and have a chance to gradually become profitable.

Jordan Fisher:

Are you training the model or training people?

We often say that models can learn, but we overlook that people are being trained too. Founders must realize that a product's interaction logic shapes user behavior. For example, when a language model constantly emphasizes "quick answers", users gradually lose the patience to ask careful questions.

You are designing not only the user experience (UX) but also thinking habits. AI entrepreneurs should think about how to make the system help people think better rather than merely chase efficiency.

"Good AI makes people more human, while bad AI makes people more like machines."

Does your team have an interdisciplinary perspective?

AI entrepreneurship is not a purely technical problem. The strongest teams of the future will be interdisciplinary, understanding machine learning (ML) as well as psychology, sociology, and design. The humanities are the new engineering of the AI era.

The core competitiveness of an AI company is not the amount of code but the ability to understand people. Founders are therefore advised to bring philosophy and sociology advisors into the early advisory group so that decisions are made on a broader basis.

"If there are no different voices in the team, your model is doomed to be one - sided."

Question 3: Sufficient Resources and Barriers

"Cake Model":

Can you resist others from grabbing the cake?

Finding a "sharp" product form and business model based on the value space does not mean a company can smoothly take its share of the market. It also has to build barriers high enough to stop other competitors from seizing the same share.

Many AI products are in fact thin wrappers around mainstream large models. They greatly lower the threshold for using large-model products in vertical fields, turning models with only general capabilities into tools with specialized capabilities. But as large-model technology keeps upgrading, the spillover of general capabilities often ends up covering those specialized capabilities.

Jordan Fisher:

Are you building "defense" rather than just speed?

Speed without defense is just self-consumption. The current AI race easily traps teams in "release anxiety" and makes them neglect defense.

At the model level, defense is feedback data; at the organizational level, defense is culture. Two failure patterns are common: leading in technology but having no feedback system (ending in user churn), and having a successful product but a collapsed culture (ending in ethics scandals).

"Speed is a strategy, and defense is a structure." Truly excellent companies still maintain a review mechanism during the sprint.

Is your growth assumption sustainable?

Many AI companies show amazing early growth but very poor long-term retention, because their growth comes from technological novelty rather than continuous value.

Therefore, founders should ask from the beginning: When a new model emerges, why won't my users leave? Can my data flywheel self - reinforce?

The real moat is continuously accumulated real-usage data, not short-term media hype. Growth is a by-product, not a goal.

"AI companies are not afraid of slow progress, but afraid of idling."

Question 4: Profit Model

"Cake Model":

Is the input-output ratio of eating the cake reasonable?

Simply put, the pricing methods in a company's profit model span two extremes:

● Cost-plus pricing: set a fixed percentage of profit on top of the product's R&D cost, with cost as the basis for the price.

● Value-sharing pricing: the provider charges a share based on the product's actual effect in use and the benefit it brings to the customer, so profit is directly tied to the performance the product creates.

In practice, companies usually land somewhere between these two extremes, and an important factor in where the balance falls is how competitive the market is. If a clear, sustainable profit model remains absent for a long time and companies are forced into price wars, many AI companies may find it hard to survive. This is one of the core challenges AI companies urgently need to address.
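To make the two extremes concrete, here is a minimal numerical sketch. The formulas follow the definitions above; all figures (R&D cost, margin, measured benefit, revenue-share rate) are illustrative assumptions, not numbers from the article.

```python
# Illustrative sketch of the two pricing extremes described above.
# All numbers below are hypothetical assumptions.

def cost_plus_price(rd_cost: float, margin: float) -> float:
    """Cost-plus pricing: a fixed profit margin added on top of cost."""
    return rd_cost * (1 + margin)

def value_sharing_fee(customer_benefit: float, share_rate: float) -> float:
    """Value-sharing pricing: charge a share of the benefit the product creates."""
    return customer_benefit * share_rate

if __name__ == "__main__":
    # Hypothetical: amortized R&D cost of 100 per seat, 20% markup.
    print(cost_plus_price(100.0, 0.20))      # -> 120.0
    # Hypothetical: product saves the customer 1,000, provider takes 15%.
    print(value_sharing_fee(1000.0, 0.15))   # -> 150.0
```

Under these assumed numbers, the value-sharing fee exceeds the cost-plus price only when the product delivers a large, measurable benefit, which is why the balance between the two depends so heavily on how competitive the market is.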

Beyond the profit model, companies also need to think about cost control for large models. On top of high training costs, AI companies face an even thornier problem: because AI products behave with high variability and uncertainty in real applications, AI companies (and users of AI tools) incur many hidden costs on top of the visible ones, which makes costs hard to keep under control.

Question 5: Ecosystem Assistance

"Cake Model":

Is there anyone to help you eat the cake?

For a new technology to be adopted widely, it usually needs the cooperation of a value chain: other participants in the ecosystem must keep applying the technology, closing the loop that lets the technology iterate sustainably and realize its commercial value.

Hard-tech companies therefore have to find an ecosystem that lets the company operate stably and lets the technology keep iterating in application scenarios. That search is exactly the work of business-model innovation. Through business-model innovation, a hard-tech company can create a new niche market, build a new ecosystem, select stakeholders, and win multi-party cooperation, so as to design a transaction structure that releases enough value and has a chance of succeeding. The same applies to AI companies.

When the product is mature enough, it has the chance to spawn a brand-new ecosystem, thoroughly disrupt the original value chain, and realize its commercial value on a larger scale.

Question 6: Security and Openness

"Cake Model":

Is it safe to eat the cake?

Data leakage from large models is the security risk that the market and regulators worry about most.

On the one hand, components of a large-model system may contain exploitable vulnerabilities and data-permission loopholes, leading to accidental disclosure of sensitive information or unauthorized data access and creating a serious risk of customer-data leakage.

On the other hand, current large-model products can be manipulated by carefully crafted prompts from attackers and are prone to revealing users' confidential data under inducement and deception. In addition, large models also face application risks caused by hallucinations.

Jordan Fisher:

Is your core resource computing power, data, or trust?

In the AI era, the scarcest resource is not algorithms but trust. Data can be copied and computing power can be rented, but trust cannot be outsourced.

Founders are advised to establish value boundaries early on: how user privacy is handled and how model decisions are explained. "Users will not stay because you are smart; they will use your product over the long term because you are trustworthy."

There is a rule inside Anthropic: before any feature is launched, three questions must be answered:

① Are we willing to publicly explain its behavior?

② Can users understand the source of the decision?

③ Does it still meet social expectations in the worst - case scenario?

The core barrier of an AI product is the trust that interpretability brings, not the fact that it is closed-source.

Who is responsible for your model's decisions?

The question of who bears responsibility for AI decisions is becoming a core ethical issue.

Behind every AI output there is an implicit human judgment. Founders must therefore make the responsibility chain explicit: who can modify the model, who reviews it, and who answers for deviations.

Anthropic improves decision-making by keeping an internal "responsibility log" that requires every model change to record its "impact assumption", together with ethics reviews and external advisors. These processes may look cumbersome, but they keep the team alert during rapid iteration.

"Transparency is not a constraint, but a means to accelerate learning."

If the model makes an error, can you explain the reason?

Interpretability is the prerequisite for AI safety. When the model makes an error, most teams only apply hot fixes instead of doing a systematic analysis. It is therefore recommended to build a "post-event traceability mechanism": record the input, model version, context, and reproduction path, so that any output can be traced back to the person responsible and to the training data.
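A minimal sketch of what one record in such a traceability mechanism might contain, assuming the fields listed above; the names here are illustrative, not any company's actual logging schema:

```python
# Hypothetical sketch of a post-event traceability record.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceRecord:
    """One logged model output, kept so errors can be reproduced and traced."""
    request_id: str         # unique id for this output
    model_version: str      # exact model/checkpoint that produced it
    prompt: str             # the input as received
    context: dict           # system prompt, retrieval results, tool state, etc.
    output: str             # what the model returned
    owner: str              # person or team responsible for this surface
    training_data_ref: str  # pointer to the data snapshot behind the model
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

The point of keeping all of these fields together is that any erroneous output can later be replayed against the same model version, context, and data snapshot instead of being patched blindly.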

Interpretability is not only a compliance requirement, but also a trust mechanism. When errors can be traced, users will not be afraid.

"An opaque system cannot build a long - term brand."

How do you balance openness and protection?

The tension between AI safety and openness is a reality every founder has to face. Neither full open source nor full closure is sustainable.

"The healthiest state is controllable transparency." Start - up companies should establish a data - auditing interface early on to leave room for future compliance.

"Openness is not about giving up control, but about inviting supervision."

This article is from the WeChat official account "Sequoia Capital China" (ID: Sequoiacap), author: Hong Shan, published by 36Kr with permission.