
How to value AI companies? Goldman Sachs ran three sets of numbers on MiniMax.

Silicon-based Observation Pro · 2026-03-04 20:38
Token will become the core unit of account for AI assets.


In the domestic AI investment field, MiniMax is a name that cannot be avoided.

Since its listing, MiniMax's stock price has tripled. On March 2nd alone, right after the release of its annual report, the stock rose another 9%.

Fundamentally, the company's growth has indeed been very rapid: in 2025, MiniMax posted total revenue of $79.038 million, up 158.9% year on year.

But precisely because of that growth rate, valuation has become one of the most debated questions among investors. MiniMax's market capitalization currently stands at HK$230.5 billion. Zhipu is in a similar position.

The core, and thorniest, question is: how should we value large-model companies like Zhipu and MiniMax?

If we apply the methods used for traditional SaaS companies, such as PS (price-to-sales) or PE (price-to-earnings) ratios, the results often look unreasonable.

But if we completely abandon these methods, it is difficult to find a new consensus framework. In fact, this is not a problem unique to MiniMax or Zhipu.

More fundamentally, this is a problem that all investors cannot avoid:

When AI becomes a new business form, what kind of logic should we use to price AI companies?

Here, we propose a possible idea: The real "production capacity" of an AI company is actually the ability to generate and consume Tokens. And the key indicator for measuring the quality of its business model is gradually shifting to: How much money can this company earn for each Token it produces?

Today, I will talk about my own views on the valuation logic of AI companies based on Goldman Sachs' valuation of MiniMax.

Three scenarios, three valuations

The valuation of large models is much more complex than it seems.

For traditional software companies, PE or EV/Sales can usually give a rough estimate. But large-model companies are different: part platform, part infrastructure, with a bit of consumer internet in the mix. Their business model is still changing, and the timing of profitability is unclear.

Therefore, in this report, Goldman Sachs used a very typical investment framework: three-scenario valuation, covering a base case, an optimistic case, and a pessimistic case.

Let's start with the base case. This is the most "textbook" valuation method: DCF, the discounted cash flow model.

Goldman Sachs divided MiniMax's development into two stages: a detailed forecasting period lasting until 2030, and a stable growth period extending to 2035. The long-term terminal value is then calculated with the Gordon growth model.

For the key parameters, the discount rate is 12% and the perpetual growth rate is 2%. The report explains the 12%: the risk-free rate uses the 3-year US Treasury yield, about 3.3%, and the equity risk premium is about 7%, totaling approximately 12%.

In terms of business assumptions, the forecast is top-down: market size × market share. First estimate the future size of the global large-model market, then estimate how much of it the company can capture.

The report assumes that from 2026 to 2030, MiniMax's share of the global AI large-model subscription + API revenue pool grows by 0.3–0.7 percentage points each year, reaching 2.5% of that market by 2030.

By this logic, MiniMax's revenue in 2030 comes to $11.6 billion.

The company's profit inflection point will be in 2029. By then, the company's operating profit and free cash flow will turn positive. By 2030, the company's adjusted net profit will be $1.278 billion, and the free cash flow (FCF) will be $794 million.

In the stable period from 2031 to 2035, MiniMax is forecast to generate cumulative revenue of $157.771 billion, adjusted net profit of $26.7 billion, and free cash flow (FCF) of $21.262 billion. By 2035, the company's adjusted EBIT margin reaches 21%.

Discounting all the annual free cash flows from 2026 to 2035, plus the terminal value at the end of 2035, back to 2026 at a 12% WACC gives a discounted enterprise value of $41.067 billion. Note that this is enterprise value, not yet equity value.

Then add back the net cash on the books, corresponding to an equity value of approximately $41.8 billion, equivalent to HK$326.295 billion.

With a total share count of 321 million, the final target price works out to HK$1,018 per share, implying 38.5% upside from the current price.
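As a rough illustration, the two-stage mechanics above can be sketched in a few lines of Python. The report only discloses a few anchor points (FCF turns positive in 2029, $794 million in 2030, $21.262 billion cumulative over 2031–2035), so the year-by-year FCF path below is a hypothetical interpolation, not Goldman Sachs' actual model.

```python
# Illustrative two-stage DCF sketch of the base case described above.
# WACC and perpetual growth rate come from the report; the annual FCF
# path is an assumed interpolation between the few disclosed anchors.

WACC = 0.12      # discount rate per the report
G_PERP = 0.02    # perpetual growth rate (Gordon model)

# Hypothetical annual FCF in $bn. 2030 is the disclosed $0.794bn;
# 2031-2035 is an assumed path summing to roughly the disclosed $21.26bn.
fcf = {
    2026: -0.4, 2027: -0.3, 2028: -0.1,   # assumed: still burning cash
    2029: 0.2,                            # inflection year per the report
    2030: 0.794,
    2031: 1.8, 2032: 3.0, 2033: 4.5, 2034: 5.5, 2035: 6.5,
}

base_year = 2026
pv_explicit = sum(v / (1 + WACC) ** (y - base_year) for y, v in fcf.items())

# Gordon growth terminal value at end-2035, discounted back to 2026.
tv_2035 = fcf[2035] * (1 + G_PERP) / (WACC - G_PERP)
pv_terminal = tv_2035 / (1 + WACC) ** (2035 - base_year)

enterprise_value = pv_explicit + pv_terminal
print(f"PV of explicit FCF:   ${pv_explicit:.1f}bn")
print(f"PV of terminal value: ${pv_terminal:.1f}bn")
print(f"Enterprise value:     ${enterprise_value:.1f}bn")
```

With these made-up inputs the enterprise value lands in the low tens of billions; reproducing the report's $41.067 billion would require its undisclosed year-by-year forecasts. What the sketch does show is the typical DCF shape: most of the value sits in the terminal value, which is why the perpetual growth and discount rate assumptions matter so much.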

Of course, these are just assumptions. Given the high uncertainty in the large-model industry, Goldman Sachs also worked through optimistic and pessimistic scenarios.

In the optimistic scenario, MiniMax needs to meet three conditions:

First, the company reaches a 5% global market share by 2030, twice the base case.

Second, the model call volume will increase significantly, and the market share of Token consumption may reach 10%.

Third, product pricing reaches 50% of US SOTA model pricing (currently only about 10%).

In this case, assume that the company's ARR in 2027 will be $1.68 billion.

Referring to the valuation paradigm of the overseas AI leader Anthropic, the long-term enterprise value is computed with a comparable-company P/ARR multiple. The corresponding valuation is:

44 × $1.68 billion ≈ $73.9 billion.

Then discount it back to the present at an annualized discount rate of 12%, which is approximately $66 billion.

The last scenario is the pessimistic scenario.

If industry competition turns fierce, with large companies continuing to cut prices and the gap in model capabilities narrowing, assume that market-share growth essentially stalls: from 2027 to 2030, the global revenue share rises only 0.1–0.2 percentage points a year, reaching just 1.2% by 2030.

Then the company's valuation falls back to the logic of traditional AI software companies. Here the report uses the EV/Sales multiple, roughly analogous to a price-to-sales ratio.

Referring to the valuation level of traditional Chinese AI listed companies, it is about 17 times EV/Sales. If the company's revenue in 2027 is $980 million, then the enterprise value is:

17 × $980 million ≈ $16.6 billion. Then discount it back at a 12% discount rate, which is approximately $16 billion.
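Both multiple-based scenarios reduce to one multiplication and one discounting step. The sketch below uses the figures from the text; the assumption that both values are discounted back a single year at 12% is mine, and in the pessimistic case it lands slightly below the report's rounded ~$16 billion, so the report's actual discount period may differ.

```python
# Sketch of the two multiple-based scenarios described above.
# Revenue bases and multiples are from the text; the one-year
# discounting convention is an assumption.

WACC = 0.12

# Optimistic: P/ARR multiple benchmarked against Anthropic.
arr_2027 = 1.68            # $bn, assumed 2027 ARR
p_arr_multiple = 44
ev_optimistic = p_arr_multiple * arr_2027        # 44 x 1.68 = 73.92
pv_optimistic = ev_optimistic / (1 + WACC)       # one year's discount

# Pessimistic: EV/Sales in line with traditional Chinese AI software names.
sales_2027 = 0.98          # $bn, assumed 2027 revenue
ev_sales_multiple = 17
ev_pessimistic = ev_sales_multiple * sales_2027  # 17 x 0.98 = 16.66
pv_pessimistic = ev_pessimistic / (1 + WACC)

print(f"Optimistic:  EV ${ev_optimistic:.1f}bn -> PV ${pv_optimistic:.1f}bn")
print(f"Pessimistic: EV ${ev_pessimistic:.1f}bn -> PV ${pv_pessimistic:.1f}bn")
```

Note how wide the fan is: the same company, under different competitive assumptions, is worth anywhere from roughly $15 billion to $66 billion. That spread, not any single point estimate, is the real message of a three-scenario framework.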

All about competition

Beyond the valuation models, Goldman Sachs' report also flags several key variables that will shape MiniMax's value.

The first variable is the competitive landscape of the AI industry.

The question is actually very simple: in a large-model race where internet giants, cloud providers, and technology companies are all investing at once, do independent AI companies still have a chance to stand out?

Goldman Sachs' judgment is: There is still a chance.

The reason is that the AI industry is still in its early stage, and the technical path, business model, and product form have not fully converged. At this stage, although large companies have abundant resources, their organizational structures are often more complex, and the decision - making chain is longer. As a result, they may not necessarily be the most efficient players.

In contrast, if some independent AI companies can form advantages in technical routes, cost efficiency, and product rhythm, they still have a chance to build their own moats.

In Goldman Sachs' view, MiniMax's advantages are mainly concentrated in three aspects:

First, a full-modality native architecture;

Second, a global market layout;

Third, high computing-power efficiency and cost control.

These factors combined give MiniMax a certain organizational efficiency advantage when competing with Internet giants.

The second variable is the business model and the profit inflection point.

Goldman Sachs gave a very clear judgment in the report:

In the future, the most profitable part of the AI industry is likely not applications, but the multi-modality API platform.

Because the API is the most standardized, scalable, and infrastructure - like business form.

Currently, the gross profit margin of MiniMax's API business has reached 69%. As the call volume continues to increase and the computing power efficiency further improves, there is still room for the gross profit margin of this part of the business to rise.

Some consumer-facing products, such as social applications, are still in the investment stage in the short term and are even burning cash. But in the long run, the company is likely to gradually scale back this spending and concentrate resources on the high-margin API platform business.

If this path is successful, MiniMax's business structure will gradually approach that of a typical AI infrastructure company. Therefore, Goldman Sachs' overall judgment is relatively optimistic:

The company is likely to reach the profit inflection point around 2029.

Token will become the core pricing unit of AI assets

After talking about Goldman Sachs' report, let me share some of my observations.

Actually, this research report makes it easy to see that valuing AI companies is far more complex than valuing traditional software companies.

Just for MiniMax, Goldman Sachs used three different methods for cross-verification.

But from the perspective of the industrial structure, I think there is an interesting perspective worth adding:

Token consumption is likely to become an increasingly important indicator in the valuation of AI companies in the future.

Simply put, Token is becoming a new basic pricing unit.

The logic behind this is actually very clear, mainly for two reasons.

First, Token connects both the revenue side and the cost side.

On the revenue side, Token is currently the only unified pricing unit that can span all AI product forms.

Now, the business models of AI companies are very complex: there are API calls, subscriptions, task-based charging, and seat-based charging.

But behind all these charging methods, they ultimately boil down to the same underlying indicator: how many Tokens are consumed.

On the cost side, the situation is the same.

The AI inference cost can usually be broken down into a very simple formula:

Tokens consumed per request × compute cost per million Tokens ÷ 1,000,000.

In other words, Token is both a product unit and a cost unit.

This is very rare in the software industry to date. The "seat" in SaaS mainly corresponds to revenue, while the Token corresponds to both revenue and compute cost.
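A toy unit-economics sketch makes this dual role concrete. All prices and token counts below are hypothetical round numbers, not MiniMax's actual pricing; they are chosen so the margin lands near the 69% API gross margin quoted earlier.

```python
# Toy illustration of the Token as both a revenue unit and a cost unit.
# All figures are hypothetical round numbers for illustration only.

def request_cost(tokens: int, cost_per_million: float) -> float:
    """Inference cost of one request: tokens x unit compute cost."""
    return tokens * cost_per_million / 1_000_000

def request_revenue(tokens: int, price_per_million: float) -> float:
    """API revenue of one request under per-token pricing."""
    return tokens * price_per_million / 1_000_000

tokens = 50_000   # e.g. an agent task, far above a short chat turn
price = 2.00      # $ charged per 1M tokens (assumed)
cost = 0.60       # $ of compute per 1M tokens (assumed)

revenue = request_revenue(tokens, price)
spend = request_cost(tokens, cost)
gross_margin = (revenue - spend) / revenue
print(f"revenue ${revenue:.3f}, cost ${spend:.3f}, margin {gross_margin:.0%}")
```

The same variable, `tokens`, drives both lines: that is what makes the Token a two-sided unit of account in a way a SaaS seat never was.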

Second, Token is actually closer to the "real output" of an AI company.

Real AI-native usage is about who can solve complex problems by spending large numbers of Tokens. The deeper AI is embedded in a business, the higher the share of Token usage, and multi-modality and agent applications will push Token consumption up significantly.

In a simple dialogue scenario, a single request may only consume dozens to hundreds of Tokens.

But when AI starts actually performing tasks, such as writing a complete program or generating a complex report, Token consumption can be tens or even hundreds of times that of a chat scenario.

Some industry insiders even predict that the Token usage will increase by at least 10 times in 2026.

This directly brings a change: the average spend per customer for AI applications has risen significantly.

In the past, consumer subscriptions typically cost $20 per month, or $200 per month at the premium tier.

But now, real usage data inside some teams shows per-person Token consumption already costing on the order of $500 per month.

As agents take on more and larger tasks, AI agent services priced at $1,000 per month or even higher have already appeared in the market, with more on the way.

But the story doesn't end here. When Token consumption starts to become an important indicator for evaluating AI companies, a new discussion also emerges:

Tokens are not completely equivalent to each other.

Take a very intuitive example. A year ago, asking GPT-4 to write a complex piece of code might take 1,000 Tokens, while a small model might need 3,000 Tokens to get it right.

But now, the capabilities of small models have been greatly improved, and it may only take 1,200 Tokens to complete the same task.

What does this mean? The value density of small-model Tokens is increasing.

Similarly, the capabilities and cost structures of Tokens in the cloud and on the edge side are also completely different.

In the cloud, the cost of Tokens mainly comes from GPU computing power; while on edge devices, the cost structure of Tokens depends more on hardware efficiency and model compression.

That is to say, Tokens in the future AI world are not a completely homogeneous resource.

In this context, Silicon Valley investor Tomasz Tunguz put forward an interesting view:

The really important indicator is not the number of Tokens, but the gross profit per Token.

In other words, the real business ability of an AI company does not lie in how many Tokens it generates, but in how much money it can earn for each Token.

What's more interesting is that when Tunguz statistically compared AI company valuations against Token indicators, a clear pattern did emerge.

In log-log coordinates, the correlation coefficient between "gross profit per Token" and company valuation reaches 0.70, while the correlation between total Token count and valuation is only 0.47.
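The shape of Tunguz's comparison is easy to reproduce: take logarithms of both axes and compute a Pearson correlation. The five data points below are invented for illustration; they are not the dataset behind the 0.70/0.47 figures and will not reproduce them.

```python
# Sketch of a log-log correlation between valuation and token metrics.
# All data points are invented for illustration only.
import numpy as np

valuation = np.array([3e9, 1.5e10, 6e10, 1.8e11, 3.5e11])            # $, hypothetical
gp_per_token = np.array([2e-7, 8e-7, 3e-6, 1e-5, 2.5e-5])            # $, hypothetical
total_tokens = np.array([8e12, 2e12, 5e13, 1e13, 9e13])              # hypothetical

def loglog_corr(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation of log10(x) against log10(y)."""
    return float(np.corrcoef(np.log10(x), np.log10(y))[0, 1])

print("gross profit/token vs valuation:", round(loglog_corr(gp_per_token, valuation), 2))
print("total tokens vs valuation:      ", round(loglog_corr(total_tokens, valuation), 2))
```

In this toy dataset, profit-per-token tracks valuation more tightly than raw token volume does, which is the qualitative pattern the 0.70-versus-0.47 comparison describes.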

If we put these clues together, we can actually see an interesting trend:

The real "production capacity" of an AI company is actually the ability to generate and consume Tokens. And measuring the quality of its business model has become how much money this company can earn for each Token it produces.

No matter how the model and product form evolve, Token is destined to be the most core variable in the value of AI companies.

This article is from the WeChat official account "Silicon-based Observation Pro" (author: Lin Bai), published by 36Kr with authorization.