Morgan Stanley: The market is underestimating the potential "big gains from AI" in the coming year, but key uncertainties remain.
A significant leap in AI capabilities driven by computing power may be in the making.
According to Hard AI, Morgan Stanley said in a recent report that the market may have seriously underestimated a major positive development in the field of artificial intelligence expected to emerge in 2026: a "non-linear" leap in model capabilities driven by exponential growth in computing power.
According to the report, written by analysts including Stephen C Byrd, several large US developers of large language models (LLMs) plan to increase the computing power used to train cutting-edge models roughly 10-fold by the end of 2025. This unprecedented investment in computing power is expected to bear fruit in the first half of 2026, constituting an "underappreciated catalyst".
The report cited Tesla CEO Elon Musk's view that a 10-fold increase in computing power could double a model's "intelligence". It pointed out that if the current "scaling law" holds, the consequences could be "seismic", broadly impacting the valuation of assets ranging from AI infrastructure to the global supply chain.
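Musk's rule of thumb, as relayed by the report, implies a power-law relationship between compute and capability: if every 10x in compute doubles "intelligence", then capability scales as compute raised to log10(2) ≈ 0.301. A minimal sketch of this arithmetic, assuming that stylized rule (the function name and the power-law framing are illustrative, not from the report):

```python
import math

def capability_multiplier(compute_multiplier: float) -> float:
    """Capability gain implied by a rule of thumb in which every
    10x increase in training compute doubles model "intelligence":
    capability ~ compute ** log10(2)."""
    return compute_multiplier ** math.log10(2)

# Under this rule, 10x compute doubles capability,
# and 100x compute quadruples it.
print(round(capability_multiplier(10), 6))   # 2.0
print(round(capability_multiplier(100), 6))  # 4.0
```

Note that this framing also shows why the bet is expensive: each further doubling of capability requires another order of magnitude of compute.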
However, this optimistic outlook is not guaranteed. The report emphasized that the core uncertainty is whether AI development will hit a "Scaling Wall": the disappointing outcome in which, after a huge investment in computing power, gains in model capability diminish sharply.
01 A Tenfold Increase in Computing Power May Catalyze a Leap in AI Capabilities
The report believes that investors need to prepare for a step-change improvement in AI capabilities that may occur in 2026.
The report described the coming scale of computing power: a 1,000-megawatt data center built from Blackwell GPUs would deliver more than 5,000 exaFLOPs (5×10^21 floating-point operations per second). By contrast, the US government's Frontier supercomputer delivers just over 1 exaFLOP. This magnitude of growth in computing power is the core basis for the market's expectation of a non-linear improvement in AI capabilities.
The report said that although many LLM developers broadly agree that investment in computing power will improve capabilities, skeptics argue there may be an upper limit to the intelligence, creativity, and problem-solving ability of cutting-edge models.
02 The Debate on the "Scaling Wall": The Key Uncertainty in AI Progress
Although the prospect is exciting, the report also clearly flagged the key risk: the possible existence of a "Scaling Wall".
This concept refers to the scenario in which, once invested computing power passes a certain threshold, gains in a model's intelligence, creativity, and problem-solving ability diminish sharply, with disappointing results. This is currently the biggest uncertainty in the field of AI: many skeptics believe that simply adding computing power may not keep producing leaps in intelligence.
However, the report also mentioned positive signals. A recent research paper, "Demystifying Synthetic Data in LLM Pretraining", jointly published by teams from Meta, Virginia Tech, and Cerebras Systems, found no sign of the performance degradation known as "model collapse" when pretraining at scale on synthetic data, at least within the scales examined.
This finding is encouraging because it implies that there is still room for continuous improvement in model capabilities after a significant increase in computing power, and the risk of hitting the "Scaling Wall" may be lower than expected.
In addition, the report also listed other key risks, including financing challenges for AI infrastructure, regulatory pressure in regions such as the EU, power bottlenecks faced by data centers, and the possibility of LLMs being misused or weaponized.
03 How Will Global Asset Valuations Be Reshaped?
If AI capabilities do achieve a non - linear leap, how will asset values be reshaped? The report believes that investors should start evaluating its multi - faceted impact on asset valuations and pointed out four core directions:
Firstly, AI infrastructure stocks, especially companies that can relieve data centers' growth bottlenecks. The report argues that if AI can address a larger share of global GDP at lower cost and higher performance, the infrastructure supporting that value creation will also appreciate significantly.
Secondly, the US-China supply chain. Intensifying AI competition may prompt the US to accelerate "decoupling" in areas such as critical minerals.
Thirdly, stocks of AI adopters with pricing power. The report estimates that AI applications could create roughly $13 trillion to $16 trillion of market value for the S&P 500. But not all companies will benefit equally: those with strong pricing power can convert AI-driven efficiency gains and cost savings into real profits, retaining most of the benefit.
Finally, over the longer term, the relative value of hard assets that AI cannot "cheaply replicate", such as land, energy, and specific infrastructure, may increase:
Physically scarce assets: Such as waterfront real estate, land in specific locations, energy and power assets (especially power plants that can support data centers), transportation infrastructure (airports, ports), minerals, and water resources.
Regulatory scarce assets: Such as various protected licenses and franchises.
Proprietary data and brands: A strong IP library and unique brand image.
Unique luxury goods and human experiences: Such as sports events and music performances.
This article is from the WeChat official account "Hard AI", author: Focusing on technology R & D. It is published by 36Kr with authorization.