
How did MCP, which has just celebrated its first birthday, suddenly fall out of favor in the AI scene?

三易生活 · 2025-12-08 18:46
Apart from its universality, MCP is rather unremarkable in every other respect.

A little while ago, on November 25th, the AI unicorn Anthropic published an article celebrating the first birthday of MCP (Model Context Protocol). Today, however, the AI industry is largely indifferent to the occasion: discussion of the news on social media is almost non-existent.

Interestingly, MCP dominated AI-industry headlines at the beginning of this year. Nearly every commentator proclaimed that "MCP connects AI with everything", that "AI finally has its own USB port", and that it was "infrastructure for the agent era". Yet just six months later, MCP has gone from industry darling to disappointment.

Why was MCP lifted so high, and why did it fall so quickly? Its early success was shaky from the start, a textbook product of the hype cycle's phase of inflated expectations. It is also worth noting that MCP did not take off immediately upon release; its rise followed a different path from that of ChatGPT or DeepSeek.

Anthropic released MCP in the winter of 2024, but it did not draw broad attention until the spring of this year. Given the current level of interest in AI, a product with genuine organic buzz becomes known worldwide within days or weeks, as Google's Nano Banana did. That MCP only made AI-industry headlines in the spring looks more like a coordinated push by large players such as Anthropic, Google, and Microsoft, a manufactured wave of hype.

MCP is meant to resolve the fragmentation in which AI products from different vendors operate in isolation, making the interaction between AI models and external tools complicated and unstable. Before MCP, agents built on different models "spoke" different languages, so developers had to write a separate API integration for each model-tool combination to get anything useful done.
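As a rough illustration of that fragmentation (the vendor names and payload formats below are invented for the example, not any real API), each model family expected its own tool-call format, so every tool needed one adapter per model:

```python
import json

# Hypothetical sketch of the pre-MCP situation: every model family expects
# its own tool-call payload, so each tool needs one adapter per model.

def call_weather_for_vendor_a(city: str) -> dict:
    # Vendor A wants {"function": ..., "arguments": {...}}
    return {"function": "get_weather", "arguments": {"city": city}}

def call_weather_for_vendor_b(city: str) -> dict:
    # Vendor B wants {"tool_name": ..., "input": "<json string>"}
    return {"tool_name": "weather.lookup", "input": json.dumps({"location": city})}

# With N models and M tools, developers end up maintaining roughly N x M
# of these adapters -- exactly the glue code MCP set out to eliminate.
if __name__ == "__main__":
    print(call_weather_for_vendor_a("Berlin"))
    print(call_weather_for_vendor_b("Berlin"))
```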

Anthropic therefore developed MCP (Model Context Protocol) to let large language models (LLMs) integrate with external data sources and tools through a standardized interface. MCP is meant to be something like a USB-C port for AI applications: through steps such as capability negotiation, capability discovery, and subscription/notification, the model learns which tools and data are available and how those resources can be used.
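MCP is specified on top of JSON-RPC. The snippet below is an illustrative sketch of the discovery and invocation messages, with a made-up `get_weather` tool; the real protocol also includes an initialization handshake, resources, prompts, and notifications:

```python
import json

# Simplified sketch of MCP's JSON-RPC exchanges; field values are illustrative.
# 1. The client asks the server which tools it offers ...
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ... and gets back tool definitions with a name, description, and input schema.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]
    },
}

# 2. When the model decides to use the tool, the client sends a call request.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

for msg in (list_request, list_response, call_request):
    print(json.dumps(msg, indent=2))
```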

MCP thus builds a direct bridge between AI, data, and tools: with MCP servers exposing capabilities and MCP clients consuming them, something like an "Internet of Things" for the AI industry becomes possible. Today's internet, after all, is built on openness and interconnection, and MCP follows the path of standardized protocols such as TCP/IP, HTTP, and USB.
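For the server side of that bridge, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK (this assumes the `mcp` package is installed; the `get_weather` tool is a toy placeholder):

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper
# (pip install mcp); the tool itself is a toy example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (stand-in for a real lookup)."""
    return f"It is sunny in {city} today."

if __name__ == "__main__":
    # Serves the tool over stdio; any MCP-capable client can discover
    # and call it without model-specific glue code.
    mcp.run()
```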

It is not hard to see that MCP is, at its core, a protocol for agents, giving them a path to "real capabilities". That is also why it took off at the beginning of this year: first came the declaration that "2025 is the year of agents", and then MCP stepped into the spotlight. Promoting MCP was a consensus among the large AI companies.

On the last day of 2024, OpenAI CEO Sam Altman previewed the technologies and products his company would release in 2025. He mentioned AGI (artificial general intelligence) first, followed by agents, and highlighted that a key milestone for 2025 would be ChatGPT performing tasks autonomously.

Without MCP, developers would have to spend far too much time and energy to make agents useful. MCP gives agents a unified standard for tool calls, freeing developers from tedious adaptation work. Within three months, thousands of tools had connected to MCP, and with strong backing from OpenAI, AWS, and Hugging Face, MCP genuinely looked like a success.

However, developers who treated MCP as a universal key soon found that things played out differently. MCP has no built-in context tracking, so developers cannot tell which tools were actually invoked along the model's decision path. Nor does it have a mechanism for propagating deadlines: if a called tool hangs or fails, the agent simply blocks on it.
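Because the protocol itself carries no deadline, the usual stopgap is to enforce timeouts on the client side. A minimal sketch, assuming an async client object whose `call_tool` coroutine performs the MCP request (the names are illustrative):

```python
import asyncio

# Minimal sketch of a client-side timeout around a tool call. The `session`
# object and its `call_tool` coroutine stand in for whatever MCP client
# library is in use; the protocol itself does not propagate a deadline.
async def call_with_deadline(session, name: str, arguments: dict, timeout_s: float = 10.0):
    try:
        return await asyncio.wait_for(session.call_tool(name, arguments), timeout=timeout_s)
    except asyncio.TimeoutError:
        # Without this guard, a hung tool would block the whole agent loop.
        return {"error": f"tool '{name}' exceeded {timeout_s}s deadline"}
```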

On the technical side, the hardest part of deploying MCP is the cloud. Enterprise users typically need to scale the MCP service across multiple servers to handle high concurrency, and there MCP's two-connection model (a long-lived server-to-client stream plus separate client-to-server requests) introduces the complexity of multi-machine routing: if the persistent connection lives on one server while a request is forwarded to another, an additional broadcast or queue mechanism is needed to coordinate the distributed connections. That significantly raises both implementation difficulty and maintenance cost.
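A sketch of that routing problem and the usual fix: when the instance that receives a request is not the one holding the client's long-lived stream, the result has to be published on a shared channel that the right instance listens to. The in-memory Broker below is a stand-in for Redis pub/sub or a message queue; all names are illustrative.

```python
import asyncio
from collections import defaultdict

# Stand-in for Redis pub/sub or a message queue shared by all server instances.
class Broker:
    def __init__(self):
        self.channels: dict[str, asyncio.Queue] = defaultdict(asyncio.Queue)

    async def publish(self, session_id: str, message: str):
        await self.channels[session_id].put(message)

    async def subscribe(self, session_id: str) -> str:
        return await self.channels[session_id].get()

broker = Broker()

# Instance A holds the client's persistent (SSE-style) stream for this session.
async def instance_a_stream(session_id: str):
    message = await broker.subscribe(session_id)
    print(f"instance A pushes to client over its open stream: {message}")

# Instance B happens to receive the POST request for the same session,
# so it must publish the result instead of answering directly.
async def instance_b_handle_post(session_id: str):
    result = '{"jsonrpc": "2.0", "id": 2, "result": {"content": "sunny"}}'
    await broker.publish(session_id, result)

async def main():
    await asyncio.gather(instance_a_stream("sess-42"), instance_b_handle_post("sess-42"))

asyncio.run(main())
```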

Moreover, MCP is expensive. When an agent uses external tools, the gathered information has to flow back to the base model for decision-making, so MCP effectively requires all tool definitions, call requests, and return values to pass through the model's context window. As a direct consequence, the context the model must process balloons as the number of registered tools and MCP calls grows.
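A back-of-the-envelope sketch of that overhead (every per-item token count below is an assumption, not a measurement): since the full tool catalogue and the accumulated call history are re-sent on every turn of the agent loop, the prompt grows with both the number of registered tools and the number of calls already made.

```python
# Rough token-cost sketch; the per-item counts below are assumptions, not measurements.
TOKENS_PER_TOOL_DEF = 150        # each tool's name, description, and JSON schema
TOKENS_PER_CALL_ROUNDTRIP = 400  # one call request plus the tool's returned payload

def prompt_tokens(num_tools: int, calls_so_far: int, base_prompt: int = 1_000) -> int:
    """Tokens re-sent to the model on a single turn of the agent loop."""
    return base_prompt + num_tools * TOKENS_PER_TOOL_DEF + calls_so_far * TOKENS_PER_CALL_ROUNDTRIP

# With 5 tools the loop stays cheap; with 50 tools and a longer call history,
# every single turn already carries a five-figure prompt.
for tools, calls in [(5, 2), (20, 5), (50, 10)]:
    print(f"{tools} tools, {calls} prior calls -> {prompt_tokens(tools, calls)} tokens per turn")
```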

In short, developers found that while MCP lets their agents call all kinds of tools at will, the more tools get called, the more tokens get burned. Reining in token consumption means imposing very strict rules on which tools may be called and when, at which point the flexibility and universality that are MCP's main selling points can no longer be fully exploited.

In truth, these are the minor problems. MCP's real weakness is that the probability of an agent hallucinating rises in step with the number of tools it can call: the more tools in play, the more the model's attention is diluted, and the more it starts making arbitrary decisions. Unlike an AI chatbot, an agent has to get work done, and excessive hallucination makes it useless for that.

Once developers realized that, apart from its universality, MCP had little else to recommend it, its accumulated flaws were reason enough to stop using it.

[The pictures in this article are from the Internet]

This article is from the WeChat account "3e Life" (ID: IT-3eLife). Author: 3e Jun. Published by 36Kr with permission.