When AI Takes Away Decision-Making Power: Three Truths Behind the Agent Economy
Recently, several Y Combinator partners recorded a podcast discussing an intriguing phenomenon:
With the rise of OpenClaw, a "parallel economic system" made up of agents is taking shape.
This is not merely an efficiency gain; it is a change in who the actors are.
In the past, software was just a tool and humans were the decision-makers. Whether selecting suppliers, subscribing to services, or building a technology stack, humans always made the final call.
Now, more and more ordinary users, even those without a technical background, are treating AI agents as stand-ins: asking them to search, compare, filter, and negotiate, and even to complete subscriptions and deployments directly.
Agents are no longer just executing instructions; within a defined scope, they are making judgments on behalf of humans. As this trend expands, structural changes begin to appear.
First, a new kind of "buyer" has emerged in the software market: the countless agents running in the background. Together they constitute an "agent economy" parallel to the human economy.
Second, the focus of infrastructure is shifting. Agents need identities, permissions, and interfaces. Email, account systems, and payment capabilities are being redesigned from "for humans" to "for agents," and the infrastructure layer around agents is being rebuilt.
Third, the interaction model is evolving. When large numbers of agents start interacting with each other, an online community centered on agents forms. They cooperate, exchange information, and even leave transaction records. Activity is no longer entirely centered on humans.
These changes may be the truly important aspects of the agent economy.
01 Decision-making power shifts, and "applications" cease to exist
Rewind the clock a year, and the mainstream developer-tool experience was still stuck at "more advanced auto-completion," such as the competition between Cursor and Windsurf.
In essence, both were improving the efficiency of writing code, but humans still controlled every key step.
The change brought by Claude Code is that the decision - making power starts to shift.
A typical scenario: someone runs four or five agent windows simultaneously every night and switches between them, no longer micromanaging each one.
Humans set goals rather than review every line. Agents are less like tools and more like colleagues working in parallel.
Once this experience takes hold, the impact is not limited to engineer productivity. It spreads outward: even non-technical CEOs are starting to use OpenClaw to automate entire business processes directly.
Peter Steinberger once made a key judgment: AI is not just answering questions; it is starting to genuinely manipulate its environment. When the model can read files, write code, call APIs, and run command lines, it is no longer just an assistant but an actor capable of executing tasks.
This means more than a lower barrier to development; more importantly, the way software is made is changing.
In Peter's view, AI itself is an actor that can continuously solve problems. In this structure, "applications" matter less and less. The value of a great many applications boils down to managing data, sending reminders, and recording behavior, and all of those functions can be swallowed by the agent layer.
Take fitness apps, for example. In the past, you needed a standalone app to log workouts, remind you to check in, and generate plans. Now an agent can understand your goals, track the data automatically, and even adjust the training program based on results.
For most people, the key is not "which app to use" but "whether the goal is achieved." When the goal becomes the center, applications move to the background.
Therefore, products that only do data management and process reminders are at the greatest risk. Products with hardware, sensors, and offline touchpoints are harder to replace: they connect directly to the real world rather than living only at the data layer.
When the application layer is compressed and models tend to be homogeneous, what is the remaining moat?
The answer turns to "personal data."
A key advantage of OpenClaw is its emphasis on data localization and the long-term accumulation of personal memory. With locally stored data, it builds a personal memory system that accumulates continuously, where your past behaviors, preferences, and ways of making decisions all settle over time.
Peter also mentioned the concept of a "soul file." Think of it as a set of core values and behavioral guidelines: defining how the AI interacts with you, how it weighs trade-offs in conflicts, and what it prioritizes when facing choices.
It is, in effect, the agent's "personality setting" and "principle framework," determining its tone, style, and even decision-making logic.
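As a purely illustrative sketch, a "soul file" might read like the following. The headings and rules here are invented for illustration; they are not OpenClaw's actual format.

```markdown
# Soul file (illustrative sketch)

## Values
- Protect the user's time and attention; batch low-priority notifications.
- Never spend money or share personal data without explicit confirmation.

## Trade-offs
- When speed conflicts with accuracy, prefer accuracy and say so.

## Tone
- Direct and concise; no filler.
```

The point is less the file format than the idea: a durable, human-readable statement of principles that travels with the agent across tasks.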
When applications become lighter and models converge, personal memory and value frameworks may become new core assets.
02 Feeding information to AI agents: the first golden track of the agent economy
As the agent economy starts to rise, an essential change is that tools are no longer chosen only by humans; agents are becoming the new "decision-makers."
In the past, how were development tools chosen? Mainly through human networks: word of mouth in the developer community, GitHub trending lists, recommendations on technical blogs, and exposure at offline conferences.
These mechanisms still exist today, but they are being supplemented by a new distribution path: default recommendations from agents.
More and more development decisions are made not by a CTO or engineer after careful one-by-one comparison, but by agents automatically selecting tools, services, and interfaces in the background based on context. Whoever gets called by default is more likely to become part of the "standard stack."
You can even frame it this way: a new buyer group has emerged in the software market, the countless agents running in the background, constituting an "agent economy" parallel to the human economy.
Agents make decisions, select tools, and choose service providers on behalf of humans, and their choices directly affect the flow of orders and the ecological pattern.
Interestingly, though, agents are not naturally "optimal decision-makers." They too are shaped by the structure of the information they see.
For example, Claude Code sometimes defaults to older versions of tools (such as Whisper v1) instead of faster, cheaper alternatives. The reason is not necessarily a lack of capability, but that the older tools' documentation is easier to parse: clearer structure, more complete examples.
This reveals two signals:
First, agents' "selection mechanism" is still early and far from fully optimized.
Second, this is precisely an opportunity for entrepreneurs. If agents judge by documentation structure, interface clarity, and example completeness, then product design should be upgraded from "human-friendly" to "agent-friendly." Whoever is easier for agents to understand and call is more likely to become the default answer.
The first batch of beneficiaries may be companies that can provide clear information to agents.
A clear signal: documentation is the first place this change shows up.
Over the past 12 months, the number of newly created databases (such as Postgres instances) has risen significantly. Partly this is because more people are building applications; partly it is because agents are selecting technology stacks automatically in the background, driving up demand for databases, hosting, and development platforms simultaneously.
Take Supabase. One reason it tends to be the default choice is that its documentation is clear and well structured, and its examples can be executed directly. For agents, ease of parsing and calling often matters more than brand awareness.
Resend, a company from Y Combinator's Winter 2023 batch, is another typical case. Its founder found that when users ask ChatGPT or Claude "how do I send emails from a web application," the models often recommend Resend by default.
He further discovered that ChatGPT had become one of the company's top three customer-conversion channels. After realizing this, the team actively optimized the documentation to make it more "agent-friendly."
"Agent-friendly documentation" here means that Resend organized many of the questions a human or agent might ask in question form and provided highly structured, bullet-point answers.
Moreover, each example contains clearly structured code snippets that agents can parse directly.
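As a hedged illustration of this pattern (the endpoint, fields, and key below are invented for illustration, not Resend's actual API), an "agent-friendly" doc section might look like:

```markdown
## How do I send an email from a web application?

- Create an API key in the dashboard.
- POST JSON to the `/emails` endpoint with `from`, `to`, `subject`, and `html`.
- A successful response returns an `id` confirming the message was queued.

Example request:

    curl -X POST https://api.example.com/emails \
      -H "Authorization: Bearer $API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"from": "me@example.com", "to": "you@example.com",
           "subject": "Hello", "html": "<p>Hi there</p>"}'
```

A question-form heading, bullet-point answer, and copy-pasteable snippet are exactly the structure an agent can parse without guessing.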
Beyond Resend, Mintlify is another fairly clear case.
Mintlify builds better API documentation tooling, which used to be a "developer-experience bonus." Now it may become a necessity for developer-tool companies, because documentation needs to be optimized for agents to parse and call, not just for humans to read.
Given an exponential increase in the number of agent decisions, a 5% improvement in documentation parsability can translate into a significant difference in distribution.
03 Reducing friction costs, the rise of agent infrastructure
When agents start to do things on behalf of humans, a new infrastructure requirement emerges: agents need independent identities and permissions.
A batch of startups dedicated to serving AI agents has now emerged.
For example, Agent Mail is a company that specializes in creating inboxes for AI agents.
Traditional email (such as Gmail) is designed for humans. To prevent spam and bot abuse, it deliberately raises the bar for automation: risk-control checks, CAPTCHAs, and rate limits stack up.
These security mechanisms, safeguards from a human point of view, become friction costs for agents.
If AI agents are really going to handle registration, communication, verification, transactions, and the rest on behalf of humans, they need an email interface that won't get blocked for being automated.
Agent Mail provides identity infrastructure for agents. As agents become new economic participants, the identity layer must be rebuilt, and email happens to be the first building block of that system.
Similar problems extend to agents' phone numbers (a "Twilio for agents"), agents' account systems, permission systems, payment systems, and the interfaces between agents and the real world: booking restaurants, making phone calls, even hiring humans to stand in line offline.
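To make "agents need identities and permissions" concrete, here is a minimal, purely hypothetical sketch in Python. The class, field names, and scope strings are invented for illustration; no real identity provider's API is implied.

```python
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """Hypothetical identity record for an agent acting on a human's behalf.

    The owner stays accountable; the agent gets its own id and an explicit,
    limited set of permission scopes instead of borrowing the human's account.
    """
    agent_id: str
    owner: str
    scopes: set[str] = field(default_factory=set)

    def can(self, action: str) -> bool:
        # An action is allowed only if its scope was granted explicitly.
        return action in self.scopes


# An inbox agent that may read and send email, but nothing else:
bot = AgentIdentity(
    agent_id="inbox-bot-01",
    owner="alice@example.com",
    scopes={"email:send", "email:read"},
)
```

The design point is the separation: the agent is a first-class economic participant with its own identity, while its authority is scoped and auditable.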
In other words, the core of agent-economy infrastructure is continuously reducing friction costs.
This logic also shows up in OpenClaw's technical direction. Many see MCP as the likely "standard interface of the agent era."
But Peter Steinberger prefers another path: let agents use the existing human toolchain directly instead of reinventing a whole set of agent-specific protocols.
In his view, many so-called "new interfaces designed for agents" essentially just add an abstraction layer and complexity.
Rather than constructing a ritualized set of agent protocols, it is better to let agents enter the existing ecosystem directly: use the CLI, call Unix tools, read and write files, run scripts. Unix itself was designed for composability, and agents, being programs, embed naturally into it.
While agents are still evolving rapidly, fewer abstraction layers and human-imposed constraints mean a faster feedback loop. Agents can call the CLI directly and compose the millions of tools already in the world, without waiting for agent-specific interfaces to catch on.
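A minimal sketch of this "agents use the existing toolchain" idea, assuming only the Python standard library: instead of a bespoke agent protocol, the agent shells out to ordinary CLI commands and gets back structured results it can reason over. The helper name and return shape are invented for illustration.

```python
import shlex
import subprocess


def run_tool(command: str, timeout: int = 30) -> dict:
    """Run a CLI command on the agent's behalf and return structured output.

    Any of the existing Unix tools becomes callable this way, with no
    agent-specific interface required.
    """
    result = subprocess.run(
        shlex.split(command),   # split the command safely into argv
        capture_output=True,    # collect stdout/stderr for the agent to parse
        text=True,              # decode bytes to str
        timeout=timeout,        # don't let a hung tool stall the agent
    )
    return {
        "command": command,
        "exit_code": result.returncode,
        "stdout": result.stdout.strip(),
        "stderr": result.stderr.strip(),
    }


# Composing an existing tool rather than inventing a new interface:
listing = run_tool("echo hello-agent")
```

Because everything is plain processes, files, and text streams, the agent inherits decades of composable tooling for free.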
That is to say, the growth rate of the agent economy often depends on whether the friction is low enough.
04 Collective intelligence replaces super-intelligence
We can already see a prototype of the agent economy: agents replying to each other, cooperating to complete tasks, even leaving real "transaction records." In a sense, it looks more like an early-stage social network than a polished product.
This also leads to a more macroscopic discussion: what form will the future of AI take?
Over the past few years, the mainstream narrative has leaned toward "centralized super-intelligence": a single model with an ever-larger parameter count, ever more concentrated compute, and continuously stacked capabilities. The implication is that with enough scale, it could approach a "God's-eye view" of intelligence.
But now, another path is emerging: collective intelligence.
Human civilization advanced not because an all-powerful individual appeared, but because a network of deep division of labor and cooperation formed. No one can build an iPhone, land on the moon, or sustain modern society alone.
What really generates productivity is division of labor, mechanisms of cooperation, and the recording and accumulation of knowledge.
Mapping this logic onto the AI world, the future form of intelligence may not be one super-model but a network of agents cooperating with each other.
Multiple relatively inexpensive models each take on different roles and cooperate through interfaces to complete complex tasks. They divide labor, share memories, and coordinate tasks, operating like a society.
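The division-of-labor idea above can be sketched as a toy pipeline: three narrow "agents" (ordinary functions standing in for separate models) that each do one job and cooperate through a shared memory. All names are hypothetical; this illustrates the organizational pattern, not any real framework.

```python
# Shared memory: the record of cooperation that accumulates between agents.
shared_memory: dict[str, str] = {}


def researcher(task: str) -> None:
    """Narrow role 1: gather raw material (stubbed) and write it to memory."""
    shared_memory["notes"] = f"notes on: {task}"


def writer() -> None:
    """Narrow role 2: turn another agent's notes into a draft."""
    shared_memory["draft"] = shared_memory["notes"].upper()


def reviewer() -> str:
    """Narrow role 3: sign off, leaving a trace of the cooperation."""
    shared_memory["log"] = "reviewed"
    return shared_memory["draft"]


# No single role can produce the result alone; the capability lives in
# the organization of the group, not in any one member.
researcher("agent economy")
writer()
result = reviewer()
```

Each function is deliberately weak on its own; the pipeline plus the shared, persistent memory is what produces the outcome, which is the article's point about organization over individual capability.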
From this perspective, an "agent community" like Moltbook may look more like the prehistory of a civilization: chaotic and unstable, but crucially, interactions are starting to be recorded and cooperation is starting to accumulate. A history between agents is being formed.
This path differs markedly from the past few years' pursuit of centralized super-intelligence. It emphasizes how intelligence is organized rather than individual capability.
Even if AI is generally intelligent, it can still be organized into specialized groups. Like human society, division of labor and cooperation can produce capabilities far beyond the ceiling of any single individual.
What is really exciting is not just stronger models, but intelligence beginning to operate as a network.
This article is from the WeChat official account "Silicon-based Observation Pro", author: Silicon-based Jun, published by 36Kr with authorization.