
New ideas for AI agent governance: Using "humans" to govern "machines" and building dedicated data channels

互联网法律评论 (Internet Law Review) · 2025-12-10 18:28
When nearly half of the online traffic comes from AI agents, traditional governance models have become ineffective. This article proposes a fundamental change: regarding agents as independent "actors" and constructing a new governance framework through identity verification and dedicated data channels, while leaving room for technological innovation.

Currently, we face two relatively independent yet interrelated issues in the field of AI agent governance: one is the governance of AI agents themselves, and the other is the management of the special "barrier-free" (accessibility) access permission. Discussing the two together often complicates matters and makes it hard to reach clear conclusions. If, however, we can clarify the governance of AI agents themselves, the question of AI agents using "barrier-free" access becomes much easier to resolve.

I. Fundamental Changes Brought by AI Agent Technology

In technical discussions of combining AI with agents, the term "intelligent agent" is more commonly used. It carries a weaker legal connotation than "agent" and more accurately reflects the technical essence.

Fundamentally, the emergence of intelligent agents has changed the underlying logic of building information systems.

The traditional way of building an information system is to first sort out the business logic and business processes and then implement them through the system, organizing it around functions and processes as the key elements. However, when facing an open environment and uncertain challenges, the functional boundaries between systems are often unclear and no well-defined, reasonable business process has formed, so this traditional approach falls short.

Humans themselves are highly capable of coping with uncertain external environments. Take smartphone UI interaction as an example: users can easily grasp the meaning of an app interface they are using for the first time and complete the interaction. When facing more complex tasks, such as urban governance or military operations, humans usually form teams, define roles, and clarify functions, so that each person works according to the logic of their role and the process is continuously optimized. In other words, humans solve complex problems by organizing themselves: the construction of the organization precedes the realization of functions and the design of processes. This model helps bring out individual initiative in uncertain environments and adapts better to executing complex tasks in open, changeable settings.

From this perspective, AI agents, like humans, can respond flexibly to their environment and carry out the relevant operations, which makes them increasingly resemble "people" with specific roles and functions and makes human-machine collaboration more natural. This means the traditional way of building information systems is undergoing a fundamental change: humans no longer intervene in information systems through the traditional logic of functions and processes; instead, they interact with them much as they would with employees, giving instructions, receiving feedback, and making adjustments, so that the interaction between humans and information systems continues like interaction between people.

If we recognize that intelligent agents play a role similar to "people" in the technical system and perform functions similar to those of people, then they should be managed in a way similar to how we manage people.

However, the current problem is that we do not manage intelligent agents as "independent behavioral subjects" but still constrain them from the perspective of "behavior rules". This governance logic may no longer meet the development needs of new types of AI agents.

II. Reconstruction of the Governance Framework: From Identity Recognition to Data Pathways

1. Importance of Independent Identity Recognition

Currently, the traffic generated by intelligent agents on the Internet has approached or even exceeded that of real users. The "2024 Imperva Bad Bot Report" released by the cybersecurity company Thales points out that nearly half (49.6%) of network traffic in 2023 came from automated bots. This shows that large numbers of "ghost-like" intelligent agents are continuously interacting with other entities and generating data. Intelligent agents with specific roles and functions have become an important part of the system composed of technology and humans.

This indicates that intelligent agents have evolved from tool-like existences into active participants with system-wide influence. These "intelligent agent participants" act in the name of humans, much like secretaries, but their actions are not entirely controlled by human intentions. Therefore, what we really need to govern are not specific behaviors but these new behavioral subjects.

So the first step in governing intelligent agents is to establish an independent identity recognition system. Just as with managing employees, we need to assign them independent identity identifiers. Currently, intelligent agents interact through human data pathways, obtaining data through the user interface and performing operations there. In essence, this has two types of users, machine users and natural human users, sharing the same data pathway, which is clearly problematic.
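A minimal sketch of what an independent agent identity could look like in practice, assuming a hypothetical pair of request headers, a key registry populated at registration time, and an HMAC signature scheme (none of these are an existing standard): the agent declares who it is and signs each request, and the server verifies the signature before routing the request onto an agent-specific pathway.

```python
# Minimal sketch: verifying a declared agent identity on each request.
# The header names ("X-Agent-Id", "X-Agent-Signature"), the key registry,
# and the HMAC scheme are illustrative assumptions, not an existing standard.
import hashlib
import hmac

# Hypothetical registry mapping agent identifiers to secrets issued
# when the agent's identity was registered.
AGENT_KEYS = {
    "agent-travel-assistant-001": b"secret-issued-at-registration",
}

def verify_agent_request(headers: dict, body: bytes) -> str | None:
    """Return the verified agent id, or None if the request is not a
    recognizable, correctly signed agent request."""
    agent_id = headers.get("X-Agent-Id")
    signature = headers.get("X-Agent-Signature")
    if not agent_id or not signature:
        return None  # treat as ordinary (human) traffic
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return None  # unknown agent identity
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return agent_id if hmac.compare_digest(expected, signature) else None

# Example: a request signed by the registered agent is accepted.
body = b'{"action": "query_schedule"}'
sig = hmac.new(AGENT_KEYS["agent-travel-assistant-001"], body,
               hashlib.sha256).hexdigest()
print(verify_agent_request(
    {"X-Agent-Id": "agent-travel-assistant-001", "X-Agent-Signature": sig},
    body))  # -> "agent-travel-assistant-001"
```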

2. Construction of Dedicated Data Pathways

The MCP (Model Context Protocol) emerging in the technical field represents an important direction. When websites actively provide dedicated interfaces for intelligent agents, the efficiency of data interaction improves greatly and the burden on websites is reduced. Interaction through data interfaces means that we truly distinguish between two types of users: a UI interface for natural humans and an MCP interface for intelligent agents.
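A rough sketch of the "two interfaces for two kinds of users" idea, assuming a small Flask service with illustrative route names and payloads; it does not implement the MCP specification itself, only the separation of a human-facing UI from an agent-facing structured endpoint.

```python
# Rough sketch of serving two pathways from one site: an HTML UI for
# human users and a structured endpoint for identified agents.
# Route names and payloads are illustrative; this is not the MCP spec itself.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

PRODUCTS = [{"id": 1, "name": "ticket", "price": 120.0}]

@app.route("/products")
def products_ui():
    # Human pathway: a rendered page meant for a browser.
    rows = "".join(f"<li>{p['name']} - {p['price']}</li>" for p in PRODUCTS)
    return f"<html><body><ul>{rows}</ul></body></html>"

@app.route("/agent/products")
def products_for_agents():
    # Agent pathway: structured data, gated on a declared agent identity
    # (signature verification itself is elided here; see the earlier sketch).
    if not request.headers.get("X-Agent-Id"):
        abort(403)
    return jsonify(PRODUCTS)

if __name__ == "__main__":
    app.run(port=8000)
```

Serving agents structured data directly also spares them from scraping the human UI, which is where much of the efficiency and server-load benefit described above comes from.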

Similarly, in a stand-alone environment, providing data pathways for intelligent agents through "barrier-free" (accessibility) access carries security risks. A better solution is to establish independent data pathways for such stand-alone agents rather than routing them through the user interface. By building a dedicated service system, more effective control measures can be added.

III. Construction of Trust Mechanisms and Market-Oriented Governance Paths

1. Trust is the Core of the Problem and the Answer

Essentially, all discussions about the technical and legal risks of intelligent agents stem from the "trust" issue.

In the traditional model, the system completely obeys the user's commands. Now that intelligent agents have their own understanding and behavioral logic, a new need arises to rebuild the trust relationship between users and technical systems. From a technical perspective, we emphasize "trustworthiness": the behavior of the technical system should meet the expectations of all parties. Establishing this trustworthiness requires redefining the "expectation system".

The discussion of "user authorization" in the supervision of intelligent agents shows that agents bring more complex scenarios and conditions. The traditional authorization mechanism, as a static expression of user expectations, may not suffice to cope with these situations. Moreover, once the user has granted authorization, there are not enough technical means to ensure that the agent "executes as promised", which further weakens the trust relationship between users and intelligent agents.
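A minimal sketch of how static authorization could be made more machine-checkable, assuming hypothetical grant fields (scope, budget, expiry): the user's consent is recorded as a scoped, expiring grant, and every proposed agent action is tested against it, so departures from "execute as promised" can at least be detected and refused.

```python
# Illustrative sketch: turning a user's authorization into a scoped,
# expiring grant that is checked before every agent action.
# Field names and limits are assumptions made for the example.
import time
from dataclasses import dataclass, field

@dataclass
class AuthorizationGrant:
    user_id: str
    agent_id: str
    allowed_actions: set[str]          # e.g. {"read_calendar", "book_ticket"}
    spend_limit: float                 # hypothetical per-grant budget
    expires_at: float                  # unix timestamp
    spent: float = field(default=0.0)

    def permits(self, action: str, cost: float = 0.0) -> bool:
        """Check one proposed agent action against the grant."""
        if time.time() > self.expires_at:
            return False                      # grant has expired
        if action not in self.allowed_actions:
            return False                      # outside the authorized scope
        if self.spent + cost > self.spend_limit:
            return False                      # would exceed the budget
        self.spent += cost                    # record usage for auditability
        return True

grant = AuthorizationGrant(
    user_id="u-42", agent_id="agent-travel-assistant-001",
    allowed_actions={"read_calendar", "book_ticket"},
    spend_limit=500.0, expires_at=time.time() + 3600)

print(grant.permits("book_ticket", cost=120.0))    # True: within scope
print(grant.permits("transfer_funds", cost=50.0))  # False: not authorized
```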

2. Advantages of Market-Oriented Governance

When intelligent agents become identifiable behavioral subjects whose behavior can be distinguished from that of natural humans, a governance mechanism can be built through market-oriented means rather than a heavy-handed regime of directly regulating behavior. As their use scales up, intelligent agents can form their own reputation systems, with good and bad actors screened out through user choice.
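A toy sketch of such a reputation system, assuming agents are identifiable as argued above; the rating scale and the aggregation rule are arbitrary choices made for illustration.

```python
# Toy sketch: user feedback on identified agents accumulates into a
# reputation score that other users and platforms can consult.
from collections import defaultdict

ratings: dict[str, list[int]] = defaultdict(list)  # agent_id -> 1-5 stars

def rate(agent_id: str, stars: int) -> None:
    """Record one user's rating, clamped to the 1-5 range."""
    ratings[agent_id].append(max(1, min(5, stars)))

def reputation(agent_id: str, recent: int = 100) -> float:
    """Average of the most recent ratings; unknown agents score 0."""
    history = ratings[agent_id][-recent:]
    return sum(history) / len(history) if history else 0.0

rate("agent-travel-assistant-001", 5)
rate("agent-travel-assistant-001", 4)
print(reputation("agent-travel-assistant-001"))  # 4.5
```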

3. Progressive and Dynamic Governance Strategies

In reality, it is difficult for us to formulate comprehensive and clear rules for all intelligent agents at this stage. Therefore, from a purely technical perspective, the most important thing at present is to establish a governance framework that can adapt to technological changes. Many details of this framework can be gradually improved, but the core is to reasonably allocate the rights and responsibilities of all parties and create a win-win relationship.

At the technical level, dynamic digital contracts need to be used in conjunction with usage control to establish technical trust; at the institutional level, mechanisms such as identity confirmation and agreement management need to be established; and at the governance level, the systems for dispute resolution and evidence preservation need to be improved. These measures should form an organic whole that together ensures effective governance.
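A minimal sketch of the evidence-preservation piece, assuming a hypothetical hash-chained action log: each agent action is appended with a hash that chains to the previous entry, so a later dispute can be checked against a tamper-evident record. The record format and fields are assumptions for the example.

```python
# Sketch of a tamper-evident record of agent actions for later dispute
# resolution; the record format and fields are illustrative assumptions.
import hashlib
import json
import time

def append_entry(log: list[dict], agent_id: str, action: str, detail: dict) -> dict:
    """Append an action record whose hash chains to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "detail": detail,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "agent-travel-assistant-001", "book_ticket", {"price": 120.0})
print(verify_chain(log))  # True until any entry is altered
```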

IV. Summary and Reflection

Currently, we are facing a rapidly changing technological environment. It is neither possible nor necessary to formulate overly detailed rules. A more feasible approach is to establish a basic framework within which all parties can interact positively and promote the continued healthy development of the technology.

The key is to position the roles of all relevant parties, give full play to their initiative, and let market mechanisms play a greater role in governance. This approach can both preserve the innovative vitality of the technology and guard against potential risks through institutional design, and it may be the governance path best suited to the current stage of technological development. What matters is leaving enough room for technological innovation while ensuring security.

[This article is based on a speech given by Wang Yue at the Seminar on the Risks and Governance of Invasive AI.]

[Disclaimer] The information required for writing this article is collected from legal and public channels. We cannot provide any form of guarantee for the authenticity, completeness, and accuracy of the information. This article is only for the purpose of sharing and exchanging information and does not constitute a decision - making basis for any enterprise, organization, or individual.

This article is from the WeChat official account “Internet Law Review”. The author is Wang Yue, and it is published by 36Kr with authorization.