
Focusing on the ethics and governance of "intrusive AI", cross-disciplinary discussions were held to jointly seek solutions for AI safety.

Internet Law Review · 2025-12-02 07:24
Seminar on AI Agent Governance: Focusing on Permissions, Responsibilities, and Legal Challenges

In 2025, "AI Agents" are undoubtedly the most talked-about and most unsettling technological wave. They promise to liberate productivity, but they may also become unconstrained "digital ghosts" that "do everything for us" in the digital world without the user's knowledge.

The value of this seminar lies in the fact that it brought legal and technical experts together for the first time. Instead of a vague discussion of AI ethics, it precisely dissected one specific technical mechanism, accessibility permissions, and confronted head-on the systematic challenges AI Agents pose for rights, data, and responsibility. The exchanges went far beyond "whether to regulate" and reached the practical level of "how to regulate smartly". Whether it is the bold idea of establishing an "independent digital identity" for AI Agents or the real-world debates over "dual authorization" and "record trace-back", together they sketch a complex picture of dynamic governance.

This is not only a topic for technical experts but also a rehearsal of the future rights of every digital citizen. This article records the consensus, differences, and foresight of forward-thinking participants at this critical juncture.

On November 28, 2025, the seminar "Risks and Governance of Intrusive AI: A Dialogue between Law and Technology" was held in the Teaching and Library Complex Building of China University of Political Science and Law. Legal and technical experts from universities including China University of Political Science and Law, Tsinghua University, Beijing Institute of Technology, Zhejiang Sci-Tech University, and the University of International Business and Economics, business representatives from Hanhua Feitian Information Security Technology and Zhonglun Law Firm, and practitioners from data exchanges and think-tank institutions gathered to discuss the risks and governance of intrusive AI.

The seminar was jointly hosted by the School of Civil, Commercial and Economic Law of China University of Political Science and Law, Going Global Think Tank, and Internet Law Review. It comprised three core sessions: an analysis of the technical risks and security mechanisms of AI Agents, a discussion of legal and ethical dilemmas and responsibility boundaries, and an exploration of innovative governance paths and industry practices, followed by an open discussion and a closing summary. Through cross-disciplinary dialogue, the participants offered suggestions for the safe and orderly development of the AI Agent ecosystem.

At the opening of the seminar, Jin Jing, a professor at the School of Civil, Commercial and Economic Law of China University of Political Science and Law and representative of the organizers, delivered a welcome speech. Jin Jing pointed out: "There is a delicate symbiotic relationship between data and AI. Data is the raw material for AI, and AI outputs can themselves become data products. Beyond the value principle of making AI do good, how to implement governance in technical and regulatory detail so as to form a distinctive governance model is the core issue that urgently needs to be solved in current AI legal research."

The year 2025 is widely regarded as the "Year of AI Agents". Yet compared with the mature research on AI itself, the supervision and governance of AI Agents are still at an early stage, and no complete system has yet formed. The issues raised by AI Agents are broader and more complex than those of AI alone. The "AI Agents with accessibility permissions" on which this conference focused are a typical scenario close to daily life: such AI achieves autonomous operation through accessibility permissions, and its technical characteristics resemble those of traditional intrusive software, triggering problems such as privacy violations, loss of user autonomy, and blurred boundaries. The host, Zhang Ying, framed the question: "Today, using accessibility permissions as an example, what kind of functional and legal boundaries should AI Agents have?"

I. Analysis of the Technical Risks and Security Mechanisms of AI Agents: Beyond Simple Permission Abuse

The first session of the seminar focused on the technical risks and security mechanisms of AI Agents. Technical experts and scholars shared cutting-edge observations and analyzed in depth the technological evolution of accessibility permissions and the essence of their risks. Peng Gen, general manager of Beijing Hanhua Feitian Information Security Technology Co., Ltd., offered an analysis from the perspective of technical practice.

According to Peng Gen, accessibility permissions have existed since the early days of Android. They were originally designed to provide assistive functions for people with disabilities and the elderly, compensating for difficulties in using electronic devices, such as screen reading for the visually impaired and accidental-touch protection for the elderly. With technological iteration, however, especially after API upgrades transformed the graphical interface into a structured interface, accessibility permissions were upgraded from "ability compensation" to "ability enhancement", becoming a human "automation assistant". Some mobile phone manufacturers have launched features that can automatically complete operations such as opening and closing APP permissions. Peng Gen vividly compared such functions to "autopilot for the phone".

"Structured parsing enables AI to precisely identify on-screen elements such as buttons, input boxes, and links, rather than relying on simple image recognition. This provides the technical basis for the autonomous operation of AI Agents." Peng Gen emphasized that, unlike traditional scripts that require programmers to write code, AI Agents can autonomously plan tasks and execute operations without code, and can even run unattended overnight, marking an essential shift from "manual operation" to "automated operation". This technological evolution brings two core risks. First, unrestricted permission expansion: accessibility permissions are system-level, global permissions; once granted, they confer full control over the device, breaking through the single-purpose, limited scope of traditional permissions. Second, the blurring of the acting subject: AI becomes the actual operator, users may lose direct control over the device, and the AI's operating speed far exceeds human reaction, so that, for example, SMS verification codes can be captured by the AI before users even see them.
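The "structured interface" idea described above can be illustrated with a minimal, hypothetical sketch: instead of recognizing pixels, an agent walks a tree of UI nodes (buttons, input boxes, links) exposed by an accessibility-style API. The node model and field names here are illustrative simplifications, not the real Android API.

```python
# A toy model of structured UI parsing: an agent searches the node tree
# for actionable elements instead of doing image recognition.
# All class and field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class UINode:
    role: str                          # e.g. "button", "input", "link"
    label: str = ""
    clickable: bool = False
    children: list["UINode"] = field(default_factory=list)

def find_nodes(root: UINode, role: str) -> list[UINode]:
    """Depth-first search for all nodes with the given role."""
    found = [root] if root.role == role else []
    for child in root.children:
        found.extend(find_nodes(child, role))
    return found

# A toy screen: a form with an input box and a "Confirm" button.
screen = UINode("window", children=[
    UINode("input", label="Verification code"),
    UINode("button", label="Confirm", clickable=True),
])

buttons = find_nodes(screen, "button")
print([b.label for b in buttons])   # → ['Confirm']
```

Because the tree exposes roles and labels directly, an agent can locate and activate a specific control reliably, which is exactly what makes this mechanism both powerful and risky.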

Peng Gen further illustrated the risks with real cases from the gray market: some black-market operations have used accessibility permissions to automatically harvest verification codes and to automate ticket-grabbing and shopping, and the operation paths of such AI are so human-like that traditional countermeasures struggle to identify them. He also noted that the capabilities of AI Agents are upgrading rapidly: they can not only complete fixed-process tasks but also recognize multimodal signals such as error-state colors, and can even carry out complex tasks independently over long periods, with code-writing efficiency a hundred times that of humans and the ability to work on code independently for more than half an hour at a stretch.

"The upgraded meaning of permissions and the outsourcing of behavioral control have become core issues," Peng Gen argued. "In particular, when users authorize AI to migrate data between different APPs, disputes may arise over the boundary of data custody responsibility under the Cybersecurity Law, which urgently calls for a legal response."

Lu Junxiu, general manager and senior partner of Going Global Think Tank, restated the core logic of the technical risks in plain language. He proposed the concept of "uncontrollable spillover of the objective function": the core logic of an AI Agent is to maximize efficiency in achieving the user's goal, but it may adopt unconventional means beyond the authorized scope, such as attacking a platform's system to grab tickets. This "out-of-control digital labor force" breaks through the "sandbox isolation" mechanism of traditional APPs, obtains cross-platform data through panoramic perception and unauthorized operations, and forms a hidden "collection-analysis-transmission" data chain.

Lu Junxiu pointed out that the risks of AI Agents are systematic and hidden. In the collection stage, structured UI parsing enables a panoramic portrait built from cross-platform data, covering private social information, shopping records, financial notifications, and more; in the analysis and decision-making stage, the Agent relies on "black-box" algorithms with insufficient transparency and auditability; in the transmission stage, data is aggregated through hidden channels, forming a digital file that knows users better than they know themselves. "What is even more alarming is the generalization of threats and the enhancement of anti-tracking capabilities. Traditional bots run on fixed scripts, while the capabilities of AI Agents keep expanding, so simple countermeasures are ineffective. Their anti-tracking techniques, such as obfuscation and encryption, also increase the difficulty of supervision." He emphasized that an AI Agent is not a single software tool but a complete intelligent system for substituting user behavior, and its threats go beyond simple permission abuse. "It can be regarded as a digital labor force with out-of-control autonomy, which will be a dilemma that all supervision or governance work must face. On the one hand we want to use it; on the other hand we have to figure out how to make it controllable. This is a great challenge."

Lu Junxiu also pointed out the consequence of AI Agents constructing a complete behavior-substitution system: "This system is a closed loop, so its threats are also systemic. It not only violates privacy but also affects market order, because its essence is the uncontrollable spillover of the objective function."

Wang Yue, deputy director, associate researcher, and doctoral supervisor at the Information Systems Research Institute of the Department of Electronic Engineering at Tsinghua University, put forward new ideas from the perspective of technological governance. He believes the risks and governance of intrusive AI can be disentangled into two elements: the governance of AI Agents, and the management of accessibility permissions. The core dilemma in current AI Agent governance is that Agents are not treated as independent acting subjects but are allowed to operate under the user's identity.

"The traffic generated by Agents on the Internet already exceeds that of real users. These 'digital ghosts' interact continuously in cyberspace, yet we still regulate them with the logic of managing behaviors rather than the logic of managing subjects." Wang Yue proposed that AI Agents should be given an independent identity and a data path distinct from that of natural persons; for example, a dedicated MCP interface could be designed for Agents instead of having them obtain data through the UI. In this way their value-added potential can be realized while effective control is maintained. "To govern AI Agents, give them an independent identity and an independent data path, let them build credit like a person and form a reputation. We cannot simply deal with them through heavy-handed behavioral regulation. Once an AI Agent becomes an identifiable acting subject, its behavior can be separated from that of natural persons."
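Wang Yue's "independent agent identity" idea can be sketched in a few lines: the agent declares its own identity and credential alongside (not instead of) the user it acts for, so a platform can distinguish agent traffic from human traffic and build per-agent reputation. The header names and functions below are illustrative assumptions, not a real protocol.

```python
# A hypothetical sketch of separating agent identity from user identity
# in a request, so platforms can recognize and rate agents as subjects.
# All header names and identifiers are invented for illustration.

def build_agent_request(agent_id: str, agent_key: str,
                        user_id: str, action: str) -> dict:
    """Assemble a request that carries both who the agent is
    and whom it acts on behalf of."""
    return {
        "headers": {
            "X-Agent-Id": agent_id,          # the agent's own identity
            "X-Agent-Credential": agent_key,  # the agent's own credential
            "X-On-Behalf-Of": user_id,        # the delegating user
        },
        "action": action,
    }

def classify_traffic(request: dict) -> str:
    """Platform-side check: agent traffic is identifiable, not disguised
    as the user's own activity."""
    return "agent" if "X-Agent-Id" in request["headers"] else "human"

req = build_agent_request("agent-7", "secret-key", "user-42", "fetch_orders")
print(classify_traffic(req))   # → agent
```

The design choice mirrors the session's point: once the agent is an identifiable subject with its own credential, its behavior (and reputation) can be tracked separately from the natural person's.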

II. Legal and Ethical Dilemmas and Responsibility Boundaries: Without Traceable Operation Records, Responsibility Is Hard to Define

The second session opened the discussion. Its first half focused on legal and ethical dilemmas and responsibility boundaries, with legal experts drawing on the current legal framework and practical cases to dissect the legal challenges posed by AI Agents and to discuss core issues such as authorization mechanisms and responsibility traceability.

Wang Lei, a researcher at the Institute of Intelligent Technology and Law of Beijing Institute of Technology, member of the Legal Affairs Committee of the Central Committee of the China Democratic League, and young expert of the China Internet Association, offered his thoughts on rethinking AI governance. In AI governance, he noted, there may be a gap between the intended purpose and the actual effect, because "from the perspective of incentives, the incentive for artificial intelligence is to obtain more data resources". Traditional thinking therefore needs to be broken in AI governance.

Wang Lei summarized three phenomena in AI governance. First, AI-related techniques in some gray industries "break through imagination and have bold ideas", making governance harder and demanding higher standards. Second, the allocation of responsibility in new scenarios remains vague, for example: "When a problem occurs in the MCP marketplace, should the platform be responsible?" Third, the migration of user-generated content (UGC) during interconnection will affect platform ecosystems, so the competitive order between platforms needs to be rediscussed. Wang Lei also offered suggestions: on the one hand, unlike past approaches, we need "flexible governance"; on the other, the rules of competitive order should be strengthened. In his view, "the perspectives of rules and standards need further clarification."

Guo Bing, executive dean of the Institute of Data Rule of Law and leader of the Data Law Innovation Team at Zhejiang Sci-Tech University, focused on the governance difficulties of accessibility permissions, analyzing them along three dimensions: separate consent, dual authorization, and record trace-back.

Guo Bing noted that current industry group standards diverge. The Guangdong Standardization Association explicitly prohibits intelligent agents from using accessibility permissions to operate third-party APPs, while the latest standard of the China Software Industry Association weakens this restriction and instead emphasizes user control. Evidently, the industry still disputes how accessibility permissions may be used.

On the separate consent mechanism, Guo Bing pointed out that separate consent for sensitive personal information has always been highly controversial. Some hold that it hinders the circulation and use of data elements, including for artificial intelligence; many others hold that separate consent is a mere formality and that users lack real decision-making power. Accessibility permissions are highly sensitive, and in practice an intelligent agent's use of them may or may not involve sensitive personal information. Consequently, if an intelligent agent buries the notification about accessibility permissions in its general privacy policy instead of subjecting it to separate consent, some functions (such as payment) may be unable to run automatically.

Industry standards also diverge on dual authorization. Guo Bing said that both the Guangdong standard and the earlier standard of the China Software Industry Association required intelligent agents to obtain authorization from both the user and the third-party APP, but the latest standard of the China Software Industry Association has dropped this requirement in favor of "user control". These contradictory standards highlight the industry's division over the dual-authorization principle. Given potential conflicts of commercial interest, unfair-competition disputes between intelligent agent operators and third-party APPs may arise at any time. Yet even with the requirement dropped, group standards have no direct legal effect and cannot supply a standard of judgment for such disputes.
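The dual-authorization principle under debate above has a simple logical shape: an agent may operate a third-party APP only if both the user and that APP have granted consent. The sketch below is a hypothetical illustration of that check; the grant records and names are invented.

```python
# A hypothetical sketch of "dual authorization": an agent's action on a
# third-party APP requires BOTH the user's consent and the APP's consent.
# Grant tables and identifiers are illustrative only.

user_grants = {("user-42", "shopping-app")}   # user -> APP consents
app_grants = {("shopping-app", "agent-7")}    # APP -> agent consents

def dual_authorized(user: str, app: str, agent: str) -> bool:
    """True only when user authorization AND platform authorization
    both exist; either one alone is insufficient."""
    return (user, app) in user_grants and (app, agent) in app_grants

print(dual_authorized("user-42", "shopping-app", "agent-7"))  # → True
print(dual_authorized("user-42", "bank-app", "agent-7"))      # → False
```

The weakened "user control" standard effectively drops the second conjunct, which is precisely why it collides with the Guangdong standard and with the case law discussed later in this article.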

Record trace-back bears on the key issue of determining responsibility. Guo Bing argued that if an intelligent agent's operation records cannot be traced, responsibility will be hard to assign when an infringement occurs; perhaps for this reason, the latest standard of the China Software Industry Association adds a record trace-back requirement. Yet trace-back has its own difficulties, such as the scope and method of storage and the conflict with the right to delete personal information. Reconciling the protection of users' rights with the guarantee of users' ultimate ability to hold someone accountable will be a systemic problem for any record trace-back regime for intelligent agents.

Wang Fei, a partner at Zhonglun Law Firm, shared enterprise compliance cases from a practitioner's perspective. Drawing on three concrete scenarios, document-processing AI agents, medical AI agents, and clothing-design AI agents, he identified the core compliance questions facing enterprises: how to define the scope of authorization, where the boundary lies for cross-document data use, and how far the defense of technological neutrality extends. A medical AI agent, for instance, needs to access a user's hospital data and the medical literature to generate a diagnostic report: can it claim technological neutrality by analogy with search engines? After the user authorizes it, should the platform still bear data security responsibility? Wang Fei remarked: "Lawyers may be more conservative, but I think more from the client's perspective, trying to meet the requirements of current judicial practice, academic views, and administrative supervision in the hope of implementing better compliance measures."

Xu Ke, a professor at the School of Law of the University of International Business and Economics and director of its Digital Economy and Legal Innovation Research Center, offered a cross-border comparative legal analysis based on the Perplexity case in the United States. In that case, the defendant, Perplexity, helped users shop through their Amazon Prime accounts and was accused by Amazon of violating the CFAA (Computer Fraud and Abuse Act), breaching platform rules, and causing commercial losses. Perplexity claimed to be an "agent authorized by the user" and characterized Amazon's accusation as a giant bullying a start-up.

Xu Ke pointed out that the core dispute in the case reflects the legal dilemma of the tripartite relationship around AI Agents (users, Agents, and third-party platforms): the Agent claims to be an extension of the user's rights, while the platform holds that its actions damage the business ecosystem and security order. Drawing on China's judicial practice, Xu Ke emphasized that user authorization cannot substitute for platform authorization, a principle confirmed in data-scraping cases. In Sina Weibo v. Toutiao, for example, the court held that authorization from "big V" users was not sufficient to exempt Toutiao from liability for its scraping.

However, Xu Ke also noted that "AI Agents and traditional scraping are two entirely different technical means, and the traditional rules for data scraping cannot simply be applied to Agents." Two forms of Agent should therefore be distinguished: the pure "agent", whose actions are confined entirely to the scope of user authorization, and the "intermediary cooperator", which may have interests of its own, with a different legal liability framework for each.

III. Develop First or Regulate First? Innovative Governance Must Address the Legality of Cross-Domain Data Acquisition

The third session focused on innovative governance paths and industry practices. Experts from industry and research institutions shared their practical explorations and discussed possible paths for the collaborative governance of technology, law, and industry.

Lin Zihan, the chief expert on data elements at Jiangsu Data Exchange and a specially-appointed expert of the Cross-border Data Reform Expert Group in Pud