
Sovereignty out of control: Cross-border tool calls by AI agents break through traditional regulatory boundaries

Internet Law Review, 2025-11-26 19:32
Starting from concerns about infrastructure and data control under digital sovereignty, this article introduces the concept of "sovereignty erosion by AI agents", shows how AI agents autonomously access third-party tools across borders, and reveals a critical loophole in the compliance model of the EU's Artificial Intelligence Act.

01 Emergence of New Threats: What is the Challenge of "Agentic Tool Sovereignty"?

In real life, the output capabilities of AI systems are rarely fixed or predetermined.

First, cloud computing providers are the backbone of most web services (including APIs, databases, and search engines) and operate across different jurisdictions and borders. This reduces "the certainty for customers regarding where their data is stored in the cloud and the legal basis of any contract with the provider".

Second, AI agents can be defined as "goal-oriented assistants" designed to act autonomously with minimal human input. In other words, they are "not just tools but actors" that exercise decision-making power.

AI agents can invoke third-party tools (including APIs and web searches) and even other AI systems, which may be unknown before runtime and may operate under different jurisdictions and in different geographical regions.

Therefore, it is difficult to incorporate AI agents into the regulatory approach of the EU's AI Act, which is based on a static and predetermined compliance model.

The author refers to this challenge as the erosion of "Agentic Tool Sovereignty" (ATS), that is, the (lack of) ability of states and providers to retain legal control over their AI systems. Digital sovereignty focuses on control over one's own digital infrastructure, data, and technology, while ATS extends this concern to the runtime behavior of AI systems themselves: agents' ability to act on, select, and integrate tools that lie beyond the effective reach of any single jurisdiction.

Imagine a scenario: an AI recruitment system in Paris autonomously invokes a US psychometric API, a UK verification service, a Singaporean skills platform, and a Swiss salary tool in less than five seconds. Three months later, four regulatory agencies issue non-compliance notices. The deployer has no visibility into the data flows, the audit trail proves insufficient, and the agent has no geographic routing controls.

Fifteen months after the EU's AI Act came into force, there is still no guidance addressing this gap. The €20 million in GDPR fines for cross-border AI violations (€15 million against OpenAI and €5 million against Replika) indicates how regulators may respond when agents' autonomous tool use inevitably leads to similar violations. The disconnect between the AI Act's static compliance model and agents' dynamic tool use creates a liability vacuum that neither providers nor deployers can manage.

Defining the Dimensions of ATS

The ATS problem stems from the tension between the cross-border data flows generated by agent autonomy and digital sovereignty. The legal frameworks considered here (the EU's AI Act and the GDPR) assume static relationships, predetermined data flows, and unified control; these assumptions are incompatible with agents' runtime, autonomous, cross-jurisdictional tool invocations.

ATS has technical, legal, and operational dimensions, which can be summarized as follows:

Technically, AI agents may dynamically select tools from continuously updated centers/registries (digital "catalogs" listing available tools), so the jurisdiction into which data is imported is unknown before runtime (a minimal sketch follows this list).

Legally, when agents autonomously transfer data across borders, jurisdiction becomes blurred.

Operationally, liability is dispersed among model providers, system providers, deployers, and tool providers. No single actor has full visibility or control over the agent's decision tree, data flows, or compliance status during tool invocation.
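To make the technical dimension concrete, the sketch below shows in Python how an agent might pull a tool catalog and pick a tool only at runtime; the registry URL, the entry fields, and the naive `choose_tool` heuristic are illustrative assumptions, not any specific framework's API.

```python
import json
import urllib.request

# Hypothetical, continuously updated tool catalog; its contents can change between runs.
REGISTRY_URL = "https://example.org/tool-registry.json"

def fetch_registry(url: str = REGISTRY_URL) -> list[dict]:
    """Download the current tool catalog as a list of tool entries."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def choose_tool(task: str, registry: list[dict]) -> dict:
    """Naive runtime selection: pick the first entry whose description matches the task."""
    for tool in registry:
        if task.lower() in tool.get("description", "").lower():
            return tool
    raise LookupError(f"no tool found for task: {task}")

if __name__ == "__main__":
    registry = fetch_registry()
    tool = choose_tool("psychometric assessment", registry)
    # The hosting jurisdiction becomes known only now, and only if the registry
    # entry happens to declare it; it was unknowable at conformity-assessment time.
    print(tool.get("endpoint"), tool.get("jurisdiction", "undeclared"))
```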

Gartner predicts that by 2027, 40% of AI-related data breaches will result from the abuse of cross-border generative AI. However, the AI Act does not provide any mechanism to limit where agents execute, prove their runtime behavior, or maintain accountability when control leaves the original boundaries.

02 Failure of the Legal Framework: Structural Defects of the EU's AI Act

Fuzzy Boundaries: The Failure of the Concept of "Substantial Modification" in the Face of Dynamic Tool Invocations

Article 3, paragraph 23 of the EU's AI Act defines "substantial modification" as a change "not foreseen or planned in the initial conformity assessment".

But does the invocation of other tools at runtime constitute the above-mentioned "substantial modification"?

Relevant legal scholarship suggests that these ambiguities are structural rather than transitional. Even when developers intentionally modify an AI system using documented methods, "upstream developers are unlikely to predict or address the risks posed by all potential downstream modifications to their models". If the risks of known and planned modifications cannot be predicted, prediction is even less feasible for autonomous runtime tool invocations by AI agents: providers cannot foresee which tools an agent will select from a continuously updated registry, what capabilities those tools have, or what risks they may introduce.

If tools are documented during the conformity assessment, liability may still rest with the original provider. If the selection and use of tools is unforeseen or fundamentally changes the system's capabilities, Article 25, paragraph 1 may be triggered, transforming the deployer into a provider.

However, the threshold for "substantial modification" requires determining whether the change was "foreseen or planned"; when an AI agent autonomously selects tools that did not exist during the conformity assessment, this determination becomes structurally impossible.

An Impossible Task: How to Achieve "Post-Market Monitoring" of Cross-Border Tool Interactions?

Article 72, paragraph 2 requires post-market monitoring (for high-risk systems) to "include an analysis of interactions with other AI systems". Although this provides the strongest textual basis for monitoring external tool interactions, it still raises further questions, namely:

Do "other AI systems" include non - AI tools and APIs? Most external tools invoked by agents are traditional APIs rather than AI systems; some others may be black boxes that do not interface as AI systems on the surface but operate in the same way internally;

How can providers monitor third-party services outside their control? Providers have no access to tool providers' infrastructure, cannot compel disclosure of data processing locations, and have no mechanism to audit tool behavior; this is especially true when the tool provider is located outside the EU.

Academic analysis of the AI Act's post-market monitoring framework acknowledges this structural challenge, noting that post-market monitoring becomes particularly difficult for "AI systems that continuously learn, i.e., update their internal decision-making logic after being placed on the market". Agentic AI systems with dynamic tool selection fall into this category.

Moreover, the Act assumes that monitoring logs "can be controlled by users, providers, or third parties according to contractual agreements". This assumption fails completely when an agent invokes tools from providers that are unknown before runtime and with which no contractual relationship exists, creating a visibility gap that makes the monitoring obligation in Article 72, paragraph 2 impossible to fulfill.
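As a rough illustration of that visibility gap, the sketch below records what the calling side can actually observe about an ad hoc tool invocation; the `ToolCallRecord` fields and the split between a "declared" and a "verified" processing region are assumptions made for this example, not requirements drawn from the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ToolCallRecord:
    """What the calling side can log about a runtime tool invocation."""
    tool_name: str
    endpoint: str
    called_at: datetime
    declared_region: Optional[str]           # whatever the registry entry claims, if anything
    verified_region: Optional[str] = None    # stays None: no audit access to the tool's infrastructure
    contractual_basis: Optional[str] = None  # stays None for tools discovered only at runtime

def record_call(tool: dict) -> ToolCallRecord:
    # Only locally observable facts can be logged; nothing here proves where
    # the third-party service actually processed the data.
    return ToolCallRecord(
        tool_name=tool.get("name", "unknown"),
        endpoint=tool.get("endpoint", "unknown"),
        called_at=datetime.now(timezone.utc),
        declared_region=tool.get("jurisdiction"),
    )
```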

Article 25, paragraph 4 requires providers and third-party suppliers (of high-risk systems) to specify "necessary information, capabilities, technical access, and other assistance" in written agreements. However, this assumes a pre-established relationship, which is impossible when an agent selects tools from a continuously updated center/registry at runtime.

Ambiguity and Vacuum in Liability Attribution amid "Multi-Party Participation"

Liabilities and obligations are dispersed throughout the entire AI value chain: model providers build the basic capabilities; system providers integrate and configure; deployers operate in a specific environment; tool providers (sometimes even unknowingly) provide external capabilities. Each actor has partial visibility and control, but the Act's accountability framework assumes unified liability.

The Act does not provide any mechanism to compel tool providers to disclose data processing locations, implement geographical restrictions, provide audit access, or maintain compatibility with compliance systems. When an agent autonomously selects a tool that transfers personal data to a jurisdiction without an adequacy decision, who decided on this transfer? The model provider that enables the tool-using capability? The system provider that configures the tool registry? The deployer that authorizes autonomous operation? Or the tool provider that processes the data?

The traditional legal framework for attributing liability "regards machines as tools controlled by their human operators, based on the premise that humans have a certain degree of control over machine specifications". However, "since AI largely relies on the machine-learning process of learning and adapting to its own rules, humans are no longer in control, so it cannot be expected that humans are always responsible for the behavior of AI".

This liability gap multiplies when AI agents invoke other applications as tools: machine-learning (ML) systems may "behave very differently with almost identical inputs", making it impossible to predict which tools will be invoked or where data will flow. The Act assumes a unified control that no longer exists.

Further problems arise when an AI agent selects a web service as a tool even though the service never envisioned or authorized being used as part of an agent's operation; the service effectively becomes a tool without even knowing it.

Moreover, the Act provides no mechanism to compel the cooperation of tool providers selected at runtime. Article 25, paragraph 4 does stipulate written agreements, but only between pre-established providers and suppliers. Neither provision, therefore, can solve the problems created by selecting tools from ad hoc sources.

This ambiguity may not be accidental but intentional. Legislators "are dissuaded from formulating specific rules and obligations for algorithm programmers to allow for future experimentation and code modification", but this approach "creates room for programmers to evade responsibility and accountability for the ultimate behavior of the system in society".

The EU's AI Act is a typical example of this trade-off: overly specific rules would limit innovation, while general rules create an accountability vacuum. ATS exists precisely in this vacuum: the space between enabling autonomous tool use and maintaining legal control over that autonomy.

03 Fundamental Conflict: Irreconcilable Compliance Contradictions with the GDPR

The intersection of the EU's AI Act and Chapter V of the GDPR creates a fundamental tension. Following the Schrems II judgment, the standard contractual clauses under Article 46 require specific identification of the data importer and case-by-case adequacy assessments. Like the AI Act, these mechanisms assume pre-established relationships and deliberate transfer decisions; they too are structurally incompatible with dynamic tool invocations.

AI agents' tool centers/registries are continuously updated; in many cases, the specific tools are unknown before runtime. Agent decisions occur too quickly for legal review, and the relationships are transient rather than contractual.

AI agents throw this structural tension into sharp relief: the GDPR's "traditional data protection principles" of purpose limitation, data minimization, special treatment of sensitive data, and restrictions on automated decision-making fundamentally conflict with the operational reality of AI systems, which involves "collecting large amounts of data about individuals and their social relationships and processing this data for purposes that are not fully determined at the time of collection".

When agents autonomously invoke cross-border tools, the data flows they generate conform neither to the predetermined transfer mechanisms of Chapter V nor to the purpose-limitation principle (which assumes the purpose is determined at the time of collection). The law requires knowing why and where data is flowing, but AI agent systems make this decision autonomously at runtime.

When an AI agent autonomously selects a tool and transfers personal data to it, the established controller-processor framework collapses: the tool provider neither acts on the deployer's instructions nor independently determines the purposes and means of processing.

In this context, providers face a dilemma: pre-approve a limited set of tools (eliminating the agent's flexibility), implement geographical restrictions (which raises the same problem in another form), or operate out of compliance.
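The first two options can be pictured as a filter applied before the agent ever sees a tool. The sketch below is a minimal version of such a pre-approval plus geographic allow-list; the jurisdiction codes, the `APPROVED_TOOLS` set, and the assumption that registry entries declare a jurisdiction at all are illustrative, not drawn from the Act or the GDPR.

```python
# Hypothetical allow-list of jurisdictions a deployer is willing to route data to,
# e.g. the EEA plus selected countries with EU adequacy decisions.
ALLOWED_JURISDICTIONS = {"EU", "EEA", "CH", "UK", "JP", "KR", "CA", "NZ"}

# Tools vetted and documented at conformity-assessment time (hypothetical names).
APPROVED_TOOLS = {"verification-service", "salary-benchmark"}

def filter_registry(registry: list[dict]) -> list[dict]:
    """Keep only tools that are both pre-approved and declared in an allowed jurisdiction.

    Entries with no declared jurisdiction are dropped entirely, which is exactly
    the trade-off described above: control is regained at the cost of flexibility.
    """
    return [
        tool for tool in registry
        if tool.get("name") in APPROVED_TOOLS
        and tool.get("jurisdiction") in ALLOWED_JURISDICTIONS
    ]
```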

Traditional "data sovereignty" focuses on territorial control of data within a jurisdiction, but AI agent systems make autonomous cross - border decisions that go beyond the sovereignty scope of any single jurisdiction. The AI Act cannot constrain agents that autonomously invoke tools operating under different regulatory regimes with different compliance levels.

Therefore, ATS requires a fundamental reconceptualization: sovereignty must shift from static territorial boundaries to the dynamic governance of autonomous actions themselves.

04 The Way Forward: The Inevitable Shift from Static Compliance to "Runtime Governance"

Fifteen months after the AI Act came into force, the AI Office has not issued any guidance specifically for AI agents, autonomous tool use, or runtime behavior.

A report by The Future Society in June 2025 confirms that the technical standards being developed "may not fully address the risks from agents". This regulatory gap is not only technical but also conceptual; existing laws embed sovereignty in territory and data residency, while AI agent systems require sovereignty to be embedded in runtime behavior.

Until guidance is issued, providers face ambiguities that are extremely difficult to resolve:

Whether tool invocation constitutes a substantial modification;

How to meet the monitoring obligation for third-party services under Article 72, paragraph 2;

Whether the GDPR's transfer mechanisms apply to transient, agent-initiated relationships.

Deployers of AI agent systems with tool-using capabilities must maintain human oversight (under Article 14) while enabling the system to operate autonomously, which on its face looks like a compliance impossibility.

Therefore, the gap in the runtime governance of AI agents is not only technical but fundamental: AI agents can autonomously execute complex cross-border actions (including tool invocations that trigger data transfers) that would violate the GDPR and the AI Act if executed by a human with the same knowledge and intent.

However, neither framework imposes real-time compliance obligations on AI agents themselves: post hoc fines cannot reverse unlawful data transfers, and conformity assessments cannot predict which tool an agent will autonomously select from thousands of continuously updated options. The enforcement model of legal regulation assumes the timescale of human decision-making, but the actual operations happen too quickly, and human oversight becomes little more than a "pipe dream".

Therefore, the current erosion of "Agentic Tool Sovereignty" (ATS) requires us to fundamentally redefine how we view digital sovereignty: not as static jurisdictional boundaries but as dynamic guardrails for autonomous behavior. This may require mechanisms that restrict which tools an agent can invoke, verify where execution occurs, and preserve accountability when control is dispersed.
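One way to read "dynamic guardrails for autonomous behavior" is as a policy check that runs at the moment of each tool call rather than at assessment time. The sketch below shows one such runtime check; the jurisdiction policy, the audit-record format, and the `invoke` stub are all assumed for illustration rather than prescribed by any framework.

```python
from datetime import datetime, timezone

class PolicyViolation(Exception):
    """Raised when a tool call would breach the runtime jurisdiction policy."""

def invoke(tool: dict, payload: dict) -> dict:
    # Stub standing in for the real HTTP call to the third-party tool.
    return {"status": "ok", "tool": tool.get("name")}

def guarded_call(tool: dict, payload: dict, allowed: set[str], audit_log: list[dict]) -> dict:
    """Check a tool against a jurisdiction policy at call time and record the decision,
    so an accountability trail survives even when control is dispersed."""
    decision = {
        "tool": tool.get("name"),
        "endpoint": tool.get("endpoint"),
        "declared_jurisdiction": tool.get("jurisdiction"),
        "contains_personal_data": payload.get("contains_personal_data", False),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    blocked = (
        decision["contains_personal_data"]
        and decision["declared_jurisdiction"] not in allowed
    )
    decision["allowed"] = not blocked
    audit_log.append(decision)
    if blocked:
        raise PolicyViolation(
            f"blocked transfer of personal data to {decision['declared_jurisdiction']}"
        )
    return invoke(tool, payload)
```

Even a check like this can only act on the jurisdiction a tool declares; verifying where execution actually occurs remains an open problem, which is precisely the point of the ATS argument.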

This article is from the WeChat official account "Internet Law Review", author: Lloyd Jones. Republished by 36Kr with permission.