
Fraud Hunters: The Endgame of AI Anti-Fraud is Agent Security

Wang Dejia (汪德嘉) · 2026-03-05 12:48
AI agents are reshaping the anti-fraud landscape. This article analyzes the "asymmetric competition" between attackers and defenders, arguing that an agent network can reverse the defenders' disadvantage through swarm intelligence, data collaboration, and millisecond-level response. It stresses that solving data privacy and the security of AI itself is the key to real-world deployment, and predicts that the anti-fraud industry will ultimately evolve into "agent security," making trust an inherent gene of the business.

Y Combinator (YC), one of the world's most influential startup incubators and investment institutions, releases a Requests for Startups (RFS) list every quarter. This list not only reflects YC's latest investment directions but also serves as a barometer for industry trends. In the spring of 2026, YC published its latest RFS list (see Reference 1 for the full list). One item, "Infra for Government Fraud Hunters," resonated with us. Over the past 15 years, Tongfudun has been deeply involved in anti-fraud work in industries such as energy, finance, and government. We understand the hardships of this work, and with the advent of the agent era, we believe the landscape of the anti-fraud field is undergoing fundamental change.

The "Asymmetric Competition" in the Anti-Fraud Field

Anti-fraud is essentially a game of offense and defense. For a long time, the initiative in this game has lain with the attackers. The core reason is a deep-seated "cost asymmetry": while the defenders are still weighing compliance risks and technical feasibility, the attackers have already launched the next round of probes on the back of their cost advantage.

The first layer of asymmetry lies in the cost of a single failure. For black-and-gray industry gangs, having an attack intercepted only means wasting a few IPs, a batch of fake accounts, or a set of automated scripts. The acquisition cost of these resources is extremely low, and they can even be produced in large batches. But for banks, government platforms, or large enterprises, what does a failed defense mean? It could mean the privacy leakage of millions of users, direct financial losses from theft, and an immeasurable collapse of brand reputation. This situation, where "attackers can lose a hundred times, but defenders cannot afford to lose even once," has become the heaviest shackle on traditional anti-fraud.

The second layer of asymmetry lies in the "burden" of technological innovation. Black-and-gray industries are the most radical and reckless early adopters of new technology. When deepfake technology first emerged, they started using it to bypass face verification. When automated process tools became popular, they quickly adapted them into powerful tools for captcha-cracking and credential stuffing. They don't need to consider the ethical boundaries of new technologies, go through layers of compliance approvals, or worry about accidentally harming normal users. In contrast, defenders, especially institutions in key sectors such as finance and government, must undergo a long process of security assessment, data privacy review, and business process adaptation when introducing any new technology. By the time an anti-fraud model finally goes live after all that effort, attackers may have already switched to new circumvention paths.

However, the maturity of artificial intelligence, especially agent technology, is providing defenders with a historic opportunity to turn the tables. An agent is not a single-function algorithm model but an intelligent system capable of autonomous perception, decision-making, and task execution. When large government agencies and enterprises truly enter the intelligent era, they will build an anti-fraud system composed of thousands of agents working in collaboration.

The core advantage of this system lies in "winning by scale." Even if a single black-and-gray industry gang uses AI, the computing power, data, and knowledge reserves it can mobilize are far inferior to those of a national-level financial network or a large government-enterprise platform. Agents can learn new fraud patterns 24/7, link real-time threat intelligence across the entire network, and schedule hundreds of data sources for cross-verification within milliseconds. They can simulate the thinking of attackers, actively probe for potential risks, and even generate defense strategies autonomously and issue execution orders.

More importantly, the agent network has "collective intelligence": when an agent somewhere identifies a new type of fraud, all nodes across the entire network can instantly synchronize their defense capabilities. For the first time, this evolution speed will exceed the manual iteration of black-and-gray industries. When the defender's agent army can aggregate resources, share intelligence, and respond collaboratively with unprecedented breadth and depth, the balance of the asymmetric competition begins to tilt toward the defenders.

In the agent era, anti - fraud will no longer be a passive game of "repairing the city wall" but a systematic war of "intelligence against intelligence." The side with the largest agent network will become the real dominant force on the battlefield.

AI-Native Business Model: The Network Effect of Agents

The core of the anti-fraud system in the agent era is to build a multi-agent business model with an "AI-native" mindset. Traditional anti-fraud often involves "patching" existing systems: adding a new rule or upgrading a model every time a new type of fraud emerges. This passive response model is doomed to fall behind the iteration speed of black-and-gray industries. Anti-fraud in the agent era must be reconstructed from the ground up: encapsulate business nodes as agents and weave them into a collaborative ecosystem through standardized protocols. This is not a local optimization of a single agent but a global reshaping brought about by a systematic multi-agent framework. When thousands of agents collaborate under the same framework, a unique agent network effect emerges, giving defenders a structural advantage over attackers for the first time:

  • Intelligence Advantage: From Single-Point Confrontation to Emergent Intelligence. In the anti-fraud scenario, a single agent struggles to deal with complex attack chains. The AI-native model decomposes tasks layer by layer: identity-verification agents verify user authenticity, behavior-analysis agents monitor transaction anomalies, threat-intelligence agents scan attack sources, and decision-making agents output judgments based on the combined information. Each agent is like a domain expert, and their collective decision-making forms an "expert consultation" system. More importantly, the group is adaptive: when one agent identifies a new type of fraud, that knowledge is quickly absorbed by the network, and the "intelligence quotient" of the entire network evolves in step. The single-point breakthroughs of black-and-gray industries lose their advantage in the face of this emergent collective wisdom.
  • Data Advantage: From Data Silos to a Shared Data Flywheel. Data is the fuel of anti-fraud, but in the traditional model data silos are everywhere, and privacy and compliance concerns make it hard for institutions to share. The AI-native model achieves a paradigm breakthrough through the agent network: each agent has a DID-based digital account, and data flows between agents in a "usable but invisible" manner. For example, a bank's risk-control agent can query a telecom operator's number-risk agent, and the latter returns a score computed locally without revealing the original data. Backed by privacy computing and blockchain-based evidence storage, this kind of sharing is both compliant and barrier-breaking. As more agents connect, the risk-feature library keeps expanding, new entrants immediately gain access to the entire network's intelligence, and the data flywheel accelerates.
  • Timeliness Advantage: From Manual Response to Millisecond-Level Adaptive Collaboration. Anti-fraud is a race against time. When a transaction-monitoring agent detects an anomaly, it automatically triggers the identity agent to strengthen authentication, the strategy agent to adjust weights, and the disposal agent to freeze the account, all within milliseconds and without human intervention. A unified communication protocol (such as MCP) lets agents from different vendors "speak the same language," achieving plug-and-play collaboration. This adaptive linkage transforms the defense system from a passive responder into an active immune system.
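The three advantages above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the decomposition described in the bullets: specialist agents each score one dimension of a transaction, and a decision agent aggregates them and triggers a disposal action automatically. The agent functions, rules, and thresholds are all invented for illustration and are not a real framework's API.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # which specialist agent produced the signal
    risk: float   # 0.0 (safe) .. 1.0 (almost certainly fraud)

def identity_agent(txn: dict) -> Signal:
    # Verify user authenticity (toy rule: unknown device is risky).
    return Signal("identity", 0.8 if not txn.get("device_known") else 0.1)

def behavior_agent(txn: dict) -> Signal:
    # Monitor transaction anomalies (toy rule: very large amount is risky).
    return Signal("behavior", 0.9 if txn["amount"] > 10_000 else 0.2)

def intel_agent(txn: dict) -> Signal:
    # Scan shared threat intelligence (toy rule: IP on a network-wide blocklist).
    blocklist = {"203.0.113.7"}
    return Signal("intel", 1.0 if txn["ip"] in blocklist else 0.0)

def decision_agent(signals: list[Signal]) -> str:
    # Aggregate the specialists' opinions; any strong signal escalates
    # immediately, mimicking the automatic millisecond-level linkage.
    score = max(s.risk for s in signals)
    if score >= 0.9:
        return "freeze"          # disposal agent freezes the account
    if score >= 0.5:
        return "step_up_auth"    # identity agent strengthens authentication
    return "allow"

txn = {"amount": 25_000, "ip": "198.51.100.4", "device_known": True}
verdict = decision_agent([identity_agent(txn), behavior_agent(txn), intel_agent(txn)])
print(verdict)  # the large amount alone is enough to trigger "freeze"
```

In a real deployment each function would be an independent service speaking a shared protocol, and the shared blocklist is where the network effect lives: one node's discovery becomes every node's rule.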

Security and Compliance: Data Privacy and AI Security Boundaries

The agent network endows the anti-fraud system with unprecedented collaborative capabilities, but data security and privacy compliance remain the biggest bottlenecks for its implementation. At the same time, if the security risks of AI itself are ignored, over-relying on agents may backfire. Only by solving both the external compliance problems and the internal algorithmic risks can the agent network truly become a reliable cornerstone of anti-fraud.

  • Data Containers + Privacy Computing Are the Key to Data Security and Privacy Compliance. The essence of anti-fraud is a data game: accurately identifying risk requires cross-institutional, cross-industry data collaboration. However, the traditional centralized sharing model not only violates compliance requirements but also aggregates a large amount of sensitive data into one huge attack target. We can equip each agent with a data container based on Decentralized Identifiers (DID). The container is not just a storage unit but an active defense carrier integrating dynamic access control, a privacy-computing engine, and full-lifecycle auditing. Data resides in the container in encrypted form, following the principle that the data stays put while the computation moves: computing tasks are scheduled to the container or trusted nodes where the data lives, and analysis is completed over encrypted data using privacy-computing technologies such as fully homomorphic encryption (FHE) and zero-knowledge proofs (ZK), keeping the original data "usable but invisible" throughout.
  • The Security of AI Itself Is a New Anti-Fraud Issue in the Agent Era. Agents are not omnipotent, and their inherent flaws can amplify risk. As we argued in the "Agent Incompleteness Theorem," there is no ultimate instruction that can perfectly constrain all of an agent's behaviors: under the same instruction, contradictory outputs may occur, and in a complex environment its behavior is essentially "undecidable." In the anti-fraud scenario, this means algorithmic bias and unpredictable "evasion behaviors" may accidentally harm normal users or be exploited by black-and-gray industries. If enterprises blindly trust agent decisions and cancel manual review, the consequences of a systemic vulnerability would be unimaginable. For example, the popularity of action-based agents such as OpenClaw and Moltbook has exposed new threats such as prompt injection and plugin supply-chain attacks. Therefore, we must implement the "zero-trust" principle: never assume any agent's behavior is trustworthy by default. Key decisions need to be cross-verified by multiple agents or escalated to manual review. At the same time, introduce formal verification to define provable security boundaries for key logic: translate vague security requirements into mathematical specifications, and use theorem provers to verify that discriminatory rules or unauthorized operations cannot be triggered. Only with a clear-eyed view of AI's own risks can we avoid sliding from "human-versus-human defense" into an out-of-control "machine-versus-machine defense."
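The zero-trust rule above ("cross-verified by multiple agents or escalated to manual review") can be made concrete with a small quorum check. This is only a sketch of the idea under simple assumptions: several independently built risk agents each emit a verdict, a key action executes only if enough of them agree, and anything ambiguous falls back to a human. The function name and thresholds are hypothetical.

```python
def require_quorum(verdicts: list[str], quorum: int = 2) -> str:
    # Tally how many independent agents back each verdict.
    counts: dict[str, int] = {}
    for v in verdicts:
        counts[v] = counts.get(v, 0) + 1
    top, votes = max(counts.items(), key=lambda kv: kv[1])
    # Execute automatically only when the quorum is met; an ambiguous
    # or split decision is escalated instead of trusted by default.
    return top if votes >= quorum else "manual_review"

# Three independently trained risk agents evaluate the same transaction.
print(require_quorum(["freeze", "freeze", "allow"]))   # 2 of 3 agree: freeze
print(require_quorum(["freeze", "allow", "step_up"]))  # split: manual_review
```

The design choice matters: a single compromised or prompt-injected agent can at worst force an escalation to manual review, never a silent unilateral action, which is exactly the failure containment the zero-trust principle asks for.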

From Anti-Fraud to Agent Security

Although we have talked about anti-fraud throughout this article, and anti-fraud has been an important label for Tongfudun over 15 years in the industry, we still want to close with a "bold statement": in the AI era, the anti-fraud industry will eventually exit the stage of history, replaced by "agent security" or "secure agents." Just as it was not the police but electronic payment that eliminated thieves, it will not be the traditional "anti-fraud system" but an agent network with built-in security that eliminates fraud.

That day will come soon. Balaji, a former a16z partner, proposed the idea of "Silicon Valley turns off the lights, China turns on the switch" in a 10,000-word tweet on March 2nd, arguing that the competition among large AI foundation models is over: through the open-source route, China will turn large AI models into cheap infrastructure like water and electricity, completely upending traditional software industries, anti-fraud included. The era of everything being intelligent will soon arrive. When most business processes are rebuilt around agents, security will no longer be the responsibility of a single module but the gene of the entire network. This is the ultimate vision of Tongfudun's "LegionSpace": to let every agent live, from birth, in a security system that puts trust first. When trust becomes the default setting, fraud loses its breeding ground. On that day, anti-fraud will no longer be an industry; it will be the breathing and instinct of all agents.

References:

1. Y Combinator. "Requests for Startups." Y Combinator, 2026, https://www.ycombinator.com/rfs. Accessed 2026.

2. Cheon, Jung Hee, et al. "Bootstrapping in approximate fully homomorphic encryption: a research survey." Cybersecurity, vol. 8, no. 87, Springer, 2025, https://doi.org/10.1007/s42400-025-00384-3.

3. Pan, Zhuo, et al. "A Survey of Zero-Knowledge Proof Based Verifiable Machine Learning." arXiv preprint arXiv:2502.18535, 2025.

4. Shahriar, Asif, et al. "A Survey on Agentic Security: Applications, Threats and Defenses." arXiv preprint arXiv:2510.06445, 2025.

5. Tang, Yaxin, et al. "Security of LLM-based agents regarding attacks, defenses, and applications: A comprehensive survey." Information Fusion, Elsevier, 2025.

6. Veenman, Kealan, et al. "Verifiability for privacy-preserving computing on distributed data — a survey." International Journal of Information Security, vol. 24, no. 141, Springer, 2025, https://doi.org/10.1007/s10207-025-01047-7.

This article is from the WeChat official account "Tongfudun", written by Dun Dun, and published by 36Kr with permission.