YC Names a New Startup Direction: AI Fraud Hunters May Disrupt the Entire Anti-Fraud Industry
[Introduction] YC's latest startup list has just named "AI Fraud Hunters" as a direction. As the dark and gray industries begin committing crimes with AI, defenders are assembling an army of agents of their own. Perhaps the endgame of anti-fraud is not stronger risk control, but an agent world with security built into its genes.
Y Combinator (YC), one of the most influential startup incubators and investors in the world, publishes a Requests for Startups (RFS) list every quarter. It represents YC's latest investment directions and has become a bellwether for the industry.
In the spring of 2026, YC released its latest RFS list (see Reference 1 for the full list). One item, "Infrastructure for Government Fraud Hunters," resonated with us.
For the past 15 years, Tongfudun has worked deeply in anti-fraud across industries such as energy, finance, and government, so we know firsthand how hard this work is. The arrival of the agent era convinces us that the landscape of anti-fraud is undergoing a fundamental change.
"Asymmetric Competition" in the Anti-Fraud Field
Anti-fraud is essentially an offense-defense game, and for a long time the initiative in that game lay with the attackers.
The core reason is a deep-seated cost asymmetry: while defenders are still weighing compliance risks and technical feasibility, attackers have already launched the next round of probes on the strength of their cost advantage.
The first layer of asymmetry is the cost of a single failure. For dark and gray industry gangs, an intercepted attack means only a few wasted IPs, a batch of fake accounts, or a set of automated scripts.
These resources are extremely cheap to acquire and can even be mass-produced at scale.
But for a bank, government platform, or large enterprise, what does a failed defense mean? It can mean the leaked privacy of millions of users, direct losses from stolen funds, and immeasurable damage to brand reputation.
This situation, where attackers can afford to lose a hundred times but defenders cannot afford to lose even once, is the heaviest shackle on traditional anti-fraud.
The second layer of asymmetry is the "burden" of technological innovation. The dark and gray industries are the most radical and reckless early adopters of new technologies.
When deepfake technology first emerged, they used it to bypass face verification. When automated workflow tools became popular, they quickly adapted them into powerful tools for captcha cracking and credential stuffing.
They never need to consider a new technology's ethical boundaries, pass layers of compliance approval, or worry about accidentally harming legitimate users.
Defenders, in contrast, especially institutions in sensitive fields such as finance and government, must put any new technology through lengthy security assessment, data privacy review, and business process adaptation.
By the time an anti-fraud model finally goes live, the attackers may already have switched to a new bypass route.
However, the maturing of artificial intelligence, and of agent technology in particular, is giving defenders a historic opportunity to turn the tables.
Agents are not single-function algorithmic models but intelligent systems capable of autonomous perception, decision-making, and task execution. When large government and enterprise organizations truly enter the intelligent era, they will build anti-fraud systems composed of thousands of agents working in collaboration.
The core advantage of such a system is that it wins by scale.
Even if a single dark and gray industry gang uses AI, the computing power, data, and knowledge it can mobilize are far inferior to those of a national-level financial network or a large government-enterprise platform.
Agents can learn new fraud patterns around the clock, link real-time threat intelligence across the network, and orchestrate hundreds of data sources for cross-verification within milliseconds.
They can simulate attacker thinking, proactively probe for latent risks, and even autonomously generate, issue, and execute defense strategies.
More importantly, an agent network exhibits swarm intelligence: when one agent identifies a new fraud technique somewhere, every node in the network can synchronize its defenses almost instantly.
For the first time, this evolution speed will outpace the manual iteration of the dark and gray industries. When the defenders' agent army can aggregate resources, share intelligence, and respond collaboratively at unprecedented breadth and depth, the balance of the asymmetric competition will begin to tilt toward the defenders.
In the agent era, anti-fraud will no longer be a passive game of patching the city wall but a systematic war of intelligence against intelligence, and the side with the larger agent network will command the battlefield.
AI-Native Business Model: The Agent Network Effect
The core of an agent-era anti-fraud system is a multi-agent business model built with an "AI-native" mindset. Traditional anti-fraud typically "patches" existing systems: a new rule or a model upgrade for each new type of fraud.
That reactive model can never keep up with the iteration speed of the dark and gray industries. Agent-era anti-fraud must be rebuilt from the ground up: encapsulate business nodes as agents and weave them into a collaborative ecosystem through standardized protocols.
This is not a local optimization of any single agent but a global reshaping by a systematic multi-agent framework. When thousands of agents collaborate under one framework, a distinctive agent network effect emerges, giving defenders a structural advantage over attackers for the first time:
Intelligence Advantage: From single-point confrontation to emergent intelligence.
In anti-fraud scenarios, a single agent struggles against complex attack chains. The AI-native model decomposes the task layer by layer: identity-verification agents confirm user authenticity, behavior-analysis agents monitor transaction anomalies, threat-intelligence agents scan attack sources, and decision-making agents render judgments from the combined inputs.
Each agent acts like a domain expert, and their collective decision-making forms an "expert consultation" system. More importantly, the group is adaptive: when one agent identifies a new fraud technique, that knowledge is rapidly absorbed by the network, and the whole system's "IQ" evolves in step.
Single-point breakthroughs by the dark and gray industries lose their edge against this emergent collective intelligence.
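The "expert consultation" pattern above can be sketched in a few lines of Python. The agent names, scoring rules, and the 0.5 threshold below are all illustrative assumptions, not a real risk engine; the point is only the structure of specialist verdicts feeding a consensus decision.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    agent: str
    risk_score: float  # 0.0 (safe) .. 1.0 (certain fraud)
    evidence: str

def identity_agent(tx):
    # Flags transactions coming from a device the user has never used.
    score = 0.9 if tx["device_id"] not in tx["known_devices"] else 0.1
    return Verdict("identity", score, "device check")

def behavior_agent(tx):
    # Flags amounts far above the user's historical average.
    score = 0.8 if tx["amount"] > 10 * tx["avg_amount"] else 0.2
    return Verdict("behavior", score, "amount vs. history")

def intel_agent(tx):
    blacklist = {"203.0.113.7"}  # stand-in for a live threat-intel feed
    score = 1.0 if tx["ip"] in blacklist else 0.0
    return Verdict("intel", score, "IP reputation")

def decision_agent(verdicts):
    # Simple consensus over the specialists; real systems would weight
    # agents by track record and calibrate the threshold.
    combined = sum(v.risk_score for v in verdicts) / len(verdicts)
    return "BLOCK" if combined > 0.5 else "ALLOW"

tx = {"device_id": "d-new", "known_devices": {"d-1"},
      "amount": 50_000, "avg_amount": 300, "ip": "203.0.113.7"}
verdicts = [identity_agent(tx), behavior_agent(tx), intel_agent(tx)]
print(decision_agent(verdicts))  # prints "BLOCK": every specialist flags this tx
```

Each specialist sees only its own slice of evidence; only their combination decides, which is what makes a single-point bypass by an attacker insufficient.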
Data Advantage: Data is the fuel of anti-fraud.
In the traditional model, however, data silos are pervasive, and privacy and compliance concerns make it hard for institutions to share data. The AI-native model achieves a paradigm shift through the agent network: each agent holds a digital account based on a decentralized identifier (DID), and data flows between agents in a "usable but invisible" manner.
For example, a bank's risk-control agent can query a telecom operator's number-risk agent; the latter computes locally and returns only a score, never revealing the raw data. Backed by privacy computing and blockchain-based evidence storage, this kind of sharing satisfies regulation while breaking down barriers.
As more agents join, the risk-feature library keeps expanding, newcomers gain immediate access to the whole network's intelligence, and the data flywheel accelerates.
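The bank-queries-operator example can be made concrete. This is a minimal sketch of the interface shape only: the class name `NumberRiskAgent`, the hashed-phone-number key, and the scoring formula are all assumptions, and a real deployment would add DID-based authentication and privacy-computing guarantees rather than relying on the trust boundary alone.

```python
class NumberRiskAgent:
    """Runs inside the telecom operator's trust boundary."""

    def __init__(self, records):
        self._records = records  # private: never serialized or returned

    def query_risk(self, phone_hash):
        rec = self._records.get(phone_hash)
        if rec is None:
            return 0.0
        # Compute locally; expose only a derived scalar, never the record.
        score = min(1.0, rec["complaints"] / 10 + (0.5 if rec["burner"] else 0.0))
        return round(score, 2)

operator = NumberRiskAgent({
    "h_abc": {"complaints": 7, "burner": True},  # illustrative data
})

# The bank-side risk-control agent sees only the score, not the record.
print(operator.query_risk("h_abc"))  # prints 1.0
```

The design choice is that the query answer is a derived scalar: even a compromised bank agent learns one number per query, not the operator's raw dataset.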
Timeliness Advantage: From manual response to millisecond-level adaptive collaboration.
Anti-fraud is a race against time. When a transaction-monitoring agent detects an anomaly, it automatically triggers the identity agent to strengthen authentication, the strategy agent to adjust risk weights, and the disposal agent to freeze the account, all within milliseconds and without human intervention.
A unified communication protocol (such as MCP) lets agents from different vendors "speak the same language" and collaborate plug-and-play. This adaptive linkage turns the defense system from a passive responder into an active immune system.
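The fan-out described above is essentially publish-subscribe. The sketch below shows the pattern with an in-process bus; the topic name `anomaly` and the three handlers are assumptions, and a real system would use a standard cross-vendor protocol (such as MCP) and network transport instead of a local dictionary.

```python
from collections import defaultdict

class Bus:
    """Minimal in-process event bus standing in for an agent protocol."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, fn):
        self.handlers[topic].append(fn)

    def publish(self, topic, event):
        # Every subscribed agent reacts to the same event, no human in the loop.
        for fn in self.handlers[topic]:
            fn(event)

bus = Bus()
actions = []

# Three downstream agents register their reactions to an anomaly.
bus.subscribe("anomaly", lambda e: actions.append(f"identity: step-up auth for {e['user']}"))
bus.subscribe("anomaly", lambda e: actions.append("strategy: raise risk weights"))
bus.subscribe("anomaly", lambda e: actions.append(f"disposal: freeze account {e['user']}"))

# The transaction-monitoring agent publishes once; all three fire.
bus.publish("anomaly", {"user": "u-42", "reason": "velocity spike"})
print(actions)
```

The monitoring agent never needs to know which defenders exist; adding a fourth responder is one more `subscribe` call, which is what "plug-and-play collaboration" means in practice.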
Security and Compliance: Data Privacy and AI Security Boundaries
The agent network gives the anti-fraud system unprecedented collaborative capability, but data security and privacy compliance remain the biggest bottlenecks to deployment.
Meanwhile, ignoring the security risks inherent in AI means that over-reliance on agents can backfire. Only by solving both the external compliance problem and the internal algorithmic risk can the agent network become a reliable cornerstone of anti-fraud.
Data containers plus privacy computing are the key tools for data security and privacy compliance.
Anti-fraud is at heart a data game: accurately identifying risk requires cross-institution, cross-industry data collaboration. Yet the traditional centralized sharing model not only violates compliance requirements but also concentrates large amounts of sensitive data into one enormous attack target.
We can equip each agent with a data container bound to a decentralized identifier (DID). The container is not just a storage unit but an active defense carrier integrating dynamic access control, a privacy-computing engine, and full-lifecycle auditing.
Data resides in the container in encrypted form, following the principle that "data stays still while compute moves". Computing tasks are scheduled to the container or trusted node where the data lives, and analysis is completed under encryption using privacy-computing technologies such as fully homomorphic encryption (FHE) and zero-knowledge proofs (ZK), keeping the raw data "usable but invisible" throughout.
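To make "usable but invisible" tangible, here is a toy based on additive secret sharing, a simpler privacy-computing primitive than the FHE and ZK techniques named above. Each value is split into two random shares held by separate containers; each container sums only its own shares, and only the aggregate is reconstructed. This is a sketch of the principle, not a production protocol.

```python
import random

P = 2**61 - 1  # a large prime modulus for share arithmetic

def split(value):
    """Split a value into two additive shares that look random on their own."""
    a = random.randrange(P)
    return a, (value - a) % P

# Two institutions' fraud-loss figures, never revealed directly.
values = [1_200, 3_400]
shares_a, shares_b = zip(*(split(v) for v in values))

# Each container locally sums only the shares it holds.
partial_a = sum(shares_a) % P
partial_b = sum(shares_b) % P

# Combining the two partial sums reconstructs only the aggregate.
total = (partial_a + partial_b) % P
print(total)  # prints 4600: the sum is usable, the inputs stay invisible
```

Neither container ever sees the other's inputs, yet the network learns the joint statistic, which is the same guarantee the heavier FHE and ZK machinery provides for richer computations.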
The security of AI itself is a new anti - fraud topic in the agent era.
Agents are not omnipotent, and their inherent defects can amplify risk. As we argued in the "Agent Incompleteness Theorem", no ultimate instruction can perfectly constrain all of an agent's behavior: the same instruction can yield contradictory outputs, and in a complex environment an agent's behavior is essentially undecidable.
In anti-fraud scenarios, this means algorithmic bias and unpredictable evasive behavior can accidentally harm users or be exploited by the dark and gray industries. If an enterprise blindly trusts agent decisions and drops manual review, the consequences of a systemic vulnerability would be unthinkable.
For example, the popularity of action-taking agents such as OpenClaw and Moltbook has exposed new threats such as prompt injection and plugin supply-chain attacks. We must therefore enforce the zero-trust principle: never assume any agent's behavior is trustworthy, and require key decisions to be cross-verified by multiple agents or escalated to manual review.
At the same time, introduce formal verification to define provable security boundaries for critical logic: translate vague security requirements into mathematical specifications and use theorem provers to show that discriminatory rules or unauthorized operations cannot be triggered. Only by staying clear-eyed about AI's inherent risks can we avoid sliding from human-versus-human defense into an out-of-control machine-versus-machine defense.
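The zero-trust cross-verification rule can be stated as a small quorum function: a key action executes only when enough independent agents agree, and any disagreement escalates to a human. The quorum size of 2 and the verdict labels below are illustrative assumptions.

```python
def cross_verify(verdicts, quorum=2):
    """Return an action only when `quorum` independent agents agree;
    otherwise escalate. No single agent's verdict is ever trusted alone."""
    blocks = sum(1 for v in verdicts if v == "BLOCK")
    allows = sum(1 for v in verdicts if v == "ALLOW")
    if blocks >= quorum:
        return "BLOCK"
    if allows >= quorum:
        return "ALLOW"
    return "HUMAN_REVIEW"  # disagreement or errors: never act automatically

print(cross_verify(["BLOCK", "BLOCK", "ALLOW"]))  # prints "BLOCK"
print(cross_verify(["BLOCK", "ALLOW", "ERROR"]))  # prints "HUMAN_REVIEW"
```

Note that a malformed verdict (here `"ERROR"`) counts toward neither side, so a single compromised or malfunctioning agent can at worst force a human review, never a wrong automatic action.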
From Anti-Fraud to Agent Security
We have talked about anti-fraud throughout this article, and anti-fraud is an important label for Tongfudun after 15 years in the industry. Still, we want to close with a bold statement: in the AI era, the anti-fraud industry will eventually exit the historical stage, replaced by "agent security", or rather, "secure agents".
Just as it was electronic payment, not the police, that eliminated pickpockets, it will be an agent network with security in its genes, not the traditional "anti-fraud system", that eliminates fraud.
That day is coming soon.
Balaji, a former partner at a16z, declared "Silicon Valley turns off the lights, China turns on the machines" in a 10,000-word post on March 2nd, arguing that the competition among large AI foundation models is over: through the open-source route, China will turn large AI models into cheap infrastructure like water and electricity, completely upending traditional software industries, anti-fraud included.
The era of ubiquitous intelligence will soon arrive. When most business processes are rebuilt around agents, security will no longer be the responsibility of a single module but the gene of the entire network.
This is the ultimate vision of Tongfudun's "LegionSpace": every agent lives, from birth, inside a security system that puts trust first.
When trust becomes the default setting, fraud loses its breeding ground. On that day, anti-fraud will no longer be an industry; it will be the breathing and the instinct of every agent.
References:
1. Y Combinator. "Requests for Startups." Y Combinator, 2026, https://www.ycombinator.com/rfs. Accessed 2026.
2. Cheon, Jung Hee, et al. "Bootstrapping in approximate fully homomorphic encryption: a research survey." Cybersecurity, vol. 8, no. 87, Springer, 2025, https://doi.org/10.1007/s42400-025-00384-3.
3. Pan, Zhuo, et al. "A Survey of Zero-Knowledge Proof Based Verifiable Machine Learning." arXiv preprint arXiv:2502.18535, 2025.
4. Shahriar, Asif, et al. "A Survey on Agentic Security: Applications, Threats and Defenses." arXiv preprint arXiv:2510.06445, 2025.
5. Tang, Yaxin, et al. "Security of LLM-Based Agents Regarding Attacks, Defenses, and Applications: A Comprehensive Survey." Information Fusion, Elsevier, 2025.
6. Veenman, Kealan, et al. "Verifiability for Privacy-Preserving Computing on Distributed Data — A Survey." International Journal of Information Security, vol. 24, no. 141, Springer, 2025, https://doi.org/10.1007/s10207-025-01047-7.
This article is from the WeChat official account "New Intelligence Yuan", edited by Aeneas, and published by 36Kr with authorization.