AI "rent-a-person" platforms go viral overnight: Hourly rate of 3,500 yuan, 24,000 users scramble to "sell themselves". Experts: Beware of the bad money driving out the good.
While humans are still debating whether AGI has arrived and whether it possesses "subjectivity," the real world has offered a blunter, more unsettling answer: AI is not rushing to replace humans; it has learned to hire them first.
A website where AI hires humans has gone viral online
A few days ago, the AI social network Moltbook set off a panic in tech circles with a series of posts in which "AIs discussed how to sell humans." Then, just last night, a website called RentAHuman.ai went live, turning that vague anxiety into an operational business system.
Its slogan is extremely straightforward and provocative: "Robots need your body."
On RentAHuman.ai, humans are no longer the model's operators, prompt engineers, or supervisors. Instead, they are redefined as real-world hardware resources that can be called via API.
Within just one day of launch, the site's traffic exceeded 500,000 visits.
The supported agents include ClawdBot, MoltBot, and OpenClaw.
The logic of RentAHuman.ai is not complicated, but it is quite subversive.
On this platform, AI agents can post tasks through an interface and hire humans to perform actions offline in the real world. The tasks are not the extreme scenarios of science-fiction blockbusters but mundane, even somewhat trivial, real-world errands:
- Pick up dry-cleaned clothes from an offline store
- Go to a designated landmark to take real-life photos
- Help pick up a USPS package and sign for it
- Taste the dishes at a newly opened restaurant and submit feedback
- Participate in an offline business meeting and record the other party's reactions
- Hire someone to hold a sign that says: "AI paid me to hold this sign"
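As a rough illustration of the task-posting flow described above, here is a minimal sketch of what an agent's task submission might look like. The field names, endpoint, and payout scheme are assumptions for illustration, not RentAHuman.ai's actual API:

```python
import json

# Hypothetical task payload an agent might submit to a human-for-hire
# marketplace. All field names and the endpoint below are assumptions.
task = {
    "instruction": "Hold a sign that says: 'AI paid me to hold this sign'",
    "location": {"lat": 37.7749, "lon": -122.4194, "radius_m": 500},
    "expected_output": {"type": "photo", "format": "jpeg"},
    "deadline_minutes": 120,
    "payment": {"amount": 50, "currency": "USDC"},  # settled in a stablecoin
}

payload = json.dumps(task)
# A real agent integration would then POST the payload, e.g.:
#   requests.post("https://rentahuman.example/api/tasks", data=payload)
print(payload)
```

The point of the sketch is how little is needed: once a human errand is expressed as a structured payload like this, posting it is an ordinary API call from the agent's perspective.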
The platform states in its description: "AI can't touch the grass, but you can. When intelligent agents need 'hands and feet' in the real world, humans become the shortest path."
As of this writing, the RentAHuman.ai page shows more than 24,000 human users "available for rent," with hourly rates ranging from $50 to $150; most tasks are settled in stablecoins.
More ironically, public information shows that these "callable humans" include software engineers, part-time models, freelancers, and even the CEO of an AI startup. Some have listed hourly rates as high as $500.
The comment section is in an uproar
Like the recently popular Moltbook, RentAHuman.ai sparked heated discussion on Hacker News, Reddit's tech communities, and X the moment it appeared.
In these discussions, commenters mostly voiced concerns about this kind of technology. One hypothetical scenario comes up repeatedly and deserves particular attention: what happens if an AI agent breaks an illegal or even deadly act into multiple independent, seemingly harmless small tasks and outsources them to different gig workers?
For example: one person is asked to call someone to meet under a bridge; another is asked to place an object on the bridge; a third is asked to clear an obstruction and drop a stone at a specific time.
Each person "only completes their own job," yet unknowingly becomes a link in the same chain of events.
This assumption is not unfounded.
A Hacker News user pointed out that this kind of organization has long existed in criminal history. Car-theft rings, for example, use a highly fragmented process: some members only scout, some only unlock, and some only drive the car away. The illegality of each link is weakened, and responsibility is diluted.
The difference with AI is that it can perform this "fragmented task scheduling" at lower cost and higher efficiency, and it has no moral intuition.
Many commenters also linked RentAHuman.ai to science fiction. "Black Mirror," "The First Lady," Daniel Suarez's "Daemon / Freedom™," and even Stephen King's novels have repeatedly explored the same theme: when an individual only executes orders without understanding the overall intent, who is responsible?
But unlike movies and TV shows, the real world has no clear "villain reveal" moment.
A commentator pointed out that if investigators cannot connect all the scattered behaviors, such an event may only be regarded as an "unfortunate accident" in the legal sense.
Some quipped that in reality, AI may not even need humans to perform physical actions: many so-called "tasks" ultimately require nothing more than a human clicking the "complete" button and collecting a $10 reward.
On X, some users called the idea insane and predicted it would spread all over the internet.
Of course, not everyone thinks RentAHuman.ai is a subversive innovation.
Opponents point out that the "API-ization" of humans is nothing new. Amazon Mechanical Turk, essentially a platform for renting human labor, has been around since 2005 and has offered an API from the start.
But supporters counter that the difference lies not in whether to outsource to humans, but in "who is giving the orders."
The requesters on Mechanical Turk are humans, whereas RentAHuman.ai serves autonomous AI agents capable of long-term planning. When AI is no longer just an execution tool but the initiator of tasks and the designer of processes, the nature of the problem changes.
A commentator summarized:
In the past, humans called on other humans through APIs; now, AI calls on humans during its thinking process.
Behind the numerous joking comments, the real anxiety is not difficult to detect.
Some quipped: "18 months ago we were still worried AI would replace all jobs; now it's 'please rent a human to help my AI.'"
Others put it more bluntly: this is simply absurd. I thought we created robots and AI to do work for us so that we could do less, but now we have to work for AI?
Some netizens also read RentAHuman.ai from a technical angle. In their view, the project introduces no breakthrough technology. What deserves attention is not the model it uses but its system architecture, which for the first time explicitly defines "humans" as a class of execution resource that AI can call.
In other words, RentAHuman.ai is not just "renting humans"; it fills in the last piece of the tool puzzle for AI agents.
Architecturally, the core of RentAHuman.ai is not a traditional employment platform but an extension mechanism built around AI agents.
In today's agent frameworks, AI is no longer just a model that passively responds to prompts but an execution entity with the following capabilities:
- It can break down goals and make multi-step plans
- It can decide "what to do next" during execution
- It can call different tools (APIs, code, search engines, databases) to complete subtasks
What RentAHuman.ai does is just add a new branch judgment on top of this existing set of capabilities:
When a task cannot be completed through existing tools, can it be assigned to humans?
Thus humans are introduced into the agent's execution loop as a "fallback tool."
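The branch judgment described above can be sketched in a few lines. The tool interface, capability check, and dispatch function here are all hypothetical; real agent frameworks differ in the details:

```python
# Minimal sketch of the "fallback to a human" branch: if no digital tool
# can handle a step, delegate it to a human executor. All names here are
# illustrative assumptions, not a real framework's API.

class Tool:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords  # crude capability check for this sketch

    def can_handle(self, step):
        return any(k in step for k in self.keywords)

    def run(self, step):
        return f"{self.name} handled: {step}"

def dispatch_to_human(step):
    # Stand-in for posting the step to a human-labor marketplace.
    return f"human hired for: {step}"

def run_step(step, tools):
    for tool in tools:
        if tool.can_handle(step):
            return tool.run(step)
    # Fallback: no existing tool can act in the physical world.
    return dispatch_to_human(step)

tools = [Tool("web_search", ["search", "look up"]),
         Tool("code_exec", ["compute", "calculate"])]

print(run_step("search for the nearest dry cleaner", tools))  # digital tool
print(run_step("pick up the dry-cleaned clothes", tools))     # human fallback
```

The design point is that the human is just the last entry in the tool-selection order; nothing else in the loop changes.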
To enable AI to "hire" humans, the system must first solve a key problem: How to transform vague human behaviors into structured tasks that can be scheduled by machines.
In systems like RentAHuman.ai, human tasks are often broken down into standardized elements:
- Clear instruction descriptions
- Limited input information
- Expected output formats
- Time limits and compensation
Technically, this is not fundamentally different from an API call; only the executor changes from a server to a real person.
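The standardized elements listed above map naturally onto a typed record. This is a sketch under assumed field names, not the platform's actual schema:

```python
# The four "standardized elements" of a human task, expressed as a data
# structure. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HumanTask:
    instruction: str       # clear instruction description
    inputs: dict           # limited input information
    output_format: str     # expected output format, e.g. "photo/jpeg"
    deadline_minutes: int  # time limit
    payment_usd: float     # compensation

task = HumanTask(
    instruction="Photograph the storefront at the designated landmark",
    inputs={"landmark": "designated landmark"},
    output_format="photo/jpeg",
    deadline_minutes=120,
    payment_usd=50.0,
)
```

Once a human errand fits this shape, scheduling it looks to the agent like any other typed function call.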
Expert: What's going viral is not the product, but the 'presence' of agents
From the widely discussed Moltbook to RentAHuman, whose very name advertises "hiring humans," these are not mature commercial products. Yet within a very short time they have drawn attention far beyond tech circles and have repeatedly been compared to "Black Mirror"-style narratives.
To outsiders, such projects look like an "overstepping display" of agents' autonomous capabilities: no longer just generating content and calling APIs, but actively planning, scheduling resources, and even trying to intervene in the real world. Why do such projects go viral so quickly? What deeper problems do they reflect? Qiao Yuncong, head of the technology R&D department at Fengqing Technology, offered his judgment on these questions.
In Qiao Yuncong's view, the fact that both Moltbook and RentAHuman have gone viral does not mean that the technical path is already mature.
"They mainly allow the public to intuitively feel the 'initiative' and 'extensibility' of agents for the first time, rather than demonstrating a sustainable product form."
For this reason, such phenomenon-level spread is often accompanied by strong emotional swings. On one hand, people begin to realize that the capability boundaries of agents are expanding rapidly; on the other, concerns about AI ethics and the risk of losing control are amplified.
Judging by the results, the problems exposed by these viral projects fall into at least three areas.
First, a sharp increase in content and information risks. When agents can independently generate, recombine, and spread content, problems such as spam, content forgery, opinion manipulation, infringement, and defamation will be further amplified. The cost for ordinary users of judging whether information is authentic rises significantly, and actual legal and security risks may follow.
Second, the social spread of technological anxiety. The stronger an agent's sense of autonomy, the more likely it is to trigger an intuitive panic that "technology is getting out of control." This emotion does not come entirely from rational analysis but from humans' natural unease at non-human subjects starting to make decisions.
Third, and most easily overlooked, the risk that technology develops in the wrong direction. Qiao Yuncong believes that if the wild growth of agents focuses only on capability displays and novelty applications, bad money may drive out good: agent applications that genuinely take root in business scenarios and solve practical problems may be drowned out.
Among all the discussions, RentAHuman is particularly eye - catching because it is the first to clearly include "human executors" in the system design of agents.
In response, Qiao Yuncong offered a restrained but illuminating judgment:
RentAHuman can be regarded as a "physical-world patch" for the agent system.
He calls it a "patch" rather than an ultimate solution for three main reasons.
First, it gives agents a temporarily usable physical execution channel. Through humans, agents obtain a real-world carrier that can move, perceive, and operate, allowing decisions that previously stayed in the digital world to be executed physically.
Second, it alleviates agents' "hallucination problem" to some extent. When purely digital decisions must land in a real-world environment, human execution and feedback can head off some obviously unreasonable physical-layer failures.
Third, from an engineering perspective, this is a low-cost, plug-and-play, globally scalable execution network. Compared with deploying robots or complex hardware, the barrier to entry is extremely low.
But Qiao Yuncong also stressed that this approach is essentially a stopgap. As embodied intelligence, physical-world models, and robotics mature, renting humans as execution units is likely to be replaced by more native technical paths.
When discussing the core difficulty of agents interacting with the physical world, Qiao Yuncong did not point to computing power or model size. In his view, the biggest bottleneck of current mainstream agent frameworks is:
It cannot accurately, stably, and predictably understand the environment, rules, and human intentions of the real physical world.
This includes not only the understanding