Cursor's Reputation Tanks Overnight! The Claim That Its AI Wrote a Browser with 3 Million Lines of Code Is Exposed as a Hoax! Netizens Mock It as "AI Garbage"
[New Intelligence Yuan Introduction] The claim that GPT-5.2 built a browser in seven straight days has just been debunked! A developer has posted evidence that the Cursor project is just a pile of "AI slop" and that the code simply does not compile. Cursor was simply too hasty this time.
A few days ago, the entire AI community was stunned by a bombshell announcement from Cursor.
Here's what happened:
Cursor claimed that they had a GPT-5.2-powered coding agent run continuously for a full seven days, which is 168 hours.
As a result, these AI agents managed to write a browser from scratch with three million lines of code, and its functionality is comparable to Chrome!
This sounds incredibly enticing:
As tokens become as cheap as water and electricity, AI can iterate on itself indefinitely until the goal is achieved.
Whether it's an operating system, office software, or a game engine, as long as there's sufficient computing power, AI seems to be able to "grind" it out for you.
However, just when people hadn't recovered from the shock, the "Sherlock Holmes" of the tech community stepped in.
They carefully examined the Cursor project's open-source code and discovered a huge bombshell:
This so-called "AI browser" can't even pass the most basic compilation!
In a technical blog, the author sharply pointed out:
What Cursor calls a "breakthrough" is essentially a pile of "AI slop" lacking engineering logic.
What they've done is a clever publicity "smoke and mirrors" trick, making everyone think the project actually works.
But in reality, it's just a bunch of useless code that can't run.
Blog address: https://embedding-shapes.github.io/cursor-implied-success-without-evidence/
Did GPT-5.2 Really Develop a Browser in Seven Days? Fake News?
Next, let's take a close look at how this debunking article from the developer community dissected Cursor's publicity and exposed the false claims.
First, the author analyzed what exactly Cursor did.
On January 14th, they published a blog post titled "Scaling long-running autonomous coding."
Official blog: https://cursor.com/blog/scaling-agents
In this article, they talked about an experiment of having "coding agents run autonomously for weeks," with the clear goal of:
Understanding how far we can push the boundaries of agent-based coding to complete projects that usually take human teams months to finish.
Then, Cursor's researchers discussed some methods they had tried, analyzed the reasons for failure, and how to solve the problems.
Finally, they found a solution that "solved most of our coordination problems and allowed us to scale to very large projects without relying on a single agent."
Ultimately, this solution achieved an amazing result:
To test this system, we set it an ambitious goal: to build a web browser from scratch. These agents ran for nearly a week and wrote over one million lines of code in 1,000 files.
They also released the source code on GitHub.
GitHub project: https://github.com/wilsonzlin/fastrender
This is strange. So, did the agents successfully complete the task?
If you're not easily swayed by their words, you'll notice the ambiguity:
They claim that "despite the large codebase, new agents can still understand it and make meaningful progress" and that "hundreds of workers run concurrently and push to the same branch with few conflicts," but they never actually say whether this attempt was successful.
Can it really run? Can you run this browser yourself? We don't know, and they've never clearly stated it.
The so-called demonstration is just an 8-second "video":
Below it, they wrote:
Although this looks like a simple screenshot, building a browser from scratch is extremely difficult.
All in all, from start to finish, they never clearly state that this browser is runnable and working properly!
Take a Look: It's All Error Messages, Can't Even Run
In short, if you only look at the README, demo screenshots, or even a few promotional descriptions, this project seems really amazing.
However, as soon as you clone the repository and run 'cargo build' or 'cargo check', the problems will immediately surface.
error: could not compile `fastrender` (lib) due to 34 previous errors; 94 warnings emitted
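For anyone who wants to reproduce this check, here is a minimal sketch; it assumes git and a stable Rust toolchain are installed, and the exact error and warning counts will vary with whichever commit happens to be at the tip when you clone.

# Clone the repository Cursor linked and try to build it.
git clone https://github.com/wilsonzlin/fastrender
cd fastrender
cargo check   # type-checks the crate without producing binaries
cargo build   # full build; both commands fail with dozens of errors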
It can be said that this codebase is far from being a "working browser." In fact, it has never been successfully built!
The article author found several pieces of evidence.
First, all recent runs of GitHub Actions on the main branch have failed, including errors in the workflow file itself.
Additionally, if you try to build it yourself, you'll hit dozens of compiler errors, and recent PRs have been merged even though CI was failing.
Even more damning, if you walk the Git history back 100 commits from the latest one, you won't find a single commit that compiles cleanly.
That is to say, this repository has never been in a "runnable" state since its inception.
https://gist.github.com/embedding-shapes/f5d096dd10be44ff82b6e5ccdaf00b29
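You don't have to take the author's word for any of this. A rough sketch of how to verify it yourself is below; it assumes the repository is already cloned, the default branch is main, and the GitHub CLI (gh) is installed for the CI check. Walking 100 commits takes a while.

# Check the recent CI runs on the default branch (requires the GitHub CLI).
gh run list --repo wilsonzlin/fastrender --branch main --limit 10

# Walk the last 100 commits and report any that type-check cleanly.
for commit in $(git rev-list -n 100 HEAD); do
    if git checkout -q "$commit" && cargo check -q 2>/dev/null; then
        echo "compiles cleanly: $commit"
    fi
done
git checkout -q main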
Right now, we have no idea what the agents Cursor's researchers set loose on this codebase actually did, but it seems they never ran 'cargo build', let alone 'cargo check'.
Either command reports dozens of errors and roughly a hundred warnings, and if you actually start fixing them, the error count only balloons further.
Currently, there is an unresolved GitHub issue about this in their repository.
Issue address: https://github.com/wilsonzlin/fastrender/issues/98
The conclusion is very clear:
This is not real engineering code but typical "AI Slop."
This pile of low-quality code may imitate functionality on the surface, but there is no coherent engineering intent behind it, and it can't even pass the most basic compilation.
In their write-up, Cursor talked at length about grand plans for the future but said not a word about how to run it, what results to expect, or how it actually works.
Moreover, beyond dropping a link to the code repository, Cursor provided no reproducible demo and no known-good version (tag/release/commit) against which to verify those shiny screenshots.
Regardless of their original intention, Cursor's blog post tried to create the illusion of a "functioning prototype" while omitting the most basic form of honesty in the engineering community: reproducibility.
True, they never explicitly claimed that "it runs properly," which lets them technically dodge the accusation of lying, but it is deeply misleading.
So far, the only thing they've proven is:
AI agents can output millions of tokens like crazy, but the code they ultimately generate is still a pile of useless junk that can't run.
A "browser experiment" doesn't need to be on par with Chrome, but it should at least meet a reasonable minimum standard:
Compile successfully on a supported toolchain and render a simple HTML file.
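Concretely, that bar is low. The sketch below shows what passing it would look like; note that the command-line interface in the last step is an assumption made purely for illustration, since the repository does not document how to run anything.

# A trivial page that any "browser experiment" should be able to render.
cat > hello.html <<'EOF'
<!doctype html>
<html><body><h1>Hello, browser</h1></body></html>
EOF

cargo build --release                # bar 1: compile on a supported toolchain
cargo run --release -- hello.html    # bar 2: render a trivial HTML file (CLI assumed)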
Unfortunately, Cursor's article and public build didn't meet this passing grade.
GitHub Gets Bombarded, Developers Are Furious
This act of packaging a "semi-finished product" as a "milestone" has completely enraged the developer community.
In the GitHub Issue section, angry messages flooded the screen:
I also tried it. It simply won't run.
The code logic is completely off. How can you merge PRs when the CI is all red? Are we just supposed to worship the screenshots?
Since the functionality is fake, what's the point of open-sourcing this repository? To prove that AI can create digital junk?
Others cut straight to the heart of this "bubble project":
Anyway, investors don't understand code and may not even know what GitHub is.
As long as the code is written automatically by a machine, the performance curve will soar. Keep the machines running, and the gold will pour in...
Over on Hacker News, a thread of nearly 200 comments thoroughly exposed the true nature of this project.
Netizen pavlov pointed out that the so-called "from scratch" and "custom JS virtual machine" claims are pure nonsense.
Just take a look at the dependency list (html5ever, cssparser, rquickjs), and you'll see the browser is essentially stitched together from existing components, including parsing crates from Mozilla's Servo engine.
Netizen brabel was even more frustrated:
Do these people really think claiming to "build from scratch" is a good move?
The first thing a programmer does is check the dependencies, and they can immediately tell it's just using existing packages.
The only explanation is that they gambled that no one would seriously verify it, since most people just look at the title and cheer.
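The dependency check brabel describes really does take only seconds. A rough sketch, assuming the repository has already been cloned:

# List the project's direct dependencies straight from the manifest...
grep -A 30 '^\[dependencies\]' Cargo.toml
# ...or let cargo resolve the full set (cargo tree ships with recent toolchains).
cargo tree --depth 1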
Has Anthropic's Dominance Pushed Cursor Into a Corner?
Although Cursor never directly said "this is ready for production," they used grand narratives like "build from scratch" and "meaningful progress," along with carefully selected screenshots, to create the illusion of a "functioning prototype."