
70 million views! A viral article in Silicon Valley: The AI singularity has arrived, and AI will evolve independently, leaving humanity behind.

新智元 (Xinzhiyuan), 2026-02-13 08:18
As the old order loosens, AI has handed everyone the same lever. The gap between creators and onlookers will widen at an unprecedented pace.

Just this month, AI has made a qualitative leap: it can now independently complete complex tasks that used to take human experts hours. AI has begun to participate in building the next-generation AI, and the cycle of recursive self-improvement has been set in motion. An intelligence explosion may occur within one or two years. Almost all cognitive work done in front of a screen will be affected, yet most people's understanding of this remains stuck at the level of two years ago. That information gap is more dangerous than the technological progress itself.

A roughly 5,000-word article on X (about 3,500 Chinese characters in translation) has been read by nearly 70 million people worldwide within 24 hours, and the view count is still climbing at a visible pace. What it says concerns every day of your future.

https://x.com/mattshumer_/status/2021256989876109403

On February 11, 2026, Brian Norgard, a serial entrepreneur in Silicon Valley, wrote on X: "Almost all the smart people I know who work in the tech industry are extremely anxious. It's as if everything is about to collapse completely."

https://x.com/BrianNorgard/status/2021409597517619353

On the same day, Jimmy Ba, a co-founder of xAI, announced his departure. His farewell post read less like a goodbye than a final testament: "The recursive self-improvement cycle is likely to be launched within the next 12 months. The year 2026 will be a crazy year, probably the busiest and most decisive year for the future of our species."

https://x.com/jimmybajimmyba/status/2021374875793801447

Also on the same day, an article titled "Something Big Is Happening" spread explosively across X, was reprinted in full by Fortune magazine, and was covered by mainstream business media such as Business Insider.

https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/

https://www.businessinsider.com/matt-shumer-something-big-is-happening-essay-ai-disruption-2026-2

The person who wrote this article is Matt Shumer, the CEO of the AI company HyperWrite, who has been in this industry for six years.

https://www.linkedin.com/in/mattshumer/

He said the reason he wrote this article was extremely simple: he had lied to the people around him for too long.

Every time his family and friends asked him "what's really going on with AI", he could only tell a watered-down version, a version fit for a cocktail party.

Because if he told the truth, people would think he was crazy.

But now, the gap between the reality he sees and the story he tells others has become so large that he can no longer pretend.

"Even if it sounds absurd, the people I care about deserve to know what's about to happen."

The water is up to our chests

Shumer started with an analogy.

Think back to February 2020. The stock market was doing well, kids were in school, you went in and out of restaurants, shook hands with people, and planned trips.

If someone had told you he was stockpiling toilet paper, you'd have thought he'd spent too much time online.

Three weeks later, the whole world had changed completely.

"We're now at that 'this is so exaggerated' stage. It's just that this time, the scale of the upcoming changes far exceeds the last time."

Does this sound like another Silicon Valley entrepreneur creating anxiety? Maybe.

But what sets this article apart from countless similar pieces is a chilling confession:

"We're not talking about a prophecy. This has already happened to us."

Shumer described his actual working routine now: he tells the AI in plain English what he wants to build, then walks away from the computer for four hours. When he comes back, the work is done.

It's not a rough draft that needs patching up, but a finished product: better than what he could have done himself, needing no modification.

He'll say to the AI: "I want to develop this app. It should have these functions and look roughly like this. You take care of everything."

Then the AI writes tens of thousands of lines of code, opens the app by itself, clicks the buttons, tests the functions, and goes through all the processes like a real user. If the experience is not good somewhere, it goes back to modify, iterates, corrects, and improves repeatedly until it's satisfied. Then it comes back and says: "It's ready. You can test it."

"This was my work state last Monday. A few months ago, I was still constantly deliberating, guiding, and modifying with the AI. Now I just need to describe the result and leave."
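The loop Shumer describes (write, self-test as a user would, revise until clean) can be sketched as a toy simulation. Everything below, including the `agentic_build` function and its issue counter, is a hypothetical illustration of the pattern, not any real agent API:

```python
# Toy simulation of the describe-and-walk-away loop: the "AI" builds the app,
# tests it like a real user, and revises until its own testing finds no issues.
# agentic_build and the issue counter are illustrative stand-ins, not a real API.

def agentic_build(spec: str, initial_issues: int = 3, max_iterations: int = 10) -> dict:
    """Iterate build -> self-test -> revise until self-testing finds no issues."""
    issues = initial_issues      # stand-in for defects the agent's own testing finds
    revisions = 0
    while issues > 0 and revisions < max_iterations:
        revisions += 1           # one revise-and-retest pass
        issues -= 1              # assume each pass fixes one remaining issue
    return {"spec": spec, "revisions": revisions, "done": issues == 0}

result = agentic_build("an app with these functions, looking roughly like this")
print(result["done"], result["revisions"])  # the toy loop converges: True 3
```

The point of the sketch is the control flow, not the stubs: the human supplies only the spec, and termination is decided by the agent's own testing, not by human review of each draft.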

What makes this description land like an earthquake is that it comes from someone who actually uses these tools to ship products every day.

And he's just recording his personal experience.

February 5, 2026: a watershed

What completely changed Shumer's attitude was February 5, 2026.

On this day, two major AI labs released new models simultaneously: OpenAI's GPT-5.3-Codex and Anthropic's Claude Opus 4.6.

Shumer described that moment: "Something clicked. It's not like turning on a light switch and instantly getting bright. It's more like you suddenly realize that the water level has been rising, and now it's up to your chest."

What shocked him the most was GPT-5.3-Codex.

It no longer just executes instructions; it makes decisions, and decisions with taste.

Shumer used two words to describe the feeling: "judgment" and "taste".

The intuition for knowing what the right choice is, something people once asserted AI would never have.

This fortress has collapsed.

Ethan Mollick, a professor at the Wharton School, wrote when retweeting this article on Twitter:

This viral article is worth reading. I agree that AI is a very big deal, and most people don't know how fast it's progressing.

One caveat: AI is still "jagged", especially in cross-team and organizational collaboration, which creates a bottleneck. But this is temporary.

https://x.com/emollick/status/2021627729637158922

Mollick's stance is thought-provoking: he affirms the core judgment of Shumer's article and adds a prudent caveat, but that "temporary" is itself disturbing.

The bottleneck is temporary, and there's no sign of the progress trend line flattening.

Set this against the pace of progress, and the impact becomes even more obvious.

  • In 2022, AI couldn't even do multiplication correctly and would seriously tell you that 7×8 = 54;
  • In 2023, it could pass the bar exam;
  • In 2024, it could write runnable software and explain scientific knowledge at the postgraduate level;
  • By the end of 2025, some top engineers around the world said they had handed over most of their programming work to AI;
  • On February 5, 2026, the release of the new models made everything before seem like a bygone era.

An organization called METR specializes in measuring AI's capacity for autonomous work with data.

The metric they track is how long a task an AI can complete independently, without human help, measured by the time it takes a human expert to finish the same task.

The answer was ten minutes a year ago. Then it was an hour. Later, it was several hours.

The latest measurement (in November 2025, with Claude Opus 4.5) shows that AI can complete tasks that take a human expert nearly 5 hours to finish.

This number approximately doubles every seven months, and recent data suggests it may be accelerating to double every four months.

If the trend continues, within a year, we'll see AI that can work independently for several days.

Within two years, for several weeks. Within three years, it can independently complete a month - long project.
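The extrapolation above reduces to one formula: horizon(t) = start × 2^(t / doubling period). A minimal sketch using the article's figures (a 5-hour horizon as the starting point, doubling every 7 months per the long-run trend or every 4 months if the recent acceleration holds; the 8-hour-workday conversion is my own assumption):

```python
# Extrapolate the METR task-horizon trend described above: start at a 5-hour
# autonomous-task horizon and double every `doubling_months` months.

def task_horizon_hours(months_ahead: float,
                       start_hours: float = 5.0,
                       doubling_months: float = 7.0) -> float:
    """Projected horizon in hours, assuming steady exponential doubling."""
    return start_hours * 2 ** (months_ahead / doubling_months)

for months in (12, 24, 36):
    low = task_horizon_hours(months, doubling_months=7.0)   # long-run trend
    high = task_horizon_hours(months, doubling_months=4.0)  # accelerated trend
    # Divide by an 8-hour workday to express the horizon in days of work.
    print(f"+{months} months: {low / 8:.0f} to {high / 8:.0f} workdays")
```

At the 7-month rate this yields roughly multi-day horizons within a year and multi-week horizons within two, matching the article's projection; the 4-month rate runs well ahead of it.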

Dario Amodei, the CEO of Anthropic, has publicly stated that in 2026 or 2027, AI models will be "much smarter than almost all humans in almost all tasks".

Shumer put it bluntly: if AI is smarter than most doctors, do you really think it can't do most office jobs?

AI is building the next-generation AI by itself

The most chilling part of all the discussions concerns "recursive self-improvement".

On February 5, when OpenAI released GPT-5.3-Codex, it included the following paragraph in the technical documentation:

GPT-5.3-Codex is our first model that played a key role in its own creation process.

The Codex team used its early version to debug its own training process, manage its own deployment, and diagnose test results and evaluations.

Our team was shocked that Codex could accelerate its own development process so significantly.

https://openai.com/zh-Hans-CN/index/introducing-gpt-5-3-codex/

AI helped build itself.

This is a fact written in black and white in OpenAI's release document.

(Image: the famous sculpture "Man Carving His Own Destiny")

Dario Amodei's remarks dovetail with this.

He said that "most of the code" at Anthropic is now written by AI, and that the feedback loop between the current-generation AI and the next-generation AI is "gaining strength month by month".

Extended reading: Claude wrote Claude by itself! It finished two-month work in 2