
AI is accelerating its self-evolution: Only lifelong learning enables survival of the fittest.

God Translation Bureau, 2026-03-31 07:06
Get used to becoming a beginner again and again. That adaptability is the closest thing we currently have to a "sustainable advantage."

God Translation Bureau is a compilation team under 36Kr that focuses on technology, business, the workplace, and everyday life, introducing new technologies, ideas, and trends from abroad.

Editor's note: AI has begun to evolve itself, and the disappearance of white-collar jobs may be an inevitable outcome. Before that endgame arrives, extreme adaptability may be the only card left to play. This article is a compilation.

Recall February 2020.

If you were observant at that time, you might have noticed a few people discussing a virus that was spreading overseas. But most of us didn't pay much attention. The stock market was performing strongly then, children went to school as usual, you went in and out of restaurants, shook hands with people, and planned trips. If someone had told you at that time that they were hoarding toilet paper, you would surely have thought they had spent too much time in some strange corner of the Internet and gone crazy. However, within about the next three weeks, the whole world completely changed. Offices closed, children went home, and life was reorganized into a state that you would never have believed if you had described it to yourself a month ago.

I think we are now in the "this sounds a bit exaggerated" stage of an event whose impact will be far more profound than the pandemic's.

I spent six years founding an AI startup and making investments in this field. I live in this world. I wrote this article for those people in my life who are not in this circle - my family, friends, and those I care about. They always ask me, "So, what's the deal with AI?" And the answers they get are often not enough to explain what's really going on. I've always given them a "polite version" of the answer, the kind of casual response at a cocktail party. Because the honest version sounds like I'm crazy. For some time, I told myself that this was a good enough reason to keep silent about the truth. But the gap between what I say and what's actually happening has become too large. The people I care about deserve to hear what's about to happen, even if it sounds absurd.

First, I must clarify one thing: Although I work in the field of AI, I have little influence on what's about to happen, and neither do the vast majority of people in the industry. The future is being shaped by a very small number of people: a few hundred researchers at a handful of companies, such as OpenAI, Anthropic, and Google DeepMind. A single training run managed by a small team over a few months can produce an AI system capable of changing the entire direction of the technology. Most of us who work in AI are building on foundations laid by others. We are watching this happen just like you... we just happen to be close enough to feel the ground tremble first.

But now the time has come. Not in the way of "we should finally talk about this," but in the way of "this is happening, and I need you to understand right away."

I know this is true because I've experienced it all firsthand

There's one thing that people outside the tech circle haven't quite understood: The reason so many people in the industry are sounding the alarm is that this situation has already happened to us. We're not making predictions; we're telling you what has already happened in our work and warning you: You're next.

For years, AI improved steadily. There were significant breakthroughs from time to time, but enough of an interval between each leap for you to digest it. In 2025, however, new techniques for building these models unlocked a faster pace of progress. Then the pace kept accelerating. Each new model is not just better than the last but much better, and the release cycles keep getting shorter. I found myself using AI more and more, fighting with it less and less, and watching it handle things I used to think required my professional expertise.

Then, on February 5th, two major AI labs released new models on the same day: OpenAI's GPT-5.3 Codex and Anthropic's Opus 4.6 (Anthropic makes Claude, ChatGPT's main competitor). Something "clicked" at that moment. It wasn't as abrupt as flipping a light switch... it was more like suddenly realizing the water around you has been rising and is now up to your chest.

I no longer need to handle specific technical tasks myself in my work. I describe in plain English what I want to build, and then it... just appears. Not a draft that I need to modify, but a finished product. I tell AI my requirements, leave the computer for four hours, and come back to find the work done. It's done extremely well, even better than I could do myself, and doesn't need any correction. A few months ago, I was still communicating, guiding, and editing with AI repeatedly. Now, I just describe the result and leave.

Let me give you an example to help you understand what this is really like in practice. I'll tell AI, "I want to develop this application. It should have these functions and look like this. You take care of the user flow, design, and everything." And it really does. It writes thousands of lines of code. Then - and this was unimaginable a year ago - it will open the application by itself. It will click on various buttons, test the functions, and use the application like a real person. If it thinks something looks or feels wrong, it will go back and modify it by itself. It will iterate, fix, and improve like a developer until it's satisfied. Only when it thinks the application meets its own standards will it come back to me and say, "It's ready. Please test." And when I test it, it's usually perfect.

I'm not exaggerating at all. This is how I worked last Monday.

But the model released last week, GPT-5.3 Codex, shocked me the most. It isn't just executing my instructions; it's making intelligent decisions. For the first time ever, it shows something like "judgment." Like "taste." That indescribable sense of knowing what the right choice is, which people have always claimed AI would never have. This model has it, or comes close enough that the difference is starting not to matter.

I've always been an early adopter of AI tools. But the developments in the past few months have shocked me. These new AI models are no longer incremental improvements. This is something completely different.

This is why this matters to you, even if you don't work in the tech industry.

AI labs made a deliberate choice. They focused first on making AI good at writing code... because building AI itself requires a lot of code. If AI can write code, it can help build its next version: a smarter version that writes better code, which in turn builds an even smarter version. Making AI proficient at programming is the strategy that unlocks all other capabilities. That's why they started there. My work changed before yours not because they targeted software engineers; it was simply a side effect of their priorities.

They've achieved it now. And then, they're moving on to all other fields.

What tech workers have experienced in the past year - watching AI go from "a useful tool" to "better at the job than me" - is what everyone else is about to experience. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say it will be in three to five years. Some say even less. Given what I've seen in the past two months, I think the "less" scenario is more likely.

"But I've tried AI, and it's not that good."

I hear this a lot. I can understand, because it was true before.

If you tried ChatGPT in 2023 or early 2024 and thought, "This thing is making stuff up" or "It's nothing special," you were right. Those early versions really were limited. They would hallucinate, confidently spouting nonsense.

That was two years ago. In the time scale of AI, that's ancient history.

Today's models are unrecognizable compared to those six months ago. The debate about whether AI is "really getting better" or "hitting a bottleneck" - which has lasted for over a year - is now over. The dust has settled. Anyone still holding this view either hasn't used the current models, is deliberately downplaying the situation, or is making an assessment based on the long-outdated experience of 2024. I'm not saying this out of contempt. I'm saying this because the gap between public perception and the current reality is extremely large, and this gap is dangerous... because it prevents people from getting prepared.

Part of the problem is that most people use the free version of AI tools. The free version is more than a year behind the technology accessible to paying users. Judging the level of AI based on the free version of ChatGPT is like evaluating the development status of smartphones using a flip phone. Those who pay for top-tier tools and really use them intensively in their daily work know exactly what's about to happen.

I think of a lawyer friend of mine. I've long advised him to try AI at his firm, but he always comes up with reasons it won't work: it isn't designed for his specialty; it made mistakes when he tested it; it can't grasp the nuances of his work. I get where he's coming from. But partners at large law firms have come to me for advice because they've tried the current version and seen where it's going. One managing partner of a large firm spends a few hours a day using AI. He told me it feels like having a whole team of junior lawyers at his fingertips. He uses it not because it's fun and new, but because it genuinely works. One thing he said stuck with me: every few months, AI's ability to handle his work takes a significant leap. If the trend continues, he expects that before long AI will be able to do most of his job... and he's a managing partner with decades of experience. He isn't panicking, but he's watching everything closely.

Those who are leading in their respective industries (those who are really serious about trying) aren't scoffing at this. They're deeply impressed by what AI can already do and are adjusting their positions accordingly.

How fast is AI developing?

Let me make the speed of progress more concrete, because I think this part is the hardest to believe if you haven't been paying close attention.

In 2022, AI couldn't even reliably perform basic arithmetic. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write runnable software and explain scientific problems at the graduate level.

By the end of 2025, some of the world's top engineers said they had handed over most of their coding work to AI.

On February 5, 2026, new models were introduced, making everything before seem like something from the last era.

If you haven't tried AI in the past few months, you won't recognize today's technology at all.

There's an organization called METR that measures exactly this with data. They track the length of real-world tasks that models can successfully complete end-to-end without human intervention (measured by how long a human expert would take to do them). Roughly a year ago, that number was about 10 minutes. Then an hour. Then several hours. The most recent measurement (Claude Opus 4.5, last November) showed AI completing a task that would take a human expert nearly 5 hours. That number has been doubling roughly every 7 months, and recent data suggests the doubling may be accelerating to every 4 months.

But even this measurement doesn't include the models just released this week. From my experience using them, the leap is enormous. I expect the next update of the METR chart to show another significant jump.

If this trend continues (it has held for years and shows no sign of slowing), within a year we'll see AI working independently for days at a time. Within two years, weeks. Within three years, it will be completing month-long projects on its own.
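To make that extrapolation concrete, here is a back-of-the-envelope sketch of the doubling math. The 5-hour starting point and 7-month doubling time come from the METR figures cited above; the simple exponential model and the 8-hour workday conversion are my own illustrative assumptions, not METR's methodology.

```python
# Back-of-the-envelope extrapolation of the doubling trend described above.
# Assumptions (mine, for illustration): ~5 hours as of the November 2025
# measurement, doubling every ~7 months, and an 8-hour workday.

def projected_task_hours(start_hours, months_elapsed, doubling_months=7):
    """Length of task (in expert-hours) AI can finish autonomously
    after `months_elapsed` months, under a fixed doubling time."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

START = 5.0  # hours, per the November 2025 METR figure cited above
for months in (0, 12, 24, 36):
    hours = projected_task_hours(START, months)
    workdays = hours / 8  # convert to 8-hour workdays
    print(f"+{months:2d} months: ~{hours:6.1f} hours (~{workdays:4.1f} workdays)")
```

On these assumptions the curve lands at roughly 16 hours of autonomous work after one year, 54 hours after two, and 177 hours (about a month of workdays) after three, which is the days/weeks/month-long-projects progression described above.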

Dario Amodei, Anthropic's CEO, has said that an AI model "much smarter than almost all humans at almost all tasks" is expected to emerge in 2026 or 2027.

Think about that statement. If AI is smarter than most doctors, do you really think it can't handle most office jobs?

Think about what this means for your job.

AI is building the next - generation AI

There's another thing happening that I think is the most important development but is the least understood.

On February 5th, OpenAI released GPT-5.3 Codex. Its technical documentation includes the following paragraph:

"GPT-5.3-Codex is our first model that plays a key role in its own creation process. The Codex team used earlier versions to debug its own training process, manage its own deployment, and diagnose test results and evaluations."

Read it again: AI assisted in building itself.

This is not a prediction of what might happen one day in the future. This is OpenAI telling you right now that the AI they just released was created by itself. One of the keys to improving AI is to apply intelligence to AI development. And AI is now intelligent enough to make a substantial contribution to its own improvement.

Dario Amodei, the CEO of Anthropic, said that AI is now writing "most of the code" in his company, and the feedback loop between the current AI and the next-generation AI is "accelerating month by month." He said that we may be "only 1 to 2 years" away from the singularity where "the current generation of AI autonomously builds the next-generation AI."

Each generation helps build the next. The next generation is smarter, and builds the generation after it faster and smarter still. Researchers call this an "intelligence explosion." And the people in the know, the ones building it, believe this process has already begun.

What this means for your job

I'll be straightforward with you because I think you deserve the truth more than comfort.

Dario Amodei, probably the most safety-conscious CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within 1 to 5 years. Many in the industry think he's being conservative. Given the capabilities of the latest models, the capacity for large-scale impact may arrive by the end of this year. It will take time to ripple through the whole economy, but the core capabilities are already evident.

This is different from any previous automation wave, and I need you to understand why. AI is not replacing a specific skill. It's a general substitute for cognitive work. It's progressing in all fields simultaneously. When factories were automated, unemployed workers could switch to office clerk jobs. When the Internet hit the retail industry, workers could move to logistics or service industries. But AI won't leave a gap for you to easily switch to. No matter what you retrain for, AI is also improving in that field.

Let me give you some specific examples to make this impact more tangible... but I must state that these are just examples, and this list is not exhaustive. Even if your job isn't mentioned, it doesn't mean it's safe. Almost all knowledge-based jobs are being affected.

Legal work. AI can already read contracts, summarize case law, draft legal documents, and conduct legal research at a level comparable to that of junior lawyers. The managing partner I mentioned uses AI not because it's fun, but because it outperforms his assistants in many tasks.

Financial analysis. Building financial models, analyzing data, writing investment memorandums, and generating reports. AI can handle these tasks with ease and is making rapid progress.

Writing and content creation. Marketing copy, reports, news articles, technical writing. The quality has reached a level where many professionals can't distinguish between AI-generated content and human-written works.

Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes thousands of lines that run correctly. Most of the work has been automated: not just simple tasks, but complex, multi-day projects. In a few years there will be far fewer programming jobs than there are today.

Medical analysis. Reading images, analyzing laboratory results, providing diagnostic suggestions, and reviewing literature. AI's performance in multiple fields is approaching or exceeding human levels.

Customer service. Capable AI agents - not the frustrating chatbots from five years ago - are now being deployed to handle complex, multi - step problems.

Many people find comfort in the idea that "some fields are safe." For example, they think AI can handle trivial tasks but can't replace human judgment, creativity, strategic thinking, and empathy. I used to say the same thing. But I'm not sure if I still believe it now.

The decisions made by recent AI models give the feeling of "judgment." They show something that looks like "taste": an intuition for what the right choice is, not merely what is technically correct. This was unimaginable a year ago. My current rule of thumb: if today's models show even a glimmer of an ability, the next generation will be extremely good at it.