
A tweet by Karpathy triggers a magnitude-9 earthquake among developers

PingWest (品玩) Global · 2026-01-14 07:42
Why did a tweet from Karpathy plunge the entire developer community into anxiety?

On December 27, 2025, Andrej Karpathy posted a long tweet on Twitter.

This was not an ordinary technical post but a public self-examination. The former head of Tesla's Autopilot, a member of OpenAI's founding team, and a technical idol to countless developers admitted that he had "never felt so far behind."

More importantly, he said he could become "10 times more powerful" if only he could correctly string together the tools that had emerged over the past year. Failing to do so, he wrote, "felt like a skill issue."

This tweet quickly caused a stir in the tech circle. It was retweeted over 10,000 times and liked tens of thousands of times. Because it hit on a reality that all developers could feel but few could clearly express:

The profession of software engineering is being completely reshaped by a "magnitude 9 earthquake."

Two weeks later, well-known tech YouTuber Theo (founder of t3.gg and CEO of Ping Labs) responded to the tweet with a video whose title was brutally direct: "You're falling behind. It's time to catch up."

Theo's core point was clear: Karpathy's feeling was not an isolated case but a collective transformation that the entire industry was going through. Those who were still on the sidelines were already "officially late."

This article fully translates Theo's video and, combined with the core insights of Karpathy's tweet, breaks down the ongoing revolution and how to avoid being left behind in this transformation.

Original link: https://www.youtube.com/watch?v=Z9UxjmNF7b0&t=152s

Permanent Inflection Point: The Rules of the Game Have Changed

Theo's core assertion was concise and powerful: The field of software engineering has reached a permanent inflection point.

This was not just another technology iteration, not a shift on the order of jQuery to React. It was something more fundamental: the profession of the developer itself is being redefined.

He used an apt metaphor: this is a "magnitude-9 earthquake." Not an aftershock, not a tremor, but a major quake that reshapes the landscape.

AI is no longer an "assistant" but a "partner."

Over the past few years, we have watched AI programming tools appear one after another: GitHub Copilot, Tabnine, Codeium... But in Theo's view, these tools were essentially "intelligent code completion": they could help you finish a line of code, but you were still the one doing the programming.

But now, things are different.

Theo revealed a shocking statistic: In his own work and in several teams he managed and advised, 70% to 90% of the code is now generated by AI.

Not generated with assistance, not generated for reference, but directly generated.

Let's compare the timelines:

2023: AI could help you write functions, and you needed to check and modify them.

2024: AI could help you write modules, and you needed to integrate and debug them.

2026: AI can write entire features, and you need to review and optimize them.

Where is the end of this trend? Theo believed that there might not be an end at all, only continuous acceleration.

The window period for "waiting and seeing" has closed.

Theo quoted an interesting saying: "It's better to be late than too early... but we've passed that point."

From 2023 to 2024, a wait-and-see attitude was reasonable. The tools were immature, costs were high, and reliability was questionable. Many developers said, in effect, "Let's give it some time and see whether this thing really works."

But by 2026, this attitude had become a burden.

The capabilities of the base models have reached the production level, the inference cost is halved every 8 weeks, and the tool ecosystem has matured to the point where it can be used directly. Tools like Cursor, Claude Code, and Windsurf are no longer "experimental products" but productivity tools.

Theo's judgment was straightforward: Those who start to adapt to AI now are already "officially late." If they wait any longer, it's not just about being late; they'll miss the whole game.

Your role is being "refactored."

The traditional development process is linear: requirements → design → coding → testing → deployment. The core value of developers lies in the "coding" part - how quickly and accurately you can translate logic into code.

But now, this process is being deconstructed and reorganized.

Theo used a programming term to describe this change: “The role of the programmer is being dramatically refactored.”

What is the refactored role? No longer a "craftsman who writes code by hand" but a "conductor who orchestrates AI Agents."

What you need to master is no longer syntax details, algorithm implementation, or framework features, but:

Agents: How to design and use AI agents

Sub-agents: How to break down tasks for different AIs

Contexts: How to provide appropriate information to AI

Memory: How to make AI remember the project's history and decisions

Workflows: How to orchestrate the collaboration process of AI

MCP, LSP: New protocols and interface standards

This is a brand-new programmable abstraction layer. Just as the move from assembly to high-level languages was a leap in abstraction, we are now making another leap, from writing code by hand to orchestrating AI.
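To make the abstraction concrete, here is a minimal sketch of what "orchestrating sub-agents with shared context" can look like. Everything here is hypothetical: the `Agent` class, the role names, and the task decomposition are illustrative, not the API of any real framework, and the `run` method stands in for a model call.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A hypothetical AI agent: a role plus the context it is given."""
    role: str
    context: dict = field(default_factory=dict)

    def run(self, task: str) -> str:
        # In a real system this would call a model API; here we just
        # describe what the agent would be asked to do.
        return f"[{self.role}] handles: {task}"

def orchestrate(feature: str) -> list[str]:
    """Decompose one feature into subtasks and dispatch sub-agents."""
    shared_context = {"project": "demo", "style_guide": "agent.md"}
    subtasks = {
        "planner": f"break '{feature}' into steps",
        "coder": f"implement '{feature}'",
        "reviewer": f"review the diff for '{feature}'",
    }
    return [Agent(role, shared_context).run(task)
            for role, task in subtasks.items()]

results = orchestrate("export-to-CSV button")
for line in results:
    print(line)
```

The point of the sketch is the shape, not the code: the human specifies the feature and the decomposition strategy; the agents do the work inside a shared context.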

Evidence from the Real World: Ramp's Inspect Bot

However much theory is discussed, a real-world case is more convincing. In the video, Theo spotlighted an internal tool built at Ramp (a fintech unicorn): Inspect Bot.

The workflow of this tool is so simple that it's almost "terrifying":

Automatic monitoring: Connect to Sentry (an error-monitoring platform) and scan production errors in real time.

Intelligent filtering: Automatically identify the top 20 most common errors.

Automatic repair: Start a "child session" for each error, that is, an independent AI Agent.

Code submission: The AI independently writes the repair code and submits a Pull Request.

Manual review: Engineers only need to review the PR and decide whether to merge it.

Throughout the entire process, humans only appear in the last step.
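The five steps above can be sketched as a small loop. This is an illustrative reconstruction, not Ramp's actual code: `fetch_production_errors` and `spawn_fix_agent` are stubs standing in for real Sentry API queries and real AI sessions that open Pull Requests.

```python
# A minimal sketch of an Inspect-Bot-style loop (hypothetical, stubbed).
from collections import Counter

def fetch_production_errors() -> list[str]:
    # Stub: a real implementation would query Sentry's API.
    return ["NullPointer in checkout", "Timeout in search",
            "NullPointer in checkout", "NullPointer in checkout"]

def spawn_fix_agent(error: str) -> str:
    # Stub: a real implementation would start an independent AI "child
    # session" that writes a fix and opens a Pull Request.
    return f"PR opened for: {error}"

def inspect_bot(top_n: int = 20) -> list[str]:
    errors = fetch_production_errors()
    most_common = [err for err, _ in Counter(errors).most_common(top_n)]
    # One child session per distinct error; humans only review the PRs.
    return [spawn_fix_agent(err) for err in most_common]

for pr in inspect_bot():
    print(pr)
```

Note where the human sits: entirely outside the loop, reviewing the PRs it emits.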

Let's compare it with the traditional bug-fixing process:

Traditional: Discover a bug → Assign it to an engineer → Troubleshoot the problem → Write the repair → Test → Submit → Review → Deploy (taking hours to days)

AI process: Discover a bug → AI automatically repairs it → Manual review → Deploy (taking minutes)

The role of engineers has shifted from "bug-fixers" to "reviewers of repair plans."

agent.md: The "Bible" between you and AI

Rahul, Ramp's vice president of engineering, and engineer Nicolas Bevacqua also shared another key practice: maintain an agent.md or claude.md file.

The core idea of this strategy is simple: Whenever you need to manually modify the code generated by AI, don't just make the change and be done with it. Instead:

Record the reason for this modification.

Extract it into a general rule.

Update it in the agent.md file.

Make the AI automatically follow this rule in the future.
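To show what the loop above produces, here is what a fragment of such a file might look like. The rules and dates are invented for illustration; a real agent.md accumulates whatever conventions that particular team keeps having to enforce by hand.

```markdown
# agent.md — project rules for AI agents (illustrative example)

## Code style
- Use the repository's shared logger in production code, never ad-hoc
  print statements.
- All database access goes through the repository layer; never query
  tables directly from handlers.

## Lessons from manual edits
- 2026-01-10: AI-generated retries lacked backoff — always use
  exponential backoff with jitter for external API calls.
- 2026-01-12: Generated tests mocked the clock inconsistently — use the
  shared fake-timer helper in every time-dependent test.
```

Each entry under "Lessons from manual edits" is one human correction promoted into a standing rule, which is exactly the four-step loop described above.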

Rahul's team reported that they update these documents multiple times a day. As a result, the quality of the AI's output continues to improve, and the need for manual intervention is decreasing.

Theo's comment on this was on point: "Every manual edit you make is an opportunity for agent.md improvement."

This is like training an apprentice, but this apprentice learns hundreds or thousands of times faster than a human.

Custom Fine-Tuning Is Dead, Long Live Prompt Engineering

In the video, Theo announced something many people may not have realized yet: custom fine-tuning is obsolete.

This judgment may seem counterintuitive at first. For the past few years, fine-tuning has been regarded as the best way to adapt AI to specific tasks. But Theo gave three reasons:

First, the base models are evolving too fast. Fine-tuning a model takes about 8 weeks, but the base models see significant upgrades every 8 weeks. By the time your fine-tuned version finishes training, a new base model has already shipped, and it is often stronger than your fine-tuned one. It's like spending three months building a bicycle while cars are already on the market.

Second, inference costs have dropped sharply. From 2024 to 2026, inference cost has halved every 8 weeks. One of the main selling points of fine-tuning is "improving efficiency and reducing cost," but when running the base model is already cheap, that cost advantage disappears.

Third, general models are simply stronger. The latest general models, such as Claude 4.5 and GPT-4o, outperform custom fine-tuned models in most scenarios. Unless your use case is extremely specialized, a general model plus good prompts yields better results.

So, what's the new strategy? Prompt optimization + Agent Docs + Workflow orchestration. The iteration cycle of this combination is not "weekly" but "hourly." You can quickly test, adjust, and improve.

The "Skateboarder's Perspective": Rethink Every Repetitive Task

In the video, Theo used a wonderful analogy: The way skateboarders see the world.

Ordinary people see stairs and handrails and think, "These are obstacles. I need to be careful and go around them."

Skateboarders see stairs and handrails and think, "These are opportunities. I can have a good ride."

Developers in the AI era should also have this perspective shift. When you see a repetitive task, you shouldn't think, "This is annoying," but rather, "This is an opportunity for automation."

The new value of Slop Code

In the past, we would simply ignore many small tasks:

Batch rename files

Generate test data

Write one-time data migration scripts

Automate a manual operation

The reason was simple: It might take 30 minutes to write the script, but it only takes 10 minutes to do it manually. It's not worth it.

But in the AI era, this calculation has completely changed. It only takes 2 minutes for AI to write this script, and this script can be reused, improved, and shared with other team members in the future.
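As a flavor of what such a two-minute script looks like, here is a sketch of the batch-rename task from the list above. The prefixes and the dry-run default are illustrative choices, not from the video.

```python
# A "slop code" example: batch-rename files in a directory, the kind of
# ten-minute chore AI now writes in seconds.
from pathlib import Path

def batch_rename(directory: str, old_prefix: str, new_prefix: str,
                 dry_run: bool = True) -> list[tuple[str, str]]:
    """Rename every file starting with old_prefix to use new_prefix.

    Returns (old_name, new_name) pairs; with dry_run=True (the default)
    nothing on disk is touched, so you can preview the result first.
    """
    renames = []
    for path in sorted(Path(directory).iterdir()):
        if path.is_file() and path.name.startswith(old_prefix):
            target = path.with_name(new_prefix + path.name[len(old_prefix):])
            renames.append((path.name, target.name))
            if not dry_run:
                path.rename(target)  # only touch disk when dry_run is off
    return renames
```

Usage follows the same reuse logic the paragraph describes: preview with the default dry run, then call again with `dry_run=False`, and keep the script around for next time.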

Theo called this type of code "slop code," but this term is not derogatory. It refers to the code that wouldn't have been written in the past because of the low return on investment. AI has reduced the creation cost, making these "marginal projects" feasible.

He himself used AI to build a 10,000-line asset management tool just to support a small game project. In the traditional world, a tool with such a low return on investment would never have existed.

The message is clear: Don't use the "cost of writing code by hand" to evaluate whether to do something. Use the "cost of prompts" instead. This will open up countless new possibilities.

Five-Step Guide to Catching Up: From "Late" to "Caught Up"

After presenting the theory, Theo gave a very specific action plan. He broke this process down into five progressive steps.

Step 0: Immediately Integrate AI-Powered Code Review

The first step is the simplest and lowest-risk: integrate AI-driven code review tools into your codebase.

Recommended tools include Greptile and CodeRabbit. During the PR phase, these tools automatically check code quality, detect potential bugs, offer optimization suggestions, and flag security risks.

Why is it Step 0? Because this step is essentially cost-free, risk-free, and immediately effective. You don't need to change any workflow; you just add a step to your CI/CD pipeline. The result: before human review begins, AI has already filtered out 90% of the minor errors.
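As a sketch of where this sits in a pipeline, here is a hypothetical CI job. The `ai-review-cli` command is a placeholder, not a real tool; in practice, review tools like those named above are typically installed as GitHub Apps that comment on PRs without any workflow step, so consult each tool's own documentation for setup.

```yaml
# Hypothetical CI job illustrating where AI review slots into CI/CD.
name: ai-code-review
on: pull_request

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder command: a real setup would use the vendor's app
      # or action instead of a CLI invented for this sketch.
      - name: Run AI review on the diff
        run: ai-review-cli --diff origin/main...HEAD --comment-on-pr
```

The structural point is the one the text makes: the review runs on every pull request, before any human looks at the code.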

Step 1: Test the Limits of AI

The second step is to build intuition: Find a task that you spent a week on in the past and try to complete it with AI in just a few minutes.

Don't expect AI to complete it perfectly. The key is to build an intuition about the boundaries of AI's capabilities. Only by knowing where the boundaries are can you effectively orchestrate it.

Theo's advice was straightforward: “If you’re not at least a little bit uncomfortable, you are not trying hard enough.”

Step 2: Learn to Read AI's Thought Process

The third step is to understand AI: Use the "Plan Mode" to observe how AI reasons.

Most AI programming tools have this feature. The AI will first analyze the codebase structure, formulate an implementation plan, break it down into subtasks, and then execute them step by step.

The purpose of watching this process is not to learn specific skills but to understand how AI views code, organizes logic,