
HSG AI Conference 2026: The Way We Work Has Completely Changed

AI Deep Researcher · 2026-04-30 10:53
Over the past year, most people's understanding of AI has stayed at the level of "it's a faster tool." But the signal from this summit is not about efficiency gains; it is about rewriting the rules.

“I don't remember the last time I manually modified AI's output.”

At the Sequoia AI 2026 Conference, Andrej Karpathy said this calmly. A person who has written code for more than a decade no longer needs to touch up his tool's output.

What kind of rules? The rules about how to do things, who does things, and why to do things.

In the next four years, the changes may be more drastic than those in the past decade.

Section 1 | The Way of Doing Things Has Completely Changed

In that conversation, Andrej Karpathy put it this way: two or three years ago, when we wrote code, we specified step by step what the machine should do; now, in many cases, you just state the goal and the model works out the path on its own.

The underlying paradigm of software has changed.

In the past, software was a collection of explicit rules, and people were responsible for writing out every step. Later, in the machine-learning era, people stopped writing rules by hand and instead trained models on large amounts of data. Now, large language models take this a step further: you provide context and an instruction, and an all-purpose interpreter completes the whole job.
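The shift from explicit rules to goal descriptions can be sketched in a few lines. This is an illustrative contrast only, using a toy sentiment task; `call_llm` is a hypothetical stub standing in for a real model client, not an actual API:

```python
def sentiment_v1(text: str) -> str:
    """The explicit-rules era: a human enumerates every case by hand."""
    positives = {"great", "love", "excellent"}
    negatives = {"bad", "hate", "awful"}
    words = set(text.lower().split())
    if words & positives:
        return "positive"
    if words & negatives:
        return "negative"
    return "neutral"

def call_llm(prompt: str) -> str:
    """Hypothetical stub; a real system would send the prompt to a model."""
    return "positive"

def sentiment_v3(text: str) -> str:
    """The LLM era: provide context and an instruction, not a procedure."""
    prompt = f"Classify the sentiment of {text!r}. Answer in one word."
    return call_llm(prompt)
```

The first function encodes the path; the second only describes the goal and delegates the path to the model.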

On the surface, this is just faster execution. The truly disruptive part is that the middle layer has been removed.

He took one of his own projects as an example. To let users see what dishes look like by photographing a menu, he once wrote a complete application: first recognize the text, then call the image model, and finally re-format and display the results. This is the most typical software-engineering thinking: decompose a complex problem into steps, then implement them one by one.

But later he found that simply handing the photo to the latest model with a single instruction produced the result directly on the original image. No intermediate application, no long-winded pipeline.
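The contrast between the two approaches can be sketched as follows. Every function body here is a stub for illustration; none of these names (`ocr_menu`, `generate_dish_image`, `annotate_menu_photo`) correspond to a real API:

```python
def ocr_menu(photo: str) -> list[str]:
    """Old pipeline, step 1: extract dish names from the photo (stubbed)."""
    return ["grilled salmon", "mushroom risotto"]

def generate_dish_image(dish: str) -> str:
    """Old pipeline, step 2: call an image model per dish (stubbed)."""
    return f"<image of {dish}>"

def old_pipeline(photo: str) -> dict[str, str]:
    """Classic engineering: decompose the problem, implement each step."""
    return {dish: generate_dish_image(dish) for dish in ocr_menu(photo)}

def annotate_menu_photo(photo: str, instruction: str) -> str:
    """New approach: one request to a multimodal model replaces the
    whole pipeline (stubbed)."""
    return f"{photo} annotated per: {instruction!r}"

result_old = old_pipeline("menu.jpg")
result_new = annotate_menu_photo("menu.jpg", "show what each dish looks like")
```

The point is structural: the old version's value lives in the orchestration code; in the new version that orchestration collapses into one call.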

Traditional software design thinking goes like this: I have a functional requirement, so I build an implementation path. That path is the enterprise's moat. To a large extent, users pay for your product because you make the path smoother, more stable, or cheaper.

But when the model is capable enough, the path itself is no longer scarce. Users never really wanted an image-annotation application; they wanted to see what the dish looks like. Once the result can be generated directly, every product built around how to achieve it must re-examine its reason to exist.

First to be affected are tools whose core value is format conversion. Document-conversion services, data-cleaning tools, and format-adaptation layers existed to solve the problem that systems couldn't understand the original file. But when the model can understand the original input directly, these intermediate conversion steps become redundant.

Similarly, part of the core value of low-code platforms is being redefined. Their original mission was to lower the threshold of decomposing steps. But if the way of doing things shifts from decomposing steps to describing goals, such platforms must either root themselves deeper in specialized industry domains or answer a fatal question: when generation is almost free, what exactly are you helping users solve?

If your value is breaking complex things into step-by-step operations, that is becoming less and less scarce. If your value is helping users judge what the correct result is, that is becoming more important than ever.

In the past, doing things meant clarifying the steps; now, doing things means clarifying the goal.

Section 2 | The Division of Labor between Humans and Machines Is Being Reconstructed

Regarding the popularity of intelligent agents like OpenClaw, Andrej Karpathy's view is:

“Current intelligent agents are very much like interns.”

They can independently complete a large amount of work, perform stably in some aspects, and make unexpected mistakes in some details.

Karpathy believes that in the future, one core standard determines which tasks machines will take over completely: verifiability.

If a task's result can be clearly judged right or wrong, such as whether the code runs or whether the data conforms to a format, it is easy to automate. Conversely, if a task requires trade-offs among multiple possibilities with no single standard, the model struggles to complete it alone.

But this doesn't mean the set of things machines can do is fixed. The scope of verifiability keeps expanding, so machines can take on more and more work.

In the past, writing code was considered creative and unverifiable. But break it down to "implement this specific function" and it suddenly has a verification standard: do the tests pass? Likewise, interface design seems subjective, but if the verification standard becomes conformance to the brand's visual guidelines, it too becomes verifiable.
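A minimal illustration of what "verifiable" means in practice: once a task is phrased as "make these tests pass", right and wrong become mechanical to check. The `slugify` task here is an invented example, not one from the talk:

```python
import re

def slugify(title: str) -> str:
    """Lowercase the title and replace runs of non-alphanumerics with '-'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The verification standard: each assertion either passes or it doesn't.
# No human judgment is needed to decide whether the output is correct.
assert slugify("  Hello, World!  ") == "hello-world"
assert slugify("AI in 2026") == "ai-in-2026"
```

Any task that can be reduced to such a pass/fail check is, by this criterion, a candidate for automation.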

To judge whether a role will be affected, don't look at the job title. Look at what proportion of its work can be decomposed into tasks with clear verification standards.

A senior designer may spend 80% of their time on verifiable execution work and only 20% on real creative decisions. That 20% is safe; the 80% is being transferred fast.

The same logic also applies to accountants, lawyers, and customer service. A large amount of rule verification, document review, and standard question response are being automated. But the final signing responsibility, risk judgment, and complex dispute handling still need to be borne by humans.

Thus, a new division of labor is gradually becoming clear: the model is responsible for execution, and humans are responsible for setting boundaries and directions.

Humans no longer need to stare at every detail or memorize cumbersome operation steps, but they must be clear about a more core thing: how this thing should be defined and what kind of result is considered correct.

The doing part is handed over to the machine, and the defining part remains in human hands.

But how long will this division of labor last?

Section 3 | Preparing for AGI

The one who answered this question was Demis Hassabis. In a later conversation, he gave a concrete date: 2030.

This is his prediction of the time when AGI (Artificial General Intelligence) will be realized. There are only four years left until then.

If you start a business today, the usual journey to success takes five to eight years. AGI would arrive right in the middle of that journey.

This is not a variable that can be ignored. You must create something that still has irreplaceable value when AGI arrives.

Demis said:

“I have been watching the development of intelligent agents. We are just at the beginning. Only in the past few months have people started to find truly valuable applications. There will be greater progress in the next 6 to 12 months.”

So what should entrepreneurs do?

His advice: combine AI with deep tech. Work on materials science, medicine, and the genuinely hard physical fields. These directions share one trait: they involve the real world of atoms, where there are no shortcuts. In other words, they will not be easily reshuffled by the next wave of foundation-model updates.

At the pure software layer, change is too fast: the middle layer you build today may be subsumed tomorrow by an improvement in the model's own capabilities. But if you are solving a problem that demands deep understanding of the physical world, biological systems, or material properties, the moat is far deeper.

Companies need to find moats, and so do individuals.

At this summit, Andrej Karpathy said: You can hand over the thinking process to AI, but you can't hand over your understanding to AI.

What is understanding?

Knowing where the problem really lies;

knowing why this thing is worth doing;

knowing how the goal should be defined.

Andrej found that he himself has become the bottleneck of the whole workflow: the bottleneck lies in figuring out what to do and why it is worth doing. LLMs can generate endlessly and execute efficiently, but they can't help you understand why something matters to humans.

So, what has become more important?

Remembering API parameters no longer matters, and prompt-writing tricks are not the core. What really matters is building deeper understanding: understand an industry's underlying logic, understand users' real needs, understand where the commercial value comes from.

You can hand over how to do things to AI, but you can't hand over knowing why to AI.

Four years later, when AGI really arrives, the value of this understanding will become clearer than ever.

Because by then, countless actions will be automatable and countless solutions will be one click away. But someone still has to define what is worth completing and what is worth generating.

AI can do countless things, but only humans know which thing is worth doing.

Conclusion | The Last Moat

In the past, humans arranged work around tools.

Now, tools are starting to participate deeply in work.

Next, humans will be responsible only for setting the direction, handing implementation over entirely.

You can hand over the thinking process to the model and let it help you generate and execute. But there is one thing that can never be handed over: knowing what is right and what is really worth doing.

This is the starting point of 2026 and the theme of the next four years.

Reference materials:

https://www.youtube.com/watch?v=96jN2OCOfLs&t=461s

https://www.youtube.com/watch?v=AFpeWo1GTeg&t=1s

https://www.youtube.com/watch?v=JNyuX1zoOgU

This article is from the WeChat official account “AI Deep Researcher”, author: AI Deep Researcher, editor: Shen Si. It is published by 36Kr with authorization.