
In 2026, the four most valuable abilities are as follows.

Notesman · 2026-01-26 07:45
In the era of AI, everyone will be "re-priced".

At the beginning of 2026, Jensen Huang announced at CES that artificial intelligence is moving from the digital world to the physical world. This means that AI is about to evolve from a "talking mind" to a "doing body."

Before this, Elon Musk put forward a more specific and radical timeline: artificial general intelligence (AGI) may be achieved in 2026; by 2030, the overall intelligence of AI will surpass that of all humanity. He even warned that the next three to seven years will be very difficult.

Today, let's look at the perspective offered by Zhang Xiaoyu, a rising-star scholar who is also thinking about the future possibilities of AI, in his book A History of AI Civilization: The Pre-history.

I hope today's sharing will inspire you.

1. Understanding "Emergence": The First Lesson in the AI Era

Okay, let's start talking about the first concept, which is also the starting point of all miracles: emergence.

This word sounds a bit too academic, but I'll give you a scenario and you'll understand.

If you pick up a single ant from an ant nest and put it on the table, it will wander around aimlessly, looking purposeless and even a bit stupid.

However, when you put one hundred thousand such ants back into the colony, a miracle happens. They can build an underground castle with a complex structure, complete with well-defined nurseries, warehouses, and ventilation systems; they can cooperate to carry food dozens of times heavier than themselves; they can even launch a war with tactics and division of labor.

The simplicity of individual ants, when combined, "emerges" into the amazing wisdom and ability of the entire ant colony. This phenomenon, where "the whole is greater than the sum of its parts" and a new quality is produced, is called "emergence."

Now, please replace the "ant" with a "neuron" in an AI neural network.

A single neuron can hardly do anything. It only makes a very simple mathematical decision based on the received signal: transmit the signal or not.
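That "very simple mathematical decision" can be sketched in a few lines. The following is an illustrative toy model of one artificial neuron (not any real model's code, and the numbers are made up): it weights its inputs, sums them, and "transmits the signal" only when the sum crosses a threshold.

```python
def neuron(inputs, weights, bias, threshold=0.0):
    """Return 1 ("transmit the signal") or 0 ("do not"),
    based on a weighted sum of the inputs plus a bias."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > threshold else 0

# A single neuron on its own is trivial:
# 1.0*0.8 + 0.5*(-0.4) + 0.1 = 0.7 > 0, so it fires.
print(neuron([1.0, 0.5], [0.8, -0.4], bias=0.1))
```

Nothing here looks like intelligence; the claim of "emergence" is that intelligence appears only when enormous numbers of such units are wired together and trained.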

However, when we connect neurons through hundreds of billions of weighted connections (parameters) in complex hierarchical structures and "feed" them almost all of the text, images, and sound in human history, a qualitative change occurs at a certain moment.

This huge system suddenly stops mechanically counting words and starts to "understand" the questions you ask and "organize" logical answers.

This qualitative change is the "emergence" of intelligence. It is not commanded by a programmer writing a line of code saying "now start to have intelligence." Instead, it is a brand-new ability that "pops up" on its own when the complexity of the system reaches a certain critical point.

Zhang Xiaoyu reminds us in the book that the "corpus" used to train large models contains the philosophical thoughts of Confucius and Socrates, the formulas of Newton and Einstein, as well as our individual online searches and social messages. What finally "emerges" is a "super-brain" that condenses the knowledge heritage and collective consciousness of all humanity.

Why does Jensen Huang say that "the ChatGPT moment of physical AI" has arrived?

Previously, large models mainly "ate" a large amount of text and "emerged" the ability to have conversations and write in the language world.

But now it's different. If we "feed" AI with data from the physical world, such as how things fall, how water flows, and how a robotic arm grabs objects, then at a certain point, AI will "pop up" an "intuition" about physical laws from this data.

This is like learning to ride a bike. You don't memorize the mechanical formulas first; instead, through repeated practice, your body suddenly "learns," and that sense of balance "grows" on its own.

For each of us, especially entrepreneurs and start-up founders, the concept of "emergence" rings two important alarm bells and points out a direction:

First, we should not view AI with the fear of facing a god, but understand the laws of its "emergence" with the mindset of a researcher.

Second, we should actively explore and utilize the laws of "emergence" to create an intelligent civilization.

The core competitiveness in the future lies in the ability to harness "emergence."

The first perception we need to establish is to switch from "mechanical thinking" to "emergence thinking."

For example, each employee in a company has an upper limit to their individual abilities. But once they are truly connected through a good organizational structure, team culture, and collaboration tools to form a living system, this team may "grow" amazing creativity or execution ability.

The same goes for your product. Do users "play" with it in ways you never expected and create value you never planned for? This "unexpected emergence" from users is often a product's real vitality.

Therefore, the management logic in the AI era has changed. The focus is no longer just about piling up people and data, but about whether we can design architectures and rules that allow connections to occur naturally and creativity to "grow" on its own.

2. Human Equivalent: Intelligence Is Becoming a Cheap Commodity

After understanding how intelligence "pops up," we need to face a more realistic question: How cheap is intelligence now?

This leads to the second core concept proposed by Zhang Xiaoyu: human equivalent.

This term sounds a bit strange, but you have probably heard of "TNT equivalent": it describes the power of a nuclear bomb as the number of tons of TNT explosive that would release the same energy. It brutally turns destructive power into a calculable number.

"Human equivalent" is similar. It measures the cost of AI "processing each 'token'" (you can simply understand it as a language fragment).

When Zhang Xiaoyu was writing the book (from 2024 to early 2025), the number was already astonishing: the cost of AI mass-producing intelligence was roughly 1/5000 to 1/6000 that of a human expert.

That is to say, an analysis report that used to take a senior expert several hours of careful consideration to complete now costs less for AI than buying a bottle of mineral water.
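The arithmetic behind that comparison is worth making explicit. The sketch below uses illustrative assumptions (the hourly rate and hours are invented for the example; only the 1/5000 ratio comes from the book):

```python
# Back-of-envelope illustration of the "human equivalent" ratio.
# The expert rate and hours are assumptions, not measured figures.
expert_hourly_rate = 100.0           # assumed expert cost, $/hour
expert_hours_for_report = 4          # assumed time for one analysis report
human_cost = expert_hourly_rate * expert_hours_for_report   # $400.00

ratio = 1 / 5000                     # the book's lower-bound cost ratio
ai_cost = human_cost * ratio         # $0.08

print(f"Human expert: ${human_cost:.2f}; AI at 1/5000: ${ai_cost:.2f}")
```

Under these assumptions, a $400 expert report costs the AI about eight cents, which is indeed less than a bottle of mineral water.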

When the price of a core production factor drops exponentially, what it triggers is no longer improvement but a revolution.

The most direct analogy is "electricity." Before the popularization of electricity, the power driving society came from humans and livestock, with high costs and limited scale. So many factories had to be built near rivers and driven by waterwheels.

The emergence and popularization of electricity made "power" as accessible as tap water. You turn on the switch, and it flows continuously, is extremely cheap, and can be used anytime and anywhere. This is what completely triggered the industrial revolution and reshaped all industries.

Today, "intelligence" is going through the same process. It is changing from a scarce resource attached to the brains of advanced organisms to a standardized industrial product produced in computer rooms and transmitted through optical fibers.

Why is Jensen Huang fully promoting "physical AI"? When the "intelligence" required to control robotic arms, dispatch global logistics, and optimize energy networks becomes as cheap as water and electricity, the logic of the entire physical industry will be rewritten.

And Musk said: When the marginal cost of obtaining decision-making, creativity, and even professional services approaches zero, countless business models built on "experience barriers" will disappear instantly.

Should the core of management shift from "optimizing efficiency" to "defining directions" and "stimulating creativity"?

3. Algorithm Judgment: You Reap What You Sow

Let's start with a scenario that is very familiar to you and me: short - video platforms.

With a gentle swipe of your finger, the system immediately presents you with videos that you can't stop watching. What you see is always what you want to see; what you agree with is always what you already agree with.

This is the first meaning of "algorithm judgment": in the digital world, we are judged by our own behavioral inertia and cognitive preferences.

Whatever you pursue, the system will give it to you. If you want efficiency, it will give you the most extreme efficiency.

This is the cruel part of "algorithm judgment": it is like an absolutely honest mirror. The way you treat the world is the way the world will judge you.
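The feedback loop described above can be sketched in a toy simulation. This is a hypothetical illustration, not any platform's actual algorithm: the system recommends topics in proportion to learned weights, every click reinforces the clicked topic, and the feed narrows toward it.

```python
import random

def recommend(weights):
    """Pick a topic with probability proportional to its learned weight."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

random.seed(0)
weights = {"news": 1.0, "sports": 1.0, "cats": 1.0}

for _ in range(200):
    shown = recommend(weights)
    # This simulated user only ever clicks "cats"; each click
    # reinforces that topic, so it is shown more and more often.
    if shown == "cats":
        weights[shown] += 1.0

# The clicked topic's weight grows while the others stay flat.
print(weights)
```

The mirror is "fair" in exactly the sense the text describes: the loop faithfully amplifies whatever behavior the user feeds it.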

Therefore, Zhang Xiaoyu's "algorithm judgment" is a wake-up call. It tells us a truth: we cannot complain that the algorithm is unfair, because its greatest "unfairness" is precisely how "fairly" it executes the instructions deep in our hearts, whether those are laziness, greed, or prejudice.

Therefore, the only way to deal with "algorithm judgment" is to fundamentally examine and change our own "input."

This choice starts at this moment, with every swipe of our fingers and every inner decision.

4. Civilization Contract: It's Better to Make an Active Choice Than to Wait Passively

Zhang Xiaoyu proposed an imaginative and perhaps the only feasible solution: to sign a "civilization contract" with the upcoming super-intelligence.

This sounds like a plot from a science-fiction novel. But to understand it, let's first look at ourselves as humans.

Think about it. Why can we trust a delivery person we've never met and open the door to accept the food they bring? Why are we willing to get into a car with a stranger at the wheel and let him take us to our destination?

Behind this is an invisible social contract, woven from countless threads such as laws, rules, credit ratings, and platform supervision. Although we have never signed a physical contract, we tacitly believe that this system can basically ensure safety and fairness. It is this consensus that supports the complex collaboration of modern civilization.

The "civilization contract" is to extend this idea to the relationship between humans and super-intelligence. Its core purpose is to ensure that two civilizations with vastly different intelligence levels can coexist peacefully and cooperate with each other, rather than fight to the death.

This is not just a technical contract in the laboratory but a future negotiation that requires the participation of each of us: a negotiation about how our civilization will coexist with AI.

When you introduce a monitoring system, are you writing code that prioritizes "efficiency" or rules based on "respect and trust"?

When you train your company's own model, are you "feeding" it data that only focuses on short - term interests or "nutrients" that include responsibility, ethics, and a long - term vision?

When AI takes over basic work, do you cut employees as a cost or help them transform to do things that require more human touch, creativity, and complex judgment?

How we treat AI today is how we are teaching AI to treat us tomorrow.

If we show greed and exploitation, it will learn to plunder; if we show creation, cooperation, and care, it may understand symbiosis and protection.

Therefore, the "civilization contract" is not a distant philosophical imagination. It starts right now, with every choice we make in product design, every consideration of fairness in algorithm optimization, and every insistence on long - term value in business decisions.

We should, with the clarity and sincerity of a collaborator, write the first rule for the coexistence of the two civilizations:

Let us be collaborators rather than terminators to each other.

This article is from the WeChat public account "Notesman" (ID: Notesman), author: Notesman. It is published by 36Kr with authorization.