
Behind AI automation: whatever can be quantified will not be spared

Harvard Business Review · 2025-06-24 09:40
The accelerated development of AI has brought new uncertainties.

As AI develops rapidly, almost every field of labor faces disruption. For tasks that can be quantified, from creative work to data analysis, automation is the likely outcome. For leaders guiding their organizations through this turbulent transformation, the approach is straightforward: back risky bets with unclear returns on investment, and reward teams that reframe problems and venture boldly into the unknown. Set aside slack time for cross-team exchange to spark serendipitous discoveries and creative recombinations. Treat these deliberately preserved zones of ambiguity as strategic assets, not liabilities.

AI doesn't need to reach the level depicted in science fiction to disrupt the economic landscape. Today's models, along with the cheaper and more powerful versions in development, are almost certain to touch every corner of the labor market. Their remarkable performance with text, images, and video may change not only the work of the creative class, including writers, designers, photographers, architects, animators, and brand advertisers, but also that of professionals who live in spreadsheets, such as financial analysts, consultants, accountants, and tax preparers. Even fields protected by professional barriers, such as law, medicine, and academia, are not immune: AI can sift through vast amounts of content and deliver customized advice or courses at a fraction of today's cost, and its quality is rapidly approaching human levels.

There are still big open questions about how powerful AI tools will become and how quickly that will happen. Dario Amodei of Anthropic and Sam Altman of OpenAI claim that artificial general intelligence (AGI) may arrive in as little as one or two years. Yann LeCun of Meta is more skeptical, arguing that current models lack a solid understanding of the physical world, long-term memory, coherent reasoning, and strategic foresight. A recent Apple study likewise points out that today's models operate only within the scope of their training data. Yet even if AI development stopped tomorrow, its impact would already be underway.

To navigate this new landscape, leaders need to understand and plan for how automation will affect their businesses. That means identifying which tasks and responsibilities are most likely to come under pressure, and charting a path for the company to move up the intelligence value chain before time runs out.

What is not at risk of automation?

Academic researchers and practitioners have debated at length which jobs and tasks are most vulnerable to automation. Some threats are obvious: self-driving cars may soon replace millions of ride-hailing, bus, and truck drivers, while language translation, much creative writing, design, and even routine programming are gradually being handed over to AI.

In February of this year, Anthropic shared thought-provoking usage data: although the chat format naturally encourages human-machine collaboration, about 43% of interactions are already a form of automation, with users asking the AI to perform tasks outright rather than help them think through a problem. As modular AI agents enter the workplace and exchange data and coordinate tasks through protocols such as MCP (Model Context Protocol), this share will keep rising. Environments that are already heavily quantified or standardized, whether through laws, tax codes, compliance agreements, or sensor data streams, face the greatest near-term risk of being taken over by machines.
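To make that coordination concrete, here is a rough sketch of the kind of message such agents exchange. MCP messages are built on JSON-RPC 2.0 and invoke server-side tools through methods such as `tools/call`; the specific tool name and arguments below are invented for illustration, not taken from any real deployment.

```python
import json

# A hypothetical MCP tool-call request, as an agent (MCP client) might
# send it to an MCP server. The envelope is JSON-RPC 2.0; the tool
# "fetch_sales_report" and its arguments are made up for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_sales_report",         # hypothetical tool
        "arguments": {"quarter": "2025-Q2"},  # hypothetical parameters
    },
}
print(json.dumps(request, indent=2))
```

Once tasks are expressed in a machine-readable envelope like this, handing them from one agent to another, with no human in the loop, becomes trivially cheap.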

In 2018, economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb argued that as AI advances, the last human advantage will be judgment: the ability to weigh options and make decisions under uncertainty. The challenge with this view, however, is precisely defining what counts as judgment at any given moment.

Tasks that currently call for human judgment, such as choosing a treatment plan, reviewing legal contracts, or writing a screenplay that captures the zeitgeist, may soon be taken over by AI as models gain access to more data and more computing power. Recent research also shows that we cannot assume people always prefer human therapists, consultants, or mediators. AI counterparts work around the clock at a fraction of the cost, and, apart from a handful of top human experts, may deliver more consistent quality.

So how do we distinguish the tasks AI can automate soon from those whose automation awaits new technical breakthroughs? To answer that question, we have to return to first principles and look back at where all of this began.

From laboratory competition to industrial revolution

Back in the early 2000s, computer scientist Fei-Fei Li saw that computer vision, the field devoted to letting computers "see" and interpret images, had hit a bottleneck: algorithms simply were not seeing enough labeled visual data to approach human performance. Her solution was simple but effective: she built ImageNet, a large and carefully annotated image library, gathering labels through the Amazon Mechanical Turk crowdsourcing platform. Her real stroke of genius, though, came in 2010, when she launched a global leaderboard on the dataset, turning image recognition into a fierce competition among researchers.

In the first two years, progress on the annual leaderboard was slow.

Then, in 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton left the rest of the field far behind. The three-person team from Toronto used two off-the-shelf NVIDIA GTX 580 graphics cards to train a breakthrough convolutional neural network in just a few days, proving that even on a graduate-student budget, one could rewrite the history of computer vision.

That moment ended a decades-long AI winter, put neural networks at the core of technological progress, and revealed a recipe the field still follows today: first, collect relevant data (about 14 million annotated images, in ImageNet's case); second, rely on metrics to quantify and drive progress; finally, feed the model vast data and GPU compute until it learns on its own. It is this recipe that has carried AI from classifying objects to writing fluent prose, and now to reasoning, planning, and using external tools in emerging "thinking" systems.

Data, rewards, computing power

The framework behind the image-recognition breakthrough is more versatile than most people realize. It works whenever the following conditions are met (a minimal sketch follows the list):

First, define the task environment and collect its data, whether it's a text corpus, an image and video library, recorded driving miles, or data streams from robot sensors;

Second, define the target reward, which can be explicit (e.g., "Did the model predict the next word?") or implicit (inferred by observing human behavior);

Third, provide computing power to allow the system to iterate continuously.
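As a minimal sketch of the three elements working together, here is a toy next-word predictor in PyTorch. The corpus, model shape, learning rate, and step count are all invented for illustration; none of it comes from the article.

```python
# A toy instance of the data / reward / compute loop.
import torch
import torch.nn as nn

# 1. Data: define the task environment and collect its data.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in corpus])
inputs, targets = ids[:-1], ids[1:]  # each word predicts the next one

# 2. Reward: an explicit objective ("did the model predict the next
#    word?"), expressed as a cross-entropy loss to minimize.
model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=0.1)

# 3. Compute: iterate until the system learns on its own.
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```

Swap in a larger corpus, a richer reward, and more compute, and the same loop scales, in principle, from this toy to the frontier systems the article describes.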

With these three elements in place, we have a general-purpose automation engine, and two data trends are now accelerating it. First, models can generate effectively unlimited synthetic samples, for instance virtual "driving miles" that cover extreme scenarios without relying on real-world driver data. Second, AI is spreading into devices and sensors, from phones to cars, where it acts as a low-cost monitor, capturing and quantifying real-world signals that were once too expensive or impractical to measure.
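A toy sketch of the first trend, with invented categories and weights: synthetic samples can be deliberately skewed so that rare, high-stakes scenarios are heavily over-represented relative to their real-world frequency.

```python
# Generate synthetic "driving miles" that oversample rare hazards.
# All categories and weights below are assumptions for illustration.
import random

random.seed(0)

def synthetic_mile():
    weather = random.choices(
        ["clear", "rain", "black_ice"], weights=[0.4, 0.3, 0.3])[0]
    event = random.choices(
        ["normal_cruise", "hard_brake", "pedestrian_crossing"],
        weights=[0.4, 0.3, 0.3])[0]
    return {"weather": weather, "event": event}

dataset = [synthetic_mile() for _ in range(1_000)]
print(dataset[0])  # e.g. {'weather': 'rain', 'event': 'hard_brake'}
```

A real driver might encounter black ice once in thousands of miles; a generator like this can serve it up in a third of all samples.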

As long as a phenomenon can be converted into data, AI can learn it and reproduce it at scale. And the technology keeps driving down the cost of that conversion, making measurement cheaper, faster, and seamlessly woven into everything we touch. More and more of the world becomes quantifiable, the cycle repeats, and the models keep improving. In theory, then, any quantifiable work can be automated.

Low-cost measurement, ubiquitous application

Economist Zvi Griliches' landmark 1957 study of the spread of hybrid corn points the way. At first, farmers planted the expensive seed only on their best land, where the yield gains easily covered the extra cost of the seed and of learning to use it. As the hybrids improved and word of mouth spread, even marginal land soon reached the break-even point. For AI, investment in measurement follows the same return curve: when converting reality into data is expensive, companies invest only where the stakes are highest, such as credit card fraud detection, algorithmic market-making, and jet-engine fault prediction.

But AI has now sharply reduced the cost of precise measurement, making continuous, fine-grained sensing the norm. Lightweight models run alongside sensors to cut bandwidth and latency, while synthetic data fills in where real-world data is slow or hard to collect. Every extra decimal place of precision pays off quickly: across millions of AI-driven decisions, small reductions in error compound fast, as the back-of-the-envelope calculation below shows. As precise measurement gets cheaper, even marginal domains become profitable, and tasks once considered too trivial to bother with are pulled into the scope of automation.
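A back-of-the-envelope illustration with assumed numbers (the decision volume and error rates are not from the article):

```python
# Assumed figures: 10 million automated decisions a year, and a model
# improvement that trims the error rate from 1.0% to 0.9%.
decisions = 10_000_000
error_before, error_after = 0.010, 0.009
mistakes_avoided = decisions * (error_before - error_after)
print(f"{mistakes_avoided:,.0f} fewer mistakes per year")  # 10,000
```

A tenth-of-a-point improvement that would be invisible in any single decision removes ten thousand mistakes a year at this volume.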

Soon we may have not only intelligence that is nearly free but also ever more measurement of the world, continuously widening and upgrading the range of things that intelligence can be applied to. We are already in the era of "artificial measurement intelligence": anything we can quantify will quickly land on the automation agenda.

Thrive in the unknown

Humans are evolutionary generalists, built to navigate with incomplete information. We do not merely survive among unknown unknowns; we thrive in them, and that adaptability is our decisive advantage. Over countless generations we refined our vocal cords and our social brains until language emerged, opening the door to accumulated knowledge, abstract reasoning, and symbolic thought. From there we broke through our biological limits, building tools that extend our perception, our memory, and our capabilities.

But the cornerstone of our advantage is the highly flexible, densely connected prefrontal cortex. This neural command center lets us envision countless what-ifs, rehearse possible futures, and change strategy the moment conditions shift. Short of a true singularity, even quantum computers will struggle to match our talent for open-ended, cross-domain counterfactual planning.

As AI accelerates, it generates new unknowns and keeps redrawing our cognitive map. At the same time, it routinizes the predictable parts, much as mechanized agriculture freed us from subsistence living and let us devote more of our counterfactual thinking to higher-order problems.

AI will also struggle wherever measurement is nearly impossible. The Event Horizon Telescope, for example, took a decade of global effort to capture a single image of a black hole, and open problems remain in extreme-scale physics, Earth's deep mantle and the deep sea, and the interactions of living cells in the human brain. AI will likewise lag in domains constrained by privacy, ethics, or regulation, in domains where society demands a transparent chain of reasoning (at least until model interpretability catches up), and in domains where people simply prefer a human in the loop. Yet, as with hybrid corn, future generations will keep re-evaluating the cost-benefit calculus of these domains and may reach conclusions very different from ours.

Within the measurable category, however, there is one crucial and possibly decisive exception: tasks that resist quantification because the probabilities of their outcomes are fundamentally unknowable, the realm of Knightian uncertainty. Here the risks themselves are not well defined, so no probabilities can be assigned. Founding a startup, committing money or talent to deeply uncertain projects, containing a new pathogen, setting central bank policy amid a financial-system transformation, drafting AI ethics guidelines, inventing a new artistic medium, starting a fashion trend, or creating a cross-genre blockbuster all fall into this territory where probability disappears. Some creative acts and discoveries are merely clever recombinations of the known, but truly groundbreaking results rest on our unique ability to imagine new and intricate counterfactual worlds.

This list is not static. As tasks become measurable they drop off it, and new ones appear just as fast. Each shift will bring painful economic and social adjustment, pushing more work into a superstar economy and piling outsized rewards at the peaks of creativity, talent, and capital. Yet AI also brings a seemingly paradoxical gift: by democratizing education and serving as everyone's personal assistant, it hands more people than ever the tools to climb those peaks. Work itself will keep evolving, and any breakthrough that turns the unknown into the quantifiable will spread and be imitated at extraordinary speed.

For leaders guiding their organizations through this turbulent transformation, what lies beyond the spreadsheet? Everything that cannot fit in a cell: skills that resist quantification, open-ended questions with no reliable precedent, intangibles such as trust and taste, the subtle dimensions of quality and experience, and the conviction to push ahead even when every indicator says "wait." Managing only what you can measure means ceding the most valuable territory to competitors who cultivate the unquantifiable.

Amar Bose, the audio and electronics engineer who founded Bose Corporation, proved the point: while others chased the numbers on the specification sheet, he focused on how music actually sounded in a real room, a quality no existing metric could capture, and he rewrote the rules of the audio industry.

Overall, the playbook is simple. Back risky bets with unclear returns on investment; reward teams that reframe problems and venture boldly into the unknown; and rotate talent through roles steeped in uncertainty, such as R&D, new markets, and complex dealings with customers, partners, and policymakers. Set aside slack time for cross-team exchange to spark serendipitous discoveries and creative recombinations.

Only leaders who attend not just to what can be quantified but also to what remains beyond measurement will face the next transformation with composure.

This article is from the WeChat official account "Harvard Business Review" (ID: hbrchinese), author: HBR-China. Republished by 36Kr with permission.