
Anthropic CPO: In 2026, for enterprise AI to actually deliver results, it must first overcome this hurdle.

AI Deep Researcher · 2025-12-29 11:43
The difficulty in deploying AI stems mainly from insufficient organizational preparation.

In recent year-end reviews, many companies have shared the same feeling:

The models keep getting more powerful and the budget has been spent, yet the business looks much the same.

Ask an AI three questions and it can answer all three. But assign it a task, and it often stalls halfway: sometimes it can't find the data it needs, sometimes it lacks permission to open a file, sometimes the process simply breaks off at some step. In the end, no one dares to call the task finished.

What's the problem?

The issue isn't that the models aren't smart enough. It's that the companies aren't ready to assign tasks to AI.

During an interview last week, Mike Krieger, the CPO of Anthropic, didn't spend time boasting about how powerful Claude is. Instead, he raised a more practical question:

Can AI really take over part of your work?

The answer depends on the companies themselves.

Over the past year, Anthropic has found in enterprise deployments that the real obstacle isn't the technology, but the organization itself.

Where exactly is this hurdle?

Section 1 | AI Isn't Just for Writing Code: It's Trying to Do Real Work

You'll notice that almost all AI companies are now doing the same thing: they no longer just emphasize how smart their models are; they emphasize whether their AI products can do real work.

Let's see what Anthropic is doing.

They don't design Claude as a smarter chatbot. Instead, they design it as a colleague who can take on tasks.

The earliest version of Claude Code was just a development tool: give it a sentence and it could complete code, build a webpage, or generate a demo. Within six months of release, its annualized revenue exceeded $1 billion, with clients including Netflix, KPMG, Spotify, and L'Oréal.

Mike Krieger found that within Anthropic, many teams use Claude not just to write code, but to take over entire workflows.

For example:

Some teams use it as an "SRE in a box" to monitor systems and automatically troubleshoot logs;

Some use it as an assistant for biological research to search for literature and build data-processing scripts;

Some even let it act as a project manager to summarize requirements and assign subtasks.

The role of AI has changed in these scenarios.

By the end of 2025, Anthropic had renamed the Claude Code SDK the Claude Agent SDK. It's no longer just an assistant that can write some code; it's a role that can receive instructions, execute processes, and deliver results.

In Mike's words: We're redefining Claude's role. It's not about generating answers, but about delivering results.

To deliver results, AI needs to work continuously and execute stably.

So Anthropic started building a whole set of supporting mechanisms. The point is no longer giving one input and seeing one output; it's letting AI push forward on its own toward loosely specified goals and ultimately submit a result.

It's not about auto-completion. It's about auto-finishing.
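To make the contrast concrete, here is a minimal illustrative sketch, with the model call stubbed out (this is not Anthropic's implementation): auto-completion returns one answer and stops, while auto-finishing keeps working until every subtask is done and only then delivers.

```python
# Illustrative sketch only: call_model is a stub, not a real API.

def call_model(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"output for: {prompt}"

def auto_complete(prompt: str) -> str:
    """Auto-completion: one input, one output, then stop."""
    return call_model(prompt)

def auto_finish(goal: str, subtasks: list[str]) -> list[str]:
    """Auto-finishing: keep working until every subtask is done, then deliver."""
    results = []
    remaining = list(subtasks)
    while remaining:                      # completion check: anything left to do?
        task = remaining.pop(0)
        results.append(call_model(f"{goal} -> {task}"))
    return results                        # a finished deliverable, not a fragment

print(auto_complete("summarize this log line"))
print(auto_finish("build monitoring report",
                  ["fetch logs", "detect anomalies", "write summary"]))
```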

But when the capabilities are in place, a hurdle appears: It's not that AI isn't capable. It's that the organization isn't ready.

Section 2 | The Real Hurdle for Implementation Isn't the Model, It's the Organization

Many companies think that bringing in AI is like hiring a smart intern. They just open an account and issue an instruction, and it can start working on its own.

When they actually try, they find it responds slowly, gives vague answers, and often gets stuck. But the problem isn't the AI. It's the company: the tasks aren't clearly defined, and the information isn't adequately provided.

For example, if you ask AI to help you find a report and analyze customer data, where should it look?

Many companies' own employees don't even know where the data is stored, let alone AI. Some spreadsheets have column names like "Sheet3_Temp" with no explanation anywhere. Who knows what that means?

Whether AI can understand these files depends on the labels, annotations, and source relationships behind the data. Most companies haven't done this basic work, so when facing a pile of files, AI doesn't know where to start.
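What does that basic work look like? One hedged sketch: a machine-readable manifest alongside each table, so an agent knows what the data is, what each column means, and where it came from. All names and fields below are invented for the example.

```python
import json

# Illustrative only: a minimal manifest that tells an AI agent what a
# dataset contains, what each column means, and where it came from.
manifest = {
    "dataset": "q4_customer_revenue",
    "description": "Quarterly revenue and refunds per customer, finance team.",
    "source": "exported nightly from the billing system",
    "owner": "finance-data@example.com",
    "columns": {
        "customer_id": "internal customer identifier",
        "revenue_usd": "gross Q4 revenue in US dollars",
        "refunds_usd": "total Q4 refunds in US dollars",
    },
}

with open("q4_customer_revenue.manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)

# Contrast with an unlabeled "Sheet3_Temp": an agent reading this manifest
# knows what the table holds and whether it can answer the question asked.
```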

This is what Mike said:

You have to organize the data in a way that AI can understand before it can be helpful.

But having data isn't enough. You also have to give it permission.

No matter how capable AI is, if you don't give it access, it can't get in.

Some companies' processes require jumping through more than a dozen layers of systems;

Some files need approval to open;

Some processes haven't been sorted out at all, and you can't even find the entry point.

While helping clients work through these problems, Anthropic found that although the obstacles look like matters of systems, permissions, and processes, the real issue is that the organization hasn't thought clearly about:

What should AI do?

What information does it need?

Who should the results be delivered to?

This is the first layer: Prepare all the necessary data, permissions, and systems.

Section 3 | From "Question-Answering" to "Task-Assigning", the Mindset Needs to Change

Once you've prepared the data and permissions, can you start using AI?

There's a second layer: Learn how to assign tasks.

How do you assign tasks? Many people still do it the old way.

AI isn't a search engine, a question-answering system, or a plugin you can use with one click. It's more like a newly hired employee: you have to tell it what to do, where to look, and what counts as acceptable before it can start working.

What's the difference between the two?

Many teams are used to saying to AI: Help me create a financial report.

Of course, AI will try to write it. But it doesn't know your company's report format, where to get the data, or what standard the required indicators must meet.

The truly effective way is to treat it like an intern:

Tell it: You're a financial assistant;

Give it access to the spreadsheets and permission to read the data;

Specify that it's only responsible for calculating the difference between customer revenue and refunds;

Ask it to output a draft spreadsheet in the Q4 monthly format.

Only when tasks are assigned this way can AI really start and finish the work, as the sketch below illustrates.
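Here is a minimal sketch of that style of assignment using Anthropic's Python SDK. The model name, data, and report format are placeholders; in a real deployment the spreadsheet would be reached through tools or retrieval rather than pasted inline.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The intern-style instructions, encoded as role + scope + deliverable.
system_prompt = (
    "You are a financial assistant. "                        # role
    "You are only responsible for computing the difference "
    "between customer revenue and refunds. "                 # scope
    "Output a draft table in the Q4 monthly report format."  # deliverable
)

# Inlined here only to keep the sketch self-contained.
spreadsheet = "customer_id,revenue_usd,refunds_usd\nC001,1200,100\nC002,800,0"

message = client.messages.create(
    model="claude-sonnet-4-5",   # illustrative; use a current model name
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": f"Data:\n{spreadsheet}"}],
)
print(message.content[0].text)
```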

The PR Agent that Anthropic built in cooperation with GitHub is designed around exactly this logic. How does it work in practice? When a programmer tags Claude on a pull request, it will:

  1. Review the code and find possible problems
  2. Summarize the main content of this modification
  3. Give improvement suggestions
  4. Automatically complete a round of modifications

You don't have to keep an eye on the whole process. Just go and have a cup of coffee, and when you come back, it's done.

Why does it work?

Because they've sorted out three things clearly (a configuration sketch follows the list):

  • Clear tasks: It only does these things each time, with clear boundaries
  • Adequate permissions: It can read the code repository, make modifications, and submit results
  • Stable process: Review → summarize → suggest → modify, with a fixed path
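How might those three properties look as configuration? A hypothetical sketch, not the actual design of the GitHub PR Agent; every name here is invented.

```python
from dataclasses import dataclass

# Illustrative sketch only: one way to encode clear tasks, adequate
# permissions, and a stable process as agent configuration.

@dataclass(frozen=True)
class AgentConfig:
    tasks: tuple[str, ...]       # clear tasks: bounded responsibilities
    permissions: frozenset[str]  # adequate permissions: no more, no less
    pipeline: tuple[str, ...]    # stable process: a fixed execution path

pr_agent = AgentConfig(
    tasks=("review", "summarize", "suggest", "revise"),
    permissions=frozenset({"repo:read", "repo:write", "pr:comment"}),
    pipeline=("review", "summarize", "suggest", "revise"),
)

def run_step(config: AgentConfig, step: str, needs: frozenset[str]) -> None:
    if step not in config.tasks:
        raise PermissionError(f"out of scope: {step}")      # task boundary
    if not needs <= config.permissions:
        raise PermissionError(f"missing access: {set(needs - config.permissions)}")
    print(f"running: {step}")                               # execute within bounds

for step in pr_agent.pipeline:                              # the fixed path
    run_step(pr_agent, step, needs=frozenset({"repo:read"}))
```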

After it can be used effectively, there's a more crucial question: What if something goes wrong?

People often say online:

"AI will never replace humans because AI can't take the blame. Taking the blame is the unique competitiveness of humans."

This statement sounds reasonable, but the problem isn't whether AI can take the blame. It's whether companies dare to let it take the blame.

What does "taking the blame" mean? Simply put, when something goes wrong, you can find the person responsible. In the GitHub example, there are records, reviews, and version control for the code modifications submitted by Claude. When something goes wrong, you can trace back to which step the problem occurred. This is "being able to take the blame".

What Mike emphasizes is this: AI isn't just about adding a sidebar to answer questions. It has to be integrated into the actual workflow, with clear division of labor and the ability to deliver results.

The key isn't the technology. It's whether the organization dares to give it clear responsibility boundaries.

Conclusion | Only When the Organization Is Ready Can AI Be Put to Use

The technology is already in place.

Mike Krieger's observation over the past year is simple: It's not that AI isn't capable. It's that the organization isn't ready.

Starting from 2026, companies need to ask themselves:

  • Is the data well organized?
  • Have the necessary permissions been granted?
  • Are the tasks clearly stated?
  • Are responsibilities clearly assigned?

Once you've thought through these four questions and prepared what's needed, AI can go from "being able to do work" to "actually doing work".

It's not a problem with the technology. It's a problem with the organization.

Only by overcoming this organizational hurdle can companies really start using AI in 2026.

📮 References:

https://www.youtube.com/watch?v=VSLEGpCemtE

https://www.theverge.com/2024/5/15/24157240/mike-krieger-anthropic-instagram-ai

https://medium.com/@GlobalGPT/the-secret-sauce-at-anthropic-cpo-mike-krieger-says-stop-bossing-eebcc8e28fbe

https://techcrunch.com/2024/05/15/anthropic-hires-instagram-co-founder-as-head-of-product/

This article is from the WeChat public account "AI Deep Researcher". Author: AI Deep Researcher. Republished by 36Kr with permission.