
Stop Chatting with Large Language Models: A Guide to Reconstructing AI Workflows for Product Managers

Everyone is a Product Manager (人人都是产品经理) · 2026-04-07 07:45
The productivity gap between "being able to use AI" and "using AI well" is not 30% but a full order of magnitude: roughly ten times.

The key to that tenfold gap is the match between your workflow and AI's capability structure: shifting from ChatGPT-style Q&A to a new collaborative paradigm of closed-loop execution, seamless context, and asset accumulation. This article breaks down the three-layer "dimensionality reduction" advantage of the new-generation Agentic Workflow and shows how to rebuild the entire product-analysis process, so you can upgrade from executor to architect in the AI era.

In 2026, AI penetration is already high: almost all product managers, operators, and R&D staff use AI in their daily work. Observe carefully, though, and you will notice a common phenomenon: most people use AI no differently than they did when ChatGPT first appeared two years ago.

People still open a web chat window, type a prompt, and wait for an answer. The only thing that has changed is the underlying model, from GPT-4 to GPT-5 or a more capable domestic large model.

This is certainly better than not using AI at all, but it falls far short of unleashing AI's real potential.

In real work, the productivity gap between "being able to use AI" and "using AI well" is not 30% but an order of magnitude: roughly ten times.

Many product managers use AI the way someone might drive a car like a horse-drawn carriage after the automobile was invented: the same route, the same speed, just a different engine.

What exactly is the root cause of this tenfold gap?

The answer lies in whether your workflow matches the capability structure of AI.

The first step in using AI well is to stop treating the AI in your hand merely as a "chatbot".

Why is the "chat window" the ceiling of your efficiency?

In the past year, AI tools represented by Cursor have completely revolutionized how programmers work. Many people think Cursor is just a "ChatGPT for programmers", but beneath the surface it represents a new paradigm of AI collaboration for all knowledge workers.

The traditional web-based dialog box (Chat) has three inherent, insurmountable flaws. If you want AI to become a real productivity lever, you need to understand the three-layer "dimensionality reduction" advantage of the new-generation AI workflow (the Agentic Workflow):

First layer: From "manual relay" to "closed feedback loop"

In the chat window, you ask AI to write a competitor analysis or a Python script for data processing, and it gives you the result.

You copy it to a document or a running environment and find that the format is incorrect or there are errors.

You paste the problem back into the dialog box, it revises, and you copy again... In this loop, the human becomes the courier of the feedback cycle: AI produces, we verify, we relay, and AI revises again.

What truly efficient AI tools (such as Cursor connected to a local environment, or an Agent with execution capabilities) do differently is that they are wired into our execution environment.

After writing content or code, the AI can run or preview it directly, detect errors, fix them itself, and run again. It changes from an "external advisor who only gives advice" (gone after speaking, accountable for nothing) to a "worker who can operate independently and correct its own mistakes".

Second layer: From "limited prompts" to "seamless context supply"

Product managers often complain: "The PRD AI writes is so mediocre, just a pile of correct but useless words." In fact, the bottleneck of AI output quality is often not how intelligent the model is, but how much relevant "context" it can see.

In a dialog box, it is hard to spell out the project's history, the minutes of several previous meetings, and the exact schema of your tracking (event-instrumentation) data all at once.

In an AI environment connected to your working directory, however, you only need to @-mention a few internal requirement documents and last week's meeting notes, and AI immediately has the full context. Even without a long, elaborate prompt, its output will closely match your business reality.

Third layer: From "consumption" to "investment" (asset accumulation)

The ChatGPT usage mode is consumption: you invest time, get an answer, close the page, and everything resets. The advanced AI workflow is investment. Did you use an internal data document? Save it in the local project folder. Does AI keep getting a certain piece of business logic wrong? Spend two minutes writing a global rule (Rules). Does the team have a specific PRD template? Write it down and let AI remember it too.

Over time, a flywheel effect appears: the more you accumulate, the better AI understands your company's business, your writing preferences, and your workflow. The chat box will always be a stranger you must brief from scratch, while an AI with accumulated assets becomes an ever more attuned co-PM.
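To make "write a global rule" concrete: a project-level rules file is just plain text the AI reads on every request. The sketch below is entirely hypothetical (the exact filename and syntax depend on the tool you use; Cursor, for instance, supports project rules files), but it shows the kind of durable knowledge worth capturing:

```markdown
# Project rules (hypothetical example)

- Structure every PRD as: Background, Goals, Non-Goals, Metrics, Rollout Plan.
- In this project, "conversion rate" means orders / recommendation impressions,
  never clicks / impressions.
- When citing data, name the source file in the project folder it came from.
- Write conclusions first, supporting evidence second.
```

Two minutes spent encoding a rule like the conversion-rate definition saves you from correcting the same mistake in every future conversation.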

The "three strategies" for information processing

In daily product work, a large amount of information is generated at each step. How to process this information determines how much AI can help you. Here is a very practical evaluation framework:

Worst strategy (information disappearance):

After a meeting, there are only oral conclusions. People forget them after a few days, and AI can't see them either.

Medium strategy (human-first):

Write the conclusions into an online document on Feishu/DingTalk/Confluence. This is standardized and human-friendly, but unfriendly to AI: the format is messy and access requires permissions, so every time you want AI to reference it, you must copy and paste by hand.

Best strategy (AI-first):

Let the information first exist in a format AI can read directly (such as Markdown), locally or in a knowledge base. AI consumes these raw materials, then processes and outputs results for humans to read.

If most of your work still uses the worst and medium strategies, there is still a lot of room for you to achieve a tenfold increase in efficiency.
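The AI-first strategy can be made concrete with a small script. The sketch below is a hypothetical helper, not any tool's real API: it gathers every Markdown file under a local project folder into one labeled context string, which is roughly what an @-mention in a tool like Cursor does for you automatically. It illustrates why plain files on disk beat a permissioned web doc:

```python
from pathlib import Path


def collect_context(project_dir: str) -> str:
    """Concatenate all Markdown files in a project folder into one
    AI-readable context string (hypothetical helper for illustration)."""
    parts = []
    for md in sorted(Path(project_dir).rglob("*.md")):
        # Label each chunk with its relative path so the model
        # knows which file every piece of context came from.
        rel = md.relative_to(project_dir)
        parts.append(f"## File: {rel}\n{md.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

The point of the sketch is the design choice, not the code: because everything is plain Markdown in one directory, "give the AI full context" is a trivial file walk rather than a round of copy-pasting from permissioned pages.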

A complete example of product workflow reconstruction

Let's take a common product-manager scenario, "analyzing the failed cases after a feature goes live and producing an optimization plan", to show how to run the whole process with the "best strategy".

Step 1: Requirement and pain-point collection (from meeting to document)

At last week's product weekly meeting, everyone discussed the problem of low conversion rate of a certain recommendation strategy among a specific user group and put forward various hypotheses.

Traditional approach (medium strategy): You spend half an hour writing meeting minutes and post them in the group chat.

AI workflow (best strategy): Use an AI meeting assistant (such as Feishu Minutes or Zoom AI) to transcribe the meeting automatically and export it as a .md file, then drop it straight into the meeting_notes directory of your project folder. You spend almost no time, yet from then on AI can quote every detail of that meeting verbatim.

Step 2: Data and case analysis

You need to see the performance of this strategy on different data and record the specific scenarios of failure.

Traditional approach (medium strategy): Paste a few screenshots and some links to tracking-data dashboards into an online document.

AI workflow (best strategy): Create an analysis_notes.md in the project folder and record the characteristics of typical failed cases, error logs, or user-feedback text.
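What goes into such a file matters less than that it is plain, structured text the AI can read whole. A hypothetical analysis_notes.md, with every detail invented for illustration, might look like:

```markdown
# Failed-case analysis: recommendation strategy v2

## Typical failure pattern
- New users (< 7 days) are shown items from categories they never browsed.

## Sample log excerpt
2026-03-30 recommend_v2 user=anon_123 ctr=0.4% segment=new_user

## Open hypotheses from the weekly meeting
1. The cold-start profile is too sparse for this segment.
2. The fallback popular-items list is stale.
```

Even rough notes in this shape give the AI named failure patterns and hypotheses it can cross-check, which screenshots in a web doc never could.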

Step 3: Let AI close the loop (where the magic happens)

This is where the best strategy truly shows its power. Since the information from the first two steps lives in the same project space, you can open an AI tool that supports local context (such as Cursor; even if you never write code, using it for Markdown documents and data analysis is itself an unfair advantage) and instruct the AI:

"Based on the @meeting records and the @failed-case analysis, please distill three optimization directions, and verify whether these directions cover all the failed cases."

Notice how complete the AI's context is at this point: it knows why the change is needed (from the meeting records), knows the specific failure modes (from the analysis notes), and knows what the success criteria are. As long as you set clear Success Criteria, AI can independently work through the logic and cross-compare; if the work involves a CSV file, it can write a Python script on its own, run the data, fix the chart if it comes out wrong, and hand you the final conclusion.

Step 4: Output the final deliverable

All the analysis conclusions now live in the folder. Finally, you only need to ask AI: "Based on all the discussion and verification results above, generate a PRD outline / report deck in our team's format." Once it is generated, you copy it to the company's Confluence or Feishu to publish.

Note the reversal of order here: AI first, then humans. This is the deepest shift in thinking behind the new workflow. The old habit was "humans draft the document, then AI polishes it"; the new logic is "humans supply structured context as raw material, AI leads the generation, and humans do the final acceptance and publication".

Conclusion: Be the "question-setter", not the "question-solver"

Looking back at the entire process above, you will find a fundamental role reversal: in the traditional workflow, the product manager is the "executor" and AI is the "assistant"; in the reconstructed workflow, AI becomes the main "executor", and the human role becomes that of the "architect" who sets direction, defines standards, and makes judgment calls.

In other words, our positioning of AI should be upgraded from "let AI help me write some text" to "let AI help me solve this business problem".

As long as you give AI rich enough local context and clear success criteria, it can independently complete the cycle of analysis, design, and verification. Your core value as a product manager lies in knowing where the product should go and what counts as "good". That high-dimensional judgment is exactly what AI most depends on you to provide.

Action suggestion: Tools keep changing. Today's carrier is Cursor; tomorrow it may be a more integrated product workbench. But the three underlying principles of closed-loop feedback, context supply, and asset accumulation will not change. Today, try picking a project in progress, create a local folder, and put all relevant research documents, user feedback, and meeting minutes in it as Markdown or plain text. Then resist the urge to ask questions in web-based ChatGPT, and instead start a conversation with an AI tool that supports a local workspace (such as Cursor or a Dify local knowledge base).

You will immediately feel what it is like to work with a partner that understands your business, is always available, and keeps evolving. The change starts with reconstructing your working directory.

This article is from the WeChat official account "Everyone is a Product Manager" (ID: woshipm), author: PM's Cultivation. Republished by 36Kr with authorization.