ChatGPT wrote nonsense 147 times until I complained about it.
It's 3 a.m. and you've been wrestling with AI for hours, hopping between ChatGPT, Claude, and Gemini, tossing and turning.
And you still can't get it to write a decent email. This isn't a joke; it's the lived experience of many people.
A developer tried to get ChatGPT to write a sales email that didn't sound so "robotic." After 147 rounds of editing, questioning, and testing, every output was still stiff and hollow, nothing like something a human would write.
Finally, on the 147th attempt, he typed, on the verge of collapse: "Can't you just ask me what I need?"
Unexpectedly, that complaint sparked an idea: what if the AI could proactively ask questions and request the details it needs to complete a task? He then spent 72 hours building a meta-prompt called "Lyra."
Simply put, Lyra gives ChatGPT a new persona: it makes ChatGPT interview the user before answering a request, gathering the key information before it starts writing. In the past, if you told ChatGPT "Write a sales email," it would just spit out a dry template.
With Lyra, the same request makes ChatGPT keep asking about key details: the product, the target customers, their pain points. Only then does it write an email that actually meets your needs, based on your answers.
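The article doesn't publish Lyra's actual prompt text, but the interview-first pattern it describes can be sketched as a system-prompt wrapper. Everything below, the prompt wording, the message format, and `build_messages`, is an illustrative assumption, not Lyra itself:

```python
# Minimal sketch of an "interview-first" meta-prompt in the Lyra style.
# The prompt text and message shape are assumptions for illustration.

INTERVIEW_META_PROMPT = """\
Before fulfilling any writing request, do NOT write the deliverable yet.
First ask the user 3-5 targeted questions covering:
- the product or subject
- the target audience
- their main pain points
- the desired tone and length
Only after the user answers should you produce the final text."""

def build_messages(user_request: str) -> list[dict]:
    """Wrap a raw request with the interview-first system prompt."""
    return [
        {"role": "system", "content": INTERVIEW_META_PROMPT},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Write a sales email")
```

The point of the wrapper is that the bare request "Write a sales email" never reaches the model alone; the system message forces a clarifying round first.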
The post quickly went viral on Reddit, drawing nearly ten thousand upvotes and thousands of comments. Many praised it as a "great idea," while others grumbled: "Going through 147 prompts? It would be faster to just write the email yourself."
"After trying more than a hundred times, you could have finished writing it yourself in that time."
Absurdity aside, this comedy of "147 failed attempts to summon GPT" reflects a reality: getting AI to accomplish a seemingly simple task is sometimes far more complicated, and more comical, than we expect. It's time for prompting to change.
A New Route for AI Collaboration: Focus on "Atmosphere" and Provide "Context"
Lyra's birth may look accidental, but it actually reflects a shift in how prompt techniques have evolved. For a while, everyone was keen on crafting elaborate prompts to lock in good output; sometimes the prompt ran longer than the AI's response.
The skepticism about Lyra is also a reflection on that old approach. Behind it lies a trend that has recently emerged in the AI community: context engineering.
Context engineering is, at heart, an activity in programming and system design, and it is regarded as the "next-generation basic skill" of AI system design. It builds an end-to-end system around an AI application, covering background, tools, memory, and document retrieval, so that the model performs tasks with reliable contextual support.
It includes:
- Memory structure: Such as chat history, user preferences, and tool usage history;
- Vector database or knowledge base retrieval: Retrieve relevant documents before generation;
- Tool call instruction schema: Such as database access, code execution, and external API format description;
- System prompt: Set the role, boundaries, and output format rules for the AI;
- Context compression and summarization strategy: Compress and manage long-term dialogue content so the model can access it efficiently.
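As a rough illustration of how the five layers above might come together, here is a toy sketch that assembles them into one model-ready message list. The function names, the word-overlap "retrieval," and the stub "compression" are all assumptions for demonstration, not a production design:

```python
# Toy sketch of the five context-engineering layers assembled into
# a single message list. All names and logic are illustrative.

def compress(history: list[str], max_turns: int = 4) -> list[str]:
    """Crude summarization stand-in: keep a summary stub plus recent turns."""
    if len(history) <= max_turns:
        return history
    return [f"[summary of {len(history) - max_turns} earlier turns]"] + history[-max_turns:]

def retrieve(query: str, knowledge_base: dict[str, str], k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(knowledge_base.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [doc for _, doc in scored[:k]]

def build_context(system_prompt, history, prefs, kb, tools, query):
    """Combine system prompt, memory, retrieval, tool schema, and
    compressed history into one context window."""
    return [
        {"role": "system", "content": system_prompt},                       # role and rules
        {"role": "system", "content": f"User preferences: {prefs}"},        # memory
        {"role": "system", "content": "Docs: " + " | ".join(retrieve(query, kb))},  # retrieval
        {"role": "system", "content": f"Tools available: {list(tools)}"},   # tool schema
        *[{"role": "user", "content": t} for t in compress(history)],       # compressed history
        {"role": "user", "content": query},
    ]
```

A real system would swap the word-overlap ranking for vector search and the summary stub for an actual summarizer, but the layering, and the fact that the user's final prompt is only the last entry, is the point.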
When you write a prompt, you are operating in an environment where information such as history, theme files, and user preferences has already been filled in - the prompt is the "instruction," and the context is the "materials and background behind the instruction."
This work belongs to engineers. Although it borrows concepts and techniques from prompt engineering, its home is still software engineering and architecture design. Compared with fine-tuning prompts, context engineering is better suited to real production: it enables version control, error tracking, and module reuse.
Wait, an engineer's job? What does that have to do with ordinary users?
Put simply, if the prompt is the ignition button, then context engineering is designing the whole lighter to ensure that a flame will appear as soon as it is pressed.
Looking at it more comprehensively, context engineering provides the required standardized system framework for building, understanding, and optimizing those complex AI systems of the future. It shifts the focus from the craftsmanship of prompts to the art of information circulation and system optimization.
A paper from the Chinese Academy of Sciences spells out the key differences between the two.
The industry currently treats context engineering as an important practice in agent construction; structured context and tool calls in particular can markedly improve model performance.
Easier Prompts, Clearer Results
However, we still have to go back to the question: What does the engineer's job have to do with me, an ordinary user?
For an ordinary user writing prompts, context engineering and prompt engineering are not the same thing, but they are deeply related. Understanding that relationship can help you write prompts that are more effective and better suited to their context.
Why do traditional prompting methods so often fail or depend on luck? Because many people use AI like a search engine, expecting a perfect answer from a couple of instructions. But large language models generate content through contextual understanding and pattern matching; if the prompt is vague and information is scarce, the model can only guess, often producing stereotyped clichés or off-topic answers.
This may be because the prompt itself is vague and the requirements unclear, but it may also be because the prompt sits inside a poorly structured context. Buried under long-winded chat histories, images, documents, and messy formatting, the model is likely to miss the point or answer off-topic.
Take Lyra's email-writing scenario. In a well-structured window that contains the user's previous correspondence and tone preferences, the model can draft an email that better matches the user's voice, without the user ever writing a complex prompt.
However, even if users only stay at the prompting level and cannot engage in context engineering, they can still borrow some ideas from it.
For example, a framework called "Synergy Prompt" from the Reddit community ChatGPTPromptGenius structures context at the prompt level.
It proposes three core components:
- Metaframe: Each metaframe adds a specific perspective or focus to the dialogue and serves as a "basic cognitive module" built for the AI (role setting, goal description, data-source description, etc.);
- Frames: The concrete content inside each context module;
- Chatmap: A record of the whole dialogue's dynamic trajectory, capturing each interaction and context choice.
Simply put, it continuously consolidates fragmented information into modules and finally forms a map; in use, those existing modules can be invoked as a whole.
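The three components might be modeled in code roughly like this; the class and method names are my own assumptions, since the Reddit post defines concepts rather than an implementation:

```python
# Illustrative sketch of the three Synergy Prompt components:
# metaframes (cognitive modules), frames (their content), and a
# chatmap (trajectory of choices). Names are assumed, not canonical.

from dataclasses import dataclass, field

@dataclass
class Metaframe:
    name: str                                        # e.g. "role", "goal", "data-source"
    frames: list[str] = field(default_factory=list)  # concrete content modules

@dataclass
class SynergyPrompt:
    metaframes: dict[str, Metaframe] = field(default_factory=dict)
    chatmap: list[str] = field(default_factory=list)  # dialogue trajectory log

    def add_frame(self, meta: str, content: str) -> None:
        """File a piece of fragmented information under a metaframe."""
        self.metaframes.setdefault(meta, Metaframe(meta)).frames.append(content)
        self.chatmap.append(f"added frame to {meta}")

    def render(self) -> str:
        """Invoke the accumulated modules as one prompt block."""
        parts = [f"## {m.name}\n" + "\n".join(m.frames)
                 for m in self.metaframes.values()]
        return "\n\n".join(parts)

sp = SynergyPrompt()
sp.add_frame("role", "You are a sales copywriter.")
sp.add_frame("goal", "Write a concise cold email.")
```

Here `render()` emits the modules as a single block to paste into a prompt, while `chatmap` keeps the running record of context choices, the "map" the post describes.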
When the AI grasps the complete context structure from the main points to the details, it can accurately retrieve the information you need and give a precise and targeted response.
This is exactly what context engineering aims to achieve. Who can say that isn't a win-win?
This article is from the WeChat official account "APPSO," author: Discovering Tomorrow's Products. Republished by 36Kr with permission.