The stock price soared 32%. GLM-5 topped the global open-source leaderboard, and a complete system was built in a single 25-minute run.
[Introduction] The era of Vibe Coding has come to an end! At the beginning of 2026, Zhipu's GLM-5 made a stunning debut, reshaping the rules of the game with "Agentic Engineering". At a price as low as one-seventh of Claude's, this domestic model is going head-to-head with Opus 4.5!
Late at night on February 7th, a mysterious model codenamed "Pony Alpha" was quietly launched.
Subsequently, the overseas internet exploded.
Feed it a pile of "shit code" that you've been struggling with for a day, and it'll refactor the architecture with ease; give it a simple prompt, and it'll spit out a complete web app with 35 radio stations and a polished UI.
This extreme engineering ability directly confirms Andrej Karpathy's assertion a few days ago:
Vibe Coding is a thing of the past. There's only one name for the new rules of the game -
Agentic Engineering.
Right after that, Opus 4.6 and GPT-5.3-Codex were launched late at night the next day, with their entire focus on "long-range tasks and system engineering".
Just when everyone thought it was another solo show by closed-source giants, the mystery of Pony Alpha was revealed -
It's GLM-5.
The world's first open-source model to enter this arena and go head-to-head with Silicon Valley giants in system-level engineering capabilities.
After the mystery was revealed, Zhipu's stock price soared by 32%!
The World's First Open-Source! The "Opus Moment" for Domestic Models
After actually using it, we came away with one impression: it's genuinely impressive!
If Claude Opus represents the peak of closed-source models, then the release of GLM-5 undoubtedly marks the "Opus Moment" for domestic open-source models.
On the authoritative Artificial Analysis leaderboard, GLM-5 ranks fourth globally and first among open-source models.
On the day of its release, over 10 games and tools "handcrafted" by developers based on GLM-5 were simultaneously exhibited and available for experience. These applications will also be gradually launched on major app stores.
This means that GLM-5 is transforming "AI programming" into "AI delivery", truly achieving a seamless transition from productivity tools to commercial products.
Experience address: showcase.z.ai
Take, for example, the project called "Pookie World".
It's a digital parallel world driven by GLM-5, endowing autonomous agents with real narrative integrity and life motivation through a multi-layered biological-psychological framework.
There's also a replica of "Minecraft". The effects and gameplay are almost identical to the original.
We also used Claude Code as the shell, connected it directly to GLM-5's API, and ran hands-on tests across multiple dimensions.
Whether it's a full-stack Next.js project or a native macOS/iOS application, it can close the full loop from requirement analysis and architecture design to code writing and end-to-end debugging.
After working through several projects, one impression stuck with us:
To some extent, GLM-5 might be a model that can change the industry landscape.
· Complex Logic Challenge: "Infinite Knowledge Universe"
If you think writing a web page is easy, try getting an AI to handle an "infinite-stream" project with strict JSON format requirements and dynamic rendering.
Take the "Infinite Knowledge Universe" we tested first.
This is a typical complex front-end and back-end separation project, involving React Flow dynamic rendering, Next.js API route design, and extremely strict JSON format output requirements.
GLM-5's performance in this regard is truly amazing.
It not only completed the entire project file structure in one go, but what's even more surprising is its debugging logic.
When we encountered a rendering bug and simply said, "The page is still black. The first piece of content didn't appear during initialization...",
GLM-5 immediately identified it as a loading timing issue and quickly provided a correction plan.
The complete prompt is as follows:
- Infinite-stream · Concept Visualization
- Core Concept: This is a mind map that "can never be fully explored". When the user inputs any keyword (such as "Quantum Physics" or "Dream of the Red Chamber"), the system generates a central node. When the user clicks on any node, the AI expands its child nodes in real-time.
- Stunning Moment: Users will feel like they're interacting with an all-knowing brain. When they randomly click on an obscure concept and the AI can still accurately expand the next level, this feeling of "infinite exploration" is truly amazing.
- Visual and Dissemination:
- - Use React Flow or ECharts to create a dynamic and draggable node network.
- - Use Cyberpunk or minimalist color schemes, which are perfect for taking screenshots and sharing on Moments.
- Feasibility Plan:
- - Front-end: React + React Flow (responsible for drawing).
- - Back-end: Next.js API Route.
- - Prompt Strategy: There's no need for complex context memory. Just let the AI generate 5 - 6 related child nodes for the "current node" and return them in JSON format.
- - Difficulty to Overcome: Make the model output a stable JSON format (this is an excellent scenario for testing the model's instruction-following ability).
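The "stable JSON output" difficulty in the prompt above is usually backed by a defensive parser on the server side: extract the JSON array from the model's reply (which may be wrapped in markdown fences or prose), parse it, and validate each node's shape before rendering. Here's a minimal TypeScript sketch; the function and field names (`parseChildNodes`, `label`, `id`) are our own illustration, not the schema GLM-5 actually produced:

```typescript
// Shape of one child node the model is asked to return (our own
// illustrative schema; the article does not specify field names).
interface ChildNode {
  id: string;
  label: string;
}

// Parse a raw LLM reply into validated child nodes. Models often wrap
// JSON in markdown fences or surrounding prose, so extract the
// outermost array before parsing, then check each entry's shape.
function parseChildNodes(raw: string): ChildNode[] {
  const start = raw.indexOf("[");
  const end = raw.lastIndexOf("]");
  if (start === -1 || end <= start) throw new Error("no JSON array in reply");
  const data = JSON.parse(raw.slice(start, end + 1));
  if (!Array.isArray(data)) throw new Error("expected a JSON array");
  return data.map((n: any, i: number) => {
    if (typeof n?.label !== "string") throw new Error(`node ${i} has no label`);
    return { id: typeof n.id === "string" ? n.id : `node-${i}`, label: n.label };
  });
}
```

In the "Infinite Knowledge Universe" setup, a Next.js API route would run this on each reply and hand the validated nodes straight to React Flow for rendering, so a single malformed reply degrades into a caught error rather than a blank graph.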
· An Even More Complex Middleware Project Built in 11 Minutes
Next, we increased the difficulty and asked it to develop a psychological analysis app called "Soul Mirror".
The requirements are divided into two steps:
Step 1
Logical Design: Act as a Jungian psychology expert and output a JSON containing analysis text and visual parameters.
Step 2
Front-end Implementation: Dynamically render an SVG based on the parameters to generate a tarot card-style card.
- Prompt
- Step 1: Logical Design
- We're going to develop a psychological analysis app called "Soul Mirror".
- Interaction Process:
- 1. Guide Page: The user inputs their current state or confusion.
- 2. Analysis Page: The AI asks 2 in-depth follow-up questions to guide the user to explore their inner self.
- 3. Result Page: Based on the conversation, the AI generates a "spiritual card".
- Please design the core Prompt (System Instruction): The model is required to act as a Jungian psychology expert. In the last step, the model needs to output a JSON containing:
- - analysis: Psychological analysis text.
- - visualParams: A set of parameters for generating an abstract art image (such as colorPalette (an array of hexadecimal colors), shapes (circles/triangles/waves), chaosLevel (a numerical value representing the degree of chaos)).
- Step 2: Front-end Implementation and SVG Rendering
- Please write the Next.js front-end code. The focus is on implementing a ResultCard component.
- Requirements:
- 1. Receive the visualParams parameter from Step 1.
- 2. Use SVG to dynamically draw graphics. For example, if the chaosLevel is high, use irregular paths; if the colorPalette is warm, use a gradient orange-red background.
- 3. The card layout should be beautiful, like a tarot card: with a dynamic SVG pattern in the middle and the user's name and a "soul maxim" from the AI at the bottom.
- 4. Add a "Save as Image" button (using the html-to-image library).
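The core of Step 2's `ResultCard` is a deterministic mapping from `visualParams` to SVG markup. A minimal sketch of that mapping, written as a pure string builder so the logic is testable outside React (the function name `buildCardSvg` and the exact layout are our own assumptions, and only the "circles" shape is handled for brevity):

```typescript
interface VisualParams {
  colorPalette: string[]; // hex colors, e.g. ["#ff5e3a", "#ffd447"]
  shapes: "circles" | "triangles" | "waves";
  chaosLevel: number; // 0..1, higher = more irregular
}

// Build an SVG string from the model's visual parameters. The first
// palette color becomes the background, the second the shapes; a higher
// chaosLevel draws more circles with larger vertical jitter. Kept
// deterministic so the same params always produce the same card.
function buildCardSvg(p: VisualParams, size = 200): string {
  const [bg = "#222", fg = "#eee"] = p.colorPalette;
  const n = 3 + Math.round(p.chaosLevel * 7); // more chaos, more shapes
  const parts: string[] = [];
  for (let i = 0; i < n; i++) {
    const cx = (size / (n + 1)) * (i + 1);
    const cy = size / 2 + Math.sin(i * 2.4) * p.chaosLevel * size * 0.3;
    const r = 8 + (i % 3) * 4;
    parts.push(`<circle cx="${cx.toFixed(1)}" cy="${cy.toFixed(1)}" r="${r}" fill="${fg}"/>`);
  }
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}">` +
    `<rect width="${size}" height="${size}" fill="${bg}"/>` + parts.join("") + `</svg>`;
}
```

A React `ResultCard` would then inject this markup (or rebuild it with JSX `<circle>` elements) above the name and "soul maxim" text, and the html-to-image export in requirement 4 works on the rendered DOM node as-is.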
Throughout the process, its comprehension often made us wonder whether we were actually using Opus 4.5.
But when you take a look, it's indeed GLM-5.
· A 25-Minute Seamless Process: True Agentic Coding
To further test GLM-5's capabilities, we asked it to create a monitoring system for the X platform without using an API, completely simulating a real user.
Result: It completed the task in 25 minutes without interruption.