He built an agent on DeepSeek-V4 and topped the GitHub trending list.
The DeepSeek version of Claude Code has gone viral!
According to a Zhidx report on May 6, the open-source project DeepSeek-TUI by independent American developer Hunter Bown has gone viral on GitHub, topping the trending list. Its star count rose by 2,434 today, pushing the total past 10,200.
This project is a terminal-native coding agent built on DeepSeek-V4. It lets developers chat with DeepSeek directly in the terminal, edit files, run shell commands, manage tasks, and even coordinate sub-agents across the codebase.
This morning, DeepSeek-TUI was updated to version 0.8.13, focused on fixing runtime and TUI issues. Prompt-specification cleanup, runtime trajectory logs, Anthropic-interface compatibility, and a large-scale reorganization of the interface have all been postponed to later releases.
Notably, the developer of DeepSeek-TUI is not a professional programmer. Bown's undergraduate and graduate degrees have nothing to do with programming: he earned a bachelor's in music education from the University of North Texas in 2015 and a master's in music education from Southern Methodist University in 2019. He is currently studying at SMU's Dedman School of Law.
The project was released in January 2026. It took off after the DeepSeek-V4 upgrade at the end of April this year and a post by Bown on X expressing his desire to connect with Chinese developers, whom he called "Whale Brothers".
Netizens on X shared that Bown has since obtained a WeChat account and begun communicating with Chinese developers.
Claude and Gemini also appear in the contributor list on the DeepSeek-TUI repository homepage.
The author of Tday, an open-source all-in-one scheduling agent terminal, posted that after integrating DeepSeek-TUI into Tday, it proved extremely robust; paired with DeepSeek-v4-flash, its speed comes very close to the open-source AI coding agent OpenCode.
The author of Nexu, an open-source alternative to Claude Design, said this was the first time he had run DeepSeek-V4 directly in a terminal coding-agent environment, with quite good test results.
Some netizens praised it in the replies, saying a project this good deserves support.
Others asked about the origin of the "Whale Brothers" nickname in Bown's post, finding it rather comical.
Still, some netizens found DeepSeek-TUI's popularity puzzling: "Why abandon a product with a mature solution for an unstable one?"
01. Built on DeepSeek-V4, with a special release friendly to Chinese developers
DeepSeek-TUI is a terminal coding agent built on DeepSeek-V4, featuring a 1-million-token context window, streaming reasoning blocks, and prefix-cache-aware cost reporting.
Concretely, within a keyboard-driven terminal interface (TUI), it can read and edit files, execute terminal commands, search the web, manage Git repositories, and schedule multiple sub-agents to work together.
Netizens commented that DeepSeek-TUI's interface layout is clear at a glance, though the boundary between AI output and user input in the dialogue area is not obvious.
Some netizens who compared it via the official DeepSeek API found that, relative to Claude Code, DeepSeek-TUI's cache hit rate drops on long-running tasks.
DeepSeek-TUI's architecture is as follows: command-line entry → DeepSeek-TUI support program → terminal UI ↔ asynchronous engine ↔ OpenAI-protocol-compatible streaming client.
Tool calls are dispatched through a typed registry covering terminal commands, file operations, Git version management, web search, sub-agents, the MCP protocol, and RLM model queries; execution results are streamed back into the dialogue log.
The engine manages session state, dialogue turns, and a persistent task queue, and has a built-in LSP language-service subsystem: after a code edit completes, syntax diagnostics are fed into the model context first, before the next step of reasoning.
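The typed-registry dispatch described above can be sketched roughly as follows. This is a minimal illustration of the pattern; the class and method names are hypothetical, not DeepSeek-TUI's actual internals.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    """A named tool with a handler that takes parsed arguments."""
    name: str
    handler: Callable[[dict], str]

class ToolRegistry:
    """Dispatches model tool calls by name to registered handlers."""
    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def dispatch(self, name: str, args: dict) -> str:
        # Unknown tools produce an error result instead of crashing,
        # so the model can see the failure and correct itself.
        if name not in self._tools:
            return f"error: unknown tool {name!r}"
        return self._tools[name].handler(args)

registry = ToolRegistry()
registry.register(Tool("read_file", lambda a: f"<contents of {a['path']}>"))
print(registry.dispatch("read_file", {"path": "main.rs"}))
```

The registry keeps the engine decoupled from individual tools: adding Git, search, or MCP support is just another `register` call, and every result flows back through the same streaming path.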
The DeepSeek-TUI homepage also offers a mirror-friendly installation build for Chinese developers:
02. Three major operation modes and adaptive reasoning levels
On the project homepage, Bown wrote a Chinese README.zh-CN.md file, which lists DeepSeek-TUI's main features:
Automatic mode: enabled with the "model auto" command; the tool automatically selects a suitable model for each turn of interaction and matches the corresponding reasoning level.
Reasoning-level switching: pressing Shift+Tab cycles through the reasoning levels: reasoning off → high reasoning intensity → highest reasoning intensity.
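The Shift+Tab behavior is a simple wrap-around over the three levels. A tiny sketch of the idea, with level names paraphrased from the README:

```python
# The three reasoning levels, cycled with wrap-around the way a
# Shift+Tab keybinding would advance them.
LEVELS = ["off", "high", "highest"]

def next_level(current: str) -> str:
    """Return the level after `current`, wrapping back to the start."""
    return LEVELS[(LEVELS.index(current) + 1) % len(LEVELS)]

print(next_level("off"))      # "high"
print(next_level("highest"))  # wraps back to "off"
```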
Streaming reasoning output: the model's thinking is streamed in real time, letting users see DeepSeek's full chain of reasoning as it happens.
Full tool capabilities: a complete built-in toolset supporting file read/write, terminal command execution, Git version management, web search and browsing, patch application, sub-agent scheduling, and MCP server connections.
Million-token context: context tracking, manual or automatic compression, plus prefix-cache monitoring and statistics.
Three built-in operation modes: Planning mode (read-only access to project code and files), Agent mode (interactive, with manual approval required), and a minimalist automatic mode (all operations auto-approved and executed).
Session saving and resuming: checkpoints can be created for long-running sessions, progress saved at any time, and a session resumed later with one click.
Workspace version rollback: the project ships an independent shadow-Git snapshot mechanism that automatically captures project snapshots before and after each turn. Operations can be rolled back with the "/restore" and "revert_turn" commands without touching the project's native Git repository configuration.
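The snapshot-and-rollback idea can be illustrated with an in-memory sketch. The article says DeepSeek-TUI uses a separate shadow Git repository for this; the store below only models the behavior, not the real implementation.

```python
import copy

class SnapshotStore:
    """Minimal model of pre-turn snapshot and /restore-style rollback."""
    def __init__(self) -> None:
        self._snapshots: list[dict] = []

    def snapshot(self, workspace: dict) -> int:
        # Deep-copy so later edits to the workspace can't alter the snapshot.
        self._snapshots.append(copy.deepcopy(workspace))
        return len(self._snapshots) - 1   # snapshot id

    def restore(self, snap_id: int) -> dict:
        return copy.deepcopy(self._snapshots[snap_id])

store = SnapshotStore()
files = {"main.py": "print('v1')"}
sid = store.snapshot(files)           # taken before the agent edits
files["main.py"] = "print('v2')"      # an agent edit
files = store.restore(sid)            # rollback undoes the edit
print(files["main.py"])
```

A real shadow-Git version would get content-addressed storage and diffs for free, which is presumably why the project uses Git rather than raw copies.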
Persistent task queue: background tasks are persisted; after a restart, unfinished background tasks resume automatically.
HTTP/SSE interface: the service can be started with "deepseek serve --http", exposing HTTP and SSE endpoints for headless automated agent workflows without a graphical interface.
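SSE (Server-Sent Events) streams are plain text in which payload lines carry a `data:` prefix. A minimal parser for such a stream, shown for the generic SSE format rather than DeepSeek-TUI's specific payloads:

```python
def parse_sse(stream: str) -> list[str]:
    """Collect the payloads of `data:` lines from a Server-Sent Events stream."""
    events = []
    for line in stream.splitlines():
        if line.startswith("data: "):
            events.append(line[len("data: "):])
    return events

print(parse_sse("data: hello\n\ndata: world\n\n"))  # ['hello', 'world']
```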
MCP (Model Context Protocol): connects to MCP servers to extend capabilities with third-party tools.
Native RLM batch queries: a built-in "rlm_query" capability reuses the same API client to call the lightweight, low-cost deepseek-v4-flash model for efficient batch code and data analysis.
LSP code diagnostics: relying on mainstream language servers such as rust-analyzer, pyright, typescript-language-server, gopls, and clangd, errors and warnings are shown in the interface in real time after each edit.
Personalized user memory: a persistent notes file can be enabled; customized preferences are injected into the system prompt, preserving personal habits and configuration across sessions.
Localized interface: four interface languages are supported (English, Japanese, Simplified Chinese, and Brazilian Portuguese), with automatic detection of the system language.
Real-time cost statistics: token consumption and estimated cost are tallied in real time per turn and per session, along with detailed cache-hit and cache-miss data.
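Per-turn cost accounting with a prefix-cache discount might look like the sketch below. All prices here are illustrative placeholders, not DeepSeek's actual rates.

```python
# Illustrative per-token prices; NOT DeepSeek's real pricing.
PRICE_MISS_IN = 0.28 / 1_000_000    # prompt tokens that missed the cache
PRICE_HIT_IN = 0.028 / 1_000_000    # prompt tokens served from the prefix cache
PRICE_OUT = 0.42 / 1_000_000        # output tokens

def turn_cost(prompt_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate one turn's cost, pricing cached prompt tokens at a discount."""
    missed = prompt_tokens - cached_tokens
    return (missed * PRICE_MISS_IN
            + cached_tokens * PRICE_HIT_IN
            + output_tokens * PRICE_OUT)

print(f"${turn_cost(1000, 600, 200):.6f}")
```

Summing these per turn and per session, together with the hit/miss token counts themselves, yields exactly the kind of status-bar readout the feature describes.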
Skill extension system: custom instruction skill packages can be installed and combined from GitHub, flexibly extending tool capabilities without any additional backend services.
This morning, DeepSeek-TUI was updated to version 0.8.13, focused on fixing runtime and TUI issues:
Additional updates include pruning non-LLM tool results before compression: before the paid summarization step, old verbose tool results are mechanically summarized. The latest full copy is kept for repeated reads while older copies are replaced with single-line summaries; if this alone brings the session below the compression threshold, the LLM summary call is skipped entirely.
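A rough sketch of that pruning pass. The message shape and function name are assumptions for illustration, not the project's actual internals, and duplicates are keyed on tool name here for simplicity:

```python
def prune_tool_results(messages: list[dict], threshold_chars: int) -> tuple[list[dict], bool]:
    """Keep the newest full result per tool, stub older duplicates, and
    report whether the session now fits under the compression threshold
    (True => the paid LLM summary call can be skipped)."""
    seen = set()
    pruned = []
    for msg in reversed(messages):          # walk newest-first
        if msg.get("role") == "tool":
            key = msg["tool_name"]
            if key in seen:
                # Older duplicate: collapse to a one-line stub.
                msg = {**msg, "content": f"[pruned older result of {key}]"}
            seen.add(key)
        pruned.append(msg)
    pruned.reverse()                        # restore chronological order
    size = sum(len(m.get("content", "")) for m in pruned)
    return pruned, size <= threshold_chars
```

The appeal of the mechanical pass is that it costs nothing: only when stubbing alone cannot shrink the session enough does the (paid) model-based summarization run.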
Repeated-tool anti-loop guard: each user turn tracks (tool_name, args) pairs. On the third identical call, a synthetic corrective tool result is inserted instead of running the unchanged tool again; if a tool keeps failing, a warning is issued on its third call and execution stops on its eighth.
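The guard's two thresholds can be sketched as below; the interface is hypothetical, but the trip points (3rd identical call, warn at 3 failures, stop at 8) come from the article.

```python
import json
from collections import Counter

class LoopGuard:
    """Blocks the 3rd identical (tool_name, args) call and escalates
    repeated failures: warn on the 3rd, stop on the 8th."""
    def __init__(self) -> None:
        self.identical = Counter()
        self.failures = Counter()

    def before_call(self, tool_name: str, args: dict) -> str:
        # Canonical JSON makes dicts with reordered keys compare equal.
        key = (tool_name, json.dumps(args, sort_keys=True))
        self.identical[key] += 1
        if self.identical[key] >= 3:
            return "synthetic_error"   # don't run the unchanged tool again
        return "run"

    def after_failure(self, tool_name: str) -> str:
        self.failures[tool_name] += 1
        if self.failures[tool_name] >= 8:
            return "stop"
        if self.failures[tool_name] >= 3:
            return "warn"
        return "continue"

guard = LoopGuard()
print(guard.before_call("grep", {"q": "foo"}))  # 1st: run
print(guard.before_call("grep", {"q": "foo"}))  # 2nd: run
print(guard.before_call("grep", {"q": "foo"}))  # 3rd: synthetic_error
```

Returning a synthetic error result, rather than silently dropping the call, keeps the failure visible to the model so it can change course.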
V4 cache-hit telemetry fallback: usage parsing now recognizes the "usage.prompt_tokens_details.cached_tokens" field, so the existing cache-hit indicator in the bottom status bar works with DeepSeek-V4's automatic prefix-cache telemetry while remaining compatible with the older explicit cache hit/miss field format.
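The dual-format parsing might look like this. The nested `prompt_tokens_details.cached_tokens` path is the field named in the article; the legacy field name used in the fallback is an assumption for illustration.

```python
def cached_tokens(usage: dict) -> int:
    """Prefer the nested cached_tokens field; fall back to an explicit
    hit-token field (legacy field name assumed), else 0."""
    details = usage.get("prompt_tokens_details") or {}
    if "cached_tokens" in details:
        return details["cached_tokens"]
    return usage.get("prompt_cache_hit_tokens", 0)

print(cached_tokens({"prompt_tokens_details": {"cached_tokens": 42}}))  # 42
print(cached_tokens({"prompt_cache_hit_tokens": 7}))                    # 7
```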
03. Conclusion: an attempted Claude Code replacement, but stability is in question
Proprietary systems like Claude Code usually require paid API access and run in relatively closed ecosystems. DeepSeek-TUI may offer a reference point for breaking that pattern: built on DeepSeek's low-cost model stack, it delivers a similar workflow at a lower price. Still, developers cannot ignore the risks behind an unstable open-source project like this.
Even so, the project's popularity indirectly confirms the influence of DeepSeek-V4. It opens new possibilities for more developers to build terminal coding agents at low cost and customize their development workflows independently.