Jihu's Yuma CodeRider: a private AI Coding engine that boosts the end-to-end R&D efficiency of Fortune Global 500 companies by 30%.
AI Coding, the first track in the past two years to have verified the Product-Market Fit (PMF) of large models.
Since the beginning of 2024, AI Coding products such as Cursor, Devin, and Windsurf have emerged one after another, and their makers' funding rounds have kept climbing. This year, AI Coding in the United States has entered its next stage.
First of all, major model manufacturers have entered the arena more directly.
First, Anthropic, the model maker whose Claude models are used by multiple AI Coding products, officially launched Claude Code. According to data released in July, this coding product, four months after launch, had been used by more than 115,000 developers and was processing 195 million lines of code per week. In May, OpenAI also relaunched Codex, positioning it as an AI agent that helps programmers write code, fix bugs, and run tests, with support for real-time collaboration and asynchronous task delegation.
Large companies are also joining this "AI arms race." In early July, foreign media reported that Google had "acquired" the core team of AI Coding startup Windsurf for $2.4 billion. The move preempted OpenAI: after Cursor rebuffed its acquisition overtures, OpenAI had been planning to buy Windsurf for $3 billion.
While activity surges overseas, AI Coding has long been a hotly contested field in China. Whether it is Alibaba's Tongyi Lingma, ByteDance's Trae, or Baidu's Wenxin Kuaima, almost every tech giant betting on AI has an AI Coding product. After last year's boom, some startups have withdrawn, while others are re-emerging in new forms.
An interesting phenomenon is that many of this year's "new faces" are mainly promoting Vibe Coding. These products, which claim to let ordinary people build software with a single sentence, have opened up new possibilities for plenty of people who cannot program.
In practice, however, anyone who tries this purely natural-language-driven style of AI-assisted programming soon finds that it only suits idea validation and lightweight development. In a professional production environment, this so-called "light development" model leads to steadily accumulating bugs and code that is hard to debug.
Against this industry trend, some companies are trying to explore another path.
"What we want to achieve is to enable Chinese enterprises and Chinese programmers to use AI programming products that are truly suitable for their actual situations," Liu Gang, CEO of Jihu GitLab, told 36Kr.
Jihu GitLab, founded in 2021, is derived from the open-source platform GitLab and can be regarded as the "Chinese version of GitLab." In 2022, the company released a DevOps product based on GitLab, covering management, planning, creation, verification, packaging, release, and operations. It enables efficient collaboration among product, development, QA, security, and operations teams, accelerating and streamlining the enterprise software development lifecycle.
When Jihu GitLab turned two, large models took off, and this company, deeply rooted in enterprise R&D, naturally moved to build AI into its products.
A year ago, in May 2024, Jihu GitLab launched its first enterprise-level AI programming product, Yuma CodeRider.
At the time, the AI Coding wave was at its peak, and the Vibe Coding model of "finishing a programming task with a single sentence" had just emerged. But Jihu GitLab, by then rooted in the Chinese market for more than three years, set Vibe Coding aside and chose a path that would make AI Coding more practical and better matched to enterprise needs.
Specifically, Jihu GitLab's AI Coding product, Yuma CodeRider, supports private deployment and couples AI capabilities with the company's own DevOps platform, embedding them across code generation, security review, testing, and release to form an "end-to-end" enterprise-grade closed loop.
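To make the idea of an end-to-end closed loop concrete, here is a minimal, hypothetical .gitlab-ci.yml sketch of how AI-assisted steps could be threaded through an ordinary GitLab pipeline. The stage layout and variables follow standard GitLab CI conventions, but the ai-unit-test and ai-code-review job names, the coderider/cli image, and the coderider commands are illustrative assumptions, not the actual product configuration.

```yaml
# Hypothetical pipeline: AI-assisted steps embedded in a standard GitLab CI flow.
# The coderider/cli image and coderider commands are placeholders for illustration only.
stages:
  - build
  - test
  - review
  - security
  - deploy

build-app:
  stage: build
  script:
    - make build                                # compile the application as usual

ai-unit-test:
  stage: test
  image: coderider/cli:latest                   # assumed placeholder image
  script:
    - coderider generate-tests --target src/    # hypothetical command: draft missing unit tests
    - make test                                 # run the full suite, AI-drafted tests included

ai-code-review:
  stage: review
  image: coderider/cli:latest
  script:
    - coderider review --mr "$CI_MERGE_REQUEST_IID"   # hypothetical command: post review comments
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

security-scan:
  stage: security
  script:
    - make sast                                 # standard static analysis / security scanning

deploy-staging:
  stage: deploy
  script:
    - make deploy ENV=staging
  environment: staging
  when: manual                                  # keep the release decision with a human
```

The point of the sketch is only that generation, testing, review, security scanning, and release live in one pipeline rather than in a detached editor plugin.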
By the summer of 2025, Yuma CodeRider had signed contracts with dozens of customers, with more prospects in the Proof of Concept (PoC) stage. Rather than pointing to the tens of thousands of users that Vibe Coding products claim, Jihu GitLab prefers to prove the product can land through customers who see real benefits and through its commercialization progress.
"We have integrated AI capabilities into the entire programming process. Moreover, we have added functions specifically suitable for Chinese enterprises and Chinese programmers," said Liu Gang. "Only in this way can AI Coding be truly implemented in batches in China."
01 A real AI Coding product cannot just be good at coding
The AI code adoption rate is a performance indicator for some enterprises when implementing AI Coding.
However, 36Kr has learned that some enterprises are now caught in a bind trying to raise this metric. "AI can generate a lot of code, but it often doesn't meet the requirement on the first pass, so the metric hits a bottleneck once it has been pushed to a certain level," an R&D engineer told 36Kr.
The cause is not just the limitations of large models. The deeper problem is that enterprise development is a team effort, and many issues cannot be solved by technology alone; they come down to people and collaboration.
Viewed through the enterprise R&D process, a requirement has to be reviewed by business, R&D, and product roles before it is finalized; along the way, the product and engineering teams must produce a PRD, technical documents, and so on. Only then does development begin.
During development, the Dev and Ops workflows must also stay tightly coupled so the product can ship quickly and reliably.
Measured against these problems, existing AI Coding products still carry plenty of "unstable factors."
A recently released report pointed out that although many AI tools can generate source code, they rarely consider the application's design and architecture or the relationships between architecture and components. Nor can they, the way human developers do, fully weigh maintainability, reusability, scalability, and performance while generating code. On top of that, AI-generated code is often insecure and may contain many errors.
In addition, with computing power scarce and security requirements strict, Chinese enterprises also have to ask whether a given AI Coding product can be privately deployed within the compute they can actually support.
In the eyes of Liu Gang, CEO of Jihu GitLab, today's AI Coding products can be judged along two dimensions: whether they support private deployment, and whether they support full-cycle intelligent software R&D. These are exactly what Jihu GitLab aims to deliver with Yuma CodeRider, and they are the product's biggest differentiators.
Let's first look at full-cycle intelligent software R&D.
Enabling AI to support full-cycle intelligent software R&D is almost in Jihu GitLab's genes.
Jihu GitLab is derived from the globally known open-source platform GitLab. Founded in 2014, GitLab's core business is an open-source DevOps platform that helps developers collaborate online and manage versions. Its ability to serve enterprise private repositories gives development teams tighter control over their code repositories, which is also what sets GitLab apart from competitors.
Naturally, in February 2022, Jihu GitLab released a one-stop DevOps platform covering management, planning, creation, verification, packaging, release, and operations. In 2022, Jihu GitLab already had more than 180 customers.
Yuma, launched in 2024, is deeply integrated with Jihu's existing DevOps platform and puts forward the paradigm of "agent-based programming + workflow integration." The effect is a seamless hand-off from code generation through code review to the CI/CD pipeline, building a complete intelligent software R&D loop that can land directly inside the enterprise.
This platform-centric strategy has gradually become a consensus this year. Cursor, for example, now places growing emphasis on smooth unit testing and on end-to-end, task-based programming capabilities rather than on code generation alone.
"People are gradually realizing that software development is a full - lifecycle process, including code generation, unit testing, code review, security scanning, and continuous release, covering the entire process from development to implementation," said Liu Gang.
This trend suits Jihu perfectly. After all, starting from the Chinese version of GitLab, it understands the ins and outs of the enterprise internal R & D process better than anyone else.
"From the moment the code is generated, a large amount of modification, editing, review, and merging work is involved. In an enterprise, programming is never a single - person act. There are at least a dozen people, and sometimes hundreds of people collaborating in development. In the field of integrated collaboration, I believe that many AI programming startups, including Cursor, still have a long way to go," said Liu Gang.
In terms of the implementation method, Yuma and Jihu's original DevOps platform are not simply glued together.
Yuma now embeds AI in every one of its functions, and different functions may call different models. Liu Gang told us that over the past two years, Jihu has worked out which models best fit each product function, so it can hand customers the most practical and effective combination directly.
In terms of the usage effect, Jihu GitLab and Yuma CodeRider can be "seamlessly integrated and smoothly switched."
Open the product today and you will find that the Jihu GitLab DevOps interface offers an intelligent review option when code is submitted, while the Yuma CodeRider plugin provides issue analysis, summarization, and consolidation without ever leaving the plugin.
This platform-level thinking combined with AI capabilities, together with fast rollout and a smooth product experience, has helped Yuma CodeRider convert Jihu's existing customers.
For example, one Fortune Global 500 new retail company has built out knowledge bases, project guidance, unit testing, and code review capabilities around GitLab, and internally estimates that its R&D efficiency has improved by 27%.
02 Understanding private deployment is the way to meet Chinese enterprises' AI Coding needs
Private deployment is another of Yuma CodeRider's differentiators in the Chinese market.
Owing to long-standing IT environments and usage habits, private deployment is the model Chinese medium and large enterprises most commonly consider when buying IT products. In the AI Coding scenario, however, few solutions currently meet enterprise customers' needs.
The reason is that most of today's popular AI Coding products for professional programmers come from overseas (Cursor, Windsurf, Claude Code, and so on), and almost none of them support private deployment.
That means if an enterprise does not want its data exposed to the public internet and wants to build a dedicated in-house system, these popular overseas products simply cannot be delivered.
Excluding overseas products, let's look at the solutions of Chinese manufacturers.
Although Chinese IT suppliers are familiar with private deployment, many new AI Coding products are chasing the global market overseas and do not prioritize private-deployment versions. And among the vendors that do offer them, many lack experience privately deploying AI products for enterprises.
"It is like building a small water treatment plant: small in scale, but it still has to meet the need. That takes domain know-how, things like model combination, computing power tiering, and software-hardware optimization. This part of the investment is hard to appreciate for companies that have never done private deployment," Liu Gang explained by analogy.
Providing that kind of service means supporting enterprises over the long term, continuously tracking new technology, and delivering ready-made solutions on time; Jihu GitLab looks like one of the few companies positioned to do so.
Judging by the results, some Jihu GitLab customers are already seeing clear gains.
For example, a listed imaging-equipment company with strict data-security requirements and a need for private deployment found that, with Yuma CodeRider's private-deployment solution, it achieved the same results using only half the model parameters and GPU resources required by a certain cloud provider's offering.
A Fortune Global 500 new retail company, with an R&D team of hundreds plus a large contingent of outsourced staff, found the CodeRider all-in-one appliance ready to use out of the box after purchase, at a cost lower than renting GPUs from cloud providers.
"The value of the investment in private deployment is underestimated. For example, after optimization on a standard all-in-one appliance, the concurrency and generation throughput we support are several times what they would be without it," said Liu Gang.
As for whether private deployment "cripples" model performance, Liu Gang said that in real-world rollouts many Chinese enterprises genuinely have to balance compute costs against results.
In other words, if privately deploying a full-size model is too expensive, the supplier has to find a deployment configuration that fits. Only vendors experienced in model combination, computing power tiering, and software-hardware optimization can do this well for enterprise customers.
03 The positive relationship between open source and commercialization
As the product of a company with open-source genes, and in an AI era where open source has become the de facto standard, Yuma CodeRider is also weighing open-sourcing.
Right now, open-sourcing AI Coding plugins is turning from a trend into reality.
On May 19, the VS Code team officially announced that the code of the GitHub Copilot Chat extension would be open-sourced.
Liu Gang reads three things into this. First, open-sourcing Copilot Chat will not fundamentally change the current market ecosystem or user habits, but its influence is considerable.
Second, Yuma offers more complete functionality, a better user experience, and a closer fit with Chinese programmers, so it will not be hurt by GitHub Copilot Chat.
Third, the major IDEs welcome AI Coding plugins. Going forward, Yuma will also push to build AI plugins for platforms such as VS Code, JetBrains, and Android Studio.
With a clear read on the industry trend, Yuma is already preparing to open-source some of the product's features.
Jihu GitLab is also clear about what open-sourcing is for: earning the trust of more customers at minimal cost, driving commercial conversion, and sharing its thinking more broadly.
"We think our product holds plenty of inspiration for the Chinese developer market, such as optimizing architecture under limited resources, tying agent-based programming to concrete tasks, and integrating AI programming with code management," said Liu Gang. "By sharing these ideas and technical architectures through open-sourcing, more users will see our product, which in turn drives its commercialization."
This mirrors Copilot's own logic: open-sourcing Copilot Chat has not changed Copilot's paid plans. For any open-source product, a free tier only lets programmers sample it; once the product has to run in a production environment, free resources are nowhere near enough.
"Open - sourcing can quickly form an influence in the developer market and attract users to upgrade to the enterprise version," said Liu Gang.
Currently, Yuma CodeRider has signed contracts with dozens of customers, with more enterprises in the PoC or technical-evaluation stage. The absolute number is not large, but these customers generally meet two conditions: they employ enough programmers, and they have real ability to pay. That is also the commercial-customer profile Liu Gang has distilled.
He told 36Kr that Yuma CodeRider's largest current customer is a Chinese new-energy vehicle startup. The automotive industry employs large numbers of programmers; this company alone has thousands, and the annual contract runs into millions of RMB.
Many high-quality prospects of this type first let some of their programmers try out various AI Coding products before committing to a purchase. The significance of open-sourcing lies precisely in that trial stage.