
It's a quantitative fund again. Is it the second DeepSeek moment?

Letter AI · 2026-01-04 11:02
Liang Wenfeng has also published a new paper, but the new model is still in the works.

ZhiZhi Innovation Research Institute, an arm of Ubiquant Investment, has released the open-source programming Agent model IQuest-Coder-V1. Although the institute isn't widely known in the AI field, the model's benchmark numbers rival the industry's top tier.

Its parent company is a quantitative private equity firm, and the model was released in January. That combination easily brings to mind DeepSeek R1 from the same period last year.

In fact, when DeepSeek R1 was released last year, the situation was similar: a little-known company shipped an industry-leading model.

So, will IQuest-Coder-V1 be the next "DeepSeek moment"?

It's too early to draw a conclusion for now.

According to JetBrains' "State of Developer Ecosystem 2025" report, 85% of developers globally already use AI tools, and 41% of the world's code is generated by AI. However, most of these tools remain at the level of assistants.

From OpenAI to Anthropic, the agent products companies rushed out at the end of 2025 all target code as the breakthrough point.

So, at least one thing is certain: programming Agents are the next big thing.

01

IQuest-Coder-V1 is not just a simple code-completion tool. It's a large-scale code language model capable of independently completing the entire software engineering process.

Past AI programming assistants mainly did automatic code completion: they continued code you had started. In contrast, IQuest-Coder-V1 can understand requirements from scratch, design architectures, write code, test and debug, and even run multiple rounds of iterative optimization.

IQuest-Coder-V1 has three crucial technical features.

First, its parameter count is 40B. Compared with models like GPT-5 and Gemini 3, whose parameter counts are reported in the hundreds of billions or more, 40B is a small fraction of the size.

That means IQuest-Coder-V1 can run on higher-end consumer hardware, without professional data-center-level computing power.
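A back-of-envelope calculation, ours rather than anything the institute has published, shows why 40B sits near the dividing line between data-center and consumer hardware:

```python
# Rough weight-memory math for a 40B-parameter model. Illustrative
# only: real deployments also need memory for the KV cache,
# activations, and runtime overhead.
params = 40e9

for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.0f} GiB of weights")

# FP16: ~75 GiB  -> multi-GPU server territory
# INT8: ~37 GiB  -> two 24 GiB consumer GPUs
# INT4: ~19 GiB  -> a single 24 GiB consumer GPU
```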

The second feature is the Loop architecture.

The name is straightforward: the model iterates over its own output. Just as programmers review, modify, and refactor code after writing it, the Loop architecture lets the model reflect on and improve the code it generates.

However, the Loop architecture isn't just repeated calls. It internalizes the iterative optimization process into the model architecture itself. In simple terms, IQuest-Coder-V1 does extra work up front to ensure the final output meets the user's requirements.

The Loop version makes the model "go through" the same neural network twice. It's like re-reading a key paragraph in an article: you often notice issues you missed the first time.
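In code, a weight-tied "looped" forward pass looks roughly like the sketch below. This is a generic illustration of the idea, not a reconstruction of IQuest-Coder-V1's actual architecture, whose details we are only inferring from the description above:

```python
import torch
import torch.nn as nn

class LoopedBlock(nn.Module):
    """Apply the same layer stack n_loops times, reusing depth
    instead of duplicating parameters."""

    def __init__(self, d_model=1024, n_layers=4, n_loops=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.stack = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.n_loops = n_loops

    def forward(self, x):
        # Each pass "re-reads" the representation from the previous pass.
        for _ in range(self.n_loops):
            x = self.stack(x)
        return x

h = torch.randn(1, 16, 1024)      # (batch, sequence, d_model)
print(LoopedBlock()(h).shape)     # torch.Size([1, 16, 1024])
```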

The third feature is the code-flow training paradigm.

Traditional code models learn from code snippets, focusing on static syntax and API call patterns. In plain language, the AI can faithfully reproduce code it has seen but doesn't understand the reasoning behind it.

IQuest-Coder-V1, by contrast, learns how software evolves step by step, focusing on dynamic logical evolution. This lets the model understand not only "what this code is" but also "why this code is written this way" and "how it should be modified next".
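One way to picture the difference is the shape of a training example. The schema below is our illustrative guess at what "evolution-centric" data could look like, not IQuest's published format:

```python
from dataclasses import dataclass

# Hypothetical example shape: instead of an isolated snippet, each
# record pairs a code state with the next change and its rationale.
@dataclass
class CodeFlowExample:
    issue: str        # why the change is needed
    before: str       # file content prior to the change
    diff: str         # the edit that was actually made
    test_result: str  # outcome after applying the edit

example = CodeFlowExample(
    issue="divide() crashes when b == 0",
    before="def divide(a, b):\n    return a / b\n",
    diff=(
        " def divide(a, b):\n"
        "+    if b == 0:\n"
        "+        raise ValueError('b must be non-zero')\n"
        "     return a / b\n"
    ),
    test_result="passed",
)
```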

IQuest-Coder-V1 is then trained with reinforcement learning on 32k high-quality trajectories, generated automatically through multi-agent role-playing.

The system simulates the interaction among users, Agents, and Servers: the user states a requirement, the Agent writes the code, and the Server returns the execution result, all without manual annotation. The training objective isn't one-shot code generation but the complete software evolution process.
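A minimal, fully toy version of that role-play loop might look like this. All three roles here are stand-ins we invented for illustration; a real pipeline would use LLMs for the user and Agent and a sandboxed execution environment for the Server:

```python
# Toy user/Agent/Server role-play that emits a trajectory plus a
# binary reward, mirroring the annotation-free setup described above.
class User:
    def propose_task(self):
        return "Implement add(a, b) that returns a + b"

class Agent:
    def write_code(self, history):
        return "def add(a, b):\n    return a + b"

def run_in_sandbox(code):
    env = {}
    exec(code, env)                      # "Server" executes the code
    return env["add"](2, 3) == 5         # binary pass/fail feedback

def generate_trajectory(max_turns=4):
    user, agent = User(), Agent()
    trajectory = [("user", user.propose_task())]
    for _ in range(max_turns):
        code = agent.write_code(trajectory)
        passed = run_in_sandbox(code)
        trajectory += [("agent", code), ("server", passed)]
        if passed:
            return trajectory, 1.0       # reward for passing tests
    return trajectory, 0.0

print(generate_trajectory()[1])          # 1.0
```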

These design choices show up in benchmarks. On SWE-Bench Verified, which measures real-world software engineering ability, IQuest-Coder-V1 scored 81.4%, ahead of Claude Sonnet 4.5's 77.2%. It scored 81.1% on LiveCodeBench v6 and 49.9% on BigCodeBench.

IQuest-Coder-V1 comes from the ZhiZhi Innovation Research Institute, initiated by the founding team of Ubiquant Investment. This research institute is independent of Ubiquant's quantitative investment and research system and focuses on multiple AI application directions.

Ubiquant Investment itself is one of China's earliest quantitative private equity firms, established in 2012. It currently manages over 60 billion RMB in assets and is known, along with Minghong, High-Flyer, and Lingjun, as one of the "Four Kings of Quantitative Investment".

Founder Wang Chen holds a bachelor's degree in mathematics and physics and a doctorate in computer science from Tsinghua University, where he studied under Yao Qizhi (Andrew Chi-Chih Yao), the only Chinese Turing Award winner. Co-founder Yao Qicong holds a bachelor's degree in mathematics from Peking University and a master's degree in financial mathematics.

Both of them came from the top Wall Street hedge fund Millennium. They returned to China to start their business in 2010, seizing the opportunity of the launch of Chinese stock index futures.

Since 2020, Ubiquant has been building a supercomputing cluster named "Beiming" and has set up an AI Lab, a Data Lab, and the Waterdrop Laboratory.

This infrastructure was originally built for the quantitative investment business but now also provides computing power for large-scale model research and development.

The path from quantitative investment to open-source large models isn't abrupt.

Quantitative institutions already have large-scale computing clusters and the ability to process massive amounts of data, which aligns closely with the requirements of large-model training. More importantly, there's significant overlap in talent structure between quantitative investment and AI research: both demand research-oriented people with mathematics, computer science, and physics backgrounds.

So from a development perspective, IQuest-Coder-V1 looks like a natural extension of Ubiquant's foray into AI rather than simple trend-chasing.

02

It's undeniable that there are striking similarities between IQuest and DeepSeek.

They both come from Chinese quantitative funds and demonstrate the ability to achieve technological breakthroughs through engineering innovation under resource - constrained conditions. However, a closer look reveals that they've chosen completely opposite directions.

DeepSeek pursues "breadth". From DeepSeek-V3 to R1, Liang Wenfeng's team aimed to build general conversation capabilities, aspiring to create a Chinese equivalent of GPT.

It aims to answer questions in various fields, write poems, tell stories, analyze current events, and solve math problems. This is a horizontally expanding path, covering as many application scenarios as possible.

IQuest-Coder-V1 pursues "precision". It focuses on the vertical code field and excels in professional tests like SWE-Bench. It doesn't care about writing poems but about understanding requirements, designing systems, and fixing bugs like a real programmer.

Interestingly, on the same day that IQuest-Coder-V1 was released, the DeepSeek team made a move of its own.

Nineteen researchers, including founder Liang Wenfeng, published a paper on the mHC (manifold-constrained hyper-connections) architecture, which addresses the instability of hyper-connection networks in large-scale training.

Although the DeepSeek team keeps up a steady cadence of research updates, it seems to be lagging on products. R2 and V4 still haven't shipped.

In 2025, competition in AI centered on conversation and reasoning: companies raced to answer questions better and present clearer reasoning. By 2026, the focus has shifted to Agent capabilities, competing on whether an AI can independently complete complex multi-step tasks.

The core of Agent capabilities is "execution", not just "understanding" and "answering".

Take code as an example. A conversational AI can tell you how to fix a bug in the code, but an Agent can directly modify the code, run tests, and submit the changes. These are completely different levels of capabilities.
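The gap is easy to see in miniature. The loop below uses ordinary git and pytest commands to show what "execution" means in practice; it's a generic illustration, not any particular vendor's agent:

```python
import subprocess

def apply_fix_and_verify(patch_path: str) -> bool:
    """Edit-test-commit loop: the difference between advising a fix
    and executing one. Assumes it runs inside a git repository that
    has a pytest test suite."""
    subprocess.run(["git", "apply", patch_path], check=True)   # modify code
    tests = subprocess.run(["pytest", "-q"])                   # run tests
    if tests.returncode == 0:
        subprocess.run(["git", "commit", "-am", "fix: apply agent patch"],
                       check=True)                             # submit change
        return True
    subprocess.run(["git", "checkout", "--", "."])             # roll back
    return False
```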

The DeepSeek team is indeed active in research, constantly publishing papers to advance underlying technologies. However, when it comes to products, DeepSeek is still mainly a conversational AI. Its main usage scenario is users asking questions and it providing answers.

DeepSeek hasn't launched a real Agent product yet and doesn't have the ability to independently complete the entire software development process like IQuest-Coder.

It's true that DeepSeek performed remarkably well in AI crypto/stock-trading competitions like Alpha Arena, showing that a model trained by a quantitative fund "really understands the market": it can read candlestick (K-line) charts, analyze news, and make trading decisions.

The essence of quantitative investment is to use algorithms to understand market rules and find patterns in price fluctuations, further indicating that DeepSeek has the ability to "understand complex systems".

However, it should be noted that even with its excellent performance in the financial market, this ability remains at the "understanding" and "analysis" level. DeepSeek can analyze the market and give suggestions, but as a product, it hasn't developed complete autonomous trading capabilities.

From stock trading to code writing, the AIs of High-Flyer and Ubiquant both lean toward stronger execution. This may explain why quantitative funds can get results in AI: their core business is "letting algorithms make autonomous decisions", not "letting algorithms answer questions".

Currently, the competition in AI isn't just about who publishes more papers. More importantly, it's about implementation, about who can transform technology into tools that users can directly use.

The market has waited long enough. Liang Wenfeng should launch new products.

03

IQuest-Coder-V1 is positioned to compete with Claude Opus 4.5. The positioning is clear, and the SWE-Bench Verified comparison of 81.4% versus 80.9% is indeed impressive.

Coupled with Anthropic's tough stance toward China, people have pinned extra hopes on IQuest-Coder-V1. But the question of "replacing Claude Opus 4.5" calls for a cooler-headed analysis.

Claude Opus 4.5's advantages lie not only in model capability but also in a complete product ecosystem: a native VS Code extension, the terminal-based interactive development tool Claude Code, a tool ecosystem built around the MCP protocol, enterprise-level security and compliance standards, and a user experience polished by numerous real-world projects. These aren't things a newly released model can replicate in the short term.

More importantly, there's the issue of user habits. Claude was released earlier, and the programmer community has gotten used to its "working style", knowing when to trust it, when to intervene, and how to collaborate efficiently.

Developing such habits takes time and is built through numerous rounds of trial and error. Even if a new model has better benchmark numbers, building user trust will take considerable time.

There's indeed a gap between benchmark results and real-world applications.

Although the SWE-Bench Verified test measures the ability to solve issues in real-world code repositories, which is much more complex than simple code completion, performing well in such a test doesn't mean a model can seamlessly replace human programmers in daily development.

In real-world work, requirements are often vague, and they can shift significantly as product managers and developers talk things through; none of that is reflected in benchmarks.

However, IQuest-Coder-V1 has opportunities in other respects. It's open-source, meaning enterprises can deploy it themselves, adjusting and optimizing it to their needs without worrying about data being accessed by third-party service providers. For industries with strict data-security requirements, such as finance, healthcare, and national defense, this has real value.
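For an open-weights model, self-hosting can be as simple as the standard Hugging Face workflow sketched below. Note that the checkpoint name is our placeholder for illustration; the actual identifier would come from the project's release page:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "IQuest/IQuest-Coder-V1"  # hypothetical repo id, for illustration

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Write a Python function that parses ISO-8601 timestamps."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```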

The users of this open-source code model look quite different from Claude's. Claude users are mostly developers accustomed to cloud services, willing to pay for convenience, and without extreme data-privacy requirements. IQuest-Coder-V1's natural users are enterprises that need data autonomy and control, technology teams that want deep customization, or developers who like tinkering with open-source tools.

At quantitative investment firms like Ubiquant and High-Flyer, for example, the algorithms are the lifeblood of the business and can't be uploaded to a public cloud.

Of course, open-source has its problems too. There's no dedicated product team to polish the user experience, no customer service to solve usage problems, and when users hit bugs they have to figure things out on their own or wait for the community to fix them. These are the disadvantages of open-source models compared with commercial products.

One view is that large-scale code models with agent functions, like IQuest-Coder-V1, might be the first step towards general agents and AGI.

The logic behind this view is that code is a structured, logically clear task, making it easier to verify as right or wrong than other open-ended tasks. The binary feedback of whether a test passes provides a clear learning signal for the agent.

More importantly, the capabilities required for programming tasks are the core capabilities needed for general agents.

Judging from benchmarks like SWE-Bench, what's tested is not only code generation but also the ability to understand requirements, plan steps, debug errors, and iteratively improve. That process resembles solving other complex tasks.

The code environment provides a relatively controllable training ground. Once its Agent capabilities are proven here, the technical path to expand into other fields will be clearer.

So, Ubiquant might be playing a long-term game.

This article is from the WeChat official account "Letter AI", author: Miao Zheng, published by 36Kr with authorization.