AI + mathematics drives compute acceleration: operator-automation platform "Zizixinyuan" closes a seed round worth tens of millions of yuan
The high-performance operator automatic discovery and optimization platform Zizixinyuan recently announced the completion of a seed round worth tens of millions of RMB. The round was jointly invested by ZhenFund, Inno Angel Fund, and Songhe Capital, with Shenlan Capital serving as exclusive financial advisor. Founded only two months ago, the company aims to redefine the limits of computing efficiency through an approach that combines "AI + mathematics": its operator-automation tools discover and optimize operators automatically, far outpacing manual operator development in efficiency and producing operators whose performance exceeds what hand-tuning can reach.
The funds from this round will be used mainly for core-algorithm R&D, productization of the operator-automation toolchain, and deployment with a first batch of key customers in the leading domestic computing-power ecosystem.
As AI enters an era of diversification, operators become the core of the computing engine
As transistor density approaches its physical limits, Moore's Law is losing force, and the performance dividends of hardware stacking are drying up. The frontier of computing is quietly shifting to software acceleration.
Over the past few years, the rapid development of artificial intelligence has accelerated the iteration of both software and hardware. AI models have expanded from language and vision to behavior and the three-dimensional world, from autonomous driving to embodied intelligence, and from biomedicine to the broader field of AI for Science, with new model architectures appearing constantly. Every AI model ultimately runs on operators. The operator is not only the bridge connecting the compute of the underlying chip to the algorithms of the upper-layer model; it is becoming the microscopic form of the model itself, defining how the model thinks and how fast it responds.
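For readers unfamiliar with the term, an "operator" here means a small compute kernel (softmax, matrix multiply, layer norm, and so on) into which every model ultimately decomposes. As a minimal, hypothetical illustration (real operators are written in CUDA or a similar low-level language, not Python), here is a softmax operator in plain Python:

```python
import math

def softmax(row):
    """A numerically stable softmax 'operator': one of the small
    compute kernels that every neural network ultimately runs on."""
    # Subtract the row max so exp() cannot overflow for large inputs.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
# The outputs form a probability distribution: they sum to 1.
```

The logic is trivial, but making such a kernel fast on a specific chip (memory layout, vectorization, tiling) is where the engineering cost lies.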
The more diverse the models and hardware architectures, the more complex the underlying computation becomes: every structural innovation brings a batch of brand-new operator requirements.
Traditional operator development, however, depends heavily on expert labor: top engineers hand-optimize kernels and repeatedly tune parameters, which often takes weeks or even months. NVIDIA's CUDA ecosystem, built on the contributions of nearly six million developers, has become a moat around global AI computing; for other chip manufacturers, it is an almost insurmountable barrier.
Expanding the boundaries of human capability: reshaping the performance ceiling with "AI + mathematics"
Zizixinyuan proposes a new path: let AI generate operators automatically.
The company adopts a dual-drive "AI + mathematics" architecture, deeply integrating the generative ability of large models with the mathematical deduction of operations research and optimization to build a platform for the automatic discovery and optimization of high-performance operators. The platform breaks operator development's traditional dependence on expert labor: it can automatically model, search, and optimize operator algorithms on complex chip architectures and continuously improve itself, genuinely compensating for hardware limitations with software efficiency. It has already achieved breakthrough results across several operator classes, including mathematical-function operators, matrix-multiplication operators, and operators for supply-chain solvers, accelerating both AI systems and vertical-domain computing.
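The article does not describe Zizixinyuan's actual algorithms, but the general idea of searching a space of operator implementations can be sketched with a toy autotuner: a blocked matrix multiply whose tile size is the tuning knob, and a loop that measures each candidate and keeps the fastest. This is a minimal pure-Python illustration under assumed details, not the company's method:

```python
import random
import time

def blocked_matmul(a, b, n, tile):
    """Blocked (tiled) n x n matrix multiply; `tile` is the kind of
    parameter a kernel engineer would normally sweep by hand."""
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            c[i][j] += aik * b[k][j]
    return c

def autotune(a, b, n, candidates):
    """Time each candidate tile size and keep the fastest: a toy
    stand-in for an automated operator search/optimization loop."""
    best_tile, best_time = None, float("inf")
    for tile in candidates:
        t0 = time.perf_counter()
        blocked_matmul(a, b, n, tile)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_tile, best_time = tile, elapsed
    return best_tile

n = 32
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]
best = autotune(a, b, n, candidates=[4, 8, 16, 32])
```

A production system would search a far richer space (loop orders, memory layouts, instruction selection) and use learned models rather than exhaustive timing, but the model-search-measure loop is the same shape.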
The paradigm shift from "manual adaptation" to "automatic generation"
Zizixinyuan's first product, ModelBridge, was unveiled at the 2025 Huawei Connect Conference. It can complete fully automatic adaptation and inference deployment of the Qwen3-14B model on Ascend Atlas hardware from scratch within 30 minutes, and within 2 hours it raises the single-card throughput of Qwen3-14B on Ascend to 28 tokens per second, a 40% improvement over the latest official community image (which runs on two cards). Adaptation and optimization work that once took a ten-person team several days can now be completed by Zizixinyuan's system in hours or even minutes; "automatic operator generation" has moved from the laboratory into industry.
Zizixinyuan's team brings together scientific and engineering talent at the world's frontier in large models, operations research and optimization, and high-performance computing. The founding team includes leading figures in root technologies from Huawei's 2012 Central Research Institute and former CTO-level partners of leading large-model companies; many members hold IMO/ICPC gold medals and top academic records in AI. The company has also engaged Academician Luo Zhiquan, an authority in operations research and optimization, as chief technology advisor. Their goal is not to chase hardware limits or pile on software complexity, but to use AI to drive AI: to build a system in which operators grow automatically, closing the software-hardware adaptation gap and making computing acceleration equally accessible. As the ceiling of hardware manufacturing processes comes into view, the future of computing belongs to those who can "unleash the potential".