
Zhineng Rebuilds the Underlying System in Rust to Address the Security Pain Points of Domestic IT Infrastructure Substitution

Zhineng Technology, 2026-04-16 10:56
Zhineng Technology is reconstructing the computing system with a full-stack Rust solution.

The "Double Stranglehold" on Xinchuang Computing Power: Underlying Dependence and Data-Movement Bottlenecks

    Against the backdrop of global competition in computing power and the acceleration of domestic substitution, Xinchuang scenarios face two structural challenges. In terms of security, basic software such as operating-system kernels, database engines, browser rendering pipelines, and AI development toolchains has long relied on foreign technology stacks. In high-security scenarios such as the classified, government, and financial sectors, excessive third-party dependencies make the attack surface difficult to converge, and the security and trustworthiness of the system cannot be fully guaranteed. Taking the operating-system kernel as an example, more than 70% of security vulnerabilities in traditional kernels written in C/C++ are memory-safety defects; buffer overflows, use-after-free bugs, and data races have become persistent security risks.

    In terms of performance, the traditional architecture involves multiple levels of memory copying between the file system, database, and network protocol stack. In scenarios such as AI large-model training and inference and supercomputing, the cost of data movement has become a significant bottleneck restricting I/O efficiency. This structural problem makes it difficult for single-point replacements to form an evolvable next-generation computing system; the industry urgently needs a wholesale reconstruction from the kernel and memory bus up to the desktop and AI toolchains.

The Full-Stack Rust Solution: An Autonomous Computing System from Microkernel to AI IDE

    Wuhan Zhineng Technology Co., Ltd. (hereinafter "Zhineng Technology") takes a full-stack approach spanning the microkernel, in-memory database, 3D browser operating system, AI IDE, and large models. With the Rust language at its core, it attempts to rebuild the computing foundation for the AI era.

    At the base is a minimally composable Rust microkernel framework (RMKF), which adopts a minimalist core design: native no_std, with std available through conditional compilation. The kernel core relies on spin locks as its sole synchronization primitive and replaces runtime dynamic dispatch with zero-cost static generic dispatch. The microkernel currently supports three CPU architectures (x86_64, aarch64, and riscv64), achieving architecture-independent core logic through a hardware abstraction layer (HAL) trait definition and conditional compilation via Cargo features; each architecture has its own atomic-operation implementation and linker script.

    In terms of security, RMKF borrows seL4's capability model to implement a fine-grained permission-control system in which capabilities are unforgeable, derivable only with monotonically decreasing rights, and revocable, covering all resource types: processes, threads, memory, channels, devices, and interrupts. A generation-increment mechanism on capability tokens defends against replay attacks.

    For synchronization, RMKF implements fourteen primitives, including spin locks, read-write locks, mutexes, condition variables, semaphores, events, barriers, seqlocks, ticket locks, and MCS locks. The read-write lock adopts a writer-priority strategy and supports lock downgrading, the seqlock provides lock-free reads, and the MCS lock is optimized for NUMA architectures. A Sidecar architecture establishes a secure bridging channel between kernel mode and user mode, supporting plug-in extension, an event bus, and a service registry, providing an effective mechanism for kernel observability and security monitoring.

    For testing, RMKF integrates the Kani formal verifier to mathematically prove properties of security-critical paths and deploys more than fifty libFuzzer fuzzing targets. The memory-management module has shown no defects after 72 hours of continuous fuzzing, and the core modules have collectively passed more than a thousand unit tests.
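The article does not publish RMKF's API, but the capability rules it describes (unforgeable tokens, derivation with monotonically decreasing rights, revocation, and a generation counter against replay) can be sketched in plain Rust. All names here (`Capability`, `Rights`, `derive`, `is_valid`) are illustrative assumptions, not RMKF's actual interface:

```rust
/// Illustrative capability token: rights may only shrink on derivation,
/// and a generation counter invalidates stale (revoked or replayed) tokens.
#[derive(Clone, Copy, Debug)]
struct Rights(u32);

impl Rights {
    const READ: Rights = Rights(0b001);
    const WRITE: Rights = Rights(0b010);
    const GRANT: Rights = Rights(0b100);
    fn contains(self, other: Rights) -> bool { self.0 & other.0 == other.0 }
    fn intersect(self, other: Rights) -> Rights { Rights(self.0 & other.0) }
}

#[derive(Clone, Copy, Debug)]
struct Capability {
    object_id: u64,   // the kernel resource this token refers to
    rights: Rights,
    generation: u64,  // bumped by the kernel on revocation
}

struct KernelObject {
    id: u64,
    generation: u64,  // authoritative generation held by the kernel
}

impl Capability {
    /// Derivation intersects rights with a requested subset:
    /// a child capability can never gain rights its parent lacks.
    fn derive(&self, subset: Rights) -> Option<Capability> {
        if !self.rights.contains(Rights::GRANT) {
            return None; // not allowed to mint child capabilities
        }
        Some(Capability {
            object_id: self.object_id,
            rights: self.rights.intersect(subset),
            generation: self.generation,
        })
    }

    /// A token validates only if its generation matches the kernel's,
    /// so a revocation (generation bump) defeats replayed tokens.
    fn is_valid(&self, obj: &KernelObject) -> bool {
        self.object_id == obj.id && self.generation == obj.generation
    }
}

fn main() {
    let mut obj = KernelObject { id: 7, generation: 0 };
    let root = Capability { object_id: 7, rights: Rights(0b111), generation: 0 };

    let child = root.derive(Rights::READ).unwrap();
    assert!(child.rights.contains(Rights::READ));
    assert!(!child.rights.contains(Rights::WRITE)); // rights only decrease

    obj.generation += 1; // revoke: outstanding tokens stop validating
    assert!(!child.is_valid(&obj));
    println!("ok");
}
```

The key design point, following seL4, is that validity is checked against kernel-held state rather than anything stored in user space, so user code cannot forge or resurrect a capability.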

    At the data layer, Zhineng Technology has developed in-house a distributed, optically interconnected in-memory heap graph object database (LightField), which unifies the NoSQL, graph, and object models in a pure in-memory heap architecture. LightField adopts a hexagonal architecture based on domain-driven design, consisting of thirty-seven sub-crates; the core layer implements pure in-memory computing in no_std with zero I/O and zero network dependencies. Its centerpiece is a compile-time memory-layout verification framework: the derive-persist procedural macro verifies the memory layout, alignment constraints, and pointer offsets of all data structures at compile time, and relative pointer (RelPtr) technology keeps pointers valid as data moves freely between memory and disk, realizing the zero-serialization persistence paradigm of "memory is the database". SIMD-vectorized graph traversal uses CPU vector instructions to accelerate BFS and DFS, and epoch-based reclamation provides lock-free memory reclamation. At the transport layer, LightField implements an optical transport abstraction layer (OTA) that uniformly abstracts four transport backends (RDMA, optical switches, DPDK, and TCP), supports optical-path switching management and zero-copy transmission, and gives the database native support for optically interconnected data centers. Its distributed capabilities cover the Raft and Paxos consensus algorithms, two-phase and three-phase commit distributed transactions, sharding management with graph-topology optimization, and cross-region disaster recovery. The current version of LightField has passed 555 unit tests and 40 benchmark tests, reaching a production-ready state.
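The relative-pointer idea behind "memory is the database" can be illustrated in a few lines: a `RelPtr` stores an offset from its own address rather than an absolute address, so a byte-for-byte copy of the whole arena (to disk and back, or to a mapping at a different base) keeps links intact without any serialization step. This is a minimal sketch of the general technique (also used by crates such as rkyv), not LightField's actual `RelPtr` implementation:

```rust
use std::marker::PhantomData;

/// Illustrative relative pointer: the stored offset is relative to the
/// pointer's own location, so it survives relocation of its arena.
#[repr(transparent)]
struct RelPtr<T> {
    offset: isize,
    _marker: PhantomData<T>,
}

impl<T> RelPtr<T> {
    fn null() -> Self { RelPtr { offset: 0, _marker: PhantomData } }
    fn set(&mut self, target: *const T) {
        self.offset = target as isize - self as *const Self as isize;
    }
    /// Safety: the target must live inside the same arena as `self`.
    unsafe fn get(&self) -> *const T {
        (self as *const Self as *const u8).offset(self.offset) as *const T
    }
}

#[repr(C)]
struct Node {
    value: u64,
    next: RelPtr<Node>,
}

fn demo() -> u64 {
    let mut arena: Box<[Node; 2]> = Box::new([
        Node { value: 1, next: RelPtr::null() },
        Node { value: 42, next: RelPtr::null() },
    ]);
    let target = &arena[1] as *const Node;
    arena[0].next.set(target);

    // Bitwise-copy the arena to a fresh allocation, simulating a reload
    // at a different base address; no (de)serialization step is needed.
    let moved: Box<[Node; 2]> = Box::new(unsafe { std::ptr::read(&*arena) });
    unsafe { (*moved[0].next.get()).value }
}

fn main() {
    println!("{}", demo());
}
```

The compile-time layout checks the article attributes to derive-persist would, in this scheme, verify that types like `Node` are `#[repr(C)]` with stable offsets, so an on-disk image and an in-memory image agree byte for byte.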

    The application layer covers two core products: the 3D browser operating system (OmniVerse) and the AI IDE. OmniVerse is not positioned as "running applications in a browser"; instead, the browser itself serves as the operating-system shell, desktop, and window system. On boot, the system enters the microkernel directly and launches the unified 2D/3D engine browser, which presents a 3D spatial desktop. Its core is the hybrid object model (HOM), which unifies the DOM and the 3D scene graph into a single-level structure: HTML elements and 3D mesh nodes stand on equal footing, sharing the same event system, style system, and script system. In the rendering pipeline, OmniVerse implements a PBR material system, glTF model loading, GPU culling, IBL environment lighting, instanced rendering, and skeletal animation, and its advanced rendering module integrates cutting-edge techniques such as Nanite-like virtual geometry, Lumen-like global illumination, and Gaussian-splatting rendering. The AI-native development toolchain supports end-to-end generation from natural language to 3D scenes: when a user types "create a red sofa", intent recognition and code generation produce code that is injected via hot reloading, and the scene appears within five seconds. The spatial input system uniformly abstracts VR controllers, gamepads, gestures, and eye tracking into spatial events; WebRTC multi-user collaboration supports pose synchronization and object-ownership management; and the cloud rendering module lets low-end devices run high-end 3D scenes through GPU instance scheduling and video-stream pushing.
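The single-level structure HOM describes, where an HTML element and a 3D mesh sit in one tree and share one event pipeline, can be sketched with a tagged node type. The names here (`HomNode`, `NodeKind`, `dispatch_click`) are hypothetical illustrations, not the OmniVerse API:

```rust
/// Illustrative hybrid object model: DOM elements and 3D mesh nodes
/// live in one scene tree and receive events through one dispatcher.
#[allow(dead_code)]
enum NodeKind {
    // A classic DOM element, e.g. <button>.
    DomElement { tag: String },
    // A 3D mesh with a world-space position.
    Mesh3D { mesh: String, position: [f32; 3] },
}

struct HomNode {
    kind: NodeKind,
    children: Vec<HomNode>,
    // Same handler signature for both kinds: one shared event system.
    on_click: Option<fn(&NodeKind) -> String>,
}

impl HomNode {
    /// Depth-first dispatch: every node, HTML or mesh, sees the same event.
    fn dispatch_click(&self, log: &mut Vec<String>) {
        if let Some(handler) = self.on_click {
            log.push(handler(&self.kind));
        }
        for child in &self.children {
            child.dispatch_click(log);
        }
    }
}

fn build_scene() -> HomNode {
    HomNode {
        kind: NodeKind::DomElement { tag: "div".into() },
        on_click: None,
        children: vec![
            HomNode {
                kind: NodeKind::DomElement { tag: "button".into() },
                on_click: Some(|k| match k {
                    NodeKind::DomElement { tag } => format!("dom:{tag}"),
                    _ => unreachable!(),
                }),
                children: vec![],
            },
            HomNode {
                kind: NodeKind::Mesh3D { mesh: "sofa".into(), position: [0.0, 0.0, -2.0] },
                on_click: Some(|k| match k {
                    NodeKind::Mesh3D { mesh, .. } => format!("mesh:{mesh}"),
                    _ => unreachable!(),
                }),
                children: vec![],
            },
        ],
    }
}

fn main() {
    let mut log = Vec::new();
    build_scene().dispatch_click(&mut log);
    println!("{log:?}");
}
```

The point of the flattened model is visible in `dispatch_click`: there is no separate DOM event phase and 3D picking phase, just one traversal over heterogeneous siblings.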

    For the AI IDE, Zhineng Technology has designed a seven-stage pipeline from requirement definition to detailed design. Each engineering stage is handled by a dedicated AI role, and humans always hold final decision-making power. The IDE adopts a multi-model composite architecture: the requirements stage uses a model strong at natural-language understanding, the architecture stage a model strong at system design, and the coding stage a model strong at code generation. Each AI instance is created independently and destroyed once its use-case task completes, which effectively mitigates the semantic drift and hallucination problems caused by large models' long contexts. In the design stages, concrete programming-language code is strictly prohibited; abstract representations such as ASTs and data-flow graphs are used instead, genuinely decoupling business logic from implementation technology.
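The control flow described above, per-stage model routing, one short-lived AI instance per task, and a mandatory human gate, can be sketched as follows. The stage names beyond those the article mentions (requirements, architecture, coding) and all identifiers are assumptions for illustration:

```rust
/// Illustrative sketch of a staged AI pipeline with per-stage model
/// routing, ephemeral instances, and a human approval gate.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum Stage {
    Requirements,
    DomainModel,     // hypothetical stage name
    Architecture,
    InterfaceDesign, // hypothetical stage name
    DetailedDesign,
    Coding,
    Verification,    // hypothetical stage name
}

/// Route each stage to the model best suited for it.
fn model_for(stage: Stage) -> &'static str {
    match stage {
        Stage::Requirements => "nl-understanding-model",
        Stage::Architecture => "system-design-model",
        Stage::Coding => "code-generation-model",
        _ => "general-model",
    }
}

struct AiInstance {
    model: &'static str,
}

impl AiInstance {
    fn run(&self, task: &str) -> String {
        // Stand-in for a real model call.
        format!("[{}] draft for: {task}", self.model)
    }
}

fn run_stage(stage: Stage, task: &str, human_approves: impl Fn(&str) -> bool) -> Option<String> {
    // A fresh instance per task: no long-lived context accumulates,
    // so semantic drift cannot carry over between tasks.
    let ai = AiInstance { model: model_for(stage) };
    let draft = ai.run(task);
    // The human always holds final decision-making power.
    if human_approves(&draft) { Some(draft) } else { None }
    // `ai` is dropped here: the instance dies with its one task.
}

fn main() {
    let out = run_stage(Stage::Coding, "implement login handler", |_| true).unwrap();
    println!("{out}");
}
```

Destroying the instance at the end of `run_stage` is the structural analogue of the article's claim: context length is bounded by one use-case task, by construction.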

    At the methodological level, Zhineng Technology has proposed a unified process computing model (UPCM), which uses an event-driven process network as a unified abstraction layer to bring upper-layer business applications and lower-layer system software into the same modeling paradigm. The model defines a four-tuple of events, activities, objects, and transfer rules, along with subset/superset data-interaction specifications and a four-dimensional exhaustive anomaly classification system. The mapping from process model to executable code is designed as a mechanical translation: the large model performs only four kinds of operations (mapping, translation, generation, and repair) and takes no independent creative decisions. In practical verification, during development of the single-node version of the in-memory heap graph object database ChinaCoreDB, a memory-leak problem that the TRAE+DDD+GLM approach could not solve in three months and the Claude Code+TDD+Claude-4.5 approach could not solve in one month was located and fixed within two weeks using UPCM. After the fix, Valgrind reported zero errors and the system remained stable through a 48-hour burn-in test.
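The four-tuple of events, activities, objects, and transfer rules can be given a minimal executable reading as an event-driven state machine. The article does not publish UPCM's schema, so every type and name below is a hypothetical encoding, not the real model:

```rust
use std::collections::HashMap;

// The UPCM four-tuple, rendered as data: an Event triggers an Activity
// that moves an Object to a new state, as dictated by a TransferRule.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Event(String);      // what happened
#[derive(Debug, Clone)]
struct Activity(String);   // what to do in response
#[derive(Debug, Clone)]
struct Object { name: String, state: String } // what it acts upon

struct TransferRule {
    activity: Activity,
    next_state: String,
}

struct ProcessNetwork {
    rules: HashMap<Event, TransferRule>,
}

impl ProcessNetwork {
    /// Mechanical translation: for a given event, the next step is looked
    /// up in the rules, never invented, mirroring UPCM's constraint that
    /// code generation is mapping and translation, not free creation.
    fn step(&self, event: &Event, obj: &mut Object) -> Option<&Activity> {
        let rule = self.rules.get(event)?;
        obj.state = rule.next_state.clone();
        Some(&rule.activity)
    }
}

fn build_network() -> ProcessNetwork {
    let mut rules = HashMap::new();
    rules.insert(
        Event("order_paid".into()),
        TransferRule {
            activity: Activity("reserve_stock".into()),
            next_state: "reserved".into(),
        },
    );
    ProcessNetwork { rules }
}

fn main() {
    let net = build_network();
    let mut order = Object { name: "order-1".into(), state: "paid".into() };
    let act = net.step(&Event("order_paid".into()), &mut order).unwrap();
    println!("{}: {} -> {}", order.name, act.0, order.state);
}
```

An unmatched event falls through to `None`, which is where a four-dimensional anomaly classification would plug in: every unhandled case must be enumerated rather than silently dropped.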

    At the model layer, Zhineng Technology uses pure Rust to implement large-model training and inference for "whole-brain quantitative simulation". A fractal-geometry-driven neural growth algorithm replaces deep learning's prior structural bias of hand-designed layer counts; a stem-cell development model achieves end-to-end developmental simulation from basic stem cells to the whole-brain structure; and a distributed cell-memory mechanism models each neuron as an independent memory carrier. The project aims to provide low-resource, highly interpretable model capabilities for high-value scenarios such as healthcare and finance.

    On the security side, Zhineng Technology is simultaneously developing the Taiji unified security platform, integrating parsers for more than thirty network protocols, AI-driven threat detection, a zero-trust security model, and MITM proxy capabilities to provide full-stack protection from the network layer to the application layer for Xinchuang scenarios.

    At the hardware-adaptation level, Zhineng Technology plans native driver development and joint optimization around domestic CPUs and GPUs, and will work with domestic SSD manufacturers on storage-stack optimization, achieving vertical integration from chips to system software. Core modules, including microkernel compatibility, the distributed in-memory database, 3D browser rendering, the desktop environment, hardware virtualization partitioning, and the computing-power retail platform, have all completed physical verification and runnable demonstrations on x86.

Open-Source Ecosystem and Commercialization

    Zhineng Technology adopts a development strategy of "open source first, ecosystem driven". It has already open-sourced the documentation for the AI IDE's error-handling design method on GitHub under the CC BY-NC-SA 4.0 license, covering the four-dimensional error classification system, formal error-code specifications, and error-recovery design patterns. Basic components, including the microkernel SDK, the single-node database, and the 3D Web SDK, will be open-sourced in stages, giving Rust developers worldwide directly reusable low-level capabilities. Open-sourcing the single-node ChinaCoreDB aims to lower the barrier to using graph databases, and the derive-persist compile-time persistence framework can serve as infrastructure for any Rust project that needs zero-serialization storage. RMKF's public API is opened gradually behind feature gates, cultivating the open-source ecosystem while protecting core intellectual property.

    For commercialization, Zhineng Technology follows a tiered strategy of "self-sufficiency first, then platform building". In the short term, it offers computing-power retail and database licensing to small and medium-sized enterprises and AI developers, validating demand and generating revenue quickly through lightweight paid services. In the medium term, it provides private deployment and end-to-end solutions for high-security markets such as Xinchuang, government, finance, and energy, while progressively improving adaptation and optimization for domestic CPUs, GPUs, switches, and SSDs. In the long term, with the LightField distributed platform at the core, it intends to build a new-generation computing entry platform for the AI era through WASM-first and Code-First strategies and to grow an operating-system and database ecosystem for global developers; it also plans to promote OmniVerse's HOM hybrid object model as a standard. These open-source initiatives aim to attract more developers and researchers, forming a full-stack Rust technology community from the underlying system to upper-layer applications.

    On application deployment, Zhineng Technology has completed a small-scale validation of medical AI at a top-tier hospital and has been selected as a typical case by the Ministry of Industry and Information Technology, providing a reference for subsequent replication in high-security scenarios. The company's founder has long worked on systems engineering and security design for high-security scenarios, and the team adheres to a fully self-developed, full-stack route of "no external dependencies, no technology outsourcing". The company has catalogued more than 2,000 core patent items and plans to file invention patents simultaneously in China, the United States, Europe, and Japan, building long-term barriers around key areas such as the microkernel, the optical-interconnect memory bus, the database, and the browser engine.