Beyond Agents and World Models, Meta Team Proposes "Neural Computer"
Recently, a research team from Meta AI and KAUST proposed a new machine form: the Neural Computer. The concept aims to overcome a fundamental limitation of current AI systems at the execution level: the separation of the model from its computing environment.
Existing AI Agents and world models, despite strong prediction and planning capabilities, still rely on external operating systems, interpreters, or simulators to hold the core execution state. As a result, the models' computational power is constrained by the external environment.
The Neural Computer aims to break through this limitation. Instead of treating the model as a calling layer over external tools, it integrates computation, memory, and I/O into a single runtime state inside the neural network, making the model itself a runnable computer.
Paper link: https://arxiv.org/abs/2604.06425
Why do we need a Neural Computer?
Traditional computers rely on the physical separation of three functions (computation, memory, and input/output) and carry out execution through explicitly written program instructions. AI Agents place learned models on top of external software environments and accomplish tasks by operating existing interfaces or calling APIs; however, the models themselves carry no executable state, and task progress and results are still maintained by the operating system or applications. World models can learn and predict environment dynamics, but their focus is on estimating future states of the environment, not on serving as carriers of execution and computation themselves.
These three types of systems share a common limitation: the executable state always lives outside the model. The Neural Computer was proposed precisely to close this architectural gap. Its goal is to integrate computation, memory, and input/output into a single runtime state within the model, so that the model itself can assume the role of a running computer.
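The distinction can be made concrete with a minimal sketch in plain Python. This is an illustrative contrast under my own assumptions, not the paper's implementation; all class and method names here are hypothetical. In the agent setting, the executable state lives in an external environment object; in a neural-computer-style design, the same progress would be carried inside the model's own runtime state.

```python
# Illustrative contrast (hypothetical sketch, not the paper's implementation).

class ExternalEnvironment:
    """Agent setting: the executable state lives OUTSIDE the model."""
    def __init__(self):
        self.counter = 0              # task progress held by the environment

    def api_increment(self):          # the interface the agent calls
        self.counter += 1
        return self.counter

def agent_policy(observation):
    """The agent only decides WHICH external call to make; it holds no state."""
    return "api_increment"

class NeuralComputerSketch:
    """NC setting: computation, memory, and I/O share one runtime state."""
    def __init__(self, state_size=4):
        self.state = [0.0] * state_size   # the model's own runtime state

    def step(self, input_token):
        # A fixed rule stands in for a LEARNED transition function:
        # fold the input into every coordinate of the state.
        self.state = [s + float(input_token) for s in self.state]
        return self.state[0]              # output read from the same state

# Agent: progress is recoverable only by inspecting the environment.
env = ExternalEnvironment()
for obs in range(3):
    getattr(env, agent_policy(obs))()

# NC sketch: progress is recoverable from the model's state alone.
nc = NeuralComputerSketch()
outputs = [nc.step(1) for _ in range(3)]
```

The point of the sketch is only where the state resides: after three steps, `env.counter` holds the agent's progress externally, while `nc.state` holds the same kind of progress internally.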
Towards a Complete Neural Computer
The core of the Neural Computer is a brand-new abstraction: integrating computation, memory, and input/output (I/O) into a runtime state in a learned latent space. The goal is to change the architecture fundamentally, so that the model itself becomes the running computer.
Current NC prototypes based on command-line and graphical interfaces have given initial evidence of this path's feasibility, achieving I/O alignment, short-term control, and measurable interface fidelity. However, a significant gap remains to a true general-purpose computer: routine reuse, general execution ability, and behavioral consistency have not yet been mastered.
Compared with traditional computers, the fundamental differences lie in architecture and programming. Architecturally, traditional computers instantiate local, compositional symbolic semantics, while neural computers implement holistic, distributed numerical semantics. At the programming level, the language semantics of a traditional computer are explicitly designed by humans, while the semantics of a neural computer are learned from data as the meanings of user input sequences.
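One way to picture the local-symbolic versus distributed-numerical contrast is the following sketch (my own illustration, not drawn from the paper; the functions and the fixed weight matrix are hypothetical stand-ins): a symbolic instruction touches exactly one named register, while a numerical transition changes every coordinate of a state vector at once.

```python
# Hypothetical sketch of local-symbolic vs. distributed-numerical semantics.

def symbolic_step(registers, instruction):
    """Traditional computer: an explicit instruction touches one named cell."""
    op, reg, value = instruction
    if op == "ADD":
        registers[reg] += value       # local, human-designed semantics
    return registers

def distributed_step(state, weights, inp):
    """Neural-computer style: every state coordinate changes on every step."""
    # A fixed linear map stands in for a LEARNED transition function.
    return [sum(w * s for w, s in zip(row, state)) + inp for row in weights]

# Symbolic: only r0 is affected; r1 is untouched by construction.
regs = symbolic_step({"r0": 0, "r1": 5}, ("ADD", "r0", 3))

# Distributed: the input is folded into the whole vector simultaneously.
identity = [[1.0, 0.0], [0.0, 1.0]]
state = distributed_step([1.0, 2.0], identity, 0.5)
```

In the symbolic step the semantics of `"ADD"` are fixed by the designer; in the distributed step the transition matrix would, in a real system, be learned rather than hand-written.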
As the long-term goal of this concept, the Complete Neural Computer (CNC) is the mature form of the neural computer. It must satisfy four strict conditions: Turing completeness, general programmability, behavioral consistency, and machine-native semantics.
Figure | Interpretations of the four operations required for a complete neural computer.
To clarify the NC's unique position in the computational spectrum, the research team compared it with other system objects: traditional computers execute explicit programs, AI Agents operate external execution environments, world models only predict environmental dynamics, and the neural computer is unique in treating the runtime itself as the computer.
Figure | Comparison of four system objects at the common system level.
To move from the current NC prototypes to a CNC, the research team laid out a key technical roadmap: implement unbounded effective memory to break the context-window limitation; build composable, reusable neural programs; architecturally distinguish inference execution from parameter updates, separating "running" from "updating"; and use interactive I/O trajectories as the main training data, so that the system learns underlying computational logic from real interactions.
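The roadmap item that separates "running" from "updating" can be sketched as two distinct code paths. This is a hypothetical illustration of the idea under my own assumptions; the paper does not prescribe this interface, and all names here are invented. Inference steps advance only the runtime state while parameters stay frozen; a separate update path modifies parameters while leaving the runtime state untouched.

```python
# Hypothetical sketch: inference execution vs. parameter update as two paths.

class SeparatedRuntime:
    def __init__(self):
        self.weight = 1.0    # learned parameter ("updating" path only)
        self.state = 0.0     # runtime state   ("running" path only)

    def run_step(self, inp):
        """Inference: advances the runtime state; parameters are frozen."""
        self.state = self.weight * self.state + inp
        return self.state

    def update(self, gradient, lr=0.1):
        """Training: changes parameters; the runtime state is untouched."""
        self.weight -= lr * gradient

rt = SeparatedRuntime()
trace = [rt.run_step(1.0) for _ in range(3)]   # running: state accumulates
state_before_update = rt.state
rt.update(gradient=0.5)                        # updating: only weight moves
```

Keeping the two paths disjoint is what lets a program "run" deterministically between training episodes, which is the behavioral-consistency property the roadmap asks for.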
Figure | The relationship between humans and computers has evolved from traditional computers, the Agent era to neural computers.
Traditional computers are used directly; in today's Agent architecture, Agents coordinate existing computer resources while world models serve as a parallel prediction layer; Neural Computers (NCs) strive to fold these scattered functions into a single learned runtime. In this sense, the motivation for developing neural computers is not to replace the existing architecture from outside, but to achieve unity by internalizing scattered functions into a single learning machine. The Complete Neural Computer (CNC) is the mature, generalized realization of this machine form.
Discussion and Outlook
Although current neural computer prototypes have demonstrated initial runtime capabilities, many challenges remain before practical use. The most significant limitation is that the models are not yet mature in stable reusability, symbolic reliability, or runtime governance; lacking these capabilities, they cannot move from isolated demonstrations to truly usable runtime systems.
To realize the vision of the complete neural computer, the research team has drawn a detailed roadmap covering efficiency, computation and inference, memory and storage, I/O and control, tool bridging, condition-driven generalization, programmability, and artifact generation.
The development of neural computers depends not only on more complete theoretical models but, crucially, on whether reusability, consistency, and governance mechanisms can be continuously verified. If these technical barriers keep falling, neural computers will no longer be confined to isolated experiments and could become a leading candidate form for next-generation computers.
This article is from the WeChat public account "Academic Headlines" (ID: SciTouTiao), author: Academic Headlines, published by 36Kr with authorization.