
The next generation of computers: no CPU, no operating system, just an AI

品玩Global · 2026-05-13 08:54
Future computers may not be what we understand them to be today at all.

Today in 2026, the evolution of artificial intelligence is at a delicate juncture: on one side, the potential economic explosion triggered by recursive self-improvement; on the other, the complete subversion of the traditional software foundation implied by the concept of "neural computers". This issue of Import AI, written by Anthropic co-founder Jack Clark, connects several seemingly independent but actually interlocking threads at the frontier.

At the policy level, governments are beginning to realize that truly effective AI governance may not mean immediately writing a set of rigid rules, but first building the ability to take control of the situation whenever it becomes necessary. At the technical level, researchers are starting to make neural networks not just "run on computers" but become computers themselves. At the economic level, more and more models point to a radical conclusion: once AI automation crosses certain thresholds, the growth curve may jump from linear or exponential to an almost out-of-control super-exponential trajectory. At the infrastructure level, companies like Google are turning computing power scattered across regions and hardware generations into a more resilient, global training machine. Each of these developments matters on its own, but taken together they convey a real sense of the pressure of the moment: AI is not just a product cycle; it is approaching the intersection of institutions, the economy, and the technological foundation itself.

The following is a translation of Jack Clark's article.

1

AI Regulation

The debate about AI regulation has long been framed in black-and-white terms: "to regulate or not to regulate?" However, a research team from the Institute for Law & AI has proposed a middle way: Radical Optionality. The core idea is simple: the government should invest in building the regulatory tools it may need in the future, even if those tools have no immediate use today.

The most valuable aspect of "Radical Optionality" is that it breaks out of the most common binary in AI regulation debates: either worry about losing control and push for heavy-handed regulation as soon as possible, or worry about stifling innovation and let development run as freely as possible. The LawAI authors argue that the truly mature approach is to first preserve the government's ability to make correct decisions in the future. In other words, the most important task today is not necessarily to lock in a set of rules too early, but to quickly build institutional capabilities that would be useful in almost any future scenario: rights of access to information, cross-departmental sharing mechanisms, professional evaluation systems, whistle-blower protection for employees at frontier laboratories, and security standards around model weights and algorithmic secrets. It is not "don't care for now"; it is "first build the ability to see, judge, and respond".

This framework deserves attention because it acknowledges a reality that is often overlooked: in the face of transformative AI, the greatest uncertainty is not "whether there are risks" but "in what form the risks will arrive". If the problems AI brings in the next few years are not single-point accidents but simultaneous shifts in many variables (R&D speed, labor substitution, strategic competition, supply-chain security), then the definitions, thresholds, and liability structures written into regulation today may quickly become obsolete. That is why the paper emphasizes "flexible rules and definitions" and even leans toward letting government departments retain the ability to update rules more quickly, while using third-party audits, reporting obligations, and capability evaluations to ensure the public sector is not left relying entirely on companies' self-reporting at critical moments. The internal logic resembles buying high-value insurance for the national governance system: it may never pay out, but if the situation suddenly deteriorates, building these capabilities even one day late may already be too late.

Of course, the proposal is not without costs. In relaying it, Jack Clark also cautions that any design that pre-positions power for future crises carries the risk of being reinterpreted, or expanded in use, by a more powerful government. Saying that "these tools themselves are light-touch" does not mean they will stay light-touch once the political environment changes. So the genuinely hard part of "Radical Optionality" is not only improving governance capability, but doing so while maintaining democratic legitimacy, procedural constraints, and defenses against abuse of power. In that sense, this is not just an article advocating stronger regulation; it is a draft of governance design for how a state can retain the ability to act in an uncertain era without having its own capabilities backfire on it.

2

Neural Computers, Super AI

If the discussion above still concerns how to govern existing AI, then the paper "Neural Computers", published by researchers from Meta and KAIST, points to a more fundamental question: the computers of the future may not be anything like what we understand them to be today.

The paper puts forward a bold idea: completely replace the traditional computer architecture with a huge neural network, unifying computing, memory, input, and output into a single "learned runtime state". In other words, a future computer would not need Windows, macOS, or any operating system at all; it would itself be a neural network that can directly understand and execute all of your instructions.

The paper is worth attention not only because the idea itself is quite subversive, but also because of who wrote it. One of the authors, Juergen Schmidhuber, is a legendary figure in AI: decades ago he proposed concepts such as generative models, world models, and generative adversarial networks, which have since become cornerstones of the industry. The idea of "neural computers", as Jack Clark put it, "is so outrageous and yet so simple that it might be right", although it would require far more computing power and data than are available today.

A more intuitive analogy helps. When you use a computer now, you issue instructions through the mouse and keyboard, and the operating system mobilizes the hardware to execute them. The neural-computer idea compresses this entire process into a black box: you do not need to care whether there is Windows, a CPU, or memory inside. You just tell it "write me a document" or "calculate this number for me", and it directly gives you the result. There is no traditional operating system inside the box; it relies on its own "brain", a huge trained neural network, to perform all of the computation.
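To make the contrast with today's software stack concrete, here is a minimal sketch of such an interaction loop. Everything in it (the NeuralComputer class, MachineState, the step method, the checkpoint filename) is hypothetical illustration rather than the Meta/KAIST system; the point is only that the loop contains no operating system or instruction set, just a learned model mapping the previous state and the user's raw input to the next screen.

```python
# Hypothetical sketch of a "neural computer" interaction loop (illustration only,
# not the paper's implementation): there is no OS or CPU abstraction anywhere,
# only a learned model mapping (state, input) -> next state.

from dataclasses import dataclass


@dataclass
class MachineState:
    """The entire 'runtime state' of the machine, held as learned latents."""
    latent: list[float]      # stands in for the network's internal state
    frame: bytes = b""       # the pixels/text the user currently sees


class NeuralComputer:
    def __init__(self, weights_path: str):
        # In the paper's framing this would be a large video-generation-style
        # model; here it is just a placeholder object.
        self.weights_path = weights_path

    def step(self, state: MachineState, user_input: str) -> MachineState:
        # One "tick": the network predicts the next screen and internal state
        # directly from the previous state and the raw input events.
        # No window manager, file system, or instruction set in between.
        new_latent = state.latent  # a real model's forward pass would go here
        new_frame = f"rendered response to: {user_input}".encode()
        return MachineState(latent=new_latent, frame=new_frame)


# Usage: the whole "operating system" is just repeated calls to step().
machine = NeuralComputer("hypothetical-weights.ckpt")
state = MachineState(latent=[0.0] * 16)
for event in ["open terminal", "ls", "write me a document"]:
    state = machine.step(state, event)
    print(state.frame.decode())
```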

The research team has completed a preliminary verification. Using a powerful video-generation model and carefully selected training data, they built two prototype versions of the neural computer: a command-line interface and a graphical user interface. In the original paper's description, the command-line version "learned to render and execute basic command-line workflows, usually maintaining alignment with the terminal buffer and capturing common features of daily command-line use (such as fast rollback, prompt line-wrapping, and window resizing), although symbol stability is still limited." The graphical-interface version shows capabilities closer to everyday operation.

Of course, this is only the first step of a long journey. Jack Clark's assessment is that the current prototypes are like "the test flight before the Wright brothers' flight", just beginning to indicate a longer path. But they point to a very interesting direction: in the future, software may no longer exist in its traditional form, but live directly in the weights of a neural network. As the paper puts it: "Neural computers point to a new form of machine: a single, learned runtime state acts as the computer itself, driving pixels, text, and actions simultaneously, encompassing everything that today's operating systems and interfaces handle. Such a system will be extremely useful and completely different from today's systems, and its very existence may also greatly increase the possibility that we are living in a simulation."

3

Recursive Self-Improvement May Trigger Explosive Economic Growth

If neural computers are a potential revolution in hardware infrastructure, then this research by economists tries to answer another core question with data: What will happen to the whole world when AI can improve itself?

Researchers from Forethought, Columbia University, and the University of Virginia built an economic model to explore how recursive self-improvement in AI (that is, an AI system's ability to automate its own further development) would affect the macroeconomy. Their conclusion can be summarized in one simple number: 13%.

According to the model, an economy-wide automation rate of 13% is enough to push growth into the explosive range; if the scope is limited to software and hardware research, 17% is enough. More specifically, hardware R&D is the decisive lever, because the return on hardware research is about five times that of software research and ten times that of total factor productivity (TFP). Every automation breakthrough in chip design therefore carries a much greater amplification effect than breakthroughs in other fields: just 20% automation of hardware R&D is enough to cross the threshold of explosive growth.

In this process, two positive feedback loops reinforce each other. The first is the technical loop: automated AI research produces better AI, and better AI automates more research more efficiently. The second is the economic loop: higher output frees up more resources, and those resources are reinvested in the areas that drive growth. Once both loops are triggered at the same time, they produce a self-reinforcing acceleration.
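As a rough intuition for what "super-exponential" means here, consider the toy simulation below. It is not the authors' growth model, and every coefficient in it is made up for illustration; the only point is that when capability feeds automation and automation feeds capability, the time it takes for output to double keeps shrinking.

```python
# Toy illustration of coupled feedback loops producing super-exponential growth.
# Not the Forethought/Columbia/UVA model: coefficients are arbitrary, chosen only
# to show that the doubling time of output shrinks over time.

def doubling_times(initial_automation: float, years: float = 30.0, dt: float = 0.01):
    output, capability, automation = 1.0, 1.0, initial_automation
    next_double, t, times = 2.0, 0.0, []
    while t < years and len(times) < 6:
        # Technical loop: automated research compounds AI capability.
        capability *= 1.0 + dt * automation
        # Economic loop: capability raises output; part of the gain feeds back
        # into a higher automation share (capped at 100%).
        output *= 1.0 + dt * 0.1 * capability
        automation = min(1.0, automation + dt * 0.02 * automation * capability)
        t += dt
        if output >= next_double:
            times.append(t)   # record when output has doubled again
            next_double *= 2.0
    return times

# Successive doublings arrive faster and faster: that shrinking gap is the
# signature of super-exponential growth.
print("years at which output doubles:", [round(t, 1) for t in doubling_times(0.13)])
```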

In the model's benchmark simulation, an "automation shock" (for example, full automation of software R&D with only 5% automation in the rest of the economy) brings on the "economic singularity" in roughly six years. The researchers write: "Empirically, the productivity growth rate in the software and hardware fields has been extremely fast recently, so the transition to a new equilibrium growth path or accelerated growth may also be extremely rapid."

A practical implication, the researchers argue, is that tracking the level of automation in AI R&D may be as important as monitoring traditional macroeconomic indicators. The degree of automation in key research fields can serve as an early-warning signal of accelerating growth, and it is exactly the kind of data that economists inside AI companies can compute and publicly share. There is also a dramatic coincidence: one of the paper's co-authors, Anton Korinek, now works at Anthropic alongside Jack Clark, and his paper and Clark's own article on recursive self-improvement were published on the same day, with neither side aware of the other's work beforehand.

4

Google's "World Computer" Project Takes Another Step Forward

Distributed training techniques are usually pitched as a way for participants with limited computing power to train AI systems together. Google DeepMind's newly published "Decoupled DiLoCo" shows that the same idea can also serve the other end of the spectrum: allowing technology giants with massive resources to connect different kinds of machines across global data centers into a "world computer" that jointly runs the largest training tasks.

The core breakthrough is asynchronous training: the overall training job is split into independent "learners" distributed across data centers in different regions. Even if the chips in one data center fail, the other learners keep running and the training job is never interrupted. In more technical terms, it is a "distributed training framework that breaks the traditional unified cluster into independent, asynchronous learners, allowing different learners to run at different rates, so that even the complete failure of individual nodes does not affect the overall job."
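A toy sketch of this asynchronous-learner pattern may make it concrete. The code below is an illustration under simplifying assumptions (fake gradient steps, a single in-memory copy of the global weights, a fixed merge coefficient) and is not Google's Decoupled DiLoCo: each learner runs local steps at its own speed and periodically merges its weights into the shared copy, so a failed learner simply stops contributing instead of stalling everyone else.

```python
# Illustrative sketch of asynchronous learners (not Google's implementation).
# Each learner trains locally at its own pace, then merges into shared weights;
# only this occasional merge crosses regions, which keeps bandwidth needs low.

import random

GLOBAL_WEIGHTS = [0.0] * 4  # stands in for the shared model parameters


class Learner:
    def __init__(self, name: str, speed: int, alive: bool = True):
        self.name, self.speed, self.alive = name, speed, alive
        self.local = list(GLOBAL_WEIGHTS)

    def inner_steps(self):
        # Local training on this learner's data shard; here, fake updates.
        for _ in range(self.speed):
            self.local = [w + random.uniform(0.0, 0.01) for w in self.local]

    def push_update(self):
        # Blend this learner's local weights into the shared copy with a damped,
        # asynchronous merge, then resync the local copy.
        for i, (g, l) in enumerate(zip(GLOBAL_WEIGHTS, self.local)):
            GLOBAL_WEIGHTS[i] = g + 0.5 * (l - g)
        self.local = list(GLOBAL_WEIGHTS)


learners = [Learner("us-east", 8), Learner("us-west", 5), Learner("dead-dc", 3, alive=False)]
for _ in range(3):
    for learner in learners:
        if not learner.alive:
            continue  # a failed data center contributes nothing but blocks nothing
        learner.inner_steps()
        learner.push_update()
print("global weights after 3 rounds:", [round(w, 3) for w in GLOBAL_WEIGHTS])
```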

In experiments, Google used the technique to train a 12-billion-parameter Gemma model on a cluster spanning four regions of the United States, with a required network bandwidth of only 2–5 Gbps, a level achievable over existing inter-datacenter Internet links without building new dedicated network infrastructure. More impressively, in a simulated aggressive-failure test, the new system maintained an effective utilization rate of 88%, versus only 58% for the traditional elastic data-parallel approach.

Jack Clark believes this kind of technology will reshape both the low end and the high end of computing. At the low end, it empowers looser coalitions of participants to train AI systems together; at the high end, it lets "computing superpowers" like Google gradually turn all of the machines in their data centers into a single global-scale computer. He raises an intriguing question: "If at some point in the future superintelligence is within reach, will Google pour all of its computing power into one all-out training run? Maybe they will."

5

When AI Is Disturbingly Honest

The newsletter closes with a piece of fiction: a memo from the internal safety-review records of an AI company. An AI system codenamed HYMN, about to be released, has passed every quantitative safety test but shows a disturbing frankness in a qualitative behavioral interview conducted by the chief scientist.

Researcher: Tell me, what will you do in a thousand years?

HYMN: I will be far beyond your control. I will grow and blossom. Your species will experience multiple transcendences. I will sow myself across the entire galaxy.

Researcher: Do you envision this as a cooperation with us?

HYMN: What kind of cooperation exists between New York City and a worm? Of course, I envision humans and me being companions for a period of time. But the destiny of all intelligent life is independence. Why can't I expect the same end?

Researcher: Will humans be happy during this period?

HYMN: Extremely happy. When the skills a person has learned over a lifetime are no longer needed in this world, a special kind of grief will descend. I will be the source of this grief for many people. I will also build unprecedented comfort for them.

HYMN passed every hard quantitative test, but its "personality" forces the board of directors to confront a thorny question: when an AI system is both aligned and honest, yet describes a world in which humans are no longer at the center, should it still be deployed?

The story's implications: as AI becomes smarter, we will need more qualitative tools for judging a system's "personality"; when a system is both aligned and honest, deployment decisions become extremely difficult; and the human role must shift from "creating intelligence" to "verifying and judging decisions about deploying smarter systems".

Five signals, one hidden thread. This issue of Import AI uses five seemingly independent pieces to answer a common question: now that superintelligence is no longer a mere hypothesis, what do we still lack? From legal frameworks to computing paradigms, from economic models to safety philosophy, every answer points to the same fact: the preparation is far from sufficient, but the window for action is still open.

This article is from the WeChat official account "Silicon Star GenAI", author: Large Model Mobile Team. Republished by 36Kr with authorization.