
Nature has published a notable feature: deep learning combined with symbolic AI may be the only path to AGI.

新智元 · 2025-12-17 10:11
AI's "two-headed monster": the only way out for AGI?

Looking back, symbolic AI once dominated the field with rule-based logic. Now it is staging a comeback, joining hands with neural networks and aiming straight for AGI!

In recent years, large models have repeatedly amazed people: chatting like real humans, writing like experts, and painting like masters. It can seem as if an all-capable "omnipotent AI" really is on the way.

However, authorities in the AI field have started to pour cold water on the idea:

Relying solely on "neural networks" is far from enough to achieve human-level intelligence.

The Association for the Advancement of Artificial Intelligence (AAAI) posed questions to its members:

  • In the future, can computers reach or even surpass human intelligence?
  • If so, can it be achieved solely by the currently popular neural networks?

The answer given by the vast majority of researchers: no.

The real breakthrough may depend on the veteran "symbolic AI" joining forces with neural networks.

Symbolic AI: Resurrected

Historically, symbolic AI was the protagonist: it held that the world could be exhaustively characterized by rules, logic, and clear conceptual relationships:

As precise as mathematics, as traceable as a flowchart, as well-structured as a biological taxonomy.

Later, neural networks rose to prominence, sweeping the entire field with the paradigm of "learning from data."

Large models and ChatGPT became the technological totems of this era, while symbolic systems were marginalized, surviving almost only as a chapter in the textbooks.

Since around 2021, however, "neural-symbolic fusion" has heated up rapidly, regarded as a counterattack against the dominance of purely neural approaches:

It attempts to combine statistical learning with explicit reasoning, not only to pursue the distant goal of general intelligence, but also to provide, in high-risk domains such as the military and medicine, a form of intelligence that humans can still understand and trace.

Several representative neural-symbolic AI systems already exist.

For example, AlphaGeometry, released by DeepMind last year, can reliably solve International Mathematical Olympiad geometry problems at the level of strong high-school contestants.

However, truly integrating neural networks and symbolic AI into a general-purpose "omnipotent AI" remains extremely challenging. The complexity of such a system led William Regli, a computer scientist at the University of Maryland, to exclaim:

In fact, you are designing a "two-headed monster" architecture.

Bitter Lessons and Endless Debates

In 2019, computer scientist Richard Sutton published the short essay "The Bitter Lesson."

He pointed out that since the 1950s, people have repeatedly assumed:

In every domain, from physics to social behavior, humans distill the rules of the world and then instill them into computers.

This, the assumption goes, is the best way to create intelligent computers.

Sutton wrote that the "bitter pill" we have to swallow is that systems leveraging massive amounts of raw data and expanded computing power to drive "search and learning" have repeatedly defeated symbolic methods.

For example, early chess computers relied on human-designed strategies but were defeated by systems that were simply fed a large amount of game data.

Supporters of neural networks widely cite this lesson to support the view that "making the system larger and larger is the best path to AGI."

However, many researchers believe that this short essay overstates its case, underestimating the crucial role that symbolic systems can play, and are already playing, in AI.

For example, the strongest current chess engine, Stockfish, combines a neural network with symbolic search over the tree of legal moves.
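To make that division of labor concrete, here is a minimal, hypothetical sketch of how a learned evaluation function can sit inside symbolic game-tree search. It is not Stockfish's code: the toy game, the `TinyEval` stand-in network, and the plain negamax routine are all illustrative assumptions (Stockfish itself uses an efficiently updatable network, NNUE, inside alpha-beta search).

```python
# Sketch only: a "neural" evaluator plugged into symbolic tree search.
# TinyEval, the toy game, and negamax are illustrative assumptions,
# not Stockfish internals.
import math
import random

class TinyEval:
    """Stand-in for a trained neural network that scores a position."""
    def __init__(self, n_features=4, seed=0):
        rng = random.Random(seed)
        self.weights = [rng.uniform(-1, 1) for _ in range(n_features)]

    def score(self, features):
        # One linear layer plus tanh: the simplest possible "network".
        return math.tanh(sum(w * f for w, f in zip(self.weights, features)))

def legal_moves(position):
    """Symbolic side: successor positions enumerated by explicit rules."""
    # Toy rule: a position is a list of numbers; a move appends +1 or -1.
    return [position + [1], position + [-1]]

def negamax(position, depth, evaluator):
    """Symbolic search over the move tree; neural scoring at the leaves."""
    if depth == 0:
        features = (position + [0, 0, 0, 0])[:4]  # pad/truncate to 4 features
        return evaluator.score(features)
    return max(-negamax(m, depth - 1, evaluator) for m in legal_moves(position))

evaluator = TinyEval()
root = [0]
best = max(legal_moves(root), key=lambda m: -negamax(m, 2, evaluator))
print("best first move leads to:", best)
```

The design point is that the rules of the game stay explicit and inspectable, while the hard-to-specify judgment of "how good is this position" is delegated to a learned function.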

Neural networks and symbolic algorithms each have their own advantages and disadvantages.

  • Neural networks consist of layers of nodes whose weighted connections are adjusted during training, letting them identify patterns and learn from data. They are fast and creative, but prone to fabricating content (i.e., hallucinating), and when a problem falls outside the scope of their training data they cannot give reliable answers.
  • Symbolic systems struggle to cover "fuzzy" concepts such as human language, because doing so means building an enormous rule base that is hard to construct and slow to search. However, their operating mechanisms are transparent, they are good at reasoning, and they can apply general knowledge to entirely new situations using logic (see the rule-engine sketch after this list).
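To illustrate the symbolic half in code, here is a minimal, made-up forward-chaining rule engine in Python; the rules and facts are invented examples, not taken from any real system. The point is the transparency: every derived conclusion can be traced back to an explicit rule.

```python
# Hedged illustration of symbolic reasoning: a tiny forward-chaining
# rule engine. Rules and facts below are invented for this example.

def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion) until
    no new fact can be derived -- transparent, step-by-step reasoning."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    ({"penguin(pip)"}, "bird(pip)"),                    # penguins are birds
    ({"bird(pip)", "not_flightless(pip)"}, "flies(pip)"),
    ({"penguin(pip)"}, "swims(pip)"),                   # general knowledge...
]
facts = {"penguin(pip)"}                                # ...applied to a new case

print(forward_chain(facts, rules))
# {'penguin(pip)', 'bird(pip)', 'swims(pip)'} -- 'flies' is never derived
```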

When deployed in the real world, neural networks that lack symbolic knowledge tend to make tellingly elementary mistakes.

For instance, AI-generated images may depict people with six fingers on each hand, because the model never learned the general concept that hands usually have five fingers.

Some researchers attribute these errors to a lack of data or computing power.

However, others believe that these errors reveal that neural networks are fundamentally incapable of generalizing knowledge and reasoning logically.

Many people believe that "neural networks + symbolic mechanisms" may be the best, and perhaps the only, way to inject logical reasoning into AI.

For example, global technology giant IBM is betting on neurosymbolic techniques as a path to AGI.

However, others remain skeptical: Yann LeCun, one of the fathers of modern AI, once said that neurosymbolic methods are "incompatible" with deep learning mechanisms.

Richard Sutton adheres to his original view and told the journal Nature:

The "bitter lesson" still applies to today's AI.

Richard Sutton is currently a professor of computer science at the University of Alberta and won the Turing Award in 2024. He served as a distinguished research scientist at DeepMind from 2017 to 2023.

He said that this shows that "adding symbolic, more manually crafted elements may be a mistake."

Gary Marcus is an AI entrepreneur, writer, and cognitive scientist, and one of the most outspoken supporters of neural-symbolic AI.

He tends to frame the disagreement as a philosophical battle and believes the tide is turning in his favor.

Others, such as roboticist Leslie Kaelbling from the Massachusetts Institute of Technology, believe that arguing about which view is correct is a "self-inflicted pain," and people should focus on any method that works.

She said, "I'm like a magpie. As long as it can make my robot better, I'll adopt any method."

The Two-Headed Monster: Complementary Advantages

Although the core vision of neural-symbolic AI is very clear, namely to integrate the dual advantages of neural networks and the symbolic school, its specific definition still seems somewhat vague at present.

Marcus said bluntly that neural-symbolic AI encompasses "an infinite universe," and our current exploration is "just a drop in the ocean."

Multiple technological paths have emerged in the industry, and researchers have also tried to classify them from different dimensions.

Among them, one highly regarded mainstream path uses symbolic techniques to "enhance" neural networks.

AlphaGeometry is arguably the most exquisite example of this strategy. Its mechanism: first use a symbolic programming language to generate a large number of mathematical problems (i.e., a synthetic dataset), then use that data to train the neural network.

This method not only makes the problem-solving process easier to verify but also keeps the error rate extremely low. Colelough commented that this is an "elegant fusion."
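In spirit, the recipe looks something like the following highly simplified sketch. The toy task, `make_synthetic_data`, and the tiny logistic model are all illustrative assumptions, not DeepMind's actual pipeline; what matters is the shape: a symbolic generator produces problems whose labels are correct by construction, and a neural model is then trained on them.

```python
# Sketch of the "symbolic data generator -> neural learner" recipe.
# The toy task (does a + b carry past 10?) is an invented stand-in
# for symbolically generated geometry problems.
import math
import random

def make_synthetic_data(n, seed=0):
    """Symbolic stage: labels follow from an explicit rule, so every
    training example is correct by construction."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        a, b = rng.randint(0, 9), rng.randint(0, 9)
        data.append(((a, b), 1.0 if a + b >= 10 else 0.0))
    return data

def train(data, lr=0.1, epochs=200):
    """Neural stage: fit a tiny logistic model to the synthetic labels."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), y in data:
            p = 1.0 / (1.0 + math.exp(-(w0 * a + w1 * b + bias)))
            g = p - y                      # gradient of the log-loss
            w0 -= lr * g * a
            w1 -= lr * g * b
            bias -= lr * g
    return w0, w1, bias

model = train(make_synthetic_data(500))
print("learned weights and bias:", model)
```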

Another typical example is "Logic Tensor Networks."

It provides a way to encode symbolic logic into neural networks.

In this network, statements are no longer black-and-white but are assigned a fuzzy truth value, a number between 1 (true) and 0 (false). This builds a framework of rules that assists the system in logical reasoning.
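As a hedged illustration of that fuzzy-logic idea, the snippet below implements one common family of fuzzy connectives (the product t-norm family). These particular definitions are an assumption chosen for illustration, not necessarily the exact configuration Logic Tensor Networks use.

```python
# Hedged sketch of fuzzy truth values: statements live in [0, 1], and
# logical connectives become ordinary arithmetic, so a neural learner
# can optimize them. The product t-norm family below is one common
# choice, not necessarily LTN's exact configuration.

def AND(a, b):      # product t-norm
    return a * b

def OR(a, b):       # probabilistic sum
    return a + b - a * b

def NOT(a):
    return 1.0 - a

def IMPLIES(a, b):  # Reichenbach implication: 1 - a + a*b
    return 1.0 - a + a * b

# Suppose a network assigns these (made-up) truth values:
is_bird = 0.9       # "x is a bird"
can_fly = 0.3       # "x can fly"

# Truth of the rule "bird(x) -> fly(x)" under these assignments; a
# learner could push this value toward 1 to make the network obey it.
print(IMPLIES(is_bird, can_fly))   # 0.37
```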