Even Google would be speechless: its own "black tech" has gone viral, yet the R&D team is completely in the dark?
While the entire tech world is collectively excited about Google's supposed "black magic", the truth may slap everyone in the face. The much-hyped "Parallel Verification Loops" is nothing more than AI-generated "cyber hocus-pocus" from social media.
If previous AI models were simulating human thinking, then Gemini 3 Flash is simulating human "intuition".
It is three times faster than Gemini 2.5 Pro, yet its reasoning ability reaches beyond the Pro tier.
Even more incredibly, on some benchmark tests its intelligence surpasses that of its Pro sibling.
So far, no one can clearly explain why Flash is "smarter" than Pro.
What kind of black magic does Google DeepMind really have?
As the saying goes, when the forest is big enough, every kind of bird turns up. Right on cue, netizen Jainam Parmar broke the news on X that:
The AlphaGo team doesn't use Chain-of-Thought at all.
They use the Parallel Verification Loops mechanism.
This method is crushing all the "advanced reasoning" techniques you've ever heard of.
Tens of thousands of netizens have viewed this post.
Is this reliable? Could it be a rumor passed from mouth to mouth, or AI-generated "fake news"?
And if it is fake news, did it spread purely on the strength of the "DeepMind's reasoning crushes its peers" gimmick?
Let's first see what the tweet is actually about.
Google DeepMind's black tech?
First of all, this "all-knowing netizen" took aim at Chain-of-Thought (CoT), explaining why CoT supposedly falls short.
Current AI reasoning is linear:
Thinking step 1 → Step 2 → Step 3.
But this is not how expert problem - solvers think.
Then he wrote: "DeepMind analyzed how their AlphaGo team tackled complex problems and discovered something astonishing."
Parallel Verification Loops:
Expert thinkers don't follow one long chain of reasoning all the way to the end. Instead, they run multiple verification loops simultaneously.
They propose a solution, test it against constraints, backtrack if necessary, and explore other possible paths at the same time, all in parallel.
Chain-of-Thought can't do this.
The Architecture Difference:
Traditional Chain-of-Thought: A → B → C → D (linear)
DeepMind's framework: A → [B1, B2, B3] → Verify separately → Refine → Iterate
One is like walking a single road; the other explores the entire decision tree at once.
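The tweet never defines what any of these arrows actually do. Purely as an illustration of the two claimed shapes, here is a toy Python sketch; the target value, the step function, and the constraint check are all invented for this example and have nothing to do with DeepMind's actual systems.

```python
import random

random.seed(0)
TARGET = 10  # toy "correct answer" the reasoner is trying to reach

def step(state):
    """Invented stand-in for one reasoning step: nudge the state somewhere new."""
    return state + random.choice([-1, 1, 2])

def chain_of_thought(problem, depth=4):
    """Linear A -> B -> C -> D: each step blindly trusts the previous one."""
    state = problem
    for _ in range(depth):
        state = step(state)
    return state  # nothing is verified along the way

def parallel_verification(problem, width=3):
    """A -> [B1, B2, B3]: branch, verify each branch separately, keep the best."""
    branches = [step(problem) for _ in range(width)]
    verified = [b for b in branches if abs(b - TARGET) <= 5]  # constraint check
    return min(verified, key=lambda b: abs(b - TARGET)) if verified else None

print(chain_of_thought(6), parallel_verification(6))
```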
The results are extremely impressive:
In complex reasoning benchmark tests:
Compared with standard Chain-of-Thought, performance is improved by 37%
The ability to catch logical errors is improved by 52%
The speed of converging to the correct solution is 3 times faster
This is not a minor optimization but a leap at the architectural level.
How it actually works:
Step 1: Generate multiple candidate solutions simultaneously
Step 2: Run a verification loop for each solution
Step 3: Conduct cross-verification between different solutions
Step 4: Prune the weaker branches and strengthen the more promising paths
Step 5: Keep iterating until convergence
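Again, the tweet never says what "verification" or "convergence" mean concretely. Strictly as an illustration, here is a toy Python rendering of the five steps, where a "solution" is just a number approaching a hidden target; generate, verify, cross_verify, and the pruning rule are all invented for this sketch, not anything DeepMind has published.

```python
import random

random.seed(1)
TARGET = 42  # hidden "correct answer"; verification measures distance to it

def generate(base, width=6):
    """Step 1: propose several candidate solutions at once."""
    return [base + random.randint(-10, 10) for _ in range(width)]

def verify(c):
    """Step 2: a per-candidate verification check (higher score = better)."""
    return -abs(c - TARGET)

def cross_verify(candidates):
    """Step 3: candidates check each other; outliers far from the pack lose credit."""
    mid = sorted(candidates)[len(candidates) // 2]
    return [(c, verify(c) - abs(c - mid)) for c in candidates]

def solve(guess=0, rounds=30, keep=3):
    candidates = generate(guess)
    for _ in range(rounds):
        ranked = sorted(cross_verify(candidates), key=lambda cs: cs[1], reverse=True)
        best = [c for c, _ in ranked[:keep]]          # Step 4: prune weak branches
        if TARGET in best:                            # Step 5: stop once converged
            return TARGET
        # Step 4, continued: strengthen promising paths by branching off them
        candidates = best + [b + random.randint(-3, 3) for b in best]
    return max(candidates, key=verify)                # best effort if no convergence

print(solve())
```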
Self-correction advantage:
This is the killer feature: The system can detect and correct its own errors before giving the final answer.
Traditional CoT (Chain-of-Thought) "commits" step by step: once one step goes wrong, the whole chain fails.
Parallel verification, by contrast, allows backtracking and correction without interrupting the overall process or starting over from scratch.
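The tweet offers no mechanism for this backtracking either. One hypothetical reading, sketched below in Python: if the verifier can report how a candidate fails, a branch can be patched in place while the other branches keep their progress, rather than rerunning the entire chain. Both verify_with_error and repair are made up for this sketch.

```python
def verify_with_error(candidate, target=100):
    """Toy verifier: returns 0 on success, otherwise the measured error."""
    return candidate - target

def repair(candidate, error):
    """Hypothetical local fix: patch only the failing part of one branch."""
    return candidate - error

def self_correct(candidates, target=100):
    fixed = []
    for c in candidates:
        err = verify_with_error(c, target)
        fixed.append(c if err == 0 else repair(c, err))  # backtrack + patch in place
    return fixed  # every branch survives; nothing restarts from scratch

print(self_correct([97, 100, 104]))  # -> [100, 100, 100]
```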
Impact on the training method:
They didn't just test this method. Instead, they directly used this framework to train the model.
The model learned to:
Propose multiple hypotheses
Let these hypotheses test each other
Gradually build confidence through verification
Prune wrong or low-quality reasoning paths as early as possible
Real - world applications:
This framework performs particularly well in the following scenarios:
Mathematical proofs (one wrong step and the whole proof collapses)
Code debugging (there may be multiple potential bugs simultaneously)
Strategic planning (need to explore complex decision trees)
Scientific reasoning (hypothesis formulation and verification)
Wherever correctness is prioritized over speed, it has an overwhelming advantage.
If you're building an AI agent or a reasoning system, Chain-of-Thought is outdated.
The future belongs to Parallel Verification.
Generate multiple paths.
Test them.
Let the optimal solution emerge naturally.
This is how AlphaGo defeated the world champion.
This is also how reasoning really works.
Riddled with doubts: another day duped by AI?
In these descriptions, "parallel verification" sounds like the ultimate weapon tailor-made for mathematical proofs and code debugging.
In any scenario where correctness matters, it seems destined for a crushing victory.
Doesn't this theory sound a little too perfect? It's as if DeepMind had really coded up human intuition.
But it is precisely this excessive perfection, along with the breathless, clickbait writing style, that set off alarm bells among industry insiders.
While tens of thousands of netizens were still retweeting and liking this "black magic", cooler heads began to ask a fundamental question:
Who exactly said this?
Jainam Parmar, the poster, is not a heavyweight in AI research, nor is he a Google DeepMind employee.
Nor did he provide any credible source link from DeepMind.
So is what he said reliable?
Even if DeepMind has slowed the release of its world-renowned research to keep an edge in the AI race, it is still publishing research results.
In early November last year, the Google DeepMind team also released a new machine-learning paradigm called Nested Learning, which claims to address the problem of continual learning.
The original tweet's cryptic, attention-grabbing style has drawn plenty of dislike; some netizens even suspect the post itself was generated by a large language model!
Netizens familiar with DeepMind's research think the post is being deliberately mysterious and even distorts the original meaning!
Some bluntly pointed out that the poster is simply riding the wave of hype: half a year ago he was still advocating that "CoT is the next-generation reasoning technique".