Tian Yuandong's AI startup is valued at roughly ¥31.6 billion. Both NVIDIA (Jensen Huang) and AMD (Lisa Su) have invested, and Shi Tianlin of Tsinghua's Yao Class is also a co-founder.
After leaving Meta, Tian Yuandong has resurfaced on the co-founder list of a unicorn, joining a new AI startup as a partner.
Recursive Superintelligence (RSI), with a team of fewer than 30 people, has just emerged from stealth mode and secured $650 million in financing (approximately 4.4 billion RMB), with a valuation of $4.65 billion (approximately 31.6 billion RMB).
Google GV and Greycroft co-led this early-stage financing round, with participation from NVIDIA, AMD, and others.
Eight co-founders have made their debut, presenting a star-studded lineup.
Any one of these eight individuals is capable of leading an AI unicorn on their own.
So, what are they planning to do together?
The company's name says it all: Recursive Superintelligence, a closed loop of AI self-improvement leading to superintelligence.
The first step in their roadmap is to train a system with the capabilities of "50,000 PhDs" to automate AI research itself.
Then, they will direct this "Eureka Machine" towards drug development, battery materials, and nuclear fusion physics.
A ¥30 Billion Bet on the Next Scaling Law
RSI's establishment is based on a core judgment:
The pre-training Scaling Law still matters, but relying solely on more data, more compute, and more parameters no longer yields the steep marginal returns it once did.
The AI industry is seeking a new growth curve.
RSI is betting on one of the most radical approaches: recursive self-improvement.
This precisely addresses the most pressing concern in the AI industry right now: where will the next leap in capabilities come from after large models?
CEO Richard Socher provided an explanation in an interview:
"AI is code, and now AI can write code."
In the past, the cycle of AI research and development was largely human-led. Researchers proposed ideas, engineers conducted experiments, teams ran training, evaluated models, and adjusted the direction for the next round based on the results.
RSI is handing over part of this cycle to AI.
The system they envision is not just about answering questions or helping people write code. It should be able to identify its own shortcomings, design new experiments, create new benchmarks, and then proactively rewrite its own codebase to make the next version of the system more powerful.
Traditional AI optimization is like cramming for a fixed exam until you score a perfect 100. RSI is after a different path: like biological evolution, it never stops and keeps producing new inventions.
One AI improves another AI; the improved AI then continues to improve subsequent AIs.
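The loop described above can be caricatured in a few lines. This is only an illustrative sketch of the general idea, not RSI's actual system; every name here (`propose_patch`, `run_benchmark`, `self_improve`) and the toy "agent" representation are invented for the example.

```python
import random

def run_benchmark(agent):
    """Toy stand-in for an evaluation suite: score the agent."""
    return sum(agent["weights"])

def propose_patch(agent):
    """Toy stand-in for the AI rewriting part of its own codebase."""
    child = {"weights": list(agent["weights"])}
    i = random.randrange(len(child["weights"]))
    child["weights"][i] += random.uniform(-1.0, 2.0)
    return child

def self_improve(agent, generations=50):
    """Keep a candidate rewrite only if the benchmark confirms a gain."""
    score = run_benchmark(agent)
    for _ in range(generations):
        candidate = propose_patch(agent)       # the AI "rewrites itself"
        cand_score = run_benchmark(candidate)  # verify on a benchmark
        if cand_score > score:                 # keep only real improvements
            agent, score = candidate, cand_score
    return agent, score

random.seed(0)
improved, improved_score = self_improve({"weights": [0.0] * 4})
```

The key property is the gate in the middle: a change survives only if an external benchmark confirms it, which is what separates "self-improvement" from random self-modification.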
Socher is well aware of the significance of this bet.
"If you're an academic researcher ahead of your time, you'll eventually be called a visionary. But if you're an entrepreneur ahead of your time, your company will fail."
He was among the early champions of neural networks in NLP. In 2010, he submitted a neural-network paper to a top NLP conference and was rejected; the reviewer declared neural networks useless and asked why he would submit such a thing to an NLP conference.
Fifteen years later, neural networks dominate NLP, and Socher is one of the people who laid that foundation.
So why is now the right time to found RSI?
Socher believes the AI field is hitting logarithmic diminishing returns: you scale the data up by one or two orders of magnitude, but get only a marginal improvement in return.
RSI is not the only one on this path.
David Silver's Ineffable Intelligence raised $1.1 billion in its seed round and has a valuation of $5.1 billion. Ilya Sutskever's SSI has an undisclosed valuation. Yann LeCun's AMI Labs raised $1 billion.
The mass exodus of top scientists and the collective investment of capital have become the most prominent structural trend in the AI field since 2025.
Eight Co-founders Create a Top Unicorn
One direct reason why Recursive was able to achieve this valuation at an early stage is the high concentration of talent in its founding team.
The threshold for a unicorn is a valuation of $1 billion. RSI's initial valuation is $4.65 billion, which means that on average, each of the eight co-founders is worth 0.58 unicorns.
Richard Socher earned his PhD at Stanford under Andrew Ng and is a co-author of ImageNet and GloVe, with over 240,000 Google Scholar citations. Before founding Recursive, he founded MetaMind, which was acquired by Salesforce, and later built the AI search engine You.com, valued at $1.5 billion.
Tian Yuandong, a former research scientist director at Meta FAIR, has long worked on reinforcement learning, foundation-model efficiency, and the understanding of neural networks. Earlier, he led ELF OpenGo, an open-source re-implementation of AlphaZero-style training for Go. In recent years, his research has shifted toward the bottlenecks of large-model systems, such as Llama inference, long-sequence acceleration, and low-cost training.
Shi Tianlin is an alumnus of Tsinghua University's Yao Class and one of the co-founders of Cresta. Cresta started at the Stanford AI Lab and applied the Transformer model to real-time customer service agent assistance in 2019.
Tim Rocktäschel works on open-endedness and AI safety. He is a professor of artificial intelligence at UCL and previously led the open-endedness research direction at Google DeepMind. His research focuses on AGI, open-endedness, and self-improvement. With his collaborators, he recast safety red teaming as open-ended search in Rainbow Teaming: instead of manually enumerating attack methods, the system continuously generates ever more diverse and effective adversarial prompts. Nearly every AI safety team now uses this approach.
Alexey Dosovitskiy is one of the authors of the Vision Transformer. In 2020, he was among the first to apply the Transformer directly to sequences of image patches, demonstrating that convolutional networks are not necessary for visual tasks.
Josh Tobin is an early member of OpenAI and one of the leaders of the OpenAI Agents Research Team.
Caiming Xiong led AI Research and Applied AI at Salesforce. A long-time collaborator of Socher's, he co-authored work on controllable text generation such as CTRL.
Jeff Clune's research highly aligns with RSI's roadmap. He has long been researching open-ended evolution, AI-generating algorithms, and AI security. He is also one of the authors of the Darwin Gödel Machine paper, which discusses enabling an AI system to modify its own code and then use benchmarks to verify the effectiveness of the improvements.
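The Darwin Gödel Machine idea mentioned above — an archive of agents whose self-modified variants are admitted only when a benchmark confirms the change — can be sketched as follows. This is a toy caricature of the mechanism, not the paper's actual code; all names (`benchmark`, `self_edit`, `evolve`) and the "program as a list of numbers" representation are invented for illustration.

```python
import random

def benchmark(program):
    """Stand-in evaluation: reward programs closer to a hidden target."""
    target = [1.0, 2.0, 3.0]
    return -sum((p - t) ** 2 for p, t in zip(program, target))

def self_edit(program):
    """Stand-in for an agent mutating one 'line' of its own code."""
    child = list(program)
    child[random.randrange(len(child))] += random.gauss(0.0, 0.5)
    return child

def evolve(seed_program, steps=200):
    """Archive-based loop: a child joins the archive only if the
    benchmark confirms it beats its parent."""
    archive = [(seed_program, benchmark(seed_program))]
    for _ in range(steps):
        parent, parent_score = random.choice(archive)  # open-ended sampling
        child = self_edit(parent)
        child_score = benchmark(child)
        if child_score > parent_score:
            archive.append((child, child_score))
    return max(archive, key=lambda pair: pair[1])

random.seed(0)
best, best_score = evolve([0.0, 0.0, 0.0])
```

Unlike a simple hill-climbing loop, the archive keeps every validated variant around as a possible parent, which is what gives the process its open-ended, evolution-like character.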
Between them, the eight cover reinforcement learning and large-model efficiency, open-ended algorithms, safety red teaming, the Vision Transformer, agent productization, enterprise AI deployment, startup building, and self-improvement research.
With eight co-founders and a small founding team, RSI has a total of no more than 30 people. Socher specifically emphasized in an interview:
"We will keep the team as small and efficient as possible and ultimately delegate many tasks to our agents."
Reference Links:
[1] https://www.recursive.com
[2] https://www.gv.com/news/recursive-superintelligence-self-improving-ai
This article is from the WeChat official account "QbitAI". Author: Meng Chen. Republished by 36Kr with permission.