An admission that American open-source AI is lagging? After rebranding as the "American DeepSeek," an AI startup founded by two former Google researchers has raised $2 billion, and its valuation has jumped 15-fold.
The AI startup Reflection AI was founded last year by two former Google DeepMind researchers. In just one year, it has closed its latest financing round, raising $2 billion at an $8 billion valuation, a 15-fold increase over the $545 million valuation it held seven months ago.
The company initially focused on autonomous coding agents, a more practical entry point. Now Reflection AI is redefining itself as an open-source alternative to "closed frontier labs" such as OpenAI and Anthropic, an "American version of DeepSeek."
"We are bringing the cutting-edge technology of open models back to the United States to build a thriving global artificial intelligence ecosystem," said Misha Laskin, founder and CEO of Reflection AI.
The round drew a star-studded lineup of investors, including Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan (founder of Zoom), Eric Schmidt (former CEO of Google), Citigroup, Sequoia Capital, and CRV.
Starting with Coding Agents
Reflection AI was founded in March 2024 by Misha Laskin, who led reward modeling for DeepMind's Gemini project, and Ioannis Antonoglou, a co-creator of AlphaGo (the AI system that defeated the world Go champion in 2016).
In a program a year ago, Laskin said he left DeepMind because he believed AGI would arrive soon, and an independent new company could get there faster. He predicted that "agents for small tasks" would land first, and that "general superhuman agents" would appear in roughly three years.
At the time, both were on the Gemini team: Antonoglou ran RLHF, and Laskin was responsible for reward-model training. Back then, models were adapted to the "chat" setting after pre-training through fine-tuning, as with ChatGPT and Gemini. The two believed the methods for training and fine-tuning such models were already mature, and that the core problems had become "data" and how to achieve "planning" and "search" on top of those models. "Doing this independently lets us move faster," Laskin said.
Their deep experience building cutting-edge AI systems is their core selling point. They hope to prove that top-tier talent can build the most advanced models without relying on tech giants.
Less than three months ago, Laskin said on a podcast that the company's goal was to build superintelligence through the "collaborative design of R&D and products."
In July this year, they launched Asimov, a code-understanding agent for engineering teams, claiming: "Asimov is already the best agent in the field of code understanding. In blind tests with maintainers of several large open-source projects, Asimov's answers were preferred over Cursor Ask and Claude Code (Sonnet 3.7 and 4) in most cases." Since then, the startup has had hardly any official product releases.
They started with programming because they believe that training a language model to interact with software through code gives the AI "hands and feet." In the future, most interactions between language models and software such as Salesforce and other CRM tools will happen through APIs and function calls, that is, through code, so the capability is useful well beyond engineers. Coding is also a "naturally advantageous area" for language models. If they can build an "intelligent advisor" that solves coding problems for enterprises, it means they have mastered all the core capabilities needed to build superintelligence and can later expand to other fields.
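The "hands and feet" idea, where a model acts on software by emitting a structured function call instead of prose, can be sketched roughly as follows. This is a minimal illustration with a made-up CRM tool (`update_crm_contact`), not any vendor's actual API or Reflection AI's implementation:

```python
import json

def update_crm_contact(name: str, stage: str) -> dict:
    """Stand-in for a CRM API endpoint (a hypothetical Salesforce-like tool)."""
    return {"status": "ok", "contact": name, "stage": stage}

# Registry of tools the model is allowed to invoke.
TOOLS = {"update_crm_contact": update_crm_contact}

def dispatch(model_output: str) -> dict:
    """Parse a JSON function call emitted by a language model and execute it."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]          # look up the named tool
    return fn(**call["arguments"])    # invoke it with model-chosen arguments

# A model trained to interact with software through code would produce
# something like this instead of a natural-language answer:
model_output = '{"name": "update_crm_contact", "arguments": {"name": "Ada", "stage": "qualified"}}'
result = dispatch(model_output)
```

The point of the sketch is that once a model reliably emits calls like this, any software with an API becomes something the model can operate, which is why the founders treat coding as the gateway capability.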
Laskin said Asimov is only the first step; next they will extend "enterprise-level superintelligence" beyond coding into areas such as "team memory" and knowledge management for product, marketing, and HR.
Alongside the financing announcement, Reflection AI said it had recruited a top-tier team from DeepMind and OpenAI whose members led or contributed to the R&D of PaLM, Gemini, AlphaGo, AlphaCode, and AlphaProof, and also worked on projects such as ChatGPT and Character AI.
Laskin has said that the team's core members could have commanded high salaries at large labs, but many entered AI because they wanted to do genuinely breakthrough work. The golden age of the large labs, in his view, was actually their early days, when DeepMind was working on deep networks and OpenAI on GPT-1 through GPT-3. Now startups have the opportunity to become the next frontier labs.
"For these people, money is not the primary issue: the equity we offer is enough for them to see the long-term value. What matters more is the sense of ownership: they can lead core directions instead of being responsible for only a small slice of the business at a large company. And few startups today can credibly challenge the frontier labs, so we have become a scarce option. After all, who doesn't want to be part of the next AI breakthrough?"
Reflection AI also said it had built an advanced AI training system, which it has promised to open to the public. More importantly, the company claimed to have "found a scalable business model that aligns with the open-intelligence strategy."
According to Laskin, Reflection AI currently has a team of about 60 people, mostly AI researchers and engineers working on infrastructure, data and training, and algorithm development. The company has secured a compute cluster and plans to launch a frontier language model trained on "tens of trillions of tokens" next year.
"We have built a system previously thought achievable only by top-tier labs: an LLM and reinforcement-learning platform that can train large mixture-of-experts (MoE) models at the cutting-edge scale," Laskin said.
Reflection AI posted on X: "We have witnessed the effectiveness of this method in autonomous programming. Now we are extending these methods to general agentic reasoning."
Playing the "Open-Source" Card
MoE is the architecture underpinning today's frontier large models; previously, only large closed AI labs could train at this scale. China's DeepSeek achieved a breakthrough in open-source large-scale MoE training, and models such as Qwen and Kimi followed one after another.
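The core MoE idea can be sketched in a few lines: a gating function scores a set of "experts," only the top-k experts actually run for a given input, and their outputs are combined by renormalized gate weights. The toy experts and sizes below are placeholders; in a real model each expert is a large feed-forward network inside a transformer layer, which is what makes training at this scale hard:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(x, expert_fns, gate_scores, top_k=2):
    """Route x to the top_k experts with the highest gate probabilities,
    then combine their outputs weighted by renormalized probabilities."""
    probs = softmax(gate_scores)
    ranked = sorted(range(len(expert_fns)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]                 # only top_k experts execute
    total = sum(probs[i] for i in chosen)   # renormalize over the chosen set
    return sum(probs[i] / total * expert_fns[i](x) for i in chosen)

# Four toy "experts"; in a real MoE each would be a learned feed-forward net.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x ** 2, lambda x: -x]
y = moe_layer(3.0, experts, gate_scores=[0.1, 2.0, 0.3, -1.0], top_k=2)
```

Because only top_k experts run per input, total parameter count can grow far beyond the compute spent per token, which is why frontier labs favor the architecture.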
In addition, Bloomberg reported a month ago that DeepSeek is developing a new model with stronger AI agent capabilities: it can carry out complex operations for users from minimal prompting and can also learn and improve from its past actions. The model is expected to launch by the end of this year.
Laskin said: "Models like DeepSeek and Qwen are a wake-up call for us: if we do nothing, the global standard for intelligence will be set by others, not by the United States."
Reflection AI argues that AI has come this far thanks to a series of key ideas shared openly, such as the self-attention mechanism, next-word prediction, and reinforcement learning. AI is becoming the foundation layer of all technology, accelerating scientific research, improving education, optimizing energy, enhancing medical diagnosis, and running supply chains.
"But the problem is that the most cutting-edge technology is currently concentrated in closed labs. If this trend continues, a handful of institutions will control the capital, compute, and talent needed to build AI, forming a snowballing monopoly that excludes everyone else. We need to build open models powerful enough to be the first choice for users and developers worldwide, so that the foundation of intelligence stays open and accessible rather than controlled by a few."
In US tech circles, Reflection AI's new mission has been widely welcomed.
David Sacks, the White House AI and crypto czar, posted on X: "I'm glad to see more American open-source AI models emerging. A significant portion of the global market values the cost, customizability, and control that open source brings. We want the US to lead in this area too."
Clem Delangue, co-founder and CEO of Hugging Face, said: "This is really good news for American open-source AI." But he added: "The biggest challenge next will be sustaining a high cadence of open model and dataset releases, like the labs that currently dominate open-source AI."
Open Model Weights, but Not Training Data or Training Process
Reflection AI's definition of "open" is reportedly closer to open access than to full open source, similar to the approach of Meta (Llama) or Mistral.
Laskin said the company will release the model weights, the core parameters of the AI, for public use, but the training data and the full training pipeline will stay private. "In practice, the weights are what matter most, because anyone can experiment on top of them. As for the full infrastructure stack, only a very small number of companies are really in a position to use it."
This balance also underpins Reflection AI's business model. Laskin said researchers can use the model for free, while the company's main revenue will come from large enterprises building products on the model and from governments worldwide building sovereign AI systems.
"For large enterprises, open models are the default choice. You want full control over the model: the ability to run it on your own infrastructure, control costs, and optimize for different workloads. AI costs are extremely high, so of course you want to optimize wherever you can. That is the market we are targeting," Laskin said.
Reflection AI has not yet released its first model. Laskin said the model will be text-based, with multimodal capabilities to follow. The company plans to put this round of financing into compute, aiming to launch its first frontier model early next year.
On funding, Laskin wrote in an earlier blog post that "funding is important, but we can be more efficient than the large labs. For example, where a large lab needs 100 units of funding, we need only 10 if we stay focused on the core direction, an order-of-magnitude difference."
He acknowledged that funding needs mainly depend on "when to scale up GPUs": an AI company's main cost is GPU spending, followed by people and data. "So the scale of our financing will match the pace of expansion into the next stage. We don't need to spread ourselves thin like the large labs, and we can be more efficient."
Reference Links:
https://reflection.ai/blog/frontier-open-intelligence/
https://techcrunch.com/2025/10/09/reflection-raises-2b-to-be-americas-open-frontier-ai-lab-challenging-deepseek/
This article is from the WeChat official account "AI Frontline" (author: Chu Xingjuan), republished by 36Kr with permission.