Fired by OpenAI, this 23-year-old genius now runs a $1.5 billion AI hedge fund.
Key points:
Leopold Aschenbrenner, 23, went from OpenAI researcher to founder of an AI hedge fund that manages over $1.5 billion in assets.
The fund invests in companies that benefit from AI development and in star AI startups, while using a long-short strategy to control risk: long the AI sector, short traditional industries that may be phased out.
Aschenbrenner named the fund after his 165-page essay "Situational Awareness," placing "situational awareness" at the center of its investment decisions and applying his research directly to its investment logic.
Recently, the well-known tech podcast TBPN revealed on social media that the AI hedge fund run by 23-year-old former OpenAI researcher Leopold Aschenbrenner has grown past $1.5 billion, returning 47% in the first half of 2025 and far outperforming its Wall Street peers. Who is this young investor, and how did he move from AI research into finance?
Aschenbrenner is no unknown. As early as June 2024, at age 22, he shook the tech world with a 165-page essay titled "Situational Awareness," in which he predicted that artificial general intelligence (AGI) would arrive by 2027 and called for an "AI Manhattan Project."
Figure: Cover of the paper "Situational Awareness"
01 From an unknown to a capital favorite
Aschenbrenner has almost no professional investing experience, but he has shown a remarkable talent for raising money. People familiar with the matter say the eponymous hedge fund Situational Awareness, which he founded in San Francisco, manages more than $1.5 billion in assets.
Aschenbrenner positions the firm as "the top think tank in the AI field." Its strategy focuses on stocks worldwide that benefit from AI development, including semiconductor, infrastructure, and power companies, and it selectively invests in star startups such as Anthropic.
To control risk, the fund also runs a long-short strategy: while going long on AI, it moderately shorts traditional industries that the technological revolution may leave behind. The approach has paid off. Situational Awareness returned 47% net of management fees in the first half of the year, far exceeding the 6% gain of the S&P 500 over the same period and beating the 7% average return of a tech hedge fund index compiled by industry trackers.
Aschenbrenner named his fund after the essay, which explores the prospects and risks of superintelligence, and recruited AI expert Carl Shulman, previously at Peter Thiel's macro hedge fund, as research director.
Even more eye-catching is its star-studded investor list: Stripe co-founders Patrick and John Collison; AI experts Daniel Gross and Nat Friedman, who were recently recruited by Mark Zuckerberg; and well-known investor Graham Duncan, who serves as a key advisor.
Aschenbrenner told podcast host Dwarkesh Patel in an interview last year: "Our situational awareness far exceeds that of fund managers in New York, and our investment performance will surely be better." The market's confidence in him is plain: most investors have agreed to lock up their money for several years, which is rare in the hedge fund industry.
02 The feast of AI hedge funds: Opportunities amid the capital frenzy
As the valuations of AI giants such as Nvidia and OpenAI hit record highs, hedge funds focused on AI are becoming the new battleground for capital. Amid the frenzy, newcomers like Aschenbrenner are not the only entrants; more institutions are rushing in as well.
A similar AI hedge fund from Value Aligned Research Advisors (VAR Advisors) has also drawn attention recently. Founded in Princeton by former quantitative analysts Ben Hoskin and David Field, the firm launched its fund in March but has quickly accumulated about $1 billion in assets. According to regulatory filings, the charitable foundation of Facebook co-founder Dustin Moskovitz has appeared on its investor list.
Established hedge funds have joined the fray as well. Last year, well-known investor Steve Cohen tapped Eric Sanchez, a fund manager at his firm Point72, to launch the AI hedge fund Turion (named after Alan Turing, the father of computer science), personally investing $150 million. The latest figures show the fund has grown past $2 billion, returning 11% year to date through the end of July, including 7% in July alone.
Another notable pattern: because genuinely AI-focused listed companies are scarce, fund holdings remain highly concentrated. According to the latest filings, power supplier Vistra, which supplies electricity to AI data centers, ranks among the top three holdings of both Situational Awareness and VAR Advisors.
The investment focus is also extending into the primary market. Gavin Baker's Atreides has launched a venture capital fund with Valor Equity Partners, raising hundreds of millions of dollars from institutions including Oman's sovereign wealth fund. The two firms have also separately invested in Elon Musk's xAI.
03 A 165-page essay attracts attention, predicting AGI by 2027
As public attention turns to him, the life of this German-born prodigy has also come into view. He joined OpenAI's Superalignment team as a researcher in 2023 but was fired in April 2024 for publicly disclosing the company's security vulnerabilities.
Interestingly, barely a month after his departure, the Superalignment team was disbanded entirely, and even his mentor, OpenAI chief scientist Ilya Sutskever, left the company.
Just two months after leaving, Aschenbrenner published the 165-page essay, an in-depth exploration of the development trends, future impact, and challenges of AGI. He stated plainly that AGI is very likely to become reality by 2027.
His argument is simple and intuitive: trace the growth curve of the GPT models' "effective compute" over the past four years, extend it four years forward, and the conclusion follows.
Figure: Scaling of effective compute (combining physical compute and algorithmic efficiency)
From GPT-2 to GPT-4, AI leapt from roughly "preschooler" level to "bright high-schooler" level. Aschenbrenner argues that if three current trends hold, namely compute growth (about 0.5 orders of magnitude per year), algorithmic efficiency gains (also close to 0.5 orders of magnitude per year), and capability unlocking (such as the evolution from chatbots to agents), then by 2027 we will witness another qualitative change comparable to the preschool-to-high-school jump.
Figure: In just four years, OpenAI took GPT-2, at roughly a preschooler's level, to GPT-4, at the level of a bright high-school student
Here Aschenbrenner uses a simple estimation unit, the OOM (order of magnitude): each additional OOM means a tenfold increase in capability, so 2 OOMs represent a 100-fold leap.
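The arithmetic behind the extrapolation is easy to reproduce. A minimal sketch, assuming the article's figures of roughly 0.5 OOM per year each for compute growth and algorithmic efficiency (the function name and parameters are illustrative, not from the essay):

```python
# Sketch of the OOM (order-of-magnitude) extrapolation described above.
# Assumed rates (from the article): compute grows ~0.5 OOM/year and
# algorithmic efficiency ~0.5 OOM/year, for ~1 OOM/year of "effective compute."

def effective_compute_gain(years, compute_oom_per_year=0.5, algo_oom_per_year=0.5):
    """Return the multiplicative gain in effective compute after `years` years."""
    total_ooms = years * (compute_oom_per_year + algo_oom_per_year)
    return 10 ** total_ooms

# Four years at ~1 OOM/year is about 4 OOMs, i.e. a roughly 10,000x jump,
# the same scale of leap the essay credits for GPT-2 -> GPT-4.
print(effective_compute_gain(4))  # 10000.0
```

Extending the same curve another four years, to 2027, is the whole of the "self-evident" step in the argument.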
The emergence of GPT-4 amazed many. It can not only write code and essays but also solve complex mathematical problems and even pass college-level exams with ease. A few years earlier, these abilities were widely considered insurmountable barriers for AI.
But GPT-4 did not appear out of thin air; it is the product of deep learning's steady evolution. Ten years ago, AI models could barely recognize pictures of cats and dogs. Four years ago, GPT-2 could barely string together a coherent sentence. Today, AI is rapidly conquering one human-designed test after another, a steady leap driven by the relentless scaling of deep learning.
Figure: AI systems will rapidly progress from human level to superhuman level
On this basis, Aschenbrenner asserts that by 2027, AI models will be able to do the work of AI researchers and engineers. In other words, artificial intelligence will be able to participate in its own evolution.
This article is from Tencent Technology. Translated by Jin Lu, edited by Helen. Republished by 36Kr with permission.