
The brother and sister who quit their jobs at OpenAI with nothing lined up to start a business have created a trillion-yuan unicorn.

融资中国 · 2025-08-05 10:49
AI that balances ethics and commercialization

Four years after its founding, and with a valuation now exceeding RMB 1 trillion, the company built by a brother and sister who left OpenAI has become another AI legend.

In recent days, Iconiq Capital, an investment firm known in the investment community as "Zuckerberg's banking circle", led a new round of financing for the AI unicorn Anthropic. Anthropic is expected to raise $5 billion (approximately RMB 35.9 billion), lifting its valuation to $170 billion (approximately RMB 1.22 trillion). That means that, alongside OpenAI (valued at $300 billion, roughly RMB 2.15 trillion) and SpaceX (valued at $400 billion, roughly RMB 2.87 trillion), another trillion-yuan unicorn has emerged, this time in the AI field.

At the end of 2020, Dario Amodei, OpenAI's former vice president of research, and his sister Daniela Amodei submitted their resignations. Together with several other key employees who left OpenAI, they pitched a tent in the backyard of their San Francisco home and named their new company Anthropic. Two years later, their Claude series of models carved out a differentiated niche with a 200,000-token context window and "constitutional" safety alignment...

Unlike OpenAI, which focuses on C-end (consumer) subscriptions, and DeepSeek, which emphasizes open-source, low-cost models, Anthropic packages compliance capabilities such as SOC 2, HIPAA, and GDPR into enterprise-grade "security as a service". It targets high-trust, high-ticket scenarios in finance, healthcare, and government, offering another verifiable path to balancing AI safety and commercialization. Having gone from follower to leader, this new member of the "trillion-yuan club" may be this year's strongest dark horse in AI.

Another RMB 35.9 billion in financing

Another trillion-yuan unicorn has emerged in the AI field.

Recently, in its latest round of financing, led by the US investment firm Iconiq Capital, the American AI unicorn Anthropic is expected to raise $5 billion (approximately RMB 35.9 billion), pushing its valuation to $170 billion (approximately RMB 1.22 trillion), second only to OpenAI's $300 billion and SpaceX's $400 billion.

It has been less than half a year since this newest member of the "trillion-yuan club" last raised money.

In March this year, Anthropic closed a $3.5 billion round led by Lightspeed Venture Partners, with participation from new and existing investors including Bessemer Venture Partners and Cisco Investments, at a valuation of $61.5 billion. In other words, in roughly half a year the company's valuation has nearly tripled.

Iconiq Capital, which led this investment, has an impressive background.

ICONIQ Capital was founded in 2011 by four former members of Morgan Stanley's private wealth management division (Divesh Makan, Chad Boeding, Michael Anders, and Will Griffith) and is headquartered in San Francisco.

One of the founders, Makan, is a close friend of Facebook founder Mark Zuckerberg. The two met in 2004, when Zuckerberg, then still a student at Harvard, was introduced to Makan through a classmate. Makan was responsible for high-net-worth client business at Morgan Stanley, and the two stayed in touch over a shared interest in the intersection of technology and finance. Facebook later hired Morgan Stanley as one of the lead underwriters for its 2012 IPO, and Makan's deep involvement further strengthened the trust between him and Zuckerberg's core team. ICONIQ's initial clients included early Facebook executives such as Zuckerberg, Dustin Moskovitz, and Sheryl Sandberg, and its client base has since expanded across Silicon Valley's inner circle, including Twitter founder Jack Dorsey and LinkedIn co-founder Reid Hoffman.

ICONIQ's core business falls into three segments. First, family office services for ultra-high-net-worth individuals, covering tax planning, trust structures, and real estate and art allocation. Second, direct private equity investment, making mid- to late-stage investments in unlisted companies through its own and joint funds. Third, public market and hedge fund allocation, managing liquid assets. Its capital comes mainly from ultra-high-net-worth individuals, family trusts, university endowments, and sovereign wealth funds. As of the first quarter of 2024, assets under management were approximately $80 billion, of which direct investment funds accounted for roughly $12 billion.

In direct investing, ICONIQ's strategy centers on "data-driven infrastructure" and "enterprise SaaS". Early representative deals include participating in Workday's Series F in 2012, leading a secondary-market PIPE in ServiceNow in 2013, and entering Snowflake's Series B through Growth Fund I in 2015, then following on through the pre-IPO stage for a paper return of roughly 30x. The fund has since expanded into cybersecurity, fintech, and developer tools; its portfolio spans more than 120 companies, including Datadog, CrowdStrike, Coinbase, Robinhood, Canva, Figma, Stripe, Databricks, and Discord. The Canva investment began with the 2016 Series C round, at a stake of roughly 3%, and a partial exit on the secondary market in 2021 returned about 10x. The Figma investment began with the 2019 Series D round; Adobe announced a $20 billion acquisition in 2022, but the deal was terminated over regulatory issues, and ICONIQ still holds the equity, with a paper gain of roughly 6x at the latest valuation.

ICONIQ is also active in the public markets. Its hedge fund, ICONIQ Capital Partners, was established in 2014 and runs a long-only strategy; key holdings include Microsoft, Amazon, Nvidia, and Tesla, and from 2020 to 2022 the fund's annualized return was 18.7%, significantly outperforming the Nasdaq. In addition, since 2021 the firm has been building positions in Web3 and AI infrastructure through its ICONIQ Strategic Partners funds, backing projects such as Circle, Alchemy, Anthropic, and Scale AI. Its investment in Anthropic's Series B was approximately $150 million for a 2% stake, now showing an unrealized gain of roughly 4x at the latest valuation.

Organizationally, ICONIQ has roughly 180 employees worldwide. The investment team is organized into industry verticals, and partners have an average tenure of more than nine years. The firm does not raise money from the public; all of its capital comes from existing clients and long-term partner institutions, with LPs including the Qatar Investment Authority, Singapore's GIC, and Princeton University's endowment. Its main exit channel is the IPO: from 2021 to 2023 it exited 28 portfolio companies through listings, including Snowflake, Airbnb, Coinbase, and GitLab, recovering approximately $7.5 billion in cash.

The brother and sister who quit OpenAI to start a business

"To create a kind AI" is the original intention of founding Anthropic.

The key figures behind Anthropic are a brother and sister, Dario and Daniela Amodei. Their father emigrated from Italy to San Francisco in the 1970s. Dario Amodei was born in 1983, and his sister Daniela is four years younger. Their father, Riccardo, a leather craftsman from a small town near the island of Elba, sadly died of illness; their mother, an American Jew born in Chicago, works as a project manager at a library.

In college, Amodei wanted to become a theoretical physicist, but he soon found the discipline too much of an "ivory tower", removed from the real world; he wanted to do something that would "promote social progress and help humanity". So when a physics professor began researching the human brain, Amodei took a keen interest. He also started reading the American futurist Ray Kurzweil's work on "non-linear technological leaps". Amodei later completed an award-winning doctoral thesis in computational biology at Princeton.

In 2014, Amodei joined Baidu's US research lab. Working under Andrew Ng, he began to see how massive increases in computing power and data could produce qualitative leaps in capability.

Around that time, an entrepreneur named Sam Altman approached Amodei, saying they were starting a company to build AGI in a safe and open way. After attending the much-mythologized "AI legends" dinner at the Rosewood Hotel, Amodei came away unimpressed: the dinner felt more like a social gathering of tech executives and venture capitalists than a meeting of AI researchers.

A few months later, however, when OpenAI was established as a non-profit, declaring its goal of advancing AI in the way "most likely to benefit humanity as a whole, unconstrained by a need to generate financial return", Amodei saw the pull the company had on the AI talent pool, which included many of his former colleagues from Google. So he joined the OpenAI team as well.

Later, Amodei's sister Daniela also joined OpenAI. Daniela, an English major in college, had spent years working for overseas NGOs and in government before returning to the Bay Area. With the development of GPT-2, the siblings reached a turning point in their lives.

After releasing the blockbuster model that took the world by storm, OpenAI expanded its partnership with Microsoft to raise capital and restructured around a for-profit subsidiary. By then, however, Amodei's concerns about "AI ethics and safety" had grown, and they were the main reason he, his sister, and five other OpenAI employees left the team.

Anthropic, a name that literally means "relating to humans", signals the company's values from the outset.

In the beginning, after leaving OpenAI, the small team pitched a tent in Amodei's backyard to talk through their business ideas. The pandemic was raging at the time, and it poured rain that night. That is how the story of this trillion-yuan unicorn began.

Interestingly, Anthropic's initial funding was closely tied to organizations associated with Effective Altruism (EA). EA advocates focus their research and activism on areas such as animal welfare, climate change, and the potential threats AI may pose to humanity. The main investor in Anthropic's seed round was Jaan Tallinn, an EA supporter; the Estonian engineer made billions from building Skype and Kazaa and has since poured his money and energy into a series of AI safety organizations. Another early investor was Dustin Moskovitz, a Facebook co-founder and strong EA supporter.

In this way, starting from seed money provided by EA supporters and the somewhat utopian ideal of "doing something to help human society", the brother and sister have gradually built a trillion-yuan unicorn.

A safe path to AGI

Many people may not be familiar with Anthropic, but its most well - known product, Claude, is almost a household name.

Claude is Anthropic's flagship family of large models, divided by capability into three tiers: Haiku, Sonnet, and Opus. The latest version, Claude 3.7 Sonnet, combines "instant response" and "deep reasoning" within a 200,000-token context window: it can answer a question like "What time is it?" immediately, and switch on an "extended thinking" mode for complex tasks. It scored 78.2% on a graduate-level reasoning benchmark, outperforming GPT-4 at the same stage.
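For developers, that split between instant replies and deep reasoning is exposed as an explicit switch in the API. The sketch below assumes the official Anthropic Python SDK and an illustrative model ID and token budget (none of these figures come from the article); it shows the same endpoint called once as a quick request and once with extended thinking enabled.

```python
# Minimal sketch: calling Claude with and without "extended thinking".
# Assumes the official Anthropic Python SDK and ANTHROPIC_API_KEY in the
# environment; the model ID and budgets below are illustrative.
import anthropic

client = anthropic.Anthropic()

# Fast path: an ordinary request, answered immediately.
quick = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # illustrative model ID
    max_tokens=256,
    messages=[{"role": "user", "content": "What time zone is UTC+8?"}],
)
print(quick.content[0].text)

# Deep-reasoning path: the same endpoint, with a thinking budget enabled.
# max_tokens must be larger than the thinking budget.
deep = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user",
               "content": "Outline a migration plan from a monolith to microservices."}],
)
# The response interleaves "thinking" blocks with the final "text" blocks.
for block in deep.content:
    if block.type == "text":
        print(block.text)
```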

Code is Claude's forte. On the SWE-bench test, Claude 3.7 scored 70.3%, outperforming OpenAI's o1 and DeepSeek's R1. Its companion command-line tool, Claude Code, can take over whole-repository refactoring, PR review, and CI/CD error debugging, and attracted 110,000 developers in just four months. In addition, Claude has "Constitutional AI" built into its DNA: when a request crosses an ethical red line, it first cites the "constitution" and then politely declines, and its hallucination rate is markedly lower than that of competitors trained with the RLHF approach.

Commercially, Anthropic does not follow the "freemium" playbook but bets heavily on the B-end. A token-priced API, Claude Pro subscriptions starting at $20 per month, and the unlimited-usage Claude Code package at $200 per month together contribute 75% of revenue. Claude Team, aimed at enterprises with more than a thousand employees, is sold per seat, with a renewal rate as high as 92%.

Compared with OpenAI and DeepSeek, Claude's commercialization path has three distinguishing traits: a "security premium", "developer depth", and an "enterprise closed loop".

OpenAI leans on C-end ChatGPT Plus subscriptions to support its cash flow and uses the low-cost o3-mini to attract new users; in essence it is still driven by an "advertising + subscription" twin engine. DeepSeek takes open source to the extreme, with API prices as low as $0.25 per million tokens, rapidly expanding its user base; its margin comes mainly from off-peak discounts, high-frequency usage, and follow-on value-added services.

Claude, by contrast, has bet on enterprise-grade security and controllability from the start. With SOC 2 Type 2, HIPAA compliance, private deployment options, and copyright indemnification clauses, it has pushed average customer spend to 1.5-5 times that of GPT-4, yet it keeps winning customers in "high-trust industries" such as finance, healthcare, and government. After Morgan Stanley connected its risk-control system, the misjudgment rate dropped to 0.001%, and the regulatory fines saved were enough to cover the cost of the model.

In the developer ecosystem, OpenAI pulls in traffic with a plugin store and DeepSeek lowers the barrier with open-source models; Claude instead chooses to be an "invisible foundation". Leading IDEs and agent frameworks such as Cursor and Composio integrate Claude by default, and the MCP protocol makes it as plug-and-play as USB-C. Developers get direct access to the 200,000-token long context and function-calling capabilities without secondary adaptation, creating a lock-in effect in high-value scenarios, as the sketch below illustrates.
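To make the "USB-C" analogy concrete, here is a minimal sketch of an MCP server, assuming the official MCP Python SDK; the server name and tool are invented for illustration. Any MCP-capable client (Claude Desktop or Claude Code among them) can launch such a server, discover its tools, and call them without bespoke integration work.

```python
# Minimal sketch of an MCP server exposing one tool over stdio.
# Assumes the official MCP Python SDK; the server name and tool below
# are invented for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("invoice-tools")  # hypothetical server name

@mcp.tool()
def total_with_vat(amount: float, vat_rate: float = 0.19) -> float:
    """Return the gross amount for a net invoice amount and VAT rate."""
    return round(amount * (1 + vat_rate), 2)

if __name__ == "__main__":
    # Run over stdio so an MCP client can start this script as a
    # subprocess, list the tool it exposes, and invoke it on demand.
    mcp.run()
```

Registering the server with a client is then just a matter of pointing the client's MCP configuration at the command that starts this script, which is what "plug-and-play" means in practice.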

The deeper difference lies in the cost structure. DeepSeek cuts computing costs to less than one-tenth of its rivals' through an MoE architecture, and OpenAI keeps prices low with subsidies from Microsoft's cloud. Anthropic instead spends its budget on "constitutional training" and safety alignment, which yields a lower hallucination rate and a more predictable refusal rate, and enterprises are willing to pay a premium for that certainty and compliance. Ultimately, Claude's commercialization is not simply selling APIs or subscriptions but packaging a complete "model + security framework + industry compliance" solution, letting enterprises find the best balance between "affordable" and "reassuring".

After "low - price volume - grabbing" and "advertising subsidies" have become common monetization strategies in the AI field, Claude has found a different path by making "developer sovereignty" an ecological entry point and then achieving high - margin revenue through "scenario closed - loops". The combination of these three aspects creates the most unique AI monetization model at present.

While the industry is still fighting over a few cents per million tokens, Claude Opus 4 has set its enterprise API price at a ceiling of $15 per million input tokens and $75 per million output tokens: extremely expensive, yet in high demand. The reason is that Anthropic packages SOC 2 Type 2, HIPAA, GDPR, and copyright indemnification into "compliance as a service", and customers in finance, healthcare, and government are willing to pay a premium for an auditable security report.
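For a rough sense of what that pricing means in practice, the short calculation below works through the cost of a single large request at the $15/$75-per-million-token rates quoted above; the request sizes are made-up examples, not figures from the article.

```python
# Back-of-the-envelope cost of one request at the quoted list prices:
# $15 per million input tokens, $75 per million output tokens.
INPUT_PRICE_PER_MTOK = 15.0
OUTPUT_PRICE_PER_MTOK = 75.0

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call at the prices above."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# Hypothetical example: a 150k-token codebase as context, a 4k-token answer.
cost = request_cost(input_tokens=150_000, output_tokens=4_000)
print(f"${cost:.2f} per request")  # -> $2.55 per request
```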

Second comes the "parasitic expansion" of the developer ecosystem. Rather than building its own platform, Anthropic has turned Claude into a "plug-and-play brain". Zapier MCP connects to more than 8,000 SaaS services with one click, Blender MCP lets 3D designers generate city scenes from natural language, and Ableton Live MCP frees composers from knob-twiddling drudgery. The more radical Claude Code is embedded directly in the terminal, and its ability to refactor continuously for 7 hours and coordinate edits across 17 files has made it a new-generation base model adopted by GitHub Copilot.

Starting from the "original intention of AI security", moving forward on the "road to creating a kind AI", and exploring the technological upgrade of a "safe path to AGI", how will Anthropic, this trillion - dollar - valued unicorn, gain an edge in the AI commercialization competition? The story continues.

This article is from the WeChat official account 融资中国.