Silicon Valley is learning Chinese overnight. Cursor has been exposed for "repackaging" a domestic model, and the top AI talent is overwhelmingly of Chinese origin.
If you've been following the AI circle in Silicon Valley recently, you'll notice a remarkable phenomenon.
While people in China are learning English to read research papers, the AI community overseas is speaking Chinese! It's truly a reversal of the norm.
Take Meta for example. If you don't speak Chinese, you won't be able to fit into the core team.
They speak English in formal meetings, but after the meetings, everyone chats in Chinese.
Now, it's the foreigners' turn to be confused!
A few weeks ago, at an OpenAI event, you could see as soon as you walked in that the entire left-hand side of the 300-person venue was filled with Chinese attendees.
What's even more interesting is that now that "Chinese" has become a recognized label for top AI talent, even domestic open-source models are being emulated by foreign companies.
The Saga of Overseas Models "Repackaging" Domestic Ones
Cursor recently released version 2.0 and introduced its first self-developed model, Composer.
But the claim was soon called into question. Netizens found that Composer often "speaks Chinese": Chinese text keeps appearing inside its wrapped thinking traces.
This left the foreigners confused once again.
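Spotting this takes no special tooling. As a minimal sketch (the function name and sample traces below are illustrative, not from any real tool), a few lines of Python can flag Chinese characters in a model's reasoning trace:

```python
import re

# CJK Unified Ideographs block; a minimal check -- real traces may also
# use CJK extension blocks, which this sketch ignores
CJK = re.compile(r"[\u4e00-\u9fff]")

def thinks_in_chinese(trace: str) -> bool:
    """Return True if a reasoning trace contains any Chinese characters."""
    return bool(CJK.search(trace))

# Hypothetical traces, purely for illustration
print(thinks_in_chinese("First, parse the user's request."))         # False
print(thinks_in_chinese("首先分析用户的需求, then generate the code."))  # True
```

Any trace that trips this check in an allegedly from-scratch English-trained model is exactly the kind of evidence netizens pointed to.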
The most striking case is Windsurf, which openly admitted that it is fine-tuning and running reinforcement learning on a customized version of GLM-4.6.
Beyond the two deliberate choices of top AI talent speaking Chinese and models being fine-tuned from domestic open-source LLMs, even some big names are now abandoning OpenAI and Anthropic in favor of domestic open-source models.
Why? Because these models are abundant, good enough, perform well, and cost far less.
Recently, a piece of news made it clear that foreigners are no longer blindly devoted to OpenAI and other closed-source models. Instead, they are turning to domestic models one after another.
For example, Chamath Palihapitiya said that his team has migrated a large share of its workload to Kimi K2, because its performance is significantly better and it is far cheaper than OpenAI's and Anthropic's models.
He is a well-known American entrepreneur and investor, so his statement carries some weight:
Domestic open-source large models are really amazing!
However, some calmer voices in the comment section point out that he was an early investor in Groq (the inference-chip company, not Elon Musk's Grok).
And this time his team migrated from Bedrock (where it was reportedly a top-20 customer) to Kimi K2 running on Groq, supposedly because the model performs better.
In reality, it may just be a plug for Groq's services.
Netizens have also summarized two main reasons why Cursor's model frequently "speaks Chinese" during its thinking process:
1. Self-developing a model is hard and expensive.
Given Cursor's resources, pre-training a strong model from scratch is very unlikely; secondary training on an open-source SOTA agent model is far more plausible. Its "speaking Chinese" is then no surprise: it simply reflects the choice of base model and training data.
2. Composer lags behind and avoids comparison.
Composer was most likely fine-tuned from the open-source SOTA of "a few months ago", but large models iterate so fast that by launch its underlying technology had already fallen behind. So Cursor neither wants a head-to-head comparison with the latest open-source SOTA nor wants to disclose the underlying details. Even with considerable funding, it is hard to shake off the suspicion of being a mere "shell".
In short, domestic open-source models are really great.
This can be seen in the statistics from foreign data sites.
In capability, domestic open-source models sit firmly in the first tier
On the Artificial Analysis Intelligence Index, aside from the top-ranked closed-source models (OpenAI's GPT-5, Google's Gemini 2.5, xAI's Grok, and Anthropic's Claude 4.5), the models that follow are all open-source.
Moreover, most of them are domestic: MiniMax-M2, DeepSeek-V3.1, Qwen3-235B-A22B, GLM-4.6, and Kimi K2.
Meta's Llama, the pioneer of open-source models, and its fine-tuned derivatives rank behind them.
On the Coding index the picture is the same: DeepSeek V3.1 outperforms Google's Gemini 2.5 Pro.
On the agent leaderboard, Kimi, GLM, and Qwen also rank near the top.
Split by open-source versus closed-source, open-source models are genuinely competitive in global AI capability.
And remember, this is only the capability ranking. Factor in the price of domestic open-source models and they look even better.
Over time, OpenAI has stayed far ahead in the growth of AI capability, but MiniMax, DeepSeek, Qwen, GLM, and Kimi are catching up fast.
This wave of AI has not only changed the direction of global technology but also rewritten the perception of talent labels.
It was Meta's Mark Zuckerberg who, not long ago, first publicly offered a package of over a hundred million US dollars for a single top-level talent.
Who are the top Chinese talents in Silicon Valley?
First, let's talk about Meta.
In the Meta Superintelligence Labs, newly established just a few months ago, about half of the initial 44-person team are Chinese.
Among them, Shengjia Zhao and Yang Song, who joined later, serve as chief scientist and research director respectively.
Extended reading: Jensen Huang is right! Meta's top-secret AGI dream team is exposed. Half of the 44-person team is from China
Shengjia Zhao, the chief scientist of MSL.
Shengjia Zhao earned his bachelor's degree from Tsinghua University and a doctorate in computer science from Stanford University.
After graduating in June 2022, he joined OpenAI's technical team. Despite only three years of work experience, his resume already lists remarkable results.
At OpenAI, he was the key figure behind many milestone breakthroughs:
A member of the founding team of ChatGPT, which triggered the global AI wave
A core contributor to GPT-4
A core researcher on o1, OpenAI's first reasoning model, listed as a "foundational contributor" alongside OpenAI co-founder Ilya Sutskever
Deeply involved in building the mini series, including 4.1 and o3
Lead of OpenAI's synthetic data team
As the first reasoning model to give AI a "thinking" ability, o1's success directly fueled the industry-wide boom in chain-of-thought techniques.
Extended reading: Shengjia Zhao, a Tsinghua alumnus, becomes chief scientist of Meta's superintelligence lab! A key contributor to GPT-4
Yang Song, the research director of MSL.
Yang Song did his undergraduate studies in Tsinghua University's fundamental mathematics and physics program and earned a doctorate in computer science from Stanford University. His research focuses on generative models and multimodal reasoning.
In academia, he is best known for his work on diffusion models and is one of the technical founders of that field.
He has interned at Google Brain, Uber ATG, and Microsoft Research, giving him a rich industrial and theoretical background.
After joining OpenAI in 2022, he built a "strategic exploration" team to develop methodologies and systems around larger-scale, more complex data and higher-dimensional modalities.
Extended reading: Breaking! Meta has just poached Yang Song, a Tsinghua alumnus, from OpenAI
Compared with Meta, OpenAI's team actually has even more Chinese members.
At every major release, from the long contributor lists to the livestream stage, Chinese scientists are always present.
Yet the only one in senior management is Mark Chen, the chief research officer.