RMB 2.2 billion: Jensen Huang and Lisa Su join forces to invest in a "world model" company
On February 11th, TechNode reported that Runway, a U.S.-based video generation unicorn, had announced the previous day that it closed a $315 million (approximately RMB 2.2 billion) Series E financing round. Citing people familiar with the matter, TechCrunch reported that the new round may push Runway's valuation to $5.3 billion (approximately RMB 36.6 billion). Participants in this round include NVIDIA, AMD, and Adobe.
Runway was founded in 2018 by three New York University alumni: Cristóbal Valenzuela, Anastasis Germanidis, and Alejandro Matamala-Ortiz.
▲Runway founders Alejandro Matamala-Ortiz, Cristóbal Valenzuela, and Anastasis Germanidis (Source: The Athens Times)
To date, Runway has raised $815 million (approximately RMB 5.6 billion). Its previous round took place in April 2025, when the company raised $308 million (approximately RMB 2.1 billion) from investors including SoftBank and NVIDIA at a valuation of more than $3 billion (approximately RMB 20.7 billion).
This star AI startup first gained worldwide attention for its video generation products. In December 2025, Runway released its latest video generation model, Gen-4.5, which can produce cinematic, high-fidelity output, such as rendering complex multi-element scenes with realistic physical effects.
On the Artificial Analysis Text to Video Leaderboard, a global ranking of AI text-to-video model performance, Gen-4.5 currently ranks third, behind only Shengshu Technology's Vidu Q3 Pro and xAI's grok-imagine-video, and ahead of models such as Google Veo 3, OpenAI Sora 2 Pro, and Kuaishou Keling 2.5 Turbo.
▲Screenshot of the latest ranking list of AI text-to-video model performance
According to its official announcement, the new financing will be used to train next-generation world models and bring them to new products and industries. Ten days after releasing Gen-4.5 in December last year, Runway also launched its general world model GWM-1, which aims to simulate reality in real time while being interactive, controllable, and general.
GWM-1 comes in three variants: GWM Worlds for explorable environments, GWM Avatars for conversational characters, and GWM Robotics for robotic operations. Runway is working to unify these different domains and action spaces under a single foundation world model.
According to Runway's official website, in January of this year the company began leveraging the NVIDIA Rubin platform to advance its video generation and world model technologies, and it was among the first teams to demonstrate video generation models on the platform.
In December 2025, Runway also reached an agreement with CoreWeave, a U.S. AI cloud service provider, to expand its infrastructure and computing power. It is worth noting that NVIDIA is an important financial backer, major supplier, and major customer of CoreWeave.
Conclusion: Runway Regains Investors' Interest, Shifts Focus to World Models
Once a leading video generation startup, Runway was for a time overtaken by competitors such as OpenAI Sora and Kuaishou Keling. Gen-4.5, released in December last year, outperformed other well-known video generation products on multiple benchmarks, which may have helped the company regain investors' interest. Its investment in infrastructure may further convince investors that it can operate in highly compute-intensive fields.
The world model field on which Runway has announced a major bet is also highly competitive: World Labs, founded by Stanford professor Fei-Fei Li, and Google DeepMind have both recently announced new progress on world models.
Many top researchers believe world models are crucial for breaking through the current limitations of large language models, because a world model is an AI system that builds an internal representation of its environment and can therefore plan for future events.
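To make the concept concrete, below is a minimal, hypothetical Python sketch of the idea: an internal model of the environment predicts the consequences of candidate actions, and a simple planner searches those imagined rollouts before acting. The toy dynamics and all class and function names are illustrative assumptions only, with no relation to Runway's GWM-1 or any real system.

```python
# Minimal conceptual sketch of a "world model" used for planning.
# The toy dynamics and names are hypothetical, not any real product.
import random


class ToyWorldModel:
    """Internal model of the environment: predicts the next state
    and reward given the current state and a candidate action."""

    def predict(self, state, action):
        # Toy 1-D dynamics: the agent moves by `action` each step
        # and is rewarded for being close to a goal at position 10.
        next_state = state + action
        reward = -abs(10.0 - next_state)
        return next_state, reward


def plan(model, state, horizon=5, candidates=200):
    """Random-shooting planner: imagine rollouts inside the world
    model and return the first action of the best sequence."""
    best_score, best_action = float("-inf"), 0.0
    for _ in range(candidates):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s, score = state, 0.0
        for a in seq:                      # simulate, don't act yet
            s, r = model.predict(s, a)
            score += r
        if score > best_score:
            best_score, best_action = score, seq[0]
    return best_action


if __name__ == "__main__":
    model, state = ToyWorldModel(), 0.0
    for _ in range(15):
        action = plan(model, state)              # plan inside the model
        state, _ = model.predict(state, action)  # then act in the "world"
    print(f"final position after planning: {state:.2f}")  # approaches 10
```

The point of the sketch is only the division of labor: the model imagines futures, and planning happens against those imagined futures rather than by trial and error in the real environment.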
This article is from the WeChat official account "TechNode" (ID: zhidxcom), written by Cheng Qian and edited by Li Shuiqing. It is published by 36Kr with authorization.