Jensen Huang and Lisa Su have invested in the same world model company.
AI Video Company Shifts to World Model, and Jensen Huang Invests Again
While Seedance 2.0 is making waves at home and abroad, its peer Runway has quietly shifted focus and raised $315 million (about 2.17 billion yuan) in financing.
Runway started as a video tool company founded by three art-school graduates. It has reinvented itself twice in three years, and its valuation has climbed with each pivot. Today the company has only 140 employees yet commands a valuation of $5.3 billion, and NVIDIA has kept backing it, investing in three consecutive rounds.
However, the reason NVIDIA is backing Runway this time differs from the previous two.
Runway Shifts to World Model, and Both NVIDIA and AMD Invest
Runway announced on its official website that it has completed a $315 million Series E financing round, approximately 2.17 billion yuan. According to Runway, the money will go toward pre-training its next-generation world model and turning it into products.
The lead investor in the latest round is General Atlantic, which also led Runway's previous round and is a shareholder of Anthropic (the developer of Claude). NVIDIA and AMD also participated in this round.
After this round, according to TechCrunch citing sources familiar with the matter, Runway's post-money valuation has nearly doubled, reaching $5.3 billion, approximately 36.58 billion yuan.
How did Runway's valuation increase step by step?
Three Art Students Build a $5.3 Billion AI Unicorn in 7 Years
Runway was founded in 2018. Its three founders, Cristóbal Valenzuela (CEO), Alejandro Matamala, and Anastasis Germanidis, all graduated from the School of the Arts at New York University and majored in interactive design.
Runway received $2 million in seed financing at its inception and initially focused on video-editing tools. At the end of 2020, it shipped its first hit feature, "Green Screen," which lets users cut people out of videos with a single click in a web application.
Riding on that success, Runway quickly raised an $8.5 million Series A. By then it had made a name for itself in the design world, and in 2021 it completed a $35 million Series B.
However, Runway did not use this money to build more tools; instead it invested in research and development of the text-to-image model Stable Diffusion, laying the foundation for its later move into generative AI.
At the end of November 2022, ChatGPT arrived and set off the generative AI wave. Seeing the trend, Runway decisively pivoted and completed a $50 million Series C in December 2022.
Two months after the Series C, Runway launched its first AI video model, Gen-1. Its capabilities were limited: it could only restyle existing videos, not generate them outright.
Runway iterated quickly, though. A month later it released Gen-2, which added text-to-video generation. Gen-2 was the industry's first commercially available text-to-video model at scale, and it opened a free trial in July 2023.
NVIDIA made its first investment in Runway during this period, participating in Runway's Series C+ round. By then, Runway's post-money valuation exceeded $1.5 billion, making it a unicorn.
Runway then kept iterating on the Gen series. In April 2025, it released Gen-4, introducing physical laws so the model could understand object materials and gravity, which laid the groundwork for its later move into world models.
Around the same time, Runway completed a $308 million Series D, led by General Atlantic, with NVIDIA participating again.
In early December 2025, Runway upgraded the model to Gen-4.5, improving the realism of its AI videos. Half a month later, Runway released its first world model, GWM-1 (General World Models-1). It is reportedly an autoregressive model built on Gen-4.5 that generates video frame by frame, runs in real time, and can be controlled interactively through actions such as robot commands. It comes in three variants:
GWM Worlds, used to generate explorable simulated environments
GWM Avatar, used to generate conversational virtual characters
GWM Robotics, used to generate synthetic data for robot training and evaluation
According to Runway, the three variants are trained as independent models; going forward, it aims to unify different domains and action spaces under a single base world model.
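The frame-by-frame, action-conditioned loop described for GWM-1 can be sketched in miniature. The toy Python below is purely illustrative and not Runway's actual API: the class `ToyWorldModel` and its `step` method are hypothetical names, and the "model" is a trivial placeholder that shifts a pixel according to an action, standing in for a large neural network predicting the next frame.

```python
import numpy as np

class ToyWorldModel:
    """Hypothetical stand-in for an autoregressive world model.

    A real world model would use a neural network to predict the next
    frame; here we simply shift the previous frame by the action vector
    to illustrate the interactive, frame-by-frame control loop.
    """

    def __init__(self, height=64, width=64):
        self.frame = np.zeros((height, width), dtype=np.float32)
        self.frame[height // 2, width // 2] = 1.0  # a single "object" pixel

    def step(self, action):
        """Generate the next frame conditioned on the previous frame + an action."""
        dy, dx = action
        self.frame = np.roll(self.frame, shift=(dy, dx), axis=(0, 1))
        return self.frame

# Autoregressive rollout: each frame depends only on the previous frame
# and an incoming action (e.g. a robot command), as in the description above.
model = ToyWorldModel()
actions = [(0, 1), (0, 1), (1, 0)]  # right, right, down
frames = [model.step(a) for a in actions]
```

The key property illustrated is that generation is sequential and steerable: an external controller can inject a new action before every frame, which is what makes such models usable for interactive simulation or robot training.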
Right after Runway's pivot to world models, NVIDIA invested in the company for the third time, once again voting with real money and signaling its continued optimism about world models.
The Popular World Model
In fact, as early as 2024, NVIDIA made two investments within just 42 days, backing Waabi and Wayve in succession, both of which apply world-model technology. Both are autonomous driving companies.
Waabi focuses on driverless trucks and built Waabi World, a closed-loop AI simulator that automatically generates traffic scenarios for training its algorithms. Wayve is testing robotaxis and has developed the GAIA (Generative AI for Autonomy) series of models, which train its AI driver by generating video.
NVIDIA did not stop at investing. It quickly entered the field itself, releasing NVIDIA Cosmos at CES in January 2025. Cosmos supports multimodal input, including 2D video, 3D data, and LiDAR point clouds, and generates realistic, physics-consistent video that can be used to train robot algorithms.
Subsequently, several autonomous driving companies shared their own world-model results. At a seminar during ICCV 2025, Ashok Elluswamy, Tesla's vice president of FSD, revealed that although Tesla has millions of vehicles on the road collecting large amounts of data, 99% of it comes from simple scenarios, leaving Tesla short of data on extreme cases. Tesla therefore built a world simulator that lets developers generate the videos they want from prompts, or modify existing videos, and then uses the output video data to train FSD.
Li Auto shared related results during the same period, saying it has combined a cloud-based generative world model with its on-vehicle VLA model to close the training loop, taking a step toward L4.
The entry of industry leaders has kept world models hot. At the beginning of 2026, WeRide, "the first Robotaxi stock," released WeRide GENESIS, which can endlessly generate, replay, and adjust all kinds of edge scenarios, and supports quantitative evaluation and problem diagnosis.
This week, Waymo also unveiled a world model built on Google's Genie 3, which can generate not only videos of various driving scenarios but also LiDAR point clouds.
With multiple giants investing and many well-known players entering the field, from autonomous driving to robotics, world models have become a key path to realizing physical AI. NVIDIA, which lit the fire of physical AI, is now backing a freshly pivoted Runway, and that may inspire other AI video players to transform as well.
Today, we can already use AI to generate realistic videos. In the future, we may be able to use AI to generate a realistic world.
This article is from the WeChat official account "Intelligent Vehicle Reference" (ID: AI4Auto). The author is Yifan, and it is published by 36Kr with authorization.