
Stanford University has roughly 0.1 GPUs per person. Academia's computing power has been "slaughtered," and Yann LeCun is getting anxious.

新智元 · 2025-12-09 11:23
There is a severe shortage of academic GPUs.

Faced with industry's brute-force aesthetic of hundreds of thousands of GPUs, academia is turning into a computing-power "slum." When universities average fewer than 0.1 GPUs per person, the battle for leadership in AI research may already be decided.

The GPU shortage in academia is a hundred times more serious than expected!

During NeurIPS 2025, two YC heavyweights hosted a dinner, inviting 14 professors from top university laboratories in the United States.

Unexpectedly, many people at the dinner complained that the computing resources in academia were simply "miserable"!

Out of curiosity, Francois Chaubard dug into the data, and the results were incredibly absurd...

Here are the situations of top university laboratories in the United States:

· Princeton: 0.8 GPUs per person

· Stanford: 0.14 GPUs per person (only 248 H100s are available in the supercomputing cluster Marlowe)

· Harvard, UW, CMU: all between 0.2 and 0.4 GPUs per person

· Caltech, MIT, UC Berkeley: Less than 0.1 GPUs per person

These days, doing respectable AI research takes at least 1 GPU per person; realistically, it takes at least 8 to get anything serious done.

And the comparison only makes it hurt more.

Right now, the frontier labs at the world's top companies routinely start at hundreds of thousands of GPUs.

Take Microsoft's Fairwater Atlanta data center as an example. Its current computing power can complete the equivalent of 23 GPT-4-scale training runs every month.

In other words, the first-generation GPT-4 took 90 to 100 days to train; over that same roughly three-month window, this data center could repeat the run about 70 times (23 runs a month over about three months).

With a data center of this size, a lab can dramatically scale up both the size and the frequency of its preliminary experiments and final training runs.

By the end of 2026, Musk's Colossus 2 is likely to more than double these numbers.

By the end of 2027, Microsoft's Fairwater Wisconsin is expected to complete more than 225 GPT-4-scale training runs per month.

Musk's xAI, meanwhile, is training Grok 5 on the monster cluster "Colossus 2," which wires together GPUs on the scale of millions.

There is a severe shortage of academic GPUs

In a fireside chat in 2024, Fei-Fei Li admitted that "Stanford's NLP laboratory only has 64 GPUs."

Academia is facing a cliff-like decline in AI computing resources.

Meanwhile, a survey in Nature pointed to an "AI computing power gap," revealing the same heart-wrenching reality:

When it comes to training AI models, the computing resources available to academic scientists are not in the same league as those in the industrial sector.

The figures at the start of this article are further confirmation that university GPUs fall far short of what large-scale AI experiments require.

The picture is much the same in both the United States and China.

In a popular Reddit post, a PhD student said he had no access to a single H100, and compute had become the main bottleneck holding up his project.

Moreover, according to a Uvation survey, GPUs are becoming increasingly important in university courses and teaching, reshaping how students learn computer science and engineering.

As the survey's table shows, GPU-related courses are now required at Stanford, MIT, and the University of Oxford.

The GPU shortage in academia is no small matter, and its effects will cascade like dominoes.

Professor Yiran Chen of Duke University has pointed out that, as the gap in computing and data resources between industry and academia widens, AI researchers no longer regard university faculty positions as their goal.

That means top talent will drain into industry even faster in the years ahead, all for lack of GPUs.

On the other hand, with so few GPUs, academia struggles to test big ideas and is gradually losing the ability to define the cutting edge.

In the 2025 Stanford AI Index Report, a graph clearly shows this trend.

Tech giants such as Google, Meta, Microsoft, and OpenAI produce far more influential AI models than academia.

AI expert Sebastian Raschka said that the shortage of resources is just one of the problems.

Another problem is that these resources are usually only accessible through SLURM (or a similar scheduling system), and there is no interactive mode at all.

Unless you already know exactly what experiments to run and how long they will take, going through this process is simply torture. It's really difficult to conduct research under such conditions.
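For readers unfamiliar with this workflow, the sketch below shows what non-interactive, batch-only access typically looks like from Python, here using the open-source submitit library to submit a job to a SLURM cluster. It is a minimal illustration under assumed defaults: the partition name, GPU count, time limit, and the train function are placeholders, not details from the article.

```python
# Minimal sketch of batch-only SLURM access via the open-source submitit
# library: the experiment is submitted to a queue and runs whenever the
# scheduler grants resources; there is no interactive session.
import submitit


def train(config: dict) -> float:
    # Placeholder experiment: a real project would build the model here,
    # train it, and return a validation metric.
    return 0.0


executor = submitit.AutoExecutor(folder="slurm_logs")  # logs and job state
executor.update_parameters(
    slurm_partition="gpu",   # placeholder queue name; cluster-specific
    gpus_per_node=1,
    timeout_min=24 * 60,     # many university clusters cap jobs at 24 hours
)

job = executor.submit(train, {"lr": 3e-4, "epochs": 10})
print("queued as SLURM job", job.job_id)
result = job.result()        # blocks until the queued batch job finishes
```

The point of the sketch is the shape of the workflow: you have to decide the whole experiment up front, submit it, and wait for the queue, with no way to poke at a live GPU session along the way.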

Moreover, university GPUs cannot simply be used whenever, or for as long as, you need them.

Netizen Lucas Roberts said he spoke with a professor in Texas last month. The professor told him that the school's GPUs can only run jobs for a maximum of 24 hours at a time: when time is up, you have to checkpoint your progress and re-queue for the next slot.

Later, the professor finally secured funding to buy a few GPUs for his lab, and only then could he run jobs without interruption.

As far as he knows, this mandatory 24-hour cutoff is quite common at other universities as well; the pattern it forces is sketched below.
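Here is a minimal sketch of that checkpoint-and-requeue pattern, written in PyTorch. The tiny model, optimizer, epoch count, and checkpoint path are illustrative stand-ins, not details from the professor's actual setup.

```python
# Minimal sketch of checkpoint-and-requeue training under a 24-hour walltime.
# Model, optimizer, epoch count, and checkpoint path are placeholders.
import os

import torch
import torch.nn as nn

CKPT_PATH = "ckpt.pt"

model = nn.Linear(512, 10)  # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# If a previous 24-hour slice saved state, resume from it.
start_epoch = 0
if os.path.exists(CKPT_PATH):
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_epoch = state["epoch"] + 1

for epoch in range(start_epoch, 100):
    # ... one epoch of training happens here during this job's time slice ...
    torch.save(
        {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "epoch": epoch,
        },
        CKPT_PATH,
    )
# When the walltime expires the scheduler kills the job; the re-queued job
# reruns this script, finds ckpt.pt, and picks up where training stopped.
```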

LeCun, however, immediately pushed back on this bleak picture, revealing that NYU has the largest GPU cluster of any academic institution in the United States.

The specific number: 500 H200s, larger than Princeton's.

Some universities are building their own AI factories

Some universities, however, are in better shape.

Jindong Wang, a former senior researcher at Microsoft Research and an assistant professor at the College of William & Mary, said that each student in his lab is equipped with 6 GPUs, with a cloud cluster available on top of that.

Dan Roy, research director at the Vector Institute and a professor of statistics and computer science at the University of Toronto, said they equip each student with 1 GPU.

More generous schools have gone further: the University of Texas at Austin purchased more than 4,000 Blackwell GPUs outright for its own AI infrastructure.

Counting its existing hardware, UT Austin will have more than 5,000 NVIDIA GPUs in total.

Reportedly, the systems are even powered by the university's own power station.

These NVIDIA GB200 systems and Vera CPU servers will join "Horizon," the largest academic supercomputer in the United States, giving UT Austin the most powerful AI computing capacity in academia.

This level of computing power means UT Austin is fully capable of building an open-source large language model from scratch.

Coincidentally, California Polytechnic State University is also launching an "AI factory" powered by NVIDIA DGX systems.

It is equipped with four NVIDIA DGX B200 systems and integrates high-performance storage, networking, and NVIDIA's full AI software stack.