Amazon and Meta Conduct Massive Layoffs: Chinese Tech Giants Start Stockpiling GPUs Instead of Product Managers
News of layoffs in Silicon Valley has flooded everyone's feeds. R&D staff in non-core positions and product managers are being cut, and at the same time the companies doing the cutting are redirecting the savings into purchasing and stockpiling GPUs.
I believe this trend will reach China soon, perhaps by the end of 2025. The core issue is that the increasingly capable large AI models running on those GPUs are now genuinely more reliable than product managers at coding, prototype design, and writing PRD requirement documents.
For example, I now use AI to complete product prototypes, PRDs, and technical research, and I open prototyping tools like Axure and Modao, whether foreign or domestic, less and less often.
And there's no need to worry about AI "slacking off." So product managers on non-core teams at large companies may want to prepare in advance; all one can do is one's best and leave the rest to fate.
I think this will be a trend that starts at large IT companies and gradually spreads to traditional industries and every walk of life. So what can we, as ordinary front-line product managers and IT practitioners, do?
1. First, consider what the ChatGPT6 and ChatGPT7 large models will be able to do
Before reading on, first consider what ChatGPT6 might be able to do. I once wrote an article arguing that ChatGPT6 may be capable of self-evolution, since the author of the relevant research has joined OpenAI:
ChatGPT6, a large model capable of autonomous evolution
What will ChatGPT7 be able to do? That's hard to estimate. But OpenAI's Altman has said he hopes ChatGPT8 can cure cancer. Looking back from that vantage point, I believe every position and profession will lose its current meaning, and this is simply a fact.
2. Large companies in Silicon Valley don't lack chips; they lack electricity
Ironically, what Silicon Valley's large companies lack is not chips but electricity. So while stockpiling graphics cards, they are also building power plants of every kind, and nuclear and solar have become boom industries. Musk, for example, has even proposed running GPU compute in Earth orbit to cut electricity costs.
China has the opposite problem: no shortage of electricity, but a shortage of GPUs. Even so, electricity remains a major expense for large companies, especially those choosing domestic GPUs. Domestic cards not only must be deployed in larger numbers to deliver the same matrix-computation throughput, but are also more expensive per unit.
The state has now introduced electricity-fee reductions for technology companies' AI computing centers, indirectly supporting the development of domestic GPUs. And thanks to its vast territory, China's electricity costs are already relatively low.
3. With GPUs and large models in place, users themselves must become ever more refined; basic skills are commonplace
Currently, ChatGPT5 remains the large model with the strongest all-around capabilities. In a survey I ran some time ago, only about 10% of product managers used the paid version of GPT, which costs nearly 160 yuan a month; the rest used free models. And mind you, these are product managers who follow me. Among the general public, the proportion may be below 0.1%.
So few people really understand what the $20 ChatGPT5 can do, and fewer than 1% use the $200 version.
Regrettably, on a cost-benefit basis, I haven't sprung for the $200 version of ChatGPT5, which runs to nearly 2,000 yuan. I use the $20 version, and it has become an essential assistant in my life; I feel anxious without it. It helps with everyday medical questions, work skills, and daily living.
And I believe that, at the doctoral level at least, tasks like literature reviews can now be completed with ease, as can the technical research and solution-architecture work that product managers are asked to do.
4. The prototypes, UI designs, and even code generated by large AI models don't yet cover spatial computing
Fortunately, because of limited training corpora, the large AI models on the market today cannot yet produce application interfaces for spatial computing, such as those for Vision Pro or Android XR; their output is still essentially flat. In spatial computing, however, many spatial operating systems can render flat applications. So while some functions remain out of reach, the coding and assembly of spatial applications can still be completed.
5. Product managers, designers, and programmers with 1-3 years of experience will mostly be laid off
Product managers and product assistants with 1-3 years of experience in prototype design, requirement analysis, and PRD writing will mostly be laid off, and the same goes for other professions.
If, as an interviewer, you've seen the UI designs generated by paid models, you'll know they almost always surpass the portfolios of candidates with 1-3 years of experience. The only caveat is that they may rely on templates and need their structure assembled by hand, but they are perfectly usable.
Product managers who can plan a business model and design a product architecture before the product exists will remain scarce, because that ability is built on a large store of real failure cases, and engineering experience in particular.
This is hard for AI training to reproduce, because a project is only perfected under its own particular timing, place, and people.
One thing must be clear: in real R&D projects, product managers are the first to learn the requirements and the ground truth, and they then choose what to tell the large AI model.
Just as when you write a prompt, you already have in mind the effect you want.
There are also systemic requirements arising from board decisions, shareholder discussions, and even company strategy. Translating these, at the strategic level, into functional requirements across multiple product lines and into new product architectures is the core competitiveness of AI product managers, and what makes them hard to replace.
Ultimately, humans decide whether and how to use AI. The prompts product managers write are themselves the product of product-architecture thinking, not random inputs.
6. All-in companies with both GPUs and large models
I wrote some time ago that Alibaba has become an AI company and an NVIDIA rolled into one: it has not only large models but also its own GPUs from T-Head. In China there is now a trend of going all-in on GPUs from companies like T-Head, Moore Threads, Huawei's Kunpeng series, and Cambricon. However large the gap with NVIDIA, domestic vendors need the support, so in the short term they are all promising.
After the Yunqi Conference, Alibaba has officially become China's "OpenAI + NVIDIA."
Companies like Alibaba, with the leading Qwen 3.0 model, their own GPUs, and Alibaba Cloud, have integrated the entire upstream and downstream of large-model usage; essentially all they still need is cheaper electricity. T-Head sells cards to Alibaba Cloud, Alibaba Cloud sells computing power to users, and the cards are thereby monetized. This lets Alibaba avoid competing with NVIDIA head-on while continuously improving its GPUs' hardware capabilities.
7. What to teach children in the future
Children born now who haven't yet started primary school will witness the arrival of ChatGPT6. As brain-computer interfaces (BCI) and MR glasses mature, mobile phones will gradually be replaced, and schools as fixed teaching venues will fade away, because teaching can be done through spatial computing.
Barring intervention by national policy, primary and middle school students born in the next 5-10 years will grow ever more lonely and self-centered. This is an objective trend that cannot be avoided: with large models and spatial computing, they can achieve complete self-satisfaction through human-machine interaction, yet cannot satisfy other humans.
So what we should teach children is closer to what Lucy asks the professor in the film "Lucy": "What can I do now?" The professor replies, "Reproduction and inheritance."
The only personalized knowledge and skills children will learn will come from their parents: their values, character, and taste. Everything else can be taught by AI, MR glasses, and humanoid robots, but a parent's unique qualities cannot.
This article is from the WeChat public account "Kevin's Little Bits of Changing the World" (ID: Kevingbsjddd), written by Kevin's Stories. It is published by 36Kr with authorization.