
12 Key Q&As about the Technology Industry in 2026 (AI, Autonomous Driving, Robotics, World Models, US Stocks...)

Silicon Valley 101 | 2026-01-14 15:23
What will happen in the technology industry in 2026?

Our three-hour New Year live stream is here! For this front-line Silicon Valley debate, we invited six leading figures spanning AI, autonomous driving, and US stock investing to discuss the outlook for the technology sectors people care about most in 2026. This is not just a set of predictions but a dissection of the "non-consensus."

Core Insights (Live Highlights)

  • "Decentralization" of the AI Paradigm: The "DeepSeek Moment" in 2025 marked that large models are no longer monopolized by the top five companies. In 2026, computing power will no longer be the only threshold, and data curation and system - level scaling will become the deciding factors.
  • The Game of Tech Giants: "Self - developed Models" vs "Application - First": The path dispute at Meta reflects the strategic differences among tech giants. In our discussion, Zhang Lu believes that underlying model technology is the "electricity" in the AGI era and must be independently controlled to ensure independence; Howie argues that the shelf - life of models is short, and large companies should leverage their advantages in the application layer rather than getting stuck in infrastructure construction.
  • End - to - End (E2E) Dominance: During the San Francisco power outage, Tesla was able to operate normally, while Waymo was at a loss, causing a city - wide traffic jam. The advantage of Tesla's end - to - end solution was evident: in the complex physical world, the rule - based "tram" solution is facing challenges.
  • Reconfiguration of the Computing Power Landscape: US stock investment is shifting from "pure GPU belief" to "ASIC efficiency." The confrontation between Google's TPU camp and NVIDIA is essentially a game between inference cost and versatility.
  • Faith Prevails over Valuation: Whether the AI bubble bursts does not depend on how much money is burned but on whether the belief that "model intelligence doubles every three months" is lost. In 2026, the IPOs of OpenAI and SpaceX will be the touchstone for global capital.

01 AI in 2025 and 2026: Technical Consensus and Non-Consensus

Host: Chen Qian (Co - founder of Silicon Valley 101)

Guests:

Howie Xu ("Teacher Xu from Silicon Valley"): a veteran Silicon Valley technology executive and Chief AI Innovation Officer at Gen, a Fortune 500 company

Zhang Lu: Founding partner of Fusion Fund

Key Question 1: Which "non-consensus" event of 2025 shocked the industry the most?

Chen Qian:

Looking back at 2025, Meta's large-scale layoffs, restructuring, and loss of core talent were very dramatic. I never expected Yann LeCun's departure and Silicon Valley's talent war to stir things up to this extent. As insiders, which events exceeded your expectations?

Zhang Lu:

Meta's decline was truly unexpected. I was very optimistic about the open-source ecosystem in 2023 and 2024, believing Meta could lead the open-source movement to new heights. That said, the most pleasant surprise of 2025 was how quickly large enterprises changed their perception of AI.

In 2024, people were still debating whether the Scaling Law was the universal key; by 2025 they had realized it does not solve every problem. Companies are becoming more practical and realistic about industry deployment: they don't need the most expensive, most capable model for every task, and many are focusing on small language models instead.

Then, through the "cocktail approach" I have mentioned before, local fine-tuning can be carried out for vertical deployment across industries, especially those with stricter data-privacy regulation. I remember that at the Davos Forum and the JP Morgan Healthcare Conference at the beginning of the year, global leaders and Fortune 500 companies no longer asked "whether to use AI" but "how, and how much": how much budget should we allocate? This shift from the abstract to the practical was extremely rapid.

Even though 2025 was busy and exhausting, it was an exhilarating state to be in. It was not a single AI narrative but layered growth: AI-native companies plus AI-empowered industries such as healthcare and finance, along with space technology and defense technology. It was a very exciting year of growth.

Source: llm-stats

Howie:

I have to mention the "DeepSeek moment." The capability actually emerged in 2024, but the explosion of reasoning models in 2025 confirmed it as a systemic trend, and its impact far exceeded expectations. At the beginning of the year, people treated DeepSeek's emergence as a geopolitical story, or as a few-hundred-billion-dollar drop in NVIDIA's market value. But as the year went on, it became clear that its real impact is this: building large models no longer has to be the monopoly of the top five companies. Beyond OpenAI and Anthropic, the new labs founded by departing scientists, such as SSI and Thinking Machines, are all pushing forward. That means real research may be happening outside the large companies, and these labs have a genuine shot at a 0-to-1 breakthrough like the one OpenAI made in 2019.

Key Question 2: Has the Scaling Law really hit a wall? Will a "new god" emerge in 2026?

Chen Qian:

In the first half of the year, people said that the Transformer had "hit a wall," and some even said the Scaling Law had "hit a wall." Then, when Google's new-generation Gemini came out, people decided the Scaling Law still had room to run. What can we expect in 2026?

Howie:

I'm a firm optimist; I think the Scaling Law is still very much alive. Many people say the data has been used up, which is nonsense. It's not about the quantity of data but about how carefully you select it. What counts as good data? What proportion should each kind of data get? The permutations and combinations there are endless. Google Gemini's breakthrough came not because there is more data on the Internet but because it pushed data organization, cleaning, and matching to the extreme. The same goes for computing power. Musk built a GPU cluster on the million-card scale in Austin, but the devil is in the details: how do you interconnect the cards, and how do you solve fault tolerance and bandwidth? If we look back a few years from now and find there was still 10x of scaling headroom left, I won't be surprised at all. We are still far from the end on algorithms, computing power, and data.

Zhang Lu:

I agree that the Scaling Law holds, but it is no longer the only growth path. In 2026, the competition will shift from "simply piling up data" to "system-level scaling." Google's advantage lies in being a "system-centric" company: it has a deep talent pool in DeepMind, an architecture mapped out years in advance, and a closed loop of real-world user feedback. Gemini's impressive performance is really the result of combining system-level optimization, a data-quality feedback loop, and product feedback better than anyone else. By contrast, purely model-centric companies will face greater cost pressure in 2026, and we will see more correction of the old habit of mindlessly piling up data.

Key Question 3: Meta's "life-and-death debate": Should it stick to large models or return to applications?

Chen Qian:

There is a very dramatic twist here. Recently, Meta spent $2-3 billion to acquire the Chinese team Manus, closing the deal in just over ten days. You two had a lively debate about Meta's situation in 2026. Let's dig into it.

Zhang Lu:

I'm quite disappointed with Meta right now. Llama 3 was very impressive when it came out, and we were very optimistic about the open-source ecosystem. But Llama 4 fell far short of expectations. The core reason is that internal strategic adjustments pushed the company toward the product side too early, and deep work on reasoning ability was neglected.

Source: LinkedIn

The Manus team has strong execution, but my question is: does Meta currently lack application capability or model capability? Personally, I think it needs model capability more. If you are a first-tier giant and your model can't even rank in the top three, that's very painful. I believe Meta must stick with large models. If artificial intelligence is the "electricity" of the future, then a company of Meta's scale that doesn't control its own "electricity" is in a dangerous position. The relationship between Meta and Apple has always been tricky, and Meta has long wanted to get out from under Apple. If it can gain independence at the foundation layer through this AI wave, that is a real opportunity for Meta. So even if it has stumbled or fallen behind because of strategic adjustments, it has to catch up, because this is its foundation for survival.

Howie:

I completely disagree with Zhang Lu! Why does Meta have to build large models at all? What it's good at is building applications. As a world-class company, it's fine to retain some research capability, but I don't think it has to build foundation models itself. Can't it just buy APIs from OpenAI or Anthropic?

I'm disappointed not only with Meta but also with Microsoft and Apple, yet my disappointment is not that they failed to build models but that they failed to do well on applications. The "shelf life" of models is simply too short. So what if you're number one in the world today? Take a six-month nap and you fall behind. I don't think this is a race against the clock. Even if Meta focuses only on applications in 2025 and then buys or invests in a model company when it sees the right opportunity in 2026, it is still not too late. Missing one year will not kill it.

Zhang Lu:

You say it can buy models, but at Meta's scale, who would sell to it? And if you don't master the underlying technology, where is your business security? None of these Silicon Valley companies are close partners with one another. If the "electricity" is in someone else's hands, how does Meta ensure its long-term independence? This is not just a question of efficiency; it's the ace in the hole for survival. Everyone saw how badly Meta was hurt by Apple's privacy policy in the past. It certainly doesn't want to be at someone else's mercy again in the AI era.

Howie:

Think about how short the shelf life of today's models is. Mark Zuckerberg, for example, was interested in AI back in 2013, but he didn't rush to bet all his resources on it then; he started with acquisitions and external collaborations. I think it's not too late to survey the model market landscape in 2026 or even 2027. The most important thing now is to let users feel the benefits of AI inside Meta's products, rather than agonizing over whether the underlying model is trained in-house. Even if it can't buy a model company now, it can wait for the market to mature and then position itself. This fast-follower strategy has proven effective throughout history; there is no need to fight for first place during the most volatile, most cash-burning stage of a technology.

Key Question 4: Where will AI's "killer app" appear in 2026?

Chen Qian:

Many people called 2025 the year of AI applications, but in reality there doesn't seem to have been an explosion. People are still looking for that "killer app."

Zhang Lu:

I'm actually very optimistic; I can see the surge building beneath the surface. For example, I invested in a medical technology project and even wired money to the company on Christmas Day. As we say, "innovation never sleeps." Why don't people feel the impact of AI applications strongly yet? Because the most intense explosion of this application wave is happening on the B side, at the enterprise level.

Let me give you the most striking detail: JP Morgan Chase's AI budget this year exceeds the combined AI budgets of the other nine banks in the global top ten, and the ROI on investment at that scale is remarkable. Three startups we invested in are now working with JP Morgan Chase; the fastest completed its POC (proof of concept) in a few weeks and signed very large commercial orders within a few months. That was unimaginable in the SaaS era, when it was normal for a deal to take a year to close. So in 2026 people will see a real, revenue-level explosion of vertical agents in healthcare, finance, and insurance.

Howie:

I think we should be rational about this "year one of applications" label. The killer app of each technological era usually appears a few years after the technology matures. Think about the mobile Internet: the iPhone launched in 2007, but when did world-changing applications like Instagram and Uber appear? In 2010 or even later.

So far, I've only seen two applications that are genuinely hardcore and generate a real productivity premium. One is Vibe Coding (AI-native programming): not just programmers but many non-technical people can now build complex products with Cursor or Claude Code. It is the only real "productivity dividend" at the moment. The other application I've been watching closely recently is "Browser Use," or the "AI browser," which directly changes the way you interact with information.