
Silicon Valley godfather Marc Andreessen's interview at the beginning of 2026: The AI revolution has just begun, and the price of intelligence is collapsing.

品玩Global · 2026-01-12 08:36
After 80 years of technological bets, the wager has finally paid off.

On January 7, 2026, Marc Andreessen, co-founder of a16z (Andreessen Horowitz), gave an 81-minute in-depth interview on his podcast The a16z Show. As co-creator of the Mosaic browser, co-founder of Netscape, and one of Silicon Valley's most influential investors, Andreessen rarely lays out his views on the AI industry this systematically, but in this AMA (Ask Me Anything)-style conversation he did.

This was not a routine forward-looking talk. Andreessen started from the 1943 academic paper on neural networks and placed the rise of AI within 80 years of technological evolution. He detailed the breakthroughs of Chinese AI companies and admitted that DeepSeek and Kimi "surprised Silicon Valley". He predicted that the GPU shortage will turn into overcapacity within a decade, and that the unit cost of AI will "drop like a stone".

The most crucial judgment is: This is the biggest technological revolution he has ever seen in his life, even bigger than the Internet, and can be compared with electricity and the steam engine. But we've just begun.

The following is a compilation of the core content of this interview.

Original link: https://www.youtube.com/watch?v=xRh2sVcNXQ8

1. This is the biggest technological revolution, and we're only in the first inning

Host: Marc, which inning are we in of the AI revolution? What excites you the most?

Andreessen: First of all, I have to say this is the biggest technological revolution in my lifetime. In terms of magnitude, it's obviously bigger than the Internet. Comparable things are the microprocessor, the steam engine, and electricity.

Why do I say so? If you go back to the 1930s, before the invention of the computer, scientists had already understood the theory of computation. There was a big debate at that time: Should the computer be built in the image of an "adding machine" (cash register), or based on the model of the human brain?

Ultimately, the industry chose the former. This is the computer industry we've been in for the past 80 years - building literal mathematical machines that can perform billions of mathematical operations per second but can't understand speech and language like humans.

But in 1943, the first academic paper on neural networks was published.

You can find a 1946 interview with the author, Warren McCulloch, on YouTube. He was at a seaside villa, shirtless for some reason, talking about a future in which computers would be built on the model of the human brain through neural networks. That was the path not taken.

The idea of neural networks was explored by a small group of people for 80 years. Essentially, it didn't work: decade after decade of over-optimism followed by disappointment. When I was in college in the 1980s, AI was a niche field, and everyone assumed it would never pan out.

Then came the ChatGPT moment. That was less than three years ago, at the end of 2022. Suddenly, everything became concrete. The great news about this technology is that it is extremely democratic: the world's best AI is directly accessible through ChatGPT, Grok, and Gemini.

I'd say: I'm basically surprised by what I see every day.

Every day, I come across a new AI research paper that completely shocks me - it's some new ability or some new discovery that I never expected. On the other hand, we see the emergence of all these new products and new startups.

The revenue growth being achieved by this new wave of AI companies - I'm talking about real customer revenue - is taking off at an absolutely unprecedented speed. The revenue growth rate of leading AI companies is definitely faster than any wave I've seen before.

From all of this, it feels like it's definitely still in the early stages. It's hard to imagine that we've reached the peak. I highly doubt that the product forms people use today will be the same in five or ten years. I think we probably still have a long way to go.

2. The cost of intelligence is "hyper-deflating": faster than Moore's Law

Host: One of the biggest doubts is that, yes, the revenue is huge, but the expenditure also seems to be growing at the same pace. So what are people missing in this discussion?

Andreessen: Let me start with the business models. There are basically two core business models in this industry: the consumer model and the enterprise infrastructure model.

On the consumer side, we live in a very interesting world: the Internet is fully deployed. Five billion people on Earth use some version of mobile broadband Internet, and smartphones are available worldwide for as little as $10.

I'm saying all this because consumer AI products can be deployed to essentially all of those people as fast as they choose to adopt them. The Internet is the carrier wave that lets AI spread to a huge global population at the speed of light. You can't download electricity, indoor plumbing, or a TV set, but you can download AI.

This is what we're seeing. The growth rate of consumer-grade killer AI applications is amazing. The monetization ability is very good, even at higher price points. It's now common for consumer AI to have monthly price tiers of $200 or $300, which I think is very positive.

On the enterprise side, the question is basically: How much is intelligence worth?

If you have the ability to inject more intelligence into your business, such as improving customer service scores, increasing upsells, reducing customer churn, or running marketing campaigns more effectively - all of these are directly related to AI. The revenue growth of leading AI infrastructure companies is incredibly fast.

The core business model is basically "buying tokens by the drink" - that is, how many intelligence tokens you can buy per dollar.

Here's the key: What's happening to the price of AI? It's dropping faster than Moore's Law.

The cost of every input to AI is collapsing on a unit basis. The result is hyper-deflation of the unit cost, and because demand for intelligence is highly price-elastic, falling prices unlock demand growth that outpaces the price decline.

There's no doubt that the price of "buying tokens by the drink" will drop significantly from here. This will only drive huge demand. Then everything in the cost structure will be optimized.
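The "faster than Moore's Law" claim can be made concrete with back-of-envelope arithmetic. A minimal sketch, assuming an illustrative 90%-per-year decline in token prices (a hypothetical figure, not one Andreessen cites) against Moore's Law's classic halving every two years:

```python
# Back-of-envelope comparison: Moore's Law vs. an assumed token-price decline.
# The 90%/year token-price figure is an illustrative assumption from this note,
# not a number given in the interview.
import math

def years_to_fall(factor: float, annual_decline: float) -> float:
    """Years for a unit price to fall by `factor`, given a fractional annual decline."""
    return math.log(factor) / -math.log(1 - annual_decline)

moore = 1 - 0.5 ** (1 / 2)   # halving every 2 years => ~29% decline per year
tokens = 0.90                # assumed: token prices fall 90% per year

print(f"100x cheaper under Moore's Law: {years_to_fall(100, moore):.1f} years")
print(f"100x cheaper at 90%/yr decline: {years_to_fall(100, tokens):.1f} years")
```

Under these assumptions, Moore's Law takes roughly 13 years to make something 100x cheaper, while a 90%-per-year decline gets there in 2, which is the gap the "hyper-deflation" framing is pointing at.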

3. From shortage to surplus: The fate of GPUs is sealed

Host: Actually, AWS says the service life of the GPUs it uses can now exceed seven years.

Andreessen: Yes, this is a very important observation. It also leads to another question - the debate between large models and small models.

Many data centers are being built around hosting, training, and serving large models. But at the same time, a small-model revolution is also taking place.

If you track frontier-model capabilities over time, you'll find that 6 to 12 months later a small model appears with the same capabilities. So there is a kind of capability chase going on, in which what the large models can do gets compressed into a much smaller package.

Let me give you a recent example from the past two weeks. A Chinese company produces a model called Kimi. The new version of Kimi is a reasoning model that, at least on current benchmarks, essentially replicates the reasoning capabilities of GPT-5.

A GPT-5-level reasoning model is a big step up from GPT-4 and extremely expensive to develop and serve. Suddenly, you have an open-source model called Kimi that can be compressed to run on one or two MacBooks.

So suddenly, if you're an enterprise that wants a reasoning model with GPT-5-level capability but doesn't want to pay GPT-5 prices or host it in the cloud, and would rather run it locally, you can.
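Why a frontier-class open model can fit on a laptop at all comes down to quantization arithmetic: weight memory is roughly parameter count times bits per weight divided by 8. A rough sketch, with hypothetical model sizes (the parameter counts below are illustrative, not official figures for Kimi or any other model):

```python
# Rule of thumb for whether an open-weight model's weights fit in local RAM:
#   weight memory (bytes) ~= parameters * bits_per_weight / 8
# This ignores KV cache and runtime overhead. Model sizes are hypothetical.

def weight_gb(params_billion: float, bits: int) -> float:
    """Approximate gigabytes of weight memory for a model."""
    return params_billion * 1e9 * bits / 8 / 1e9

for params in (32, 70):           # hypothetical sizes, in billions of parameters
    for bits in (16, 4):          # fp16 vs. 4-bit quantized
        print(f"{params}B @ {bits}-bit ~= {weight_gb(params, bits):.0f} GB")
```

Under these assumptions, a 32B-parameter model quantized to 4 bits needs about 16 GB of weights, which is within reach of a single high-memory MacBook, while the same model at fp16 needs about 64 GB; that is the compression the interview is alluding to.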

Of course, OpenAI will ship GPT-6. So there's a step-up effect: the whole industry keeps moving forward, large models keep getting more capable, and small models keep chasing behind.

Regarding chips, in any market with commodity characteristics, the number one cause of surplus is shortage.

When GPUs, chips, or data center space are in short supply, the history of humans building things in response to demand says one thing: anything that is scarce and can be physically replicated will be replicated.

Right now, hundreds of billions or even trillions of dollars may be flowing into this. So unit costs for AI companies will drop like a stone over the next decade.

Nvidia is a great company and fully deserves its position. But it is now so valuable and so profitable that it has become the strongest possible signal to every other company in the chip industry.

You have other big companies like AMD chasing them. And very importantly, hyperscalers are making their own chips. Many big tech companies are making their own chips, and of course, the Chinese are also making their own chips.

It's very likely that AI chips will become cheap and abundant within five years, which will be extremely beneficial to the economic efficiency of the kind of companies we invest in.

4. China's AI "supernova moment": The rise of DeepSeek and Kimi

Andreessen: It's a bit of a historical accident that AI runs on so-called GPUs. The classic personal-computer architecture pairs a CPU with a GPU. But it turns out there are two other very valuable workloads that happen to be massively parallel, which makes them a great fit for the GPU architecture.

Those two extremely profitable additional applications are cryptocurrency, which took off about 15 years ago, and AI, which took off about four years ago.

Regarding China, I think what has happened in the past 18 months is quite remarkable. Basically, within less than 12 months, four or five Chinese companies have caught up to the industry frontier.

Most notably, there's a company called DeepSeek. The DeepSeek model is on par with Claude 3.5 Sonnet, GPT-4o, and Gemini in almost all benchmarks. This is really a supernova moment.

Not only does it perform well, it comes from a hedge fund rather than a big tech company, which is completely unexpected. And it's fully open-source: anyone can download and run it.

Then you have Kimi from Moonshot AI, models from ByteDance, Alibaba's Qwen, and Baidu's models. All of them caught up within less than 12 months.

The interesting part is: Once someone proves that something is feasible, it doesn't seem difficult for others to catch up, even those with far fewer resources.

I think several things are happening:

First, open-source plays a huge role. When someone makes a new breakthrough, the open-source model basically says, "This is how it works." Then every smart person in the world can study it and figure out how to replicate it.

Second, the talent in this field is very young. Many of the world's best AI researchers are only 22, 23, or 24 years old. They couldn't have been in this field for a long time - they must have emerged in the past four or five years. If they can do it, more people will be able to do it in the future.

Third, the US and everyone else need to understand what China is doing. China's open-source model strategy has effectively set off a global price war, and that is forcing US policymakers to rethink the direction of regulation.

5. Large models and small models: The birth of an intelligence pyramid

Andreessen: There are some very smart people in this industry who think that ultimately, everything will only run on large models. Because obviously, large models are always the smartest. So you'll always want the most intelligent thing.

The counter-argument is that a lot of tasks in the world don't require an Einstein. A person with an IQ of 120 is good enough; you don't need a string-theory PhD with an IQ of 160, just someone capable.

I tend to think that the structure of the AI industry will be similar to that of the computer industry - you'll have a very small number of things basically equivalent to supercomputers, that is, these giant "god models" running in huge data centers.

Then there will be a cascade downwards, with smaller models, and ultimately, very small models running in embedded systems, on single chips inside every physical object in the world.

The smartest models will always sit at the top, but by sheer count, the small models fanning out below will vastly outnumber them.

This is exactly what happened with computers (they shrank down into microchips), and with operating systems and many other things we build in software.

6. AI startups aren't just "wrappers": Cursor and others are building their own models

Host: Many people question whether AI startups are just "wrappers" around the large models (so-called GPT wrappers).

Andreessen: This is an important question. I'll use Cursor as an example - Cursor is one of the leading AI programming tools.

On the surface, Cursor did start by calling large models. But in fact, it's building its own AI model.

What these leading application companies are doing is "backward integration". They start with one model but soon end up using 12, 50, or more, with different models responsible for different parts of the product.

Why is this? Because they have the deepest domain knowledge. They know better than anyone what their customers need.
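The multi-model pattern described above is, at its core, a routing problem: send each request to whichever backend fits its task. A minimal sketch, in which the model names, stub functions, and the dispatch rule are all hypothetical (a real router would also weigh cost, latency, and quality):

```python
# Minimal sketch of task-based model routing. The "models" here are stubs
# standing in for calls to different hosted or in-house models; names and
# routing rules are hypothetical, not any specific company's architecture.
from typing import Callable

def fast_small_model(prompt: str) -> str:
    return f"[small] {prompt}"

def reasoning_model(prompt: str) -> str:
    return f"[reasoning] {prompt}"

ROUTES: dict[str, Callable[[str], str]] = {
    "autocomplete": fast_small_model,   # cheap and latency-sensitive
    "refactor": reasoning_model,        # expensive and quality-sensitive
}

def route(task: str, prompt: str) -> str:
    """Dispatch a request to the model registered for its task type."""
    return ROUTES.get(task, fast_small_model)(prompt)

print(route("refactor", "extract this function"))   # handled by the reasoning model
print(route("autocomplete", "def parse_"))          # handled by the small model
```

The design point is that domain knowledge lives in the routing table: the application company, not the model vendor, decides which capability each customer-facing task actually needs.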

If you can significantly improve the productivity of doctors, lawyers, or programmers, can you take a share of the increased value? I think you can.

This is why I think AI startups are more creative about pricing than SaaS companies were. High pricing is often good for customers, because it funds better R&D.

7. Which inning are we in? The answer is: We've just begun

Back to the original question: Which inning are we in?

Andreessen's answer is: Very early.

Even though AI has become a global hot topic, even though ChatGPT's user count has climbed into the hundreds of millions, even though AI companies' revenue growth has set historical records, the product forms are still far from mature.

"I highly doubt that the product forms people use today will be the same in five or ten years. I think we probably still have a long way to go."