
a16z values 30 million developers at $3 trillion, equivalent to France's GDP. Netizens: a few startups plus large models want to replace us? Are they crazy?

极客邦科技 InfoQ · 2025-10-30 18:53
a16z quantifies the global developer value at $3 trillion, sparking discussions about the disruptive impact of AI programming.

When a partner at the top-tier venture capital firm a16z smilingly ran the numbers on "human capitalization," quantifying the value of the global developer community at $3 trillion, roughly France's GDP, he was painting a capital-driven vision in which AI programming upends production relations and unlocks enormous value.

He excitedly elaborated in an interview: "There are approximately 30 million developers worldwide. Assuming each creates a value of $100,000... the total would be around $3 trillion, equivalent to France's GDP." He further emphasized that this scale "is also roughly equivalent to the combined value of several startups reshaping the AI software development ecosystem and the large foundational models they use."

He also holds a set of radical judgments about "software disrupting software": traditional CS courses at any top-tier university may become "relics of the past"; LLMs can write COBOL, Fortran, and CUDA kernels, and "with agents, even developers who don't understand CUDA at all can review and improve such code." There is also a trend that might send a chill down developers' spines: software development capability is shifting from "human salaries" to "infrastructure costs" that continuously consume tokens.

However, it should be noted that a16z is, at its core, an investment firm, and its statements largely represent the perspective of capital. When human creativity is converted directly into a macro-level GDP figure, doubt and irony are inevitable. Two netizen comments capture the pushback:

"Right from the start, he reduces the people of an entire industry to a monetary value. Then he equates the economic output of a handful of startups with the output of those people. Next he scales that value up to the GDP of the world's seventh- or eighth-largest economy. All the while he wears a self-satisfied smile, and the host laughs along. Salute to 'technological feudalism': a seemingly dystopian future, indifferent to economic displacement, and yet some people are excited about it. How ironic."

"Helplessly watching the era change without knowing how to adapt: isn't that the biggest crisis? Saying developers don't need to worry about survival, without offering any real evidence, and sounding so certain about it. It leaves me speechless. Are you crazy?"

Guido Appenzeller is a partner at Andreessen Horowitz (a16z), focusing on AI infrastructure and frontier technology investments. He has served as CTO of Intel's Data Platform Division and CTO of Cloud and Networking at VMware, and he founded several startups (including Big Switch Networks and Voltage Security), giving him deep technical and entrepreneurial experience in network virtualization and enterprise security.

What follows is a complete translation of this podcast episode. We have tried to preserve the original meaning and contextual details so you can judge for yourself: is this a productivity revolution worth cheering, or an over-packaged capital narrative? Feel free to share your thoughts in the comments after reading.

The $3 Trillion AI Programming Opportunity

Yoko Li: You mentioned that AI programming is the first truly large-scale application market for artificial intelligence. Can you explain in detail why this is such an exciting business opportunity?

Guido Appenzeller: I really do think that AI programming is the first truly large-scale application market for artificial intelligence, considering that a large amount of investment has already poured in. The current question is where the value lies and why we should get involved.

First of all, AI programming can indeed create huge value. There are approximately 30 million developers worldwide. Assume each creates about $100,000 of value; in the United States the figure is higher, since many salaries are well above that, but roughly $100,000 is a fair average. A rough estimate, then, is that the entire developer community creates about 30 million times $100,000 in value, which is $3 trillion. And these are just professional developers; there are other participants interested in development as well. Design is becoming a first-class part of this: every designer, product manager, and even many writers can now write code. This is definitely a huge impact.

Moreover, the $3 trillion figure is basically equivalent to France's GDP, the value created by the entire population of the world's seventh- or eighth-largest economy. It's also roughly equivalent to the combined value of the handful of startups reshaping the AI software development ecosystem and the large foundation models they use.

Yoko Li: You talked about how software has disrupted everything, but now software itself is also undergoing a large-scale disruption. How is this manifested in practice?

Guido Appenzeller: Indeed, everything we see, touch, and use today is software. Software has disrupted the world, and now software itself is undergoing a huge disruption. More and more, we use language models to write code and generate software. But instead of reducing job opportunities, this has led to more software being produced. In the past, a SaaS service had to meet the needs of hundreds or even thousands of people; now AI programming makes it possible to write code and software on a one-to-one basis. For example, I wrote an email filter just for myself. I don't use language models much to reply to email, but I did build a filtering program that does things like tagging and grouping, of course only for certain emails.
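As a rough illustration of the kind of personal tool Appenzeller describes, here is a minimal sketch of an email tagger. The labels, the `classify_subject` helper (a stand-in for a language-model call), and the use of Python's standard `imaplib`/`email` modules are our own assumptions for illustration; the interview does not describe his implementation.

```python
import imaplib
import email
from email.header import decode_header

# Hypothetical labels and keyword rules; in practice the classification step
# would be a call to a language model rather than keyword matching.
LABELS = {"newsletter": ["unsubscribe", "digest"], "billing": ["invoice", "receipt"]}

def classify_subject(subject: str) -> str:
    """Stand-in for an LLM call that tags an email by its subject line."""
    lowered = subject.lower()
    for label, keywords in LABELS.items():
        if any(k in lowered for k in keywords):
            return label
    return "other"

def tag_inbox(host: str, user: str, password: str) -> dict[str, list[str]]:
    """Group recent inbox subjects by label (read-only; no mail is modified)."""
    groups: dict[str, list[str]] = {}
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX", readonly=True)
        _, data = imap.search(None, "ALL")
        for num in data[0].split()[-50:]:          # last 50 messages only
            _, msg_data = imap.fetch(num, "(RFC822.HEADER)")
            msg = email.message_from_bytes(msg_data[0][1])
            raw, enc = decode_header(msg.get("Subject", ""))[0]
            subject = raw.decode(enc or "utf-8") if isinstance(raw, bytes) else raw
            groups.setdefault(classify_subject(subject), []).append(subject)
    return groups
```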

Yoko Li: With the rise of AI agents, what changes do you think will occur in the development process?

Guido Appenzeller: I think the answer is complex, and it's still too early to discuss. In this AI revolution, we can't get a complete answer at present. Everyone has their own technology stack and software development lifecycle, and every aspect of it is undergoing disruption. It's no longer just traditional developers who write code; every participant in the value chain is experiencing disruption.

If I had to name the area of programming disruption that has impressed me most, it is that traditional IDEs are increasingly integrating programming assistants or agents, such as Cursor, Devin, GitHub Copilot, and Claude Code. These AI development tools have produced incredible revenue growth, almost the fastest in the history of IT startups, and billion-dollar acquisitions and deals happen frequently. In short, it is a very active area.

Yoko Li: You mentioned that the basic development cycle (planning, coding, review) is changing. Do you think this cycle will still exist in the future, or will it change fundamentally?

Guido Appenzeller: I think it's difficult to predict the end result right now, but the world's first email offers a clue: once the first email was sent over the Internet, you could roughly infer that things like websites would eventually appear. To give a somewhat far-fetched example, if everyone rents out their house to compete with hotels, you end up with the largest hotel business in the world; that's Airbnb. These second-order effects are really hard to predict. My personal guess is that software developers will still exist in the future, but their work will be completely different.

Frankly speaking, the computer science curriculum at any top-tier university today may well become a relic of the past. What the best startups are doing, and the development cycles they adopt, are already completely different from what is taught in schools.

Startups will use large numbers of agents and convey context through prompts. This is a huge leap, because the whole creative perspective rises to a higher level of abstraction. I don't know what the future development cycle will look like, but my intuition is that more developers will be observing how plans get executed rather than writing every step themselves.

Yoko Li: As agents take on more and more tasks, how will the role of manual review change? Do humans still need to review code, or can agents handle it themselves?

Guido Appenzeller: I think agents will operate autonomously for longer and longer stretches. But if someone wants to write a complete ERP system for a multinational enterprise, I simply don't believe an agent alone can produce a truly satisfactory result. The problem is that the models are still far from fully autonomous operation. On the other hand, a human team may not understand all the challenges at the start of a project either: human developers examine the design, the architecture, the costs, and so on, and then find that the original plan has problems or that new challenges have appeared. So I think the cycle will still exist, perhaps on a different time scale, but it's really hard to predict right now.

Moreover, we frequently observe another phenomenon: humans are stepping out of the loop and instead providing tools to agents so they know what to do. We used to read the development docs, extract the relevant content, and then tell the agent what to do. Now agents can call APIs themselves to fill in the context, cutting out the middleman. Take verification as an example. When I used to write code or review someone else's, the first thing I did was not to read it. I don't like reading code, so I would fork it, make a change, and see whether it still worked; if it broke, I wouldn't bother reviewing it. Now we have the chance to give agents a native environment in which to check whether the functions still work, the UI still looks right, and all requests still pass.
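To make this concrete, here is a minimal sketch of the kind of check an agent could run in such an environment: execute the test suite and smoke-test an HTTP endpoint before any human looks at the change. The test command, the health URL, and the gating logic are illustrative assumptions, not something described in the interview.

```python
import subprocess
import urllib.request

def run_tests(cmd: list[str] = ["pytest", "-q"]) -> bool:
    """Run the project's test suite; True if every test passes."""
    return subprocess.run(cmd, capture_output=True).returncode == 0

def smoke_check(url: str = "http://localhost:8000/health") -> bool:
    """Hit a health endpoint (hypothetical URL) to confirm the app still serves requests."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def verify_change() -> bool:
    """Gate an agent's edit: only hand it to a human reviewer if both checks pass."""
    return run_tests() and smoke_check()

if __name__ == "__main__":
    print("ready for review" if verify_change() else "reject: checks failed")
```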

All in all, I think that at this stage agents really need a dedicated environment to run in. I used to write small tools for myself, little scripts that solved one specific problem, and I would never have written unit tests for them because they weren't production-level code. But with agents, I now add unit tests so that the agent can tell me whether a change breaks anything else. That is genuinely very valuable.
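For a tool that small, a single test file is often guardrail enough for an agent to check itself against. The slugify helper and the pytest-style tests below are hypothetical examples of what such a guard might look like; they are not from the interview.

```python
# test_slugify.py -- a tiny personal tool plus the regression guard an agent
# can run with `pytest -q` after every edit. Entirely hypothetical example.
import re

def slugify(title: str) -> str:
    """Turn a note title into a safe, lowercase, dash-separated filename."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_spaces_become_dashes():
    assert slugify("Meeting Notes 2025") == "meeting-notes-2025"

def test_punctuation_is_dropped():
    assert slugify("Q3: plan (draft)!") == "q3-plan-draft"

def test_no_leading_or_trailing_dash():
    assert slugify("  hello  ") == "hello"
```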

Yoko Li: In this value chain, which segment has the fastest-growing economic value? What are the killer applications of current AI agents? And in which areas will the next leapfrog point appear?

Guido Appenzeller: Every year I discuss this question with hundreds of enterprises. According to their feedback, the use case with the highest return on investment right now is legacy code migration. Google published a good paper in this area about using LLMs to rewrite old code in an extremely large Java codebase. That may sound mundane, but this way of dealing with legacy technology stacks is very important, for example converting COBOL or Fortran to Java. The word COBOL hadn't come up for years; I wrote COBOL code exactly once, back in the 1990s.

Now, with large language models, legacy code can be rewritten to the specifications an enterprise designates, and the results are quite good. From what I've seen and heard, enterprises mostly say this approach is faster than the traditional process, and they are also hiring more developers. I can't say whether that will become a long-term trend, but there are many long-standing problems in enterprise environments where a small upfront investment saves a large amount of infrastructure cost. The mainframe business, for example, is a very promising area; this wave has suddenly made legacy-code migration much easier. Mainframe programming skills are scarce in the talent market, but now more people can produce such code through natural language. So, seen from another angle, the old low-level legacy languages may even see a revival.
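As a sketch of what such a migration loop might look like in practice, the snippet below walks a directory of legacy source files, asks a model for a translation, and keeps only output that passes a verification command. The `translate_with_llm` placeholder, the `.cbl` file extension, and the verification command are assumptions for illustration; the interview does not describe a specific pipeline.

```python
import pathlib
import subprocess

def translate_with_llm(source: str, target_language: str = "Java") -> str:
    """Placeholder for a call to a code-generation model.

    In a real pipeline this would send `source` plus the enterprise's
    coding specification to an LLM and return the generated code.
    """
    raise NotImplementedError("wire this up to your model provider")

def migrate_file(path: pathlib.Path, out_dir: pathlib.Path, verify_cmd: list[str]) -> bool:
    """Translate one legacy file and keep the result only if verification passes."""
    generated = translate_with_llm(path.read_text())
    out_path = out_dir / (path.stem + ".java")
    out_path.write_text(generated)
    # e.g. verify_cmd = ["./gradlew", "test"] -- compile and run the test suite
    ok = subprocess.run(verify_cmd, capture_output=True).returncode == 0
    if not ok:
        out_path.unlink()  # discard output that fails the checks
    return ok

def migrate_tree(src_dir: str, out_dir: str, verify_cmd: list[str]) -> None:
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(pathlib.Path(src_dir).glob("*.cbl")):   # legacy COBOL sources
        status = "ok" if migrate_file(path, out, verify_cmd) else "needs human review"
        print(f"{path.name}: {status}")
```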

Moreover, I'm quite struck by how capable programming assistants have become: I've seen them write CUDA kernels, which is a very difficult task. I even ran an experiment with a language that has almost no training data available, and the large model was still able to abstract the form of the code. It isn't perfect, but I think the space of applications is very large and broad.

Yoko Li: As AI agents become more and more popular, how is the concept of code review evolving? In the future, will we only need to review plans instead of paying attention to specific lines of code? For example, will two sentences plus a runnable environment be enough?

Guido Appenzeller: You're right. Large models are genuinely good at generating code, and their actual performance often exceeds our expectations. It's also an almost undisputed view that developers spend more time reviewing code than writing it, and humans simply can't review thousands of lines at a time, so there has never been a good way to handle very large PRs. Unless it's a one-off, throwaway result of so-called "vibe coding," code needs to be reviewed line by line. Now, increasingly capable AI tools can analyze and review PRs directly, point out security vulnerabilities, and flag parts that violate the spec, for example warning about dependencies that shouldn't be used and enforcing coding standards.
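A minimal sketch of what such an automated first-pass review could look like: collect the diff for a branch and hand it to a model together with a review checklist. The `review_with_llm` placeholder and the checklist wording are our assumptions for illustration; no specific tool from the interview is being reproduced.

```python
import subprocess

CHECKLIST = (
    "Review this diff. Flag security vulnerabilities, forbidden dependencies, "
    "and violations of our coding standards. Answer as a bullet list."
)

def collect_diff(base: str = "main") -> str:
    """Return the diff between the current branch and its merge base with `base`."""
    result = subprocess.run(
        ["git", "diff", f"{base}...HEAD"], capture_output=True, text=True, check=True
    )
    return result.stdout

def review_with_llm(prompt: str, diff: str) -> str:
    """Placeholder for a call to whatever model provider is in use."""
    raise NotImplementedError("send prompt + diff to your model and return its answer")

def first_pass_review(base: str = "main") -> str:
    """Produce an automated first pass for a human reviewer to skim, not to replace them."""
    return review_with_llm(CHECKLIST, collect_diff(base))
```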

Of course, I haven't seen anyone rely entirely on AI for code review yet, but some enterprises report going from two full-time code reviewers to one by delegating tasks to agents; suddenly it feels like they have more capacity. Human developers mainly provide context and point the agent at the changes that matter. And "code review" isn't quite the right term anymore, since the review can focus on functionality or performance.

But now, with agents, even developers who don't understand CUDA at all can review such code and make improvements through multiple verifications. I don't know if large models can accurately understand how each line of code in a PR works and in what environment it should be tested, but at present, there is hope.

Another very important aspect is that large models are also very good at generating code documentation and descriptions. For example, when programming with a tool like Cursor, I often ask it questions afterwards and then update the internal documentation accordingly; internal docs matter and can be consulted at any time. Of course, we can't always stuff the whole codebase into the context window, that's too inefficient. But we can let the model read the documentation and then implement new subclasses based on it, and the whole process is much faster. Compilers are important because they convert high-level abstractions into low-level ones. With large models, as long as we hand-tune the low-level code, agents can keep the high-level abstractions up to date.

From this perspective, large models are the new compilers of the AI era. The biggest drawback of today's agents is that, unlike compilers, they don't have a native environment in which to verify functionality directly. Over time, though, we may give large models tools to parse and reason about code at the syntax level, for example to determine whether object X is represented the same way across different serialized modules. And agents can do many other things, from the social side (keeping track of what other developers are doing) to software distribution.
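One way such a syntax-level tool could work is sketched below with Python's standard `ast` module: compare the field names a class assigns to `self` in two modules and report whether they match. The idea of checking "whether object X is represented the same way" is from the interview; the concrete check is our illustrative assumption.

```python
import ast

def class_fields(source: str, class_name: str) -> set[str]:
    """Collect the attribute names assigned to `self` anywhere in the class body."""
    fields: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            for sub in ast.walk(node):
                if isinstance(sub, ast.Attribute) and isinstance(sub.ctx, ast.Store):
                    if isinstance(sub.value, ast.Name) and sub.value.id == "self":
                        fields.add(sub.attr)
    return fields

def same_representation(src_a: str, src_b: str, class_name: str) -> bool:
    """True if both modules define the class with the same set of fields."""
    return class_fields(src_a, class_name) == class_fields(src_b, class_name)

# Example: the two definitions of `User` disagree, so the check reports False.
a = "class User:\n    def __init__(self):\n        self.id = 0\n        self.name = ''\n"
b = "class User:\n    def __init__(self):\n        self.id = 0\n        self.email = ''\n"
print(same_representation(a, b, "User"))  # False
```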

There's also an interesting trend: the way people use git repos is starting to change. People used to make changes, commit them, and only then could others see them; opening a PR let you compare the modified versions. Now agents make a large number of modifications at once, producing a large volume of content to commit, while repos, and their rate limits, were designed with human developers in mind, so those limits are set quite low. I think repos will gradually take a new form, for example letting agents explore five different paths at once and only push back to GitHub after finding the best one.

I think this may completely change the way humans write code; or rather, more and more of the writing will shift from humans to agents. In that case, the underlying services originally designed for humans will not be enough to support this new agent-dominated era. Imagine an extreme scenario: in the future I might run 100 agents in parallel, all trying to move the codebase toward its goals. There has to be some coordination mechanism among them so that they don't try to edit the same file at the same time.
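A very small sketch of one possible coordination mechanism, per-file leases that an agent must acquire before editing, is shown below. It is our illustrative assumption of how such a guard might work within a single process, not a mechanism described in the interview.

```python
import threading
from contextlib import contextmanager

class FileLeaseRegistry:
    """In-process registry of per-file leases shared by parallel agents."""

    def __init__(self) -> None:
        self._guard = threading.Lock()
        self._held: dict[str, str] = {}          # path -> agent id

    @contextmanager
    def lease(self, path: str, agent_id: str):
        """Grant exclusive access to `path`, or raise if another agent holds it."""
        with self._guard:
            owner = self._held.get(path)
            if owner is not None and owner != agent_id:
                raise RuntimeError(f"{path} is being edited by {owner}")
            self._held[path] = agent_id
        try:
            yield path                            # the agent edits the file here
        finally:
            with self._guard:
                self._held.pop(path, None)

registry = FileLeaseRegistry()

def agent_edit(agent_id: str, path: str) -> None:
    """Example: each agent acquires a lease before touching a file."""
    with registry.lease(path, agent_id):
        print(f"{agent_id} editing {path}")
```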

Moreover, they also need to share memory within the code repository they're working on, so that each agent doesn't reload the same dependencies. We definitely need something more flexible and more real-time to match the agent-based development mode of the new era. And this idea doesn't apply only to GitHub. We may