
What are the survival principles in the era of AI programming? Andrew Ng: Act quickly and take responsibility.

Key points | 2025-09-23 07:28
In the future, the core skill will be "precisely telling the computer what to do".

Recently, Andrew Ng delivered a keynote speech at the first Buildathon, covering topics such as AI-assisted programming, rapid product prototype development, and the skill requirements for AI engineers.

Andrew Ng is an international authority in the fields of artificial intelligence and machine learning. He is one of the founders of Google Brain and a co-founder of the online education platform Coursera. In 2014, Andrew Ng joined Baidu as the Chief Scientist, and in 2024, he joined the board of Amazon. In recent years, he has been active in the fields of AI investment and entrepreneurship, founding projects such as AI Fund and DeepLearning.AI.

Key points of the speech:

1. The new Silicon Valley motto: "Move fast and be responsible."

AI-assisted programming enables a tenfold acceleration in independent prototype development. The significant reduction in prototype costs makes rapid and multiple trial-and-error attempts a viable strategy. The real value lies in discovering projects worthy of in-depth development through trial and error.

Andrew Ng proposed that prototype development has lower requirements for security and scalability, and AI further lowers the threshold for trial and error. He advocates the principle of "Move fast and be responsible" and suggests conducting bold experiments in a sandbox environment before deciding whether to invest in production transformation.

2. Code is depreciating, and developers need to transform into system designers and AI conductors.

Programming tools have evolved through multiple generations: from GitHub Copilot, to AI-enabled IDEs, to today's highly agentic programming assistants. The speed of tool iteration creates a substantial efficiency gap: being even half a generation behind can significantly limit output.

The value of code itself is decreasing. AI can generate code automatically and migrate database schemas, making architecture decisions more reversible. Developers need to shift from code writers to system designers and AI conductors, focusing on controlling core architecture and building composite systems.

3. The engineering efficiency revolution has given rise to a "new bottleneck in product management."

After the engineering speed increases, product decision-making and user feedback become the new bottlenecks. Andrew Ng used a personal example to illustrate: when the engineering time was compressed from three weeks to one day, spending one week to obtain user feedback seemed extremely long.

He proposed a new paradigm for using data: rather than simply letting data make the decision ("the data says choose version three"), use data to correct intuition ("why did I misjudge that users wanted version one?"). Hone user intuition through hallway tests, coffee-shop surveys, and rapid prototype verification to establish an efficient decision-making cycle.

4. "There's no need to learn programming in the AI era" is the worst career advice in history.

Andrew Ng strongly opposes the view that "there's no need to learn programming in the AI era," pointing out that every improvement in programming tools in history has enabled more people to have programming capabilities. The CFO, legal counsel, and front desk staff in his team have all improved their work efficiency by learning programming. In the future, the core skill will be "precisely telling the computer what to do," which requires an understanding of computer languages and programming logic. Non-technical personnel can quickly master basic programming capabilities with the help of AI to achieve cross-domain efficiency improvements.

5. There is a severe shortage of AI engineers, but university courses are seriously out of touch.

The unemployment rate among computer science graduates has risen to 7%, yet companies still face a severe shortage of AI engineers. The core contradiction is that university curricula have not kept pace with key skills: AI-assisted programming, large language model invocation, RAG/agentic workflow construction, and standardized error-analysis processes. Emerging AI engineers need three core skills: fluency with the latest AI programming tools, familiarity with AI building blocks (prompt engineering, evaluation techniques, MCP), and rapid prototyping ability plus basic product intuition. Andrew Ng called on the education system to accelerate curriculum updates and encouraged developers to embrace these changes.
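The "RAG workflow" skill listed above ultimately comes down to retrieving relevant context and pasting it into the model's prompt. Below is a minimal, self-contained sketch of the retrieval step; it uses a toy bag-of-words similarity in place of real learned embeddings, and the document set and prompt template are invented for illustration:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; real systems use learned embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "The refund policy allows returns within 30 days.",
    "Our office is open Monday through Friday.",
    "Shipping takes five business days.",
]
context = retrieve("How do I get a refund?", docs)[0]
# The retrieved context is then pasted into the prompt sent to the model.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How do I get a refund?"
```

In a production system the retrieval step would use a vector database and an embedding model, but the shape of the workflow — embed, rank, stuff into the prompt — is the same.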

The following is the original speech:

I'm very glad to see you all here on the weekend. What I want to do is share with you my views on AI-assisted software engineering and why we organized this Buildathon, a rapid engineering competition to go from zero to product in one day.

I've found that when you try to do something innovative, whether it's building a startup like AI Fund or innovating in a larger corporate environment, one of the biggest factors in predicting whether a project will be successful is speed. Some teams can achieve more iterations, try more things, and greatly increase their chances of success just by executing at a rapid pace.

Frankly, when I work with engineers or CEOs of portfolio companies, one thing I highly respect is people who make decisive, rapid decisions. By the way, I know that when I talk about speed, sometimes people think, "Oh, is Andrew just talking about working harder?" That's not it. Hard work definitely helps, though I know that in some parts of the world it's not considered appropriate to talk about it. But frankly, I work hard, and I respect people who work hard. So be it.

But besides hard work, a certain amount of decisiveness is needed to make you extremely efficient, try more things, and get more work done. One of the reasons I'm excited about AI-assisted programming is that it speeds up the core part of our work significantly.

When I look back on the software work I've done, some of it was building prototypes of small independent products, and some of it, like what many of you do, was developing production software on large codebases. It's difficult to study the impact of AI-assisted programming rigorously, but for large traditional production-scale codebases, AI might increase our speed by 50% or even more; I'm not sure. For building prototypes of small independent products, though, it's not a 50% acceleration; it's a tenfold acceleration.

I think many of you have experienced this: projects that two years ago would have taken three engineers six months can now be done with friends over a weekend. This means we can try new things. We can build 20 prototypes and see what works. In fact, because of my schedule, I often write more code on Saturdays. Every Saturday I write a lot of software and then say, "Oh my god, this just doesn't work!" I never tell anyone, and it never sees the light of day.

It turns out some people are anxious that in the AI field, many proof-of-concepts never go into production. This is seen as a problem, but I don't think so. For me, the solution isn't to get more proof-of-concepts into production. The solution is to drive the cost of a proof-of-concept so low that no one cares if you build 20 prototypes and 18 of them fail and never see the light of day. That is the price of discovering the two truly valuable prototypes that are worth the extra time to make robust, reliable, and scalable.

It turns out there's a reason we can move so quickly when building prototypes of small independent products: the requirements for scalability, and even security, are much lower. Frankly, if the software I write only runs on my laptop, I don't need many firewalls, because I'm not going to maliciously attack my own laptop; it's fine as long as I don't leak sensitive information or do other bad things. So the amount of work required to build a prototype is much lower, and AI keeps driving that cost down while also making prototyping safer.

A few years ago, the Silicon Valley motto "Move fast and break things" got a bad reputation because it really did break things. Some people concluded from this that Silicon Valley was moving too fast and should slow down, but that's wrong. My team's motto is "Move fast and be responsible." I've found that many smart teams create a sandbox environment, so that software that might harm people in some way never gets shipped to millions of them. Instead, you build a prototype and try it yourself, where the worst you can do is harm yourself; if a large language model gives a wrong answer once, you can tolerate it. So you can move fast, but do it in a safe sandbox for prototyping, and then decide what safety measures and evaluation you need before going further.

One characteristic of AI-assisted programming is that the past few years have brought multiple waves, multiple generations, of innovation. GitHub Copilot's code autocompletion was a huge innovation two years ago, but we've now far surpassed that. Then came a wave of AI-enabled integrated development environments (IDEs); I often used Windsurf or Cursor then. At the same time there were companies like Replit, Bolt, V0, and maybe Lovable. And then earlier this year, in the past few months, came another wave: highly agentic programming assistants like Claude Code, Gemini CLI, and Codex. Some of you know I personally use Claude Code a lot, right? But ask me again in two months what I'm using.

I've found that the tools are evolving very fast, and it brings substantial change. There's a real difference between using the latest generation of tools and not. Large language models themselves are perhaps mature enough that several models are sufficient for many business applications; a model that's six months behind might be fine for many of them. But programming assistance is one of the fastest-developing fields, and being half a generation or a generation behind makes a very substantial difference.

Another exciting thing I've seen is that code used to be a very valuable product. It was very difficult to build code with traditional software, and code was very valuable and needed to be maintained and updated. But the value of code as a product is decreasing because AI can write it for you. Even choosing an architecture is closer to what Jeff Bezos calls a "reversible decision." You can make a decision, and if you don't like it, you can change it.

So, before, you set up a database schema, and you never wanted to change it. But now, if you have the wrong database schema, it's okay. Let AI do the migration for you. It's not that painful. So when I build, sometimes I'll explore three completely different architectures in one day, and it's all good. Then, maybe next week, I'll say, "You know what? Let's abandon my codebase. Let's rebuild everything from scratch." So, this will make people think about software in very new ways.
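As a sketch of why schema changes have become less painful: the standard copy-and-swap migration pattern is mechanical, which is exactly the kind of code AI can generate for you. A minimal SQLite example (the table and columns are invented for illustration), splitting a free-form `name` column into `first`/`last`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Original schema: a single free-form "name" column.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO users (name) VALUES (?)",
                [("Ada Lovelace",), ("Alan Turing",)])

# Migration: create the new schema, copy the data across, swap tables.
cur.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")
for user_id, name in cur.execute("SELECT id, name FROM users").fetchall():
    first, _, last = name.partition(" ")
    cur.execute("INSERT INTO users_new (id, first, last) VALUES (?, ?, ?)",
                (user_id, first, last))
cur.execute("DROP TABLE users")
cur.execute("ALTER TABLE users_new RENAME TO users")
conn.commit()
```

The old schema is recoverable by running the copy in reverse, which is what makes the decision "reversible" in Bezos's sense.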

One of the exciting things, which I'll talk about later, is that Silicon Valley, and I think all of you here and those watching the live stream, are actually at the forefront of this. I'll talk about this later.

But one of the biggest changes I've seen in the software field is that when building becomes easier, deciding what to build becomes a bigger bottleneck. I've always called this the Product Management Bottleneck.

So, at AI Fund, when we build products, I think we're trying to push for a very rapid iteration cycle. We write some software to build a prototype, maybe a quick and rough one, and then we do some product management work, get user feedback, shape our view of the product, and prompt us to update what we want to build. We iterate around this cycle quickly, trying to build user intuition and get a better product. So for me, this is a core cycle that many of my teams at AI Fund and DeepLearning.AI are using to drive software development.

Thanks to AI programming assistance and rapid engineering, we can now build software much faster than before. This means that getting feedback, honing our intuition about what users really want, is becoming an ever bigger bottleneck. Before, if I spent three weeks building a prototype, spending one week on design research or user feedback was fine. But now that we can write the software in one day, spending a whole week getting user feedback feels like an eternity.

So an interesting trend I've seen is that many of my teams are making decisions more and more intuitively, because it's a very fast decision-making process. You know, I'm a data-driven person. I love data. I work on AI and data, and I talk about data-centric AI. So obviously I value and respect data. In fact, here we are at Snowflake, a data company. Data is important.

But let me give you an example of a decision I made. Steven and I were working on a product that hasn't been released yet, so I don't want to talk about the details. But long story short, we had four product ideas. I liked one of them, but my team didn't agree with me. So they conducted a user survey to find out what users liked, and the data came out. I was wrong. I liked version one, but users liked version three.

So for me, a bad way to use this data would be to say, "Oh, users like the third version. Let's build the third version." That's fine, and many teams do that. Your decision is driven by data. But for me, that's a very simplistic way of using data. What I actually did was not just say "the data shows version three, let's decide to do that." What I did was spend a long time carefully examining the user data. We asked multiple questions, and this was one of them. I really sat down and reflected, "How did I mess this up? Why did I think users wanted version one when the data clearly showed they wanted version three?"

Because rather than just acting on the data showing that users want version three, I want to use the data to refine my intuition and then decide what to build based on that. That's a big difference, because it means I really spent hours thinking, "How did I mess this up?" By spending time honing my intuition, I can make not only this decision but many other decisions in a better way to serve users.

In view of this, we actually spend a lot of time thinking about a series of strategies to hone our intuition about users. But you know, everything we do starts with trying the product ourselves. If you know users well, your intuition will be good. And when your intuition about users is good, it's a very fast decision-making process. Asking friends, team members, doing hallway usability testing, asking some strangers. I often sit in coffee shops or hotel lobbies and politely ask strangers to take a look at my product. I still do that. From prototype testing to A/B testing when going live, I know Silicon Valley loves A/B testing. Of course, we also do A/B testing. We want to get results, but for me, this is actually one of the slowest strategies in our portfolio, and we rarely use it. But this example shows that when engineering speeds up, it creates bottlenecks elsewhere, and we now have to work hard to eliminate these bottlenecks.
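For reference, the A/B testing he mentions ultimately reduces to a significance check on two conversion rates. A minimal two-proportion z-test sketch (the traffic and conversion numbers below are invented):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # z statistic for the difference between two conversion rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Version A: 120 conversions out of 1000 users; version B: 160 out of 1000.
z = two_proportion_z(120, 1000, 160, 1000)
significant = abs(z) > 1.96  # |z| > 1.96 corresponds to p < 0.05, two-sided
```

The catch, and part of why Ng calls A/B testing the slowest strategy in the portfolio, is that detecting differences this small requires thousands of users per arm, while a hallway test needs five people and an afternoon.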

I think many of you will be asked this question because many of you are software engineers, or your non-software engineer friends might ask: "Should I learn programming?" Before I end, I want to spend a little time discussing this. In the past year, many people have advised others not to learn programming, saying that AI will automate it. I think we'll look back and see this as the worst career advice in history.

When programming evolved from punch cards to keyboards and terminals, some people actually said, "Look, we now have programming robots. Programming is so simple that you don't even need software engineers." People really thought that. But that's wrong. I mean, keyboards and terminals made programming easier, and more people did it. When we evolved from assembly language to higher-level languages, from text editors to integrated development environments (IDEs), and from IDEs to today's AI programming assistance, each step has made programming easier. This means more people should learn programming.

So at AI Fund and DeepLearning.AI, I think everyone will learn programming. I've actually said on YouTube that when our CFO knows how to program, she and her team become more efficient; when our general counsel knows how to program, he processes non-disclosure agreements faster; when our front desk staff knows how to program, she can do things she couldn't do before. So I encourage you to get all your friends to learn programming.

When I taught everyone the course on generative AI, something dawned on me. This is the fastest-growing course we've launched on Coursera. It's designed to let non-technical people understand the business significance of generative AI. Behind the scenes, I worked with a collaborator, Tommy Nelson, who knows art history. He can write prompts for Midjourney using art language, art inspiration, color palettes, and genre inspiration, so he can create beautiful pictures like this. On the other hand, you know, I don't know art history. All I can do is prompt: