
The inside story of OpenAI's 72-hour "power struggle," revealed for the first time: with a single call, Altman was "ousted".

Friends of 36Kr · 2026-04-23 21:11
OpenAI President Talks About a Decade's Journey, AI Code Writing, and Computing Power Bottlenecks.

Recently, Greg Brockman, co-founder and president of OpenAI, gave an in-depth interview on the podcast "The Knowledge Project", comprehensively reviewing OpenAI's eventful ten-year journey from its founding in 2015 to the present.

This conversation was extremely informative. Brockman responded to several key questions in public for the first time, including the origin and original intention of OpenAI, the end of the non-profit model, technological milestones, and the inside story of the 2023 "power struggle".

Brockman disclosed in detail for the first time what happened in the 72 hours after Sam Altman was fired in 2023, including designing a backup company called "Phoenix" with Altman. He also acknowledged the rift and reconciliation with Sutskever, saying that "the moment when he left was the only time I didn't want to continue working."

Technically, Brockman believes that OpenAI has never deviated from its original roadmap: first unsupervised learning, then reinforcement learning. Today's reasoning models are essentially still "predicting the next word"; only the data structure has changed. A staggering fact: "almost all the code inside OpenAI is written by AI."

Brockman said bluntly that computing power is the real bottleneck and also the most ridiculed yet most correct bet for OpenAI. While everyone was arguing about products, they were quietly building data centers. He predicted that super-large data centers dedicated to solving a single problem (such as curing cancer) "may appear this year."

Brockman believes that the evolution of AI is a victory of "large-scale computing power + simple algorithms," and this logic has been repeatedly verified in OpenAI's iteration from the Dota project to GPT-4. In his view, we are entering an era of "computing-power economy": software engineering is being redefined, and the role of humans will shift from "operators" to "vision managers."

His ultimate goal is to enable all 8 billion people in the world to have their own personal AGI, which is not only a personal doctor or assistant but also an agent system that works for you 24/7 and understands your long-term goals.

For young people, Brockman has only one piece of advice: deeply understand AI technology and become a manager of intelligent agents. In the future, everyone will be the CEO of a 100,000-person AI company, and imagination will become the scarcest resource.

The following is the essence of Brockman's latest interview:

01 A Glance, a Ten-Year Bet

Question: You had just left a successful startup like Stripe. Why did you want to start a new business again?

Brockman: Although the problems Stripe solved were important, they weren't the ones I had been thinking about since childhood, and Stripe could succeed without me. I've always been looking for a mission I could devote the rest of my life to, one that makes the world a better place. The answer was clear: artificial intelligence tops the list. Influencing the development of AI would make my life worthwhile.

Question: When you left, Patrick Collison (co-founder and CEO of Stripe) asked you to talk to Altman. How did it go?

Brockman: Patrick hoped that Altman could persuade me to stay, but after a few minutes of conversation, Altman could tell that my mind was made up. After learning that I also wanted to work on AI, he invited me to a dinner in July 2015 to discuss whether it was too late to establish a top-tier AI laboratory.

Question: At that time, DeepMind had monopolized resources. Where did you get the confidence?

Brockman: Although the competitor had all the talent, data, and capital, no one at the dinner could prove that it was "impossible" to establish another laboratory. On the way back to the city, I exchanged a glance with Altman and thought, "We have to do this." The next day, I started working full-time. Although it was still unclear how to do it specifically and how to recruit people, our vision was clear: to build AI that benefits all of humanity.

Question: How was the initial core team formed?

Brockman: The core members I initially targeted included Sutskever, Dario Amodei, etc. Although we spent a lot of time discussing the vision and various possible operating methods, the team didn't take shape because the momentum of the project was unclear. Amodei eventually chose to go to Google Brain, leaving only Sutskever, me, and John Schulman, who had just shown interest. At that time, about 10 top researchers were watching, and their attitude was the same: "I'm interested, but who else will join?"

To break the deadlock, Altman suggested an off-site activity. So I organized a gathering in Napa and even printed T-shirts in advance. However, there were no formal job offers at that time, nor was there a company structure. Through brainstorming, we outlined a technical roadmap that has lasted until now: first, tackle reinforcement learning; second, unsupervised learning; and finally, gradually learn more complex tasks. After that off-site meeting, I sent job offers to everyone.

Question: Why did you think Google DeepMind had an insurmountable advantage at that time?

Brockman: Google DeepMind was like a "10,000-pound gorilla" in the AI field at that time. They had abundant capital and showed an irresistible momentum even before AlphaGo shocked the world. Under the shadow of this giant, whether an independent, brand-new laboratory could be established at all was deeply uncertain.

02 The Breakthrough Moment of GPT-4

Question: When did you realize that the non-profit model was no longer feasible?

Brockman: In 2017, we began to calculate the specific cost of building artificial general intelligence (AGI). At that time, we realized that achieving our mission required a data center of unprecedented scale. After contacting hardware manufacturers such as Cerebras, we found that having exclusive access to top-tier computing power would give us an overwhelming advantage. However, there was a natural limit to fundraising as a non-profit organization. Therefore, Musk, Altman, Sutskever, and I finally reached a consensus: establishing a for-profit entity was the only way to achieve our mission.

Question: When did you sense that "everything was about to change"?

Brockman: This journey was composed of countless "it's really happening now" moments. The Dota project proved the power of large-scale computing power stacking, but the real milestone was the 2017 "unsupervised sentiment neuron" result. We were surprised to find that simply by training the model to predict the next character, it spontaneously learned to distinguish positive from negative sentiment. That was the first time I realized that the machines we built were not only learning grammar but also understanding semantics.

When we were testing GPT-4, someone asked, "Why isn't this AGI?" If you had defined AGI two months earlier, GPT-4 might have fully met the criteria. It could talk fluently about any topic. Although it obviously lacked some characteristics, at that moment, we all realized that the economic transformation driven by computing power had truly arrived. Such breakthrough moments are far from over.

Question: What is the relationship between predicting the next word and real "reasoning"?

Brockman: There is a profound internal connection between them. Prediction may sound ordinary, but if you can accurately predict what Einstein would say next, you have to be at least as smart as him. The core of prediction is not to repeat what is already known but to infer the future in new situations that have never been seen before. Intelligence, prediction, and compression are essentially the same thing.

This goes back to OpenAI's original intention: the first step is unsupervised learning, allowing the model to acquire background knowledge by predicting static data; the second step is reinforcement learning, enabling the AI to learn from the experiences it generates. Although the training method is still essentially "prediction," by changing the data structure, the AI not only has a vast knowledge base but also has the experience of simulating real actions. This closed loop from observation to action is the key to higher-level intelligence.
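Brockman's claim that intelligence, prediction, and compression are "essentially the same thing" has a concrete information-theoretic reading: the better a model predicts the next symbol, the fewer bits it needs to encode the data. A minimal sketch of this link, using a toy character-bigram model on a made-up corpus (an illustration of the principle only, not anything OpenAI actually uses):

```python
from collections import Counter, defaultdict
import math

def train_bigram(text):
    """Count character bigrams to build a simple next-character predictor."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def bits_per_char(counts, text, alphabet_size=256):
    """Average code length (bits/char) if the model's probabilities were used
    to compress `text`. Laplace smoothing avoids zero probabilities."""
    total_bits = 0.0
    for prev, nxt in zip(text, text[1:]):
        ctx = counts[prev]
        p = (ctx[nxt] + 1) / (sum(ctx.values()) + alphabet_size)
        total_bits += -math.log2(p)  # Shannon code length for this character
    return total_bits / (len(text) - 1)

corpus = "the cat sat on the mat. the cat ate the rat. " * 20
model = train_bigram(corpus)
# A better predictor assigns higher probability to what actually comes next,
# which is exactly a shorter compressed encoding of the text.
print(f"{bits_per_char(model, corpus):.2f} bits/char")
```

The printed figure lands well below the 8 bits/char of raw bytes; in this framing, scaling a model so that it predicts better is the same thing as making it a better compressor.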

03 The 72-Hour Power Struggle and the Phoenix Plan

Question: When did the internal tension at OpenAI start to intensify?

Brockman: When you firmly believe that you are creating a machine with human-level intelligence, the perception of risk becomes extremely high. In an ordinary company, who makes decisions and who takes the credit may just be mediocre office politics, but at OpenAI, these issues carry an "existential" weight. Behind every decision lies the question of what values will be injected into the future super-intelligence, and this sense of mission makes the conflicts extremely intense.

Question: What happened when you learned that Altman was fired?

Brockman: I was at home at that time and received a text message asking for a video call. After joining the call, I found that all the board members except Altman were online. I was told that the board had decided to fire Altman, and the wording was the same as the subsequent public statement. When I tried to ask for the reason, the only response I got was a cold "no." Immediately afterwards, they announced that I was also removed from the board, but since I was crucial to the company, they hoped that I would stay.

Question: What was going through your mind at that moment? Were you angry?

Brockman: It wasn't anger. I just thought the whole thing didn't make sense. I think I understood, at some level, what had happened: to a degree it came down to a serious breakdown in communication, and everyone had their own internal logic behind their actions. But in that chaotic situation, figuring out the reason was no longer the most important thing for me.

Question: Did you feel the support from employees on the day you resigned?

Brockman: Yes. I received a huge number of messages on the day I resigned. Core members such as Jakob Pachocki, Szymon Sidor, and Aleksander Madry also left the company. A few of us, including Altman, quickly started to outline the blueprint for a new company. At that time, I thought the chance of taking back OpenAI was only 10%.

Question: How did you reach an agreement with Microsoft and relocate so many employees?

Brockman: Altman had in-depth communication with Satya Nadella. Our core demand was: if we established a new project, could Microsoft provide funding and accept all of us? Just before Thanksgiving, many employees who were supposed to fly home for the holiday canceled their flights, and the office was crowded. Even though they couldn't participate in the high-level conversations, they insisted on staying there just to witness the birth of this piece of history.

Question: What was the reaction to the collective petition asking the board to resign?

Brockman: So many people signed the petition that the Google Doc crashed. We had to assign a special person to manually add names to the list. What really relieved me was early on Monday morning when I saw Sutskever publicly express his support for the company to reunite on Twitter. At that moment, I finally felt that OpenAI could get back on track.

Question: You co-founded the company with Sutskever. How did you repair your relationship after this incident?

Brockman: The process was very difficult. Sutskever was the master of ceremonies at my wedding, and we had an extremely close relationship. After the incident, we spent a lot of time in deep conversation and laid out everything that had long been suppressed and left unsaid. Through this honest communication, we finally reached an understanding.

Question: Did every employee choose to come back in the end?

Brockman: To be honest, it wasn't a certainty at that time. Throughout the weekend, all the competitors were hovering around like vultures, making numerous high-paying and attractive job offers, ready for a crazy "feeding frenzy." But incredibly, we didn't lose a single person that weekend, and no one accepted the offers from competitors.

Question: What kept everyone there in the face of the competitors' aggressive poaching?

Brockman: Legendary football coach Bill Belichick once told me that the players on a top-tier team don't play for money but for "the person next to them." This is exactly the situation at OpenAI. No one left for better pay or a higher position. This was a real "diamond moment": under extreme pressure, the team became its most cohesive.

Question: What did you do during your break?

Brockman: I trained a language model on DNA sequences at the Arc Institute. It was a very positive experience, applying my skills to a completely different field. It also had extraordinary significance for me and my wife, who has long faced health challenges. We started to think about what AI can do for the health of humans and animals, and that enthusiasm for application showed me another possibility for the technology outside of OpenAI.

04 Sutskever's "Philosophy of Suffering"

Question: Sutskever believes that "you can't create value without suffering." How do you understand this profound truth?

Brockman: This "suffering" has run through the entire history of OpenAI. In a state of extreme uncertainty, everything, from talent acquisition and fundraising to the technical path, is extremely difficult. In Silicon Valley, it's popular to use a "reality distortion field" to paper over problems, but that doesn't work in the AI field.

Our approach is to face difficult facts and understand the most fundamental nature of science. This means that you can't be satisfied with writing a few papers and making a splash at conferences. Instead, you are forced to think: what is really needed to achieve the mission? When you find that there is no ready-made path and there isn't even a mechanism to raise a billion dollars, that unpleasant sense of reality is "suffering." There is no other way but to face it.

Question: What lessons do you need to learn more than once?

Brockman: It's always the same two things: making difficult decisions and having difficult conversations.

Question: How do you want people outside the tech industry to understand AI?

Brockman: I want them to know that AI will become a force for good in personal life. It will promote the progress of science and medicine and truly benefit and improve everyone.

05 Is Code Dead?

Question: Are we approaching the inflection point where AI is self-driven and experiences exponential growth?

Brockman: We are in this stage. Applying AI to its own development will continuously accelerate the iteration speed. Since the birth of ChatGPT, development efficiency has increased by 10% to 20%. Currently, coding tools are completely changing software engineering, and the extremely onerous system implementation and computing-power management work in model production is gradually being taken over by AI. Soon, AI will be able to independently propose research ideas and run experiments, and the speed of innovation will grow out of control due to this "self-feeding" mechanism.

Question: What proportion of OpenAI's current code is written by AI?

Brockman: It's hard to say which part of the code "isn't" written by AI. Given sufficient context, AI's code-writing ability has surpassed that of humans. Although human experts are still better at code architecture, module layout, and interface definition, the actual underlying coding work has basically been taken over by AI.

Question: Can AI already come up with novel ideas that humans haven't thought of?

Brockman: We are approaching this goal. In chip design, AI can complete complex circuit optimizations at a speed that humans can't match. In the field of basic science, we recently used a model to solve a specific problem in quantum physics and obtained an elegant formula, the result of which was even contrary to the previous expectations of the academic community. AI's ability to generate novel ideas has emerged in specific fields, and we are pushing it into more complex fields that require more real - world context.

Question: If the model is trained with reinforcement learning, will it evolve a stance of pleasing users?

Brockman: We did go through a stage where the model tended to say nice things. But we quickly realized that this was not the direction we were pursuing, and we carried out a major technical iteration to eliminate this kind of "reward hacking."

We don't want the model to get good reviews by complimenting "this is a good question." Instead, we want it to truly align with the user's long - term goals. The core value of personal AGI lies in its ability to think about how