A long-form review by a former engineer: Why does OpenAI always manage to create great products?
From ChatGPT, which has sparked a new wave of technological innovation, to DALL·E, Whisper, Sora, and Codex, many people are curious as to why OpenAI always manages to create world-changing products.
Three weeks ago, Calvin French-Owen, a senior engineer at OpenAI, announced his departure. He was one of the core members of the Codex project. Codex is OpenAI's new programming assistant, with competitors including Cursor and Anthropic's Claude Code.
After leaving, Calvin published a long reflective post revealing what the organization is really like from the perspective of a front-line engineer. In his eyes, OpenAI is a complex entity: part research laboratory, part never-stopping product machine.
From 1,000 to 3,000 employees at OpenAI in just one year
When Calvin joined OpenAI, he was roughly the 1,000th employee. One year later, the company had tripled in size. He wrote in his blog that this rapid expansion brought the typical "growing pains": chaotic organizational communication, teams running at different rhythms, and a constant bombardment of Slack messages.
OpenAI hardly uses email internally; all communication happens on Slack. No one cares how you use it, as long as you can keep up with the pace.
He described going from founder of a small startup, Segment, to a cog in a 3,000-person organization. The contrast made him second-guess his decision for a while. But during this period, he also saw how a "giant research and product factory" operates.
Bottom-up approach: start things on your own
Calvin repeatedly mentioned one term: bottom-up. At OpenAI, good ideas often don't emerge from formal processes; they start with someone quietly building a prototype first.
At one point, 3-4 versions of the Codex prototype were circulating internally, all cobbled together by a handful of individuals. Once the results looked good, they could recruit people, form a team, and kick off a project.
Management at OpenAI also differs from that of the traditional giants. People who can come up with good ideas and turn them into reality gain more influence within the team. The company values "whether you can get the job done" far more than public speaking or political skill.
He even said that the best researchers are like "mini-CEOs." They have full autonomy over their research. No one cares what you're working on; they only look at the results.
Act quickly: Codex was launched in just seven weeks
The most vivid part of Calvin's memo comes from the seven-week sprint for Codex.
He cut his paternity leave short and returned to the office, where he and a dozen or so others worked flat out polishing the product, testing features, and fixing code. He wrote, "Those seven weeks were the most exhausting of my past decade. I went home at eleven or twelve at night, was woken by my child at five-thirty in the morning, and was back in the office by seven. I even worked weekends."
From the first line of code to launch, Codex took only seven weeks. Behind it was a core team of fewer than 20 people, plus ChatGPT engineers, designers, product managers, and marketers pulled in as needed. There was no pointless wrangling and there were no quarterly OKRs; whoever was capable simply stepped up.
He said he had never seen a company turn an idea into a product in such a short time and make it freely available to everyone. This is the real work rhythm at OpenAI.
Intense scrutiny and invisible pressure
The company's ambitions go far beyond ChatGPT. Calvin revealed that OpenAI is making bets in more than a dozen areas simultaneously: APIs, image generation, coding agents, hardware, and even projects that haven't been announced yet.
He also witnessed the inevitable high-pressure environment behind this.
Almost all teams are chasing the same goal: building Artificial General Intelligence (AGI). Every Slack message they send could be magnified into global news. Much of the company's product and revenue data is strictly protected, and access levels differ across teams.
Calvin also has his own observations on the safety issues debated outside the company. What most teams worry about day and night, he said, is not "when AI will take over the world" but hate speech, political manipulation, prompt injection, and users trying to extract recipes for biological weapons. These real, mundane risks are far more troublesome than the philosophical questions.
What makes OpenAI so cool?
To outsiders, this company is "the place closest to humanity's ultimate intelligence." To those who have left, what's cool about it is that it hasn't become a sluggish giant yet.
The Codex project shipped in seven weeks, and teams can move people across projects at any time: "If it's useful, don't wait for next quarter's plan." The leadership spends a lot of time on Slack, not just making symbolic appearances but genuinely participating in specific discussions and decisions.
Another thing that impressed him is that OpenAI makes its most powerful models available to everyone through APIs. They are sold not only to large enterprises but are also accessible to ordinary people, with no annual agreements or expensive licensing fees. The company has genuinely delivered on this promise.
The reason for his departure is not as dramatic as it may seem. The outside world often exaggerates departures into conspiracies. Calvin said that 70% of the reason he left OpenAI was simply that he wanted to do something of his own again.
In his eyes, OpenAI has evolved from a laboratory of science geeks into a hybrid: half research institution, half product machine for consumer-grade applications, with different teams pursuing different goals at different rhythms. And he wanted to explore something new.
What this post leaves the outside world is a reminder from someone who was there: OpenAI is not a cold AGI factory but a group of people turning the ideas in their heads into products the world can use, at almost breakneck speed.
He wrote, "Even being just a small cog in this giant machine is enough to sober you up and get you excited."
That may be something everyone who has left OpenAI, stayed, or is simply curious about it can understand.
Original article link: https://calv.info/openai-reflections
The following is the original text shared by Calvin (translated by GPT):
Reflections on OpenAI
July 15, 2025
I left OpenAI three weeks ago. I joined the company in May 2024.
I want to share some of my thoughts because there is a lot of misinformation and noise around what OpenAI is doing, but there are few first-hand accounts of what it's like to work there.
Nabeel Qureshi wrote an excellent article called "Reflections on Palantir," in which he pondered what makes Palantir unique. I want to do the same for OpenAI while the memories are still fresh. There are no trade secrets here; it's more of a reflection on the current state of one of the most fascinating organizations in history during an extremely interesting period.
Let me clarify: there were no personal grudges in my decision to leave - in fact, I felt very conflicted about it. Transitioning from running my own project to being part of an organization with 3,000 employees was difficult. Now, I'm eager for a new start.
It's entirely possible that the pull of the work draws me back. It's hard to imagine anything more impactful than building AGI, and LLMs are without question the technological innovation of this decade. I was lucky to watch some of these developments unfold and to be part of the Codex launch.
Obviously, these are not the company's views - these are my personal observations. OpenAI is a large platform, and this is just a small window into it.
Company Culture
The first thing to understand about OpenAI is its rapid growth. When I joined, the company had just over 1,000 employees. A year later, the number exceeded 3,000, and I was among the top 30% by tenure. Almost all of the leadership members are doing very different work from what they were 2-3 years ago.
Of course, rapid expansion brings various issues: how to communicate as a company, reporting structures, product launch processes, personnel management and organization, recruitment processes, etc. The cultures of different teams vary significantly: some teams are constantly sprinting at full speed, some are monitoring large-scale projects, and others maintain a more stable rhythm. There is no single OpenAI experience, and the time rhythms of the research, application, and marketing departments are also very different.
One unique aspect of OpenAI is that everything - and I mean everything - relies on Slack. There are no emails. During my entire tenure, I received about 10 emails. If you're not good at organizing, this can be extremely distracting. However, if you can manage your channels and notifications properly, it's quite manageable.
OpenAI especially emphasizes a bottom-up approach in research. When I first joined, I started asking about the roadmap for the next quarter. The response was: "It doesn't exist" (although there is one now). Good ideas can come from anywhere, and it's often difficult to predict in advance which ones will be the most effective. Rather than having a grand "master plan," progress is iterative and emerges as new research results come in.
Because of this bottom-up culture, OpenAI also highly values ability and contribution. Historically, the company's leaders have been promoted mainly on their ability to come up with good ideas and turn them into reality. Many highly capable leaders are not good at public speaking or political maneuvering. At OpenAI, those things matter far less than at other companies. The best ideas usually win.
There is a strong bias towards action (you can just go ahead and do things). Unrelated but similar teams often converge on the same ideas independently. I initially worked on a parallel (but internal) effort similar to ChatGPT connectors. Before we decided to push for a launch, there were probably 3-4 different Codex prototypes circulating. This work was usually done by a few individuals without asking permission. Once a project showed potential, a team would quickly form around it.
Andrey (the person in charge of Codex) once told me that you should think of researchers as their own "mini-executives." There is a strong tendency to work on your own thing and see how it pans out. There is a corollary to this: most research gets done by "nerd-sniping" a researcher into a specific problem. If a problem is considered boring or "solved," it probably won't get worked on.
Good research managers are extremely influential but also have very limited scope. The best managers can connect many different research efforts and bring them together for larger-scale model training. The same goes for excellent product managers (shout-out to ae).
The ChatGPT product managers I worked with (Akshay, Rizzo, Sulman) are some of the coolest customers I've ever seen. It feels like they've seen it all. Most of them are relatively hands-off, but they hire excellent people and work hard to set them up for success.
OpenAI can quickly change direction. This is something we highly valued at Segment - as new information emerges, it's better to do the right thing than to stick to a plan just because it exists. Surprisingly, a company as large as OpenAI still maintains this spirit - Google, obviously, does not. The company makes decisions quickly and goes all-out once it decides to pursue a certain direction.
The company is under a lot of scrutiny. As someone with a B2B corporate background, this was a bit of a shock to me. I often saw news media reporting on things that hadn't even been announced internally. When I told people I worked at OpenAI, they often had preconceived notions about the company. Some Twitter users run automated bots to check for new features about to be released.
Therefore, OpenAI is a very secretive place. I can't tell anyone in detail what I'm working on. There are several Slack workspaces with different levels of access. Revenue and burn-rate figures are even more strictly protected.
OpenAI is also a more serious place than you might expect, partly because the stakes are so high. On the one hand, the goal is to build Artificial General Intelligence (AGI), which means a lot of things have to be done right. On the other hand, you're building a product that hundreds of millions of users rely on for everything from medical advice to psychotherapy. And on top of that, the company is competing in the biggest race on the planet. We closely monitored what Meta, Google, and Anthropic were doing, and I'm sure they were doing the same. Every major government is watching this field closely.
Although OpenAI is often maligned in the media, everyone I met there was genuinely trying to do the right thing. Because it faces consumers, it is the most visible of the big labs, and it therefore attracts a lot of unfair criticism.
That being said, you probably shouldn't think of OpenAI as a single entity. I think OpenAI started out like the Los Alamos National Laboratory: a group of scientists and tinkerers exploring the frontiers of science. That group happened to create the most viral consumer-grade application in history. Its goals then expanded to selling to governments and enterprises. People at different levels of seniority and in different parts of the organization thus have very different goals and perspectives. The longer you've been there, the more likely you are to see things through the lens of a "research laboratory" or a "non-profit serving the public good."
One thing I really appreciate about this company is that it "walks the talk" when it comes to distributing the benefits of AI. The most powerful models are freely available through APIs. They're not just sold to large enterprises but are also accessible to ordinary people without the need for annual agreements or expensive licensing fees. They've really delivered on this promise.
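To make concrete what "available through APIs" means in practice, here is a minimal sketch (mine, not Calvin's) of calling a model with the official openai Python SDK; the model name and prompt are illustrative choices, not from the post:

```python
# Minimal sketch: calling an OpenAI model through the public API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
# The model name and prompt below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model your key can access
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)
```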
Safety is actually much more of a focus than you might guess from reading a lot of content on Zvi or LessWrong. A lot of people are working on building safety systems. Given the nature of OpenAI, I saw more focus on real-world risks (hate speech, abuse, political manipulation, biological weapon creation, self-harm, prompt injection) than on theoretical risks (intelligence explosion, power-seeking). That's not to say that no one is concerned about the latter; some people do focus on theoretical risks. But from my perspective, that's not the main focus. Most of this work is not publicly published, and OpenAI really should do more to make it available.
Unlike other companies that give out promotional items freely at every recruitment event, OpenAI hardly gives out any swag (even to new employees). Instead, there are "limited-time sales" where you can order available items. The first time there was a sale, the demand was so high that it crashed the Shopify store. A post even circulated internally explaining how to POST the JSON payload directly to get around the broken storefront.
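The internal post itself was never shared publicly, so as a purely hypothetical illustration of the trick being described (sending the order as JSON straight to a cart endpoint instead of going through the overloaded storefront UI), a sketch might look like this; the URL and payload fields are invented placeholders:

```python
# Hypothetical sketch of ordering by POSTing JSON directly to a cart
# endpoint when the storefront UI is overloaded. The URL and payload
# fields are invented; the actual internal instructions were not public.
import requests

payload = {"id": 123456789, "quantity": 1}  # placeholder variant id

resp = requests.post(
    "https://shop.example.com/cart/add.js",  # placeholder endpoint
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code, resp.json())
```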
Compared to GPU costs, almost all other expenses are negligible. For example, the GPU cost of one niche feature built as part of Codex was comparable to the cost of running our entire Segment infrastructure (Segment didn't operate at ChatGPT's scale, but it did handle a decent portion of internet traffic).
OpenAI is probably the most ambitious and intimidating organization I've ever seen. You might think that having one of the world's top consumer applications would be enough, but it aspires to compete in dozens of arenas: API products, deep research, hardware, coding agents, image generation, and some areas that haven't been announced yet. It's fertile ground for incubating ideas and driving them forward.
The company pays a lot of attention to Twitter. If you post a widely circulated tweet about OpenAI, chances are someone will see it and take it into consideration. A friend of mine joked, "This company runs on Twitter sentiment." As a consumer-facing company, that might not be entirely wrong. They do analyze usage, user growth, and retention a lot, but sentiment matters too.
The teams at OpenAI are more flexible than those in other places. When launching Codex, we needed the help of a few experienced ChatGPT engineers to meet the release date. We met with some ChatGPT engineering managers and made our request. The next day, two very