A former engineer's detailed retrospective: Why does OpenAI keep producing great products?
From ChatGPT, which triggered a new wave of technology, to DALL·E, Whisper, Sora, and Codex: many people wonder why OpenAI keeps creating groundbreaking products that change the world.
Three weeks ago, Calvin French-Owen, an experienced engineer at OpenAI, announced his departure. He was one of the core members of the Codex project. Codex is OpenAI's new programming assistant, competing with Cursor and Anthropic's Claude Code.
After leaving, Calvin wrote a long reflective post in which, from a front-line developer's perspective, he revealed the actual situation inside the organization to everyone interested in OpenAI. In his view, OpenAI is a very complex entity: it is at once a research institute and an indefatigable product machine.
OpenAI grew from a thousand to three thousand employees within a year
When Calvin joined OpenAI, he was roughly its thousandth employee. A year later, the company had tripled in size. In his blog, he wrote that this rapid growth brought typical growing pains: chaotic organizational communication, teams working at very different rhythms, and a flood of Slack messages.
Within OpenAI, communication almost never happens by email; everything runs on Slack. No one cares how you use it, as long as you can keep up with the pace.
He described what it was like to go from being a founder at Segment to being one small part of an organization with 3,000 employees. The difference made him doubt his decision for a while, but this period also let him see how a "huge research-and-production factory" works.
Bottom-up: You can just give it a try
Calvin repeatedly mentions one word: "bottom-up". At OpenAI, good ideas often don't come from a fixed process but from someone simply building a prototype on their own initiative.
At times, three or four versions of the Codex prototype circulated within the company, all assembled independently by small groups. If the results were good, they could recruit people, form a team, and turn the effort into a project.
Management also works differently than in traditional large corporations: those who have good ideas and can implement them gain influence. Compared with presentations and political maneuvering, what counts here is "whether you can actually get things done".
He even said the best researchers act like "mini-CEOs": they have full authority over their own research, no one tells them what to do, and only the results matter.
Quick action: Codex went live after only seven weeks
The liveliest part of Calvin's post is the seven-week Codex sprint.
He cut his paternity leave short and returned to the office, working intensely on the product with a small group, testing features and polishing code. He wrote: "Those were the most strenuous seven weeks of my life in the past ten years. I got home at eleven or twelve every night, was woken by my children at five thirty, and was back in the office at seven. I worked weekends too."
Only seven weeks passed from the first line of code to Codex's launch. Behind this was a core team of fewer than 20 people, supported by ChatGPT engineers, designers, product managers, and market researchers who could be called in at any time. There were no unnecessary discussions and no quarterly OKRs; whoever could help, pitched in.
He said he had never seen a company turn an idea into a product and make it freely available to everyone in such a short time. This, he wrote, is OpenAI's real working rhythm.
Increased attention and invisible pressure
The company's ambitions go far beyond ChatGPT. Calvin revealed that OpenAI is investing in over a dozen directions simultaneously: APIs, image generation, coding agents, hardware, and even unannounced projects.
He also saw the inevitable high pressure behind it.
Almost all teams aim at the same goal: artificial general intelligence (AGI). Every Slack message can become worldwide news. Much internal product and revenue data is kept strictly secret, and access permissions differ from team to team.
Regarding the safety issues debated externally, Calvin has his own observations. Most teams, he said, worry less about "when will AI rule the world" and more about hate speech, political manipulation, prompt injection, or the possibility that users could coax a model into producing recipes for biological weapons. These real, unglamorous risks are harder to handle than the philosophical questions.
What makes OpenAI cool?
To outsiders, the company is "the place closest to humanity's ultimate intelligence". To those who leave, its coolness lies in the fact that it has not yet become a sluggish corporation.
The Codex project went live after seven weeks. Teams can move people between projects at any time: "If it helps, don't wait for the next quarterly plan." Management spends a lot of time on Slack, not just symbolically but actually participating in discussions and decisions.
What also impressed him: OpenAI makes its most powerful models broadly available via the API. They are not sold only to large corporations; ordinary users can access them without signing an annual contract or paying expensive license fees. In this regard, the company keeps its promise.
The reason for his departure was not so dramatic. Outsiders often portray departures from OpenAI as conspiracies, but Calvin said that 70% of the reason was simply that he wanted to build something of his own again.
In his view, OpenAI has evolved from a group of scientists experimenting in a laboratory into a hybrid: one part research institute, one part consumer-product machine. Different teams have different goals and rhythms. And he needs new challenges.
In the end, this post leaves readers with a reminder: OpenAI is not a cold AGI factory but a group of people turning ideas into products the world needs, at unprecedented speed.
He wrote: "Even if you're just a small part in this huge machine, it's exciting enough and keeps you on your toes."
This sentence might resonate with everyone who leaves OpenAI, stays, or is simply curious about it.
Original link: https://calv.info/openai-reflections
Here follows the original post by Calvin (translated by GPT):
Thoughts on OpenAI
July 15, 2025
I left OpenAI three weeks ago. I joined the company in May 2024.
I want to share my thoughts because there is a lot of rumor and noise around OpenAI, but few first-hand reports about the work environment there.
Nabeel Qureshi wrote a wonderful article called "Reflections on Palantir" in which he examined what made Palantir special. I want to do the same for OpenAI while the memories are still fresh. There are no trade secrets here, just reflections on the current state of one of the most fascinating organizations in history at an extremely interesting time.
First of all: my decision to leave was not driven by any conflict. In fact, I'm deeply ambivalent about it. Going from running my own thing to being one member of a 3,000-person organization was difficult. Now I'm longing for a fresh start.
It's quite possible that the quality of the work will bring me back. It's hard to imagine building anything as far-reaching as AGI, and LLMs are undoubtedly the technological innovation of this decade. I was lucky to witness some of these developments and to be involved in the launch of Codex.
Obviously, these are not the views of the company; they are my personal observations. OpenAI is a large platform, and this is just one small glimpse into that world.
Culture
The first thing you need to know about OpenAI is how fast it has grown. When I joined, the company had just over a thousand employees. A year later it had over three thousand, and by tenure I was in the most senior 30% of staff. Almost everyone in management is doing a completely different job than two or three years ago.
Naturally, rapid growth also brings problems: how the company communicates, how reporting structures work, how production processes run, how people are managed and organized, how hiring works, and much more. Team cultures vary greatly: some teams run constantly at a sprint pace, others oversee large projects, and still others have a more relaxed rhythm. There is no one-size-fits-all OpenAI experience, and the rhythms of research, engineering, and go-to-market also differ significantly.
A peculiarity of OpenAI is that everything, and I really mean everything, happens on Slack. There is no email. During my entire time at OpenAI, I might have received ten emails. If you're not well organized, it can be very distracting; but if you manage your channels and notifications well, it's quite workable.
OpenAI especially values a bottom-up approach in research. When I started asking about the plan for the next quarter, I got the answer: "There isn't one" (although there is one now). Good ideas can come from anywhere, and it's often difficult to tell in advance which ones will be the most successful. Instead of a large "master plan", progress develops step by step as new research results emerge.
For this reason, OpenAI also places a high value on ability and contribution. Historically, OpenAI's managers have mainly been appointed for their ability to come up with and implement good ideas. Many very competent managers are not particularly good at presentations or political maneuvering. At OpenAI, these things matter far less than at other companies. The best ideas usually win.
There is a strong bias toward action (you can just get started). Independent teams often converge on similar ideas; I was initially involved in a parallel (but internal) project similar to the ChatGPT connectors. Before we decided to push for a launch, there were maybe three or four different Codex prototypes. These efforts were usually built by small groups of people without asking for permission. If a project looks promising, teams quickly form around it.
Andrey (the head of Codex) once told me that researchers should be regarded as their own "mini-CEOs". There is a strong tendency to pursue your own thing and see how it pans out. A corollary: most research gets done by "nerd-sniping" a researcher into a particular problem. If a problem is considered boring or "solved", it probably won't be investigated further.
Good research managers have outsized influence but very limited means. The best ones connect many different research efforts and combine them into larger model trainings. The same applies to good product managers (hi ae).
The ChatGPT product managers I've worked with (Akshay, Rizzo, Sulman) are among the most impressive people I've ever met. It seems as if they've seen it all. They give people a lot of freedom, but they make sure to hire good people and help them succeed.
OpenAI can change direction quickly. This was something we valued highly at Segment: it's better to do the right thing with new information than to stay on the old path just because it was the plan. It's amazing that a company of OpenAI's size still retains this spirit, in contrast to Google. The company makes decisions quickly and then fully commits to a direction.
The company is closely watched. As someone with a B2B background, I was a bit surprised by this. I often saw the media publish news that hadn't even been announced internally yet. When I tell people that I work at OpenAI, I often encounter preconceived opinions about the company. Some Twitter users run automated bots that watch for new feature launches.
Therefore, OpenAI is a very secretive place. I can't tell anyone exactly what I do. There are several Slack workspaces with different access permissions. Revenue and expenses are kept even more strictly secret.
OpenAI is also a more serious place than you might think, partly because of the high sense of responsibility. On the one hand, the goal is to develop artificial general intelligence (AGI), which means many things have to go right. On the other hand, a product is being built on which hundreds of millions of people depend for answers, from medical advice to psychotherapy. And finally, the company is engaged in the biggest race in the world. We keep a close eye on Meta, Google, and Anthropic, and I'm sure they do the same with us.
Although OpenAI is often criticized in the media, I've found that all employees sincerely try to do the right thing. Because of its consumer-facing nature, it is the most visible of the research labs and therefore draws the most criticism.
But that doesn't mean you should view OpenAI as a unified entity. I think OpenAI was initially like the Los Alamos Laboratory, a group of scientists and technology enthusiasts pushing the scientific boundaries