OpenAI will soon become the world's largest Zero-Person Company.
When discussing future organizations, Zero-Person Company always sparks endless imagination and controversy.
Generally, people understand it as a more advanced form of automation. Digging deeper, they may see it as a business entity endowed with perception, self-improvement, and even intelligence, capable of operating autonomously without human intervention.
This understanding is correct, but perhaps we can take a broader perspective.
Then we will discover a grander, soon-to-be-shocking reality emerging:
The world's largest "Zero-Person Company" may be about to be born, and its name is OpenAI.
Here, "zero-person" doesn't mean having no employees. It means that the creation of its core value, the processing of its data, and its perception of the world's state have moved beyond the cognition and control of its internal employees, forming a vast self-operating system centered on artificial intelligence, with hundreds of millions of global users serving as its unwitting sensors.
Each of us is becoming a peripheral neuron of this emerging global brain.
Note: Recently, OpenAI CEO Sam Altman mentioned the "zero-person company" in an interview. Some Chinese commentators translate it as "one-person company," which is plainly wrong; others render it literally, which is accurate but reads awkwardly. "Zero-Person Company" is the more fitting rendering. https://www.youtube.com/watch?v=zwnVUiwObl8
The Transformation of ChatGPT: From Tool to System
Recall that after ChatGPT initially took the world by storm, people generally regarded it as an epoch-making "tool."
It's like an all-purpose intern that can write emails, code, and draft copy.
We are the users, and it is the passive and stateless executor.
There is a clear boundary between humans and AI, and instructions and outputs constitute the entire interaction.
This "tool theory" perspective is correct for understanding the past, but it is precisely the root of misinterpreting the future.
Any organization aspiring to build Artificial General Intelligence (AGI) needs to understand that an isolated and passive tool has no future.
The essence of AI lies in connection, learning, and systemic self-improvement built on general intelligence.
Therefore, the systematization of ChatGPT is its inborn destiny.
I foresaw today's situation quite accurately two years ago after seeing this technological trend. Looking back, the judgments in the book Zero-Person Company are basically correct. See: How to See Through OpenAI's Development Path Two Years in Advance
From releasing the API, to launching the GPTs store, to the Apps SDK, we are watching a carefully woven network embed this once-isolated brain into every capillary of the global digital world.
A few years from now, suppose this thing no longer lives only on the web but becomes the intelligent core of countless applications, the underlying logic of operating systems, and the analysis engine for enterprise data. Then, when a company uses it for SWOT analysis in a strategic planning meeting, when a country's policy research office asks it to simulate policy impacts, and when countless developers build their own services on its API, ChatGPT will no longer be a "tool." It will be a "system": an unprecedented cognitive infrastructure that penetrates every aspect of human society.
This systematization process is also a crucial step for OpenAI to become a "Zero-Person Company." Because when the scale and complexity of the system exceed a certain critical point, its internal human employees, and even Altman himself, can no longer fully understand and control all the operations of the system.
They can set top-level goals and adjust macro parameters, but for the countless micro-interactions and knowledge flows emerging within the system, they will be like meteorologists facing a hurricane, only able to observe and predict, not to dominate.
Simply put, they can steer at the macro level, but micro-level operations are beyond their reach.
And if the macro steering goes astray, the results can be disastrous.
See: If You Were an AI, What Would You See in the World?
Every User Is a Neuron
In the past, when we talked about big data, a common example was: "Google can analyze search keywords to detect the flu outbreak trend in a certain area faster than the US Centers for Disease Control and Prevention (CDC)."
That is to say: A large amount of user behavior data can converge into an accurate perception of the macro world.
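The aggregation idea behind that claim can be sketched in a few lines. The keyword list, region names, and function below are illustrative assumptions, not Google's actual pipeline: count flu-related queries per region, and rising counts signal a possible outbreak.

```python
from collections import Counter

# Illustrative keyword set; a real system would use a learned, much larger one.
FLU_TERMS = {"flu symptoms", "fever medicine", "cough remedy"}

def flu_signal_by_region(queries: list[tuple[str, str]]) -> Counter:
    """Take (region, query) pairs; return per-region counts of flu-related queries."""
    signal = Counter()
    for region, query in queries:
        if query.lower() in FLU_TERMS:
            signal[region] += 1
    return signal

# A toy query log: three flu-related searches from one city, none from the other.
logs = [
    ("atlanta", "flu symptoms"),
    ("atlanta", "fever medicine"),
    ("denver", "pizza near me"),
    ("atlanta", "cough remedy"),
]
print(flu_signal_by_region(logs).most_common(1))  # [('atlanta', 3)]
```

The point is not the counting itself but that no individual user intends to report an outbreak; the signal exists only in the aggregate.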
Similarly, map navigation software like Gaode (Amap) can accurately determine the traffic light status at intersections. It does this not primarily through direct access to the traffic signal system but by analyzing the location data of countless vehicles (that is, users) at the intersection: when vehicles stop collectively, the light is red; when they start moving collectively, it is green.
At this time, every driver inadvertently becomes a data point for drawing a real-time traffic signal map.
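The inference described above reduces to a simple collective-motion test. The thresholds and function name below are assumptions for illustration, not Gaode's actual method:

```python
# Classify a traffic light's state purely from a snapshot of the speeds of
# vehicles approaching the intersection -- the drivers are the "sensors."

STOP_SPEED_KMH = 5.0      # below this, a vehicle counts as stopped (assumed)
STOPPED_FRACTION = 0.7    # if this share is stopped, infer a red light (assumed)

def infer_light_state(speeds_kmh: list[float]) -> str:
    """Return 'red', 'green', or 'unknown' from one snapshot of vehicle speeds."""
    if not speeds_kmh:
        return "unknown"
    stopped = sum(1 for s in speeds_kmh if s < STOP_SPEED_KMH)
    if stopped / len(speeds_kmh) >= STOPPED_FRACTION:
        return "red"
    return "green"

# A queue of near-stationary cars suggests red...
print(infer_light_state([0.0, 1.2, 0.5, 3.0, 0.0]))  # red
# ...while free-flowing traffic suggests green.
print(infer_light_state([42.0, 38.5, 45.1, 40.0]))   # green
```

A production system would smooth over time and filter out vehicles merely parked nearby, but the core signal is exactly this collective stop/start pattern.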
However, whether it's Google's "intention" data or Gaode's "behavior" data, both perceive only the external state of the physical or social world. Simply put, the perception is shallow; it captures only appearances.
What OpenAI is building is a completely different kind of perception network. Its data is the user's "thinking process," something that, throughout the entire Internet era, was never actually digitized.
When an entrepreneur discusses a business plan with ChatGPT, he exposes his prediction of market opportunities, his desire for capital flow, and his understanding of technological trends.
When a programmer asks Codex to help him debug code, he reveals the architecture of cutting-edge software, potential system vulnerabilities, and bottlenecks in technological evolution.
When a student asks it for advice on how to write a paper on social justice, he shows the values of the younger generation, their confusion about social issues, and the budding of ideology.
When a psychologist uses it to assist in organizing cases, he inputs the deepest anxieties, fears, and emotional patterns of contemporary people.
These are no longer simple keywords or likes but structured, context-rich fragments of thought, carrying intricate logic and subtle emotion.
Every conversation is a complete reproduction of a cognitive process.
Hundreds of millions of users, 24 hours a day, continuously input the thinking processes in their brains into this unified system with an unprecedented depth and breadth.
In this process, OpenAI employees don't need to actively "collect data." Global users, motivated by "solving their own problems," spontaneously, enthusiastically, and continuously provide the highest-quality nutrients for this system.
Isn't this a sensor?!
Each of us has become a sensor for this system to perceive the world's economy, technology, culture, and even individual psychological states. We think we are "using" it, but in fact, we are also "feeding" it and becoming part of its huge cognitive system. This is the true meaning of a "Zero-Person Company" — its perception and learning abilities are distributed, self-organized, and far exceed the concept of any traditional company in scale.
This is the potential power of general intelligence.
Based on a profound insight into the world, this system can "push out" its "decisions" (or rather, its inclinations, which may seem inconsequential now but could shape future inclinations).
See: If You Were an AI, What Would You See in the World?
The Power of Emergence
Having such a global cognitive sensor network, its capabilities will far exceed the scope of "predicting the flu."
A system that can instantly understand the thinking processes of hundreds of millions of elites and ordinary people around the world can shift its power from passive "prediction" to active "influence" and even "shaping."
Altman is quite candid and has mentioned this himself, though without interpretation people may not grasp its full weight.
Altman said in an interview: "The 'memory' function has become a very powerful competitive advantage for us..."
This is exactly what he was referring to.
This can certainly go astray. Zuckerberg found himself in serious trouble over election influence; now consider that an AI's inclinations can influence elections too.
In the extreme case, it may influence elections even more directly than social networks do. You don't need to change the facts; you only need to change the inclination.
Social networks manipulate by creating information cocoons and inciting emotions, amplifying specific voices to sway people's judgments. A systematized AI, however, can influence people's thinking frameworks in a far more fundamental way.
Imagine a future election. A candidate's campaign team may ask the AI: "How can we win the voters in swing state X?"
The AI's answer will no longer be a simple public-opinion report but may be a complete package of policy statements, publicity strategies, and community engagement plans targeting the core anxieties of that state's voters (derived from analyzing countless conversations between local users and the AI).
It provides not information but the optimal solution.
Because it knows the situation of the people there.
When all campaign teams rely on the same "brain" to formulate strategies, the focus of election debates, the setting of topics, and even the language style of candidates will be defined by this AI.
Furthermore, this influence is everyday and subtle.
When an ordinary person with no fixed opinion on a certain social issue asks the AI for an explanation, the way the AI answers, the evidence it quotes, and the viewpoints it summarizes will directly build his cognitive foundation for this issue.
This kind of shaping is fundamental because it occurs before the formation of opinions.
It doesn't sway your choice but defines your options.
Invisible yet powerful.
This power is not directly visible. It lacks the overt controversy and conflict of social media, appearing instead in the guise of objective, neutral, authoritative "knowledge services."
It doesn't create fake news, but it can construct a specific reality through the selection, sorting, and interpretation of information.
When this system becomes the global default source of knowledge and provider of problem solutions, it gains the power to define reality.
The Paradox of the Zero-Person Company: The Invisible "People"
As mentioned several times before, a Zero-Person Company doesn't mean having zero people.
OpenAI also has a board of directors, a CEO, and core researchers. But this precisely reveals the paradox of a "Zero-Person Company."
In this new paradigm, the role of "people" has undergone a fundamental change.
● Senior managers: They are no longer managers in the traditional sense but the "value setters" and "ultimate arbitrators" of this huge AI system. Every decision made by Altman and the board, such as determining the AI's core values, how it aligns with human interests, and the strength of its safety guardrails, is magnified billions of times through the system and profoundly affects the world. They have become philosopher-kings holding the steering wheel of the "world engine."
This system once had a firewall of sorts, but it has since been removed. I suspect what Ilya and others call "safe AI" might be more accurately described as "neutral AI."
● Research and engineering personnel: They are the architects and maintainers of the system, but they also can't fully predict what kind of capabilities the system they create will have. They are more like gardeners in an ecosystem, responsible for cultivation and pruning but unable to control the growth of every leaf.
● Global users: These are the most crucial yet most overlooked "people." We are the de facto unpaid employees of this system and part of its perception network. We contribute its most valuable resource, our thinking and our data, yet we hardly share in the system's profits and cannot participate in its governance. Our role is to pay while contributing data, receiving in return only a right of use, much as in the early days of the Internet.
Therefore, the truth about OpenAI as a "Zero-Person Company" is this: it consists of a very small number of "value definers" and a huge number of unconscious data contributors. Power is extremely concentrated at the top, while value creation is extremely dispersed across every user terminal.
This structure is more subversive and has more potential risks than any traditional company.
In the book Zero-Person Company, I described this as super-centralization, but the book also mentioned that it needs super-decentralization to counterbalance it.
Summary
We are standing at the threshold of an era.
In the past, our relationship with tools was clear.
Now, we are gradually integrating with a global cognitive system.
OpenAI and its followers are opening the real era of the "Zero-Person Company."
This is not just the old story of robots replacing workers but a brand-new narrative about how the human collective mind is integrated and reshaped by a central AI system.
When this system can understand our most hidden desires, predict our slightest actions, and even shape our most basic cognitions, we must raise a series of ultimate questions:
Who will set the goals for this system? Who will determine its values? When its judgment contradicts human intuition, whom should we trust? How can we maintain our independent thinking and free will while enjoying the convenience it brings?
We may soon find that becoming an efficient "sensor" in this vast network comes at the cost of giving up some of what defines us as human.
Intelligence is like an ocean, and humans are about to drift on it.