Why do we resist when AI decision-making outperforms human decision-making?
In 2015, IBM sent its most advanced supercomputer system, Watson, to work as a doctor.
The system had been trained on a vast body of diagnosis and treatment knowledge covering 13 types of cancer (including breast, lung, and rectal cancer). It could read a patient's entire medical record in seconds, run through multiple rounds of judgment, and output a comprehensive, high-quality treatment plan. It sounded great. Yet a few years later, many medical partners had abandoned the project, and IBM sold off Watson Health.
Where was the problem? Not that the algorithm wasn't smart enough, but that the project failed to understand one thing from the start: doctors are not there simply to execute orders.
Similar stories play out every day. The founder of a convenience store chain firmly believed that "every node involving a human reduces efficiency." So he spent an entire year letting an AI "central brain" take over every decision in his stores, from site selection, ordering, and product display to daily operations. In a store near a school, for instance, the system determined that at 3 p.m., just before classes let out, snacks should occupy the most prominent spot, so the clerk received an order to move the snacks to the entrance by 3:05 p.m. or face a fine. The result? Many employees lost any sense of accomplishment and felt reduced to mere tools. Their unhappiness was passed on to consumers, who no longer felt any warmth in the service, and the chain soon lost a large share of its customers.
There is a huge gap between the exponential evolution of technology and the linear growth of human cognition.
This is exactly the core problem that Professor Lu Xianghua of Fudan University's School of Management tackles in her new book AI Revolution: Five Laws of Human-Machine Integration and Symbiosis. As AI becomes ever more powerful, what should humans do, and how should enterprises be managed? To answer this, she proposes five laws of human-machine integration and symbiosis.
Law 1: Optimizing AI's Interaction Capability
Humans naturally distrust black boxes.
There is a famous "bouncer problem" in academia: a nightclub bouncer can reject you for any stated reason, such as "you can't come in wearing canvas shoes," when the real reason might be your skin color. You have no way to verify it, because from outside you can't see whether anyone inside is wearing canvas shoes.
Algorithms work the same way: they can easily hide the real basis for a decision, so users instinctively resist them.
How to solve it? Three keywords: anthropomorphism, transparency, and reliability.
Anthropomorphism is easy to understand. When an AI customer-service agent can recognize your emotions and respond in an appropriate tone, you stop thinking of it as a cold machine. Research shows that anthropomorphic AI is more likely to win trust, and even when the service fails, users are more willing to forgive it.
But there is a trap here: the uncanny valley effect. When an AI is too human-like, users may feel fear and unease. More troublesome still, highly anthropomorphic AI raises unrealistic expectations: if it looks like a human, it should be as smart as a human, right? Once those expectations are not met, the disappointment is doubled.
So the right approach is to adjust the degree of anthropomorphism to the scenario. For customer service handling simple queries, a low level of anthropomorphism is enough; for financial advising involving complex decisions, high intelligence should be paired with a high level of anthropomorphism; for health consultations involving sensitive information, the AI should be "intelligent but not anthropomorphic," because users are more willing to disclose private details to an obvious machine: it won't laugh at them.
Transparency also matters, but more is not always better. Research has found that after an enterprise discloses to employees how AI performance scores are used, employees with low AI scores become less willing to work hard: they treat the low score as an anchor.
Reliability is the bottom line. Anthropomorphism and transparency are just icing on the cake; only reliability determines whether users will keep using a product over the long run. After Google launched its AI search feature, a user asked how to get cheese to stick to pizza and the AI suggested adding glue, because it had learned from an 11-year-old joke on Reddit. As a result, willingness to use the feature dropped to 7%.
Law 2: Actively Managing User Collaboration Behavior
Users' attitudes towards AI often swing between two extremes.
One extreme is resistance. In one telling experiment, researchers set up two groups of customer-service scenarios with the same service staff. One group of users was told "you are talking to a real person," the other "you are talking to a digital human." The result: satisfaction in the group labeled "digital human" was 10 to 15 percentage points lower. Even though real people were providing the service behind the scenes, users simply didn't like it.
This is called "species discrimination." Humans naturally have biases against AI, even if it performs more objectively and fairly.
The other extreme is over-reliance. In March 2018, an Uber self-driving test car in Arizona, with a safety driver on board, struck and killed a woman who was wheeling a bicycle across the road. Police analysis later found that if the safety driver had been watching the road, the car could have stopped 12.8 meters before reaching the victim. But he wasn't watching, because he assumed that with the AI in charge nothing serious could go wrong and he could relax completely.
This phenomenon is called "AI inertia." As machines grow more capable, humans grow lazier. GPS erodes our sense of direction, calculators erode mental arithmetic, and ChatGPT is making creative output more homogeneous. One study found that users' creativity improved significantly during their first five days with GPT, but when GPT was switched off on the seventh day, their creativity fell back to its original level, while the tendency toward homogeneity remained.
Enterprises need to actively manage both extremes. Against resistance, they can strengthen the social attributes of the technology, making AI a "partner" rather than a mere tool; against inertia, they can force users to think, for example by delaying the display of AI suggestions and requiring users to submit their own judgment first.
An experiment in medical image diagnosis is telling: when doctors were required to submit their initial judgment before seeing the AI's suggestion, the rate at which they blindly followed the AI dropped significantly, and the quality of their decisions actually improved.
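To make this "judge first, then reveal the AI" pattern concrete, here is a minimal sketch of how such a workflow could be wired up. It is an illustration only, not the design used in the study or in the book; the JudgeFirstWorkflow class and the ask_human callback are hypothetical names, and the AI model is assumed to be any callable that maps a case to a suggestion.

```python
# Minimal sketch (illustrative, not from the book): the reviewer must commit an
# independent judgment before the AI suggestion is revealed, and both are logged
# so blind-following can be measured afterwards.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Review:
    case_id: str
    human_first: str      # judgment committed before seeing the AI
    ai_suggestion: str    # revealed only after the human commits
    human_final: str      # decision after seeing the AI


@dataclass
class JudgeFirstWorkflow:
    ai_model: Callable[[str], str]        # assumed: any callable mapping a case to a suggestion
    log: List[Review] = field(default_factory=list)

    def review_case(self, case_id: str, ask_human: Callable[[str, str], str]) -> Review:
        # Step 1: collect the human's independent judgment (AI output still hidden).
        first = ask_human(case_id, "")
        # Step 2: only now compute and reveal the AI suggestion.
        suggestion = self.ai_model(case_id)
        # Step 3: let the human revise (or keep) their judgment with the AI visible.
        final = ask_human(case_id, suggestion)
        review = Review(case_id, first, suggestion, final)
        self.log.append(review)
        return review

    def blind_follow_rate(self) -> float:
        # Share of initially disagreeing cases where the human switched to the AI's answer.
        disagreed = [r for r in self.log if r.human_first != r.ai_suggestion]
        switched = [r for r in disagreed if r.human_final == r.ai_suggestion]
        return len(switched) / len(disagreed) if disagreed else 0.0
```

In a setup like this, a metric such as blind_follow_rate would give an organization a direct way to monitor whether AI inertia is creeping back in.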
Law 3: Human-Machine Complementarity: The Secret of 1 + 1 > 2 Lies in the Division of Labor
Let's go back to the story of the convenience store at the beginning. What's its opposite?
Consider a large pharmacy chain that also uses AI, but in a different way. After a pharmacist enters the symptoms, the AI recommends primary drugs, auxiliary drugs, and related medications; the pharmacist can accept the suggestions or search independently. Statistics showed that orders relying entirely on the AI, orders relying entirely on the pharmacist's own search, and orders combining the two accounted for roughly 20%, 37%, and 43% respectively.
Which group had the best result? The human-machine collaboration group: it recommended the most drugs, and consumers accepted its recommendations at the highest rate.
What does this indicate? The value of AI won't be automatically realized. It needs humans to activate it.
The key is the division of labor. Let AI take the lead on highly computable tasks; let humans take the lead on tasks that are highly subjective and require empathy; use human-machine collaboration for complex tasks. In telemarketing, for example, the AI makes calls, collects information, and screens prospects, while human salespeople handle in-depth communication and close deals. Research has found that this division of labor significantly enhances the creativity of the human salespeople.
Beyond dividing the work, humans and AI also need to learn from each other: the AI keeps improving through machine learning, and humans keep growing by collaborating with the AI, forming an upward spiral. Researchers at Tsinghua University have proposed the concept of "AI ability," a framework of the capabilities individuals need in the AI era. The good news is that these capabilities can be trained.
Law 4: Adapting Organizational AI Management Strategies
AI not only changes individuals but also reshapes organizations.
The World Economic Forum predicts that within five years AI will lead to a net reduction of 14 million jobs, but that in the long run it will also create new jobs, roughly 12% by its estimate. The catch is that the people who are displaced cannot easily move into those new jobs.
This means that enterprises need to redesign positions, retrain employees, and redistribute responsibilities.
Responsibilities above all. When AI is involved in decision-making, who is responsible when something goes wrong? The law does not currently recognize AI as a responsible entity, but that does not exempt enterprises from management responsibility. Some platforms simply print on the interface that "this result is automatically generated by AI and is for reference only." That avoids risk in the short term, but in the long run it destroys users' trust in AI.
A better approach is to design AI that shows a willingness to take responsibility. Professor Lu's own research found that when an AI doctor actively points out a user's mistakes about basic medical knowledge, users actually perceive it as more responsible and are more willing to use it. This kind of "ownership" behavior design works far better than a disclaimer.
Law 5: Safeguarding AI's Social Fairness
Algorithmic bias is everywhere. Amazon's experimental recruiting algorithm automatically downgraded resumes containing words like "women's"; in the ImageNet database, labels like "loser" and "criminal" reflect the biases of the annotators; and big-data price discrimination against loyal customers is an algorithm "legally" discriminating against its most regular users.
Where is the problem? There are biases in training data, loopholes in algorithm design, and defects in optimization goals.
The solution is to take measures on multiple fronts. Technically, improve data collection, enhance algorithm transparency, and introduce fairness constraints (a minimal sketch of one such constraint follows below); at the enterprise level, establish AI ethics guidelines and set up review committees; at the societal level, improve laws and regulations and strengthen industry self-discipline.
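To give a sense of what a "fairness constraint" can look like in practice, here is a minimal sketch of one common approach (my own illustration, not a method from the book): logistic regression trained with a demographic-parity penalty that discourages the model from assigning different average scores to the two groups of a sensitive attribute. The synthetic data, the penalty form, and the weight lam are all assumptions made for the example.

```python
# Sketch of a fairness-constrained classifier: logistic regression plus a
# demographic-parity penalty, i.e. the average predicted score should not
# differ much between the two groups of the sensitive attribute `group`.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)          # sensitive attribute (0 or 1)
# Synthetic labels deliberately correlated with the group, so an
# unconstrained model would pick up the bias.
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr, lam = 0.1, 2.0                          # learning rate and fairness weight

for _ in range(500):
    p = sigmoid(X @ w)
    # Gradient of the standard logistic loss.
    grad = X.T @ (p - y) / n
    # Demographic-parity penalty: (mean score in group 1 - mean score in group 0)^2.
    gap = p[group == 1].mean() - p[group == 0].mean()
    s = p * (1 - p)                         # derivative of the sigmoid
    d_gap = (X[group == 1] * s[group == 1, None]).mean(axis=0) \
          - (X[group == 0] * s[group == 0, None]).mean(axis=0)
    grad += lam * 2 * gap * d_gap
    w -= lr * grad

p = sigmoid(X @ w)
print("score gap between groups:", p[group == 1].mean() - p[group == 0].mean())
```

Tightening lam pushes the score gap toward zero, usually at some cost in raw accuracy, which is exactly the kind of trade-off an ethics review committee would need to weigh.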
This is not only a technical problem but also a values problem.
Epilogue
As Lewis Mumford said in Technics and Civilization: "Behind all great material inventions, there is not only the evolution of technology but also the transformation of ideas."
Every technological revolution is accompanied by a contest between humans and technology. It was so in the steam-engine era and the Internet era, and the AI era is no exception. But the endgame has never been about who replaces whom; it has always been about how to coexist.
The five laws in this book (optimizing interaction capability, managing user collaboration behavior, strengthening human-machine complementarity, adapting organizational strategies, and safeguarding social fairness) are not meant to help you fight AI but to teach you how to dance with it. After all, the value of AI ultimately depends on humans to realize it.
This article is from the WeChat official account "Fudan Business Knowledge" (ID: BKfudan). Author: Fudan Business Knowledge. Published by 36Kr with authorization.