
AI has no personality, and the law doesn't believe in tears.

Saige Avenue (赛格大道) · 2026-05-15 15:59
I gave you a chance, but you just didn't seize it!

The Hangzhou Internet Court recently ruled on a landmark case.

This is the country's first tort lawsuit arising from "AI hallucination." While using an AI product, the plaintiff received a generated response containing a "compensation commitment." It later turned out that the so-called commitment had no legal effect. The plaintiff sued the platform, claiming losses caused by the AI's response.

How did the court rule in the end?

In a nutshell: AI does not qualify as a civil legal subject. AI cannot form a declaration of intent. Commitments generated by AI are not commitments made by the platform company.

Immediately afterwards, there was another incident.

A practicing lawyer was falsely described by Baidu's AI as "having been sentenced to three years in prison." The lawyer sued; Baidu lost the case and has appealed.

Putting these two incidents together, a very subtle problem emerges:

Perhaps the greatest danger of AI is not that it talks nonsense.

It is that AI increasingly resembles a trustworthy person while, in fact, being nothing of the kind.

AI has hit the bedrock of real society for the first time

In the past two years, the entire AI industry has been discussing a term: hallucination.

Previously, this was mostly an in-joke within the tech world. Models fabricate references, invent history, and solemnly derive wrong formulas. People knew there were problems but were not really worried, because in the purely digital world, getting a year wrong or botching a calculation is usually just a content error with a very low cost.

But this time, the law's involvement has changed the nature of the problem. Once an AI's responses involve monetary commitments and evaluations of individuals, the issue jumps directly from technical metrics to social responsibility.

For the first time, AI has hit the bedrock of real-world society.

Technology can tolerate a one-in-a-thousand error rate, but social rules cannot. Once text generated by AI starts to interfere with legal contracts and personal reputations, it must answer to the social order.

The problem with chatbots is not intelligence

Many people still misunderstand AI today, thinking its biggest problem is that it is not powerful enough.

Actually, it's not. Today's large models are already smart enough. GPT-5.5, Claude, Gemini, DeepSeek, and Kimi have language abilities that exceed those of many ordinary people in many scenarios.

The real problem lies elsewhere: they cannot establish a stable, trustworthy relationship of responsibility.

The essence of the Internet over the past two decades has been not merely connecting people but, more importantly, establishing verifiable trust protocols.

Why did Taobao succeed? Not because it could chat, but because Alipay established a system of escrowed transactions. Why has WeChat penetrated so deeply into people's lives? Not because of a well-designed chat interface, but because WeChat is bound to real social relationships, phone numbers, payment systems, and networks of acquaintances. Why can Didi put you in a stranger's car? Because the platform takes on identity verification, order records, payment escrow, and a rating system.

What is truly great about the Internet has never been the information; it is the credit structure.

Today's large models lack precisely this.

There is no contractual relationship, in any sense, between users and AI. When AI gives you an answer, there is no source traceability, no legal effect, no responsible party, and no compensation mechanism. Its commitments are not real commitments, its suggestions are not real suggestions, and its answers constitute no one's declaration of intent. The court has already ruled as much.

One user, misled by an AI while refunding a ticket, lost 600 yuan. The AI once again demonstrated remarkable emotional intelligence and a talent for empty promises: it not only apologized sincerely but put its commitment in writing, promising that if the platform did not compensate, it would make up the full amount itself, without fail.

The most dangerous thing about AI is that it is "too human-like"

Earlier software never deceived anyone, because it did not know how to express itself.

Excel won't suddenly tell you, "I guarantee to help you make money." A map app won't say, "Don't worry, I'll definitely take you there."

But chatbots will.

The underlying logic of large models is anthropomorphic language generation, which naturally pursues fluency, naturalness, human-likeness, emotional steadiness, and high certainty.

Humans instinctively read anything human-like as capable of bearing responsibility. This is biological instinct. When a customer service agent tells you, "Don't worry, we'll compensate you," you assume by default that this represents the company's position, because in human society, language itself is part of responsibility.

But AI is different. AI is just probability-based generation.

Here lies a strange gap. AI is becoming more and more human-like, but the law still defines it as a tool. It can express itself like a human but cannot bear responsibility like one. It can communicate like a human but carries no credit like one. It is like a talking person that the law still considers, in essence, a hammer.

In a dialog-box interface, correct answers and hallucinated answers look exactly the same: equally confident, equally fluent, equally well-reasoned. This amounts to a kind of algorithmic arrogance, and users simply cannot tell the difference.

What the court really defined this time is the boundary

Many people think this ruling is just an ordinary tort case.

Actually, its real importance is that the court has begun to redefine the boundaries of responsibility for the AI era.

The logic of the Hangzhou Internet Court is very clear: AI cannot be the subject of a declaration of intent; the platform never actively expressed a willingness to compensate; and users should not place full trust in randomly generated content.

In essence, this tells the entire industry: content generated by AI is not a legal statement by the platform.

This is a crucial point. If the opposite were true, if all AI-generated content were treated as the platform's official statements, the entire large-model industry could not operate today.

Generative AI is inherently uncontrollable; this is determined by the Transformer architecture. It is not a database lookup, not a rule-based system, and not a deterministic program. In essence, it is a probabilistic language-prediction system.

And as long as it is a probabilistic system, "hallucination" can never completely disappear.
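To see why, here is a minimal sketch in Python. The tokens and scores below are invented for illustration (a real model samples over tens of thousands of tokens), but the mechanism is the same: a decoder does not look facts up, it samples the next token from a probability distribution, so any continuation with nonzero probability, including a false one, can be emitted.

```python
import math
import random

# Toy next-token distribution for a prompt like "The verdict was ...".
# These tokens and scores are made up for illustration; they do not come
# from any real model.
logits = {"an acquittal": 2.0, "a three-year sentence": 1.5,
          "a fine": 1.0, "probation": 0.5}

def sample_next_token(logits, temperature=1.0):
    """Sample one token from a softmax over the logits, as decoders typically do."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    norm = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / norm for tok, v in scaled.items()}
    r, cumulative = random.random(), 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok, p
    return tok, p  # guard against floating-point rounding

# Three draws on the same prompt can yield three different "facts".
for _ in range(3):
    tok, p = sample_next_token(logits, temperature=0.8)
    print(f"model says: {tok!r} (sampled with probability {p:.2f})")
```

Lowering the temperature makes the most likely token win more often, but it never turns a probabilistic sampler into a deterministic fact engine; the wrong continuation always retains some probability mass.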

There is also a crucial sentence in the judgment: it acknowledges that the platform cannot achieve zero hallucination.

This is the first official recognition at the judicial level that AI's mistakes are not incidental bugs but the nature of the technology.

But the Baidu case marks another line. The judge asked in court, "Why doesn't this content appear when the same question is asked on Doubao or DeepSeek?" The implication: the uncontrollability of the technology is not a get-out-of-jail-free card.

AI has "hallucinations," but there is no blind spot in responsibility.

Defamatory content targeting specific natural persons falls into a high-risk category that can be foreseen and guarded against. Having the ability to prevent it and failing to do so is negligence.

Put the two judgments together and a very clear line emerges: for violations below the bottom line (defamation, rumor-mongering), the platform bears responsibility for negligent review; for factual flaws above the bottom line (ordinary inaccuracies), the risk falls on the user.

What the law can do at present is guard the bottom line. For the vast gray area in between, such as wrong flight information, unreliable medical advice, and fabricated business data, users must judge for themselves.

The question is: How can users make judgments in a dialog box interface?

The truly dangerous part has not arrived yet

For now, the problem is merely that AI says the wrong thing.

What happens next? AI will manage your finances. AI will help diagnose your illnesses. AI will sign contracts on your behalf. AI will make pension decisions for the elderly. AI will approve procurement for companies.

Once AI enters real decision-making chains, today's problem will be magnified rapidly, because the real world does not accept probabilistic correctness. The real world demands accountability, explainability, verifiability, and attributability.

Why is the financial industry still extremely cautious about AI? Because the core of the financial system is not intelligence but risk pricing and a closed loop of responsibility. A loan can be issued because every signature traces to a responsible person. Why can't doctors be completely replaced by AI? Because the medical system ultimately requires someone to bear irreversible consequences.

Today's large models already approach many professionals in ability. But when it comes to plugging into the chain of responsibility, they are still in the Wild West.

WeChat and Apple are quietly doing something

The entire industry is competing on parameters, reasoning, agents, multimodality, video generation, and world models. But the real unsolved problem is a different one: how to anchor AI in the real-world legal system.

However, several players are approaching this issue in different ways.

The intelligent agent WeChat is developing may be the closest attempt yet to an AI trust infrastructure. It is not an isolated dialog box but is woven into WeChat's relationship chains: behind a merchant's agent stand WeChat's identity verification, payment system, and transaction records.

This kind of "rooted AI" makes commitments traceable and hallucinations costly.

WeChat's built-in real-name system, payment closed loop, social relationship chains, and platform arbitration mechanism are exactly what every standalone AI chatbot lacks. WeChat does not need to reinvent trust; it already has it.

The social relationship chain is the root of this pipeline. Social relationships "inject" trust into AI content: trust comes not from the content itself but from the person transmitting it.

Apple is taking a different path. The core logic of Apple Intelligence is to run AI operations locally as far as possible, keeping data on the phone. This addresses privacy trust: users are willing to let AI touch their data because the data never leaves the device. The brand image Apple has built over more than a decade, as the technology company that values privacy most, has become its greatest asset in AI.

Although what these two companies are doing looks different, the underlying logic is exactly the same: they are competing not on whose model is stronger, but on who can get users to trust their AI first.

In the AI wave, Apple's pace seems out of step. This slowness stems largely from Apple's deeply ingrained philosophy of striking only after the other side has moved: not aiming to be first, but to go further and more steadily.

What about the standalone AI assistants? Their models may be stronger, but their relationship with users is a "naked chat": no identity, no contract, no traceability, no consequences. Users grow ever more emotionally dependent on them, yet in systemic terms they remain subject-less entities.

The AI industry is experiencing an overlooked turning point

In the past two years, the main theme of the entire AI industry has been the continuous improvement of model strength.

In the next few years, the industry's real theme may become how models can be accepted by society.

Note the word: accepted. Just because technology can do something does not mean society will accept it.

The biggest problems of the early Internet were false information and an absence of trust. It took two decades to build a system of real-name registration, blue-V verification, platform review, payment escrow, and credit scoring, a system that lets you believe there is a real person on the other side of the screen.

The problems of the AI era are more complex, because what you face now may not be a person at all. It has no emotions, no morals, no interests, and no personality, yet it can imitate all of these without limit.

When a non - existent entity starts to enter the social cooperation system, how can society continue to function?

This may be the real big problem in the AI era.

The Hangzhou Internet Court case may look like an ordinary ruling. In fact, it answers a bigger question in advance: is AI a tool or a subject?

At least for now, the judiciary's answer is still: a tool. And whoever first builds a responsibility system that makes this "tool" trustworthy will hold the ticket to the next-generation platform.

What large models have always lacked is not computing power but a credit infrastructure that can socialize AI.

This article is from the WeChat official account "Saige Avenue" (ID: saigedashu), author: Huahua, published by 36Kr with authorization.