
E-commerce platforms witness a “magic showdown”: Sellers use AI-generated fake product images to deceive buyers into placing orders, while buyers use AI-generated pictures of spoiled fruits to deceive sellers into giving refunds.

Machine Heart (机器之心), 2025-08-05 16:52
Rapidly developing, loosely regulated AI has lowered the barrier to fraud for buyers and sellers alike.

AI image generation is being exploited not only by sellers but also by buyers.

Recently, many netizens have shared an absurd tactic: to squeeze a small advantage out of sellers, some buyers falsely claim that products are defective and demand a refund. The "defect" photos are in fact generated with AI, for example turning a perfectly good durian into a rotten one.

Since fruit is impractical to return for verification (even produce that was fine on arrival will likely spoil on the way back, and return shipping is expensive), merchants often have no choice but to refund.

Other products can be returned for inspection, but with low unit prices and a cumbersome returns process, it is often cheaper for merchants simply to refund than to process the return. So when they receive defect complaints, such merchants usually refund in full or offer partial compensation.

Merchants do have a countermeasure: they ask buyers to cut up the "defective" product so it can no longer be used. But now even this measure has been defeated by AI.

We learned from e-commerce sellers that this kind of fraud has a long history. About a decade ago, some buyers used tools like Photoshop to add defects to photos of normal products, but given the average user's editing skills, sellers could usually spot the fake by zooming in. AI-generated pictures are far harder to tell apart.

This seems like a farce of "magic against magic" because on today's shopping platforms, it's not uncommon for merchants to abuse AI, resulting in many buyers receiving products that don't match the pictures.

Merchants use AI to mislead in various ways: generating pictures of non-existent products out of thin air, over-beautifying ordinary products, using virtual models that cannot truly show how clothes fit in order to save costs, and even mass-producing seemingly authentic "buyer shows" and detailed positive reviews.

Some of the buyers mentioned above, beyond chasing small gains, may also be driven by a sense of "retaliatory rights protection": having been deceived by merchants' beautiful AI-rendered pictures and received products that don't match the description, they feel the impulse to "fight fire with fire".

Out of curiosity, we tried several AI tools and found that adding defects to photos of normal products is trivially easy. Although the tools add watermarks to generated pictures, cropping them out is simple.

However, the models still struggle somewhat to render convincingly damaged products.

In response, many netizens have proposed solutions, but each seems to have its own limitations.

For example, some suggest requiring buyers to send back videos of the "defective" products. Whether this works is doubtful: after several attempts, we found that many video-generation tools can also produce very convincing footage, depending on which tool the buyer chooses and how much effort they invest.

Others suggest requiring buyers to take several pictures from different angles. This exploits a major weakness of current AI generation: poor multi-view consistency. But it is at best a temporary patch. AI iterates on a timescale of days; a flaw we find today may be fixed tomorrow.

There is also a suggestion to require buyers to shoot and upload pictures inside the app, with access to the photo album disabled. This blocks direct uploads of AI-generated images, but not the "physical exploit" of photographing an AI image displayed on a second phone, which renders the restriction useless.

In summary, any single form of evidence is extremely easy to forge. A more workable approach may be to require buyers to provide a hard-to-forge, logically complete evidence chain: a multi-angle, full-process video, or a continuous series of pictures covering key steps such as unboxing, cutting, and showing the defect.

We also tried fighting fire with fire: using AI to identify AI. There are many AI detectors on the market, but in our tests the results felt like a game of chance; most of the time the detectors themselves were "uncertain" whether the content was AI-generated.

A more forward-looking technical solution is to introduce digital watermarking and content provenance. The technology community is promoting industry standards such as C2PA, and Google has launched SynthID, a tool that embeds invisible digital watermarks directly into AI-generated pictures, videos, and audio.

This works like a built-in digital ID for AI content, recording key information about its generation and modification. If mainstream AI models are required to add such marks when generating content in the future, tracing and identification will become much easier.
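To make the idea of an "invisible" mark concrete, here is a toy sketch of least-significant-bit steganography, a classic technique for hiding data in pixel values. This is only an illustration of the concept; it is not SynthID's actual algorithm, which is far more robust against cropping and re-encoding.

```python
# Toy illustration of an invisible watermark (NOT SynthID's real method):
# hide a bit string in the least-significant bits of pixel values.

def embed(pixels, bits):
    """Return a copy of `pixels` with `bits` written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to `bit`
    return out

def extract(pixels, n):
    """Read back the first `n` hidden bits."""
    return [p & 1 for p in pixels[:n]]

mark = [1, 0, 1, 1, 0, 1, 0, 1]            # 8-bit watermark payload
image = [200, 17, 54, 90, 33, 128, 64, 7]  # fake grayscale pixel values
stamped = embed(image, mark)

assert extract(stamped, 8) == mark
# Each pixel changes by at most 1, so the mark is invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

The appeal of such schemes is exactly what the article describes: unlike a visible watermark, the mark survives casual viewing and cannot simply be cropped out, because it lives inside the pixel data itself.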

Ultimately, it is rapidly developing, poorly regulated AI that has lowered the threshold for both buyers and sellers to cheat. Individual efforts can hardly prevent it, and it adds extra costs to merchants' after-sales service and buyers' rights protection.

This trust crisis may become a continuous "attack-and-defense battle".

On one side, AI generation technology keeps evolving, making forged pictures and videos ever more realistic; on the other, AI detection technology struggles to catch up, trying to spot forgery by analyzing the microscopic features of images. It is an endless cat-and-mouse game that tests the iteration speed of both sides' algorithms.

Against this backdrop, platforms are exploring combinations of technologies and strategies: strengthening the integrity of the evidence chain to raise the cost of forgery, and giving higher weight to photos taken with the in-app camera that carry original metadata such as timestamps and geolocation.
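Such evidence weighting could be imagined as a simple scoring rule. The sketch below is purely hypothetical; the field names, weights, and threshold are illustrative assumptions, not any platform's actual policy.

```python
# Hypothetical sketch of scoring refund evidence by how hard it is to
# forge. All fields and weights are illustrative assumptions.

def evidence_score(photo):
    """Score one piece of evidence; higher means more trustworthy."""
    score = 0
    if photo.get("captured_in_app"):  # shot with the in-app camera
        score += 3
    if photo.get("has_timestamp"):    # original capture timestamp intact
        score += 2
    if photo.get("has_gps"):          # geolocation matches delivery area
        score += 2
    if photo.get("multi_angle"):      # part of a multi-view series
        score += 3
    return score

strong = {"captured_in_app": True, "has_timestamp": True,
          "has_gps": True, "multi_angle": True}
weak = {"has_timestamp": True}

auto_refund = evidence_score(strong) >= 8    # full chain: refund directly
manual_review = evidence_score(weak) < 8     # thin evidence: human review
```

The design choice here is that no single signal is decisive; only a complete chain of mutually reinforcing signals crosses the threshold, which mirrors the article's point that any single form of evidence is easy to forge.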

Taobao has also previously issued an announcement regulating AI-faked pictures, cracking down hard on the use of such images to deceive consumers and infringe the rights of original brand merchants.

At the same time, using big-data analysis to build user credit models and applying stricter review to accounts with abnormal behavior is a common risk-control measure. And when technical means cannot give a definite answer, independent third-party appraisal services offer a solution that combines technology with human judgment.

In the long run, the industry generally believes that it is crucial to establish a unified and traceable digital content standard. If digital watermarking technology like SynthID can become an industry consensus and be widely applied, it will undoubtedly provide the most effective technical path to solve the current trust dilemma.

Ultimately, the problems caused by technology may still need to be solved by more mature and unified technology.

This article is from the WeChat official account "Machine Heart" (机器之心, ID: almosthuman2014), an account focused on AI. It is published by 36Kr with authorization.