GPT-Image 2 has let "pictures without truth" run rampant. The erosion of ethics is more alarming than the technological breakthrough itself.
A forged screenshot of a breaking-news alert sent Kingsoft Software's stock price plummeting. A fake official-announcement image of "Tim Cook joining Xiaomi Auto" spread like wildfire across social platforms; even though Xiaomi executives quickly refuted the rumor, its spread was hard to contain. A synthesized video of "Yu Chengdong and Lei Jun brawling on a livestream" fooled many netizens with its realistic lighting and facial expressions.
OpenAI's GPT-Image 2 has pushed the craft of "faking it to perfection" to a new level, but it has also turned the quip "seeing is not believing" into a social problem we must confront head-on. When technology races ahead while ethics lag in disarray, even the most dazzling innovation can become a tool for sowing chaos.
01
The gap between GPT-Image 2 and previous-generation AI image generators is not merely a matter of "better image quality".
In the LMSYS Image Arena evaluation, it topped the leaderboard with 1512 points, 242 ahead of second-place Google Nano Banana 2 and the largest margin ever recorded on the list. Its core breakthrough lies in solving two key problems. First, text-rendering accuracy has jumped from 90%-95% to over 99%: non-Latin scripts such as Chinese no longer come out garbled, and even micro-engraved regular-script characters on the tip of a metal needle are clearly legible. Second, a "thinking mode" lets the model decompose a task, search the web, plan the layout, and then self-review and correct errors before generating the image, sharply reducing the failure rate on complex spatial reasoning.
More alarming still, it supports native 4K output and generates images six times faster than before: an ordinary user can obtain a realistic poster, certificate, or news screenshot from a single sentence in about 3 seconds, collapsing the barrier to forgery.
A revolutionary advance that should have been a powerful productivity tool has been quickly twisted in an environment lacking constraints. From entertainment memes to malicious rumors, the boundaries of GPT-Image 2's abuse keep being pushed further.
Netizens used it to forge a corporate announcement of "Xishanju's dissolution"; its realistic details and faithful imitation of mainstream media formatting rattled the capital market and ultimately triggered legal liability. A woman in Anhui used AI to generate a picture of a "homeless man sitting in a restaurant" to test her husband, prompting an emergency police response and wasting public resources. In a residential community in Guangdong, a property owner's child forged a picture of a "homeless man breaking into the home", spreading panic among every resident in the building.
There are also more covert commercial frauds. Some e-commerce sellers have found that consumers use AI to generate fake pictures of spoiled goods to claim "refund only", making it nearly impossible to tell genuine claims from false ones during disputes. Some education institutions use AI-generated group photos of "successful families" to fabricate personas and sell overpriced courses, with flaws parents can hardly detect.
These cases all confirm one thing: when the cost of forgery approaches zero, the ethical bottom line is easily trampled, and technological dividends are quickly swallowed by risk.
02
The prevalence of "seeing is not believing" threatens to shake the foundation of trust on which society operates.
At the individual level, AI-generated fake indecent photos and forged chat-log screenshots can bring unwarranted disaster to ordinary people, who suffer reputational damage yet struggle to defend their rights. At the business level, false negative-news images and forged pictures of product defects can destroy a company's reputation in short order and trigger stock-price swings. At the social level, AI-generated pictures of disaster scenes or mass incidents can inflame public panic, stoke social conflict, and disrupt public order. GPT-Image 2 can also convincingly forge legally significant documents such as ID cards, business licenses, and transfer records, abetting crimes like fraud and extortion.
Although the "Measures for the Identification of AI-Generated and Synthetic Content" have taken effect, a large volume of AI-generated content still circulates online without creator labels or platform notices; some software even lets users pay to remove watermarks, making supervision harder still.
Facing these challenges, enterprises and platforms cannot simply look the other way. Companies building image-generation technology should take responsibility for governance at the source and embed ethical constraints into the design itself: enforce tamper-resistant C2PA digital watermarks and explicit labels that even paying users cannot remove, and establish a content-review mechanism that screens generation requests involving celebrities, enterprises, and public events to close off channels for malicious forgery.
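For readers curious how such provenance labels work in practice: C2PA embeds its manifest store in JPEG files as JUMBF boxes carried in APP11 marker segments, so the mere presence of a manifest can be checked by scanning those segments. The sketch below (the function name is our own) is a rough presence heuristic only; it does not validate the manifest's cryptographic signature.

```python
import struct

def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristically check whether a JPEG carries a C2PA manifest.

    C2PA stores its manifest in JPEG APP11 (0xFFEB) marker segments
    as JUMBF boxes, and the manifest store's JUMBF label is "c2pa".
    This scans APP11 payloads for that label -- a rough presence
    check, not a signature verification.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # lost sync with marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):              # EOI or SOS: no more metadata
            break
        # Segment length field counts itself (2 bytes) plus the payload.
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False
```

Real-world checking should of course validate the signed manifest with dedicated C2PA tooling (e.g., the open-source c2patool) rather than a byte scan, which a forger could trivially spoof or strip.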
Social and content platforms, for their part, need to upgrade their detection technology and proactively label suspected AI-generated images and videos; build layered safeguards such as data isolation and audit logs, drawing on enterprise-grade security architectures; simplify the reporting process; and stiffen penalties for accounts that maliciously spread false AI content.
The reason is simple: technological innovation cannot come at the cost of public safety, and corporate social responsibility is itself a core competitive advantage for long-term growth.
For users, surviving the "post-truth era" requires a new media literacy: suspicion by default. On seeing any screenshot, photo, or "live video", the first instinct should be to trace its origin, not to forward it. Verifying through official channels, cross-checking multiple sources, and zooming in on details should become routine. Those who generate images with AI should also understand the legal boundaries: maliciously forging and spreading false AI content can lead to administrative detention or even criminal punishment.
GPT-Image 2's technological breakthrough deserves applause: it makes the democratization of design possible and brings an efficiency revolution to the creative industry. But we must recognize clearly that technology itself is neither good nor evil; its value depends on the boundaries of its use and the ethical bottom line.
When the old certainty of "seeing is believing" is shattered, we cannot let false information run rampant, nor let technological progress come at the cost of social trust. After all, however powerful a breakthrough, once it loses its ethical constraints it will eventually backfire on society.
Let the AI image generation technology truly serve humanity, rather than becoming a tool for creating lies - this is the bottom line we must hold.
This article is from the WeChat official account "Jiemian News". Author: Song Jianan. Republished by 36Kr with permission.