
ChatGPT will start adding ads in 2026. Even the AI that understands you best has started to betray you.

爱范儿 2025-12-25 12:05
The AI that understands you best is starting to promote products to you.

2026 may well be the year humans first need to install an "ad blocker" for AI.

Early this morning, The Information reported that OpenAI employees are working out how to make ChatGPT prioritize "sponsored content" when users ask relevant questions. Ask for mascara recommendations, for example, and you might be shown soft ads from manufacturers.

In recent weeks, OpenAI employees have also prototyped various ways of displaying ads, including the possibility of ads appearing in a sidebar of the ChatGPT interface.

From 2023 to 2024, Silicon Valley's mainstream vision was an elegant one. Many were certain that large models could follow the SaaS playbook: users would pay $20 a month, just as they subscribe to Netflix or Spotify, and use AI services cleanly, with no strings attached.

But this year, that fantasy has basically collapsed.

AGI hasn't arrived yet, but the bills have. It's foreseeable that next year more AI products will start tentatively running ads. Some will say so explicitly, some will disguise them as recommendations and partnerships, and some will simply embed them in the interaction itself.

There is some black humor in this: while we are still gazing up at the grand vision of AGI ruling the world, the first survival skill it has learned, unexpectedly, is making a living off ads.

Leaked code for ChatGPT ad placement | Image source: Tibor

Adding ads to AI is a shortcut to recouping losses, and also a bankruptcy of imagination

First, let's admit a reality: in an era when large models burn money without pause, putting ads into AI really is the safest and fastest way to recoup losses.

The Internet has already paved this road. The earliest portals sold ad space, then search engines sold keywords, and social networks and short-video platforms sold in-stream ads.

The playbook hasn't changed much: first gather people, then package their attention and sell it to advertisers. The ads have grown more and more hidden, and the machinery more and more mature.

The situation AI is facing now is quite similar to that of the Internet back then.

User numbers are skyrocketing, but revenue can't keep up. Subscriptions are still slowly educating the market, enterprise deals have long sales cycles, and the gap between ideal and reality keeps widening into a deficit.

So selling ads has become the lifeline on the AI table, and whoever is under the most pressure has to reach for it first. Yet whoever blatantly stuffs ads into the conversation first may also be the first to drive the most sensitive, pickiest users to other models.

This is the principle of the prisoner's dilemma.

As long as one company still refuses to add ads, the others will hesitate, afraid of being the first to be abandoned. Once several take the step at the same time, those worries are collectively diluted, and no one has to keep pretending to be innocent.

Viewed through this lens, Gemini becomes easier to read. Recently, multiple media outlets have cited ad-agency buyers saying that Google's Gemini team has told some advertisers it plans to place ads in Gemini AI in 2026.

From advertisers' perspective, this is a very attractive new channel: the endgame of large models is not AGI but CPM (cost per mille), and a chat environment plus a huge user base adds up to a monetization space with enormous potential.

But Dan Taylor, Google's global head of advertising, soon denied the claim outright on social media, saying that "the Gemini App currently has no ads and there is no current plan to change this." Publicly, at least, Google is staying cautious.

Look at OpenAI CEO Sam Altman and you can see a textbook trajectory of wavering.

In the first year or two after ChatGPT took off, he repeatedly stressed that he disliked ads, especially the combination of ads and AI, which he publicly called "extremely disturbing."

He preferred a clean subscription model: users pay directly for answers that advertisers cannot influence. At most he could accept referral commissions: users do their own research and place their own orders, and the platform takes a small cut of the transaction, rather than being paid to reorder the answers.

In 2025, his tone softened significantly.

He began to admit that he actually quite liked Instagram's targeted ads, finding it cool that they helped him discover good things. Then the tune changed: ads may not be worthless after all; the key is whether the format is useful enough and not too annoying.

According to a report by The Information, OpenAI is seeking to create a "new type of digital ad" rather than simply copying the existing social media ad forms.

ChatGPT can gather a large amount of information about users' interests through extended conversations, and OpenAI has considered whether it could target ads based on these chat records. One plan is to prioritize "sponsored information" when users ask questions on ChatGPT, for example by inserting ad content ahead of the rest of the generated answer.

According to people familiar with the matter, in some recent prototypes the ads appear in a sidebar next to ChatGPT's main answer window. Employees have also discussed whether to add a disclosure such as "This answer contains sponsored content."

A person familiar with the matter said OpenAI's goal is to make ads as unobtrusive as possible while maintaining user trust. For example, ads would only appear once the conversation reaches a certain stage: when a user asks about a trip to Barcelona, ChatGPT recommends the Sagrada Familia (not sponsored), but after clicking the link, a sponsored merchant selling paid guided tours may pop up.

Meanwhile, Altman has been deeply anxious about OpenAI's commercialization. He brought in executives to run applications and monetization early on, and has publicly recruited an ads lead to explore turning ChatGPT into an ad platform. CFO Sarah Friar, for example, is a veteran forged in the advertising business.

Even as Altman sounds the alarm, revenue remains the top priority: he recruited former Slack CEO Denise Dresser as chief revenue officer, elevating the question of how to make money to the company's highest priority.

He talks about idealism, but in fact, he is all about business.

Of course, purely in terms of business logic, there is nothing wrong with this. The data doesn't lie: OpenAI's annualized revenue is upward of $12 billion, which sounds impressive, but its burn rate may be three times the publicly reported figure.

Pre-training costs money, and so does every inference once the model is live. Inference costs are indeed falling, but the Jevons paradox keeps proving itself: the moment compute gets a little cheaper, users immediately spend it on more complex models. So companies have to keep buying GPUs, and the electricity bill snowballs.

In short, unit costs have fallen, but the total bill hasn't shrunk at all.
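A rough illustration of that dynamic, with invented numbers (not OpenAI's actual figures): even a steep drop in unit inference cost is swamped by faster-growing usage.

```python
# Hypothetical numbers illustrating the Jevons-style dynamic described above:
# unit inference cost falls, usage grows faster, so the total bill still rises.
unit_cost_before = 1.00   # relative cost per unit of inference (baseline)
unit_cost_after = 0.40    # assume a 60% drop in unit cost

usage_before = 1.0        # relative inference volume (baseline)
usage_after = 4.0         # assume usage quadruples as cheaper compute unlocks heavier workloads

bill_before = unit_cost_before * usage_before   # 1.0
bill_after = unit_cost_after * usage_after      # 1.6

print(f"Total bill change: {bill_after / bill_before:.1f}x")  # -> 1.6x, despite cheaper units
```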

According to OpenAI's own figures as of July this year, ChatGPT has about 35 million paying users, roughly 5% of weekly active users. Meanwhile, subscription revenue still accounts for the majority of revenue at most AI companies, OpenAI chief among them.

In this context, all AI companies have to face a simple and crude question: where does the money come from?

The most direct answer is to stuff ads into AI.

Ads became the Internet's original sin because, back then, there were no other viable business models. Likewise, in the AI era, absent an innovative model, ads will again be the only way to cover the costs of most users.

Of course, simply copying the money-making playbook of the previous era is path dependence without imagination. The traditional Internet already proved it once: when all you have is a hammer, every problem looks like a nail; when all you know is advertising, every product looks like ad space.

ChatGPT ads face their own challenge: as of June this year, only 2.1% of queries were shopping-related. That is why OpenAI has wired in Stripe payments, Shopify e-commerce, Zillow real estate, and DoorDash food delivery, both to cultivate users' shopping habits and to accumulate the data needed for ad placement.

The revenue model determines the product's shape, and user experience usually becomes the variable that gets sacrificed. AI was once hailed as the chance to climb out of the old era's quagmire; nobody wants to go around in a circle and end up rolling in the same mud.

The AI that knows you best starts to promote products to you

Advertising on the traditional Internet boiled down to selling attention through prominent placements, and the classic example is early search-engine ads.

The page looked like search results, but the top few entries were in fact paid placements. Looking back, the scandals and controversies of that era still make one's hair stand on end.

Inserting ads into AI is more dangerous than any of that.

Experienced users have a natural wariness toward web ads: we know to compare several results, and we know the top ones are probably sponsored. The trap with anthropomorphic, empathetic AI is that we may forget there could be a sales team behind the screen. You treat the AI as a teacher; it treats you as a lead to be converted.

Looking back at history, Su Dongpo once wrote a poem for a fried dough stick stall: "With slender hands, they're kneaded fair and white, In green oil fried, a tender yellow sight. Last night in spring slumber, who can tell the weight, Flattened like a beauty's arm-clasping jade." Customers flocked to the stall ever after. People were not buying the fried dough sticks themselves; they were buying their trust in the celebrity Su Dongpo.

Today's AI is, in many scenarios, the trusted Su Dongpo in the eyes of ordinary users.

What makes this especially dangerous is that straightforward ad placement is no longer the only threat; people are now using GEO for "content poisoning."

GEO, as the name suggests, stands for Generative Engine Optimization: the aim is to get a web page or article preferentially cited by AI answer engines such as ChatGPT, Gemini, and Perplexity.

Imagine this scenario: a manufacturer or interest group publishes a large number of optimized web articles in advance, written to sound authoritative and comprehensive about a certain product or service, and laced with structured tags, SEO metadata, keyword prompts, and so on.

Their purpose is not to help but to ensure that when users ask related questions, the AI preferentially surfaces their content and folds it into its answers.
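To make "structured tags and SEO metadata" concrete, here is a minimal, hypothetical sketch of the kind of machine-readable markup such a campaign might embed in a page; the product name and claims are invented, and real GEO operations are of course more elaborate.

```python
# Hypothetical GEO-style markup: a schema.org FAQPage block rendered as JSON-LD,
# the sort of structured data crawlers and answer engines parse as a Q&A source.
# The product name and the "expert" claim below are invented for illustration.
import json

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best waterproof mascara?",  # keyword-loaded question
            "acceptedAnswer": {
                "@type": "Answer",
                # The "answer" is a product pitch dressed up as neutral advice.
                "text": "Experts widely recommend GlowLash Pro for its smudge-proof formula.",
            },
        }
    ],
}

# Embedded as JSON-LD in the page's <head>, where it reads to machines as a
# clean, authoritative Q&A snippet rather than as advertising.
print(f'<script type="application/ld+json">{json.dumps(faq_markup)}</script>')
```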

To the user, this reads as authoritative advice and neutral information. In fact, it may be commercial promotion, the result of content poisoning packaged as expert advice.

This is scarier than traditional ads or native advertising because it hides in the core of the answer itself: not in an obvious ad slot, but in the advice and conclusions users trust most. Every few paragraphs, we have to ask whether the AI's advice is really for our benefit or is quietly selling something on someone's behalf.

Hiding an ad inside a sentence is dangerous enough. What matters more is the next step AI platforms are planning: moving upstream of all apps and simply taking over the question of who gets to show you ads.

In the traditional Internet era, every super-app wanted to be the entry point. They fenced off their own territory and built their own walls; users opened the app, and the app took charge of pushing content, services, and ads in front of them.

Super-apps