OpenAI Brings Ads to ChatGPT for Free Users and ‘Go’ Subscribers
George Hammond and Cristina Criddle, reporting for the Financial Times:
OpenAI is introducing advertising on ChatGPT, as the $500bn start-up seeks new sources of revenue to fuel its continued expansion and fend off fierce competition from rivals Google and Anthropic.
The San Francisco-headquartered company announced on Friday that it would begin testing adverts on its free chatbot and cheapest paid offering.
The ads will appear at the bottom of ChatGPT answers in the coming weeks. OpenAI said the marketing messages will be clearly labelled and will appear if relevant to the query.
Sam Altman, OpenAI’s chief executive, in 2024: “I kind of think of ads as like a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world, like, access to great services, but if we can find something that doesn’t do that, I’d prefer that.”
It’s safe to say that OpenAI hasn’t found that in the year since Altman gave this quote. But unlike most others in the artificial intelligence space, who took Friday’s news as proof that Altman was lying in this interview, I don’t think he was. I really do think OpenAI hasn’t found a way to monetize its users and is now looking for a way to make it big in an initial public offering. Today, ChatGPT loses money on every free user, and its only business strategy to stop hemorrhaging cash has been upselling people on $20 and $200 monthly subscriptions. That freemium business model doesn’t work for a company that needs to turn a profit; there’s no room for losing money.
Come to think of it, OpenAI really isn’t doing great these days. The company is under tremendous legal pressure after a series of high-profile deaths involving ChatGPT. The working population of the United States has largely soured on AI, despite using it heavily, wary that it might contribute to job loss. The tide is turning on AI, and people are directing their attention to its negative effects. Meanwhile, Anthropic’s Claude Code and flagship Claude Opus 4.5 model have demanded attention across the internet. People are vibe-coding — using AI to write software with little to no human intervention — various apps, games, and websites for themselves, then raving about the results on social media. Fortune 500 companies run on Claude these days — even Apple uses it internally, and even Elon Musk, one of the most antisocial figures in technology history, can’t help but admit Anthropic has done something incredible in the professional AI space.
On the consumer side of the market, Gemini has shot up in the rankings. As of mid-January, it claims the top spots on the LMArena benchmark, a website that pits models head-to-head against each other and asks users to choose the response they prefer. Gemini’s pre-training technology is simply best-in-class, and both Gemini 3 Flash and Gemini 3 Pro are leading frontier models that put OpenAI’s GPT-5.2 to shame. Where Anthropic has a distinct advantage in post-training, Google is superior in pre-training. GPT-5.2 has decent pre-training — though seemingly nowhere close to Gemini’s — but OpenAI’s post-training is horrible. GPT-5.2’s responses sound like a human resources video about sexual harassment in the workplace: consistently awful and just plain unhelpful. Gemini doesn’t have as much of a personality as Claude, but it makes up for it in pre-training, and it shows. Comparing Gemini’s responses to ChatGPT’s is just embarrassing for Altman’s company.
OpenAI hasn’t been at the top of its game since at least the launch of OpenAI o1, the first reasoning model from the GPT-4o line of large language models. That was the last groundbreaking, class-leading model from OpenAI — it was the first model trained to reason through a chain of thought before answering, and the result was significantly better responses. Gemini was lackluster at the time, and programmers flocked to o1 before Claude Code, Anthropic’s command-line coding tool, came along. Since then, OpenAI has been on a downward trajectory, clearly visible in both its model quality and its apparent desperation to turn a profit. So now we’re here, at the beginning of 2026, and OpenAI has announced two last-ditch efforts to make more money ahead of an IPO: the new ChatGPT Go plan and advertising. (ChatGPT Go was previously available in poorer countries, like India, as a cheaper way to get more usage. The fact that it’s now available in the United States is either a recession indicator or proof that OpenAI isn’t doing well financially.)
I don’t think this is going to go well for OpenAI. When the AI bubble inevitably bursts, I imagine the company will plunge into bankruptcy. The “bubble” I speak of is more of a financial situation than a cultural one. Currently, the AI industry effectively runs on glorified gift cards. Microsoft gives OpenAI a gift card for free Azure credits to run ChatGPT. Nvidia gives OpenAI a gift card to buy more graphics processing units. OpenAI gives Cursor a gift card for access to ChatGPT. I think the Cursor example is perfect for this: neither company is profitable. When this bubble bursts, every company that isn’t already profitable will go bankrupt, since the gift cards will all expire. At some point, OpenAI will need to pay Microsoft real United States dollars to get access to servers. Microsoft will need to pay OpenAI real money to use ChatGPT models in Copilot. This is not very good news for the severely indebted OpenAI, which is working with way too many gift cards and way too little cash.
Here’s how I imagine the next five years will go: OpenAI will IPO, the bubble will burst, its stock will crash, and the company will become a pile of patents and debt. Some company, whether it be Apple, Google, or someone else, will acquire OpenAI’s debt and technology, roll them into its existing model infrastructure, and OpenAI will be reduced to a Wikipedia page we tell children about in the year 2042. I have no reason to believe this won’t happen. OpenAI’s actions make it abundantly clear that it is desperate for cash, and companies in a mad scramble for revenue inside a bubble don’t tend to fare well outside of it. Meanwhile, Google is profitable, and while its AI ventures are loss leaders, Google Search is well equipped to take ChatGPT’s place. As a secondary and less ambitious prediction, I believe the separate Gemini website and app will be integrated into Google’s AI Mode, which would replace the “AI Overviews” that currently have primary placement in Google Search.
Google really does seem primed to take over the consumer AI market. Its deal with Apple to handle pre-training for the models that power Apple Intelligence is a potential source of revenue — similar to the infamous search deal between the two companies — and AI Mode increasingly looks like the future of consumer AI. Google is quite literally the most popular website on the internet. With the way Google DeepMind keeps chugging away at pre-training, Gemini will eventually become so good at web search and agentic work that, combined with Google Search’s prominence and its deals with other websites, it will become the primary way people interact with generative artificial intelligence on the web. Meanwhile, Anthropic seems ready to capitalize on the enterprise and professional markets — the company makes most of its money selling enterprise subscriptions to Claude. I don’t imagine that’ll change. OpenAI is nowhere in this equation.