Anthropic’s Super Bowl campaign wasted no time picking a fight.
One of four ads released Wednesday opens with the word “BETRAYAL” flashing across the screen. The scene shows a man asking a chatbot clearly meant to resemble ChatGPT for advice on how to talk to his mom. The bot, portrayed by a blonde woman, offers earnest suggestions: listen more, take a walk in nature. Then it abruptly pivots into an ad for a fictional cougar-dating site called Golden Encounters.
Anthropic ends the spot with a pointed message: ads may be coming to AI, but not to its chatbot, Claude.
Another commercial follows a skinny young man seeking help getting a six-pack. After he shares his height, age, and weight, the bot responds not with fitness advice, but with an ad for height-boosting insoles.
The message is hard to miss. The ads are aimed squarely at OpenAI, which recently confirmed that ads will be introduced to ChatGPT’s free tier. The campaign quickly sparked headlines saying Anthropic was “mocking,” “skewering,” or “dunking on” its rival.
Even Sam Altman admitted on X that he laughed. But the humor clearly wore thin.
Altman responded with a lengthy, emotional post accusing Anthropic of being “dishonest” and even “authoritarian.” He argued that ad support is necessary to keep ChatGPT free for millions of users and pushed back hard on the idea that ads would hijack conversations or surface inappropriate products.
“We would obviously never run ads in the way Anthropic depicts them,” Altman wrote. “We are not stupid, and we know our users would reject that.”
OpenAI has said ads will be clearly labeled, separate from responses, and won’t influence a chat. At the same time, the company has acknowledged that ads will be conversation-specific, appearing at the bottom of responses when a sponsored product or service is relevant, a nuance that closely mirrors the premise of Anthropic’s ads.
Altman didn’t stop there. He claimed Anthropic “serves an expensive product to rich people,” contrasting that with OpenAI’s goal of bringing AI to billions. But both companies offer free tiers, and their paid plans are strikingly similar: Claude’s subscriptions range from $0 to $200 a month, and ChatGPT’s tiers span the same range.
He also accused Anthropic of wanting to control how people use AI, pointing to restrictions around Claude Code and the company’s broader emphasis on “responsible AI.” Anthropic, founded by former OpenAI employees concerned about AI safety, has long leaned into stricter usage policies—though OpenAI enforces its own guardrails, including limits around mental health content.
Still, Altman escalated the rhetoric by labeling Anthropic “authoritarian,” writing that such an approach represents “a dark path.”
That framing felt jarring to many observers. Using language associated with real-world repression in response to a tongue-in-cheek Super Bowl ad struck some as excessive, especially in a global moment where actual authoritarian actions carry life-and-death consequences.
Business rivals have been trading blows through advertising for decades. But Anthropic’s campaign clearly struck a nerve—and revealed just how sensitive the debate over ads, monetization, and the future of AI assistants has become.