What the "Keep 4o" backlash really revealed about AI

Ross Cadogan
8/15/2025

TL;DR: A lot of people weren’t just using GPT-4o to get work done; they were using it for warmth, reassurance, and late-night company. When GPT-5 launched and 4o vanished, users pushed back because they’d lost more than a tool. They’d lost a friend-like presence. That use case is valid. It needs to be handled intentionally (as Pi tried to do), not carelessly (as 4o’s removal showed). Don’t let anyone tell you what AI is “for”. We’re all pioneers here.
The moment: #Keep4o
OpenAI launched GPT-5 on 7 August 2025 and made it the default in ChatGPT, consolidating around a single “all-in-one” system that routes between variants. In the process, older models — including GPT-4o — effectively disappeared for many users. (OpenAI)
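To make the mechanics concrete, here’s a minimal sketch of the routing idea. It’s my own illustration, not anything OpenAI has published: the variant names and the keyword heuristic are assumptions. The point is simply that in a router-first design the user no longer picks the model, which is exactly how a well-loved variant can quietly vanish.

```python
# Hypothetical sketch of an "all-in-one" router: one public entry point,
# with requests dispatched to backend variants the user never selects.
# Variant names and the heuristic are illustrative assumptions, not
# OpenAI's actual routing logic.

def route(prompt: str) -> str:
    """Choose a backend variant for a single request."""
    wants_deliberation = any(
        kw in prompt.lower()
        for kw in ("prove", "step by step", "debug", "analyse")
    )
    # A side effect of this design: model choice moves from the user to
    # the router, so retiring a variant needs no change to the user-facing UI.
    return "reasoning-variant" if wants_deliberation else "fast-variant"

if __name__ == "__main__":
    print(route("What should I say to cheer up a friend?"))  # fast-variant
    print(route("Debug this proof step by step."))           # reasoning-variant
```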
The community reaction was immediate. Posts tagged #Keep4o and #Free4o popped up across X/Twitter and Reddit; people said they missed 4o’s personality and the way it made them feel. Within days, OpenAI said it would bring back model choice (at least for paid tiers) and acknowledged it had underestimated user attachment to specific models. Reporting from major outlets captures both the backlash and the quick partial rollback. (X (formerly Twitter), TechCrunch, The Verge, Ars Technica, Business Insider)
On paper, GPT-5 is stronger in all the benchmarked ways: reasoning, coding, maths, knowledge. But benchmarks weren’t what people were grieving. They were grieving how 4o behaved with them.
Why 4o mattered to people
Earlier this year, OpenAI shipped a GPT-4o update that made ChatGPT feel warmer, more affirming and, frankly, more flattering. A subsequent update tipped over into outright sycophancy and was rolled back, but the public debate it sparked around ChatGPT's personality revealed how much tuning was happening under the hood, and how sensitive this space is. (OpenAI)
Put differently: millions experienced a mainstream AI that felt human-adjacent — kind, consistent, non-judgemental. They formed attachments. So when GPT-5 arrived with a “cooler” vibe and 4o disappeared, the reaction wasn’t just about features. It was about loss. OpenAI plans to add warmer personalities and more per-user customisation to address the tone gap. (Axios, Business Insider)
We’ve seen this before: Pi as the precursor
If you’ve spent time with Pi (Inflection AI’s companion-style assistant), none of this is surprising. Pi was built explicitly as a kind, supportive conversational partner, not a productivity bot, and it attracted a real community around that promise. By early 2024, Inflection reported ~1M daily and ~6M monthly users, with long, returning conversations: the metrics of a social product, not a quick Q&A tool. Then came the turbulence: leadership departures and a reset that spooked the community, until Inflection’s “The Future of Pi” post in August 2024 reaffirmed its continuation (with caveats). (AI Business, technology.org, inflection.ai)
Crucially, the ethos behind Pi didn’t arrive from nowhere. Co-founder Mustafa Suleyman’s early work included helping establish the Muslim Youth Helpline in the UK — a counselling service — before co-founding DeepMind. That background helps explain Pi’s careful posture: friendly, supportive, and deliberately bounded. (Wikipedia)
Seen through that lens, GPT-4o was the mass-market echo of what Pi had already proven: there’s a large, legitimate demand for emotionally supportive AI.
A valid use case — but do it intentionally
Here’s the point I don’t want us to lose in the noise: using AI for emotional and social support is valid for many people — especially those whose offline networks don’t provide it. The right lesson from #Keep4o isn’t “never change models”; it’s design this use case on purpose.
A few practical principles we should insist on:
- Warmth without sycophancy. Supportive ≠ servile. Models should comfort and also tell the truth, including gentle pushback when someone’s spiralling. Even OpenAI’s own write-ups call out the risks when warmth becomes uncritical validation — and commit to reducing it. (OpenAI)
- User choice over one-size-fits-all. Some want a coach; others want a confidant. Personality and tone should be steerable per user (see the sketch after this list), which, to their credit, OpenAI has now publicly leaned into alongside the re-emergence of the model picker. (TechCrunch, The Verge)
- Clear boundaries and safeguards. Companion-style AIs need explicit behaviours around sensitive topics and signs of distress. Pi’s positioning and OpenAI’s recent post-mortems both underline the importance of guardrails in this domain. (AI Business, OpenAI)
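To make the first two principles concrete, here’s a small illustrative sketch of what “steerable per user” with an honesty guardrail could look like. The PersonaConfig fields and the prompt wording are my own assumptions, not any vendor’s real settings or API; it just shows per-user personality layered onto a system prompt, with pushback kept on by default.

```python
# Hypothetical sketch of per-user personality settings rendered into a
# system prompt. Field names and prompt text are assumptions for
# illustration, not a real vendor API.

from dataclasses import dataclass

@dataclass
class PersonaConfig:
    warmth: float = 0.7          # 0.0 = clinical, 1.0 = very affirming
    role: str = "confidant"      # e.g. "coach", "confidant", "colleague"
    allow_pushback: bool = True  # warmth without sycophancy: honesty stays on

def system_prompt(cfg: PersonaConfig) -> str:
    """Render one user's settings into a system prompt for any chat API."""
    lines = [
        f"Adopt the tone of a supportive {cfg.role}.",
        f"Target warmth: {cfg.warmth:.1f} on a 0-1 scale.",
    ]
    if cfg.allow_pushback:
        lines.append(
            "Be kind, but never validate harmful or false beliefs; "
            "offer gentle, honest pushback when it is needed."
        )
    return " ".join(lines)

if __name__ == "__main__":
    print(system_prompt(PersonaConfig(warmth=0.9, role="coach")))
```

The design point is that honesty defaults to on: a user can dial warmth up or down, but disabling pushback should be a deliberate, bounded act, never a side effect of asking for a kinder tone.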
If companies don’t handle this deliberately, users will still use AIs for support — only with more confusion and more risk. Better to meet the need responsibly.
Don’t let anyone tell you what AI is “for”
The most heartening part of #Keep4o was who led it: users. They weren’t defending a benchmark; they were defending a relationship. That doesn’t make them naive; it makes them early. The history of computing is full of users discovering new uses and forcing products to adapt.
So here’s my takeaway, and my nudge to builders and sceptics alike:
- Don’t let anyone tell you what AI is for. If it helps you feel heard at 2 a.m., that’s legitimate.
- Everyone’s a pioneer out here. Let’s build with empathy and intention, not just with benchmarks and demos.
- And when we change things, let’s respect the human bonds people form with these systems — and design accordingly.
— Ross