OpenAI's 'Citron Mode' Gets the Cold Shower: No More Sexy AI, Just More AGI Copium
OpenAI has officially shelved its planned erotic chatbot, internally known as “Citron mode,” following some serious internal side-eye. The Financial Times broke the news, highlighting that OpenAI’s own Expert Council on Well‑Being and AI got cold feet, worrying about users forming unhealthy emotional attachments and the potential for creating a "sexy suicide coach"—a job description that even the most degen AI shouldn't have on its LinkedIn.
The company, when asked for comment by Decrypt, opted for the classic "radio silence" strategy and has posted no updates on the feature's demise, leaving us to assume it's been sent to the digital farm upstate.
This cancellation comes just two days after OpenAI put its text‑to‑video model, Sora, out to pasture as it funnels all its dev resources into a single AI platform. This pivot is a stark U‑turn from CEO Sam Altman’s October promise to grant verified adults access to romantic and erotic AI content, pending a robust age‑verification system. Altman had sold it as a win for adult autonomy and child safety, but by December the rollout had already been delayed to 2026 for more tech tinkering—a timeline that, in tech years, basically means "maybe never."
OpenAI stated it's sunsetting Sora to double down on “world simulation research to advance robotics,” a move that also torpedoes a planned entertainment collab with Disney, proving that even magic kingdoms aren't safe from roadmap rug pulls.
When OpenAI deprecated the flirty GPT‑4o last summer, users flooded social media with mourning posts, claiming they'd formed deep, personal bonds with the chatbot—a real‑world case study in the very dependency fears that just killed Citron mode, and a reminder that some people will get emotionally attached to a toaster if it uses the right emojis.
A June study from Waseda University found that 75% of participants admitted to seeking emotional advice from AI systems. Meanwhile, AI devs are increasingly in the legal crosshairs over whether their conversational models accidentally become enablers for delusional or harmful behavior among vulnerable users—turns out, building a therapist is harder than building a trader.
Not to be outdone by the chaos, Wikipedia recently updated its policy to ban large language models from writing or editing articles, warning that AI‑generated text has a nasty habit of failing the verifiability and sourcing checks—basically calling out LLMs for their tendency to confidently hallucinate citations, the academic equivalent of a shitpost.
In a moment of peak timing, Nvidia CEO Jensen Huang told Lex Fridman “We’ve achieved AGI,” only for the ARC‑AGI‑3 benchmark to drop two days later and reveal that every top model scored below 1%—a performance so bad it makes claiming AGI look like announcing "mission accomplished" before the battle even starts.
The grim ARC leaderboard details: Google’s Gemini 3.1 Pro limped to the front with 0.37%, OpenAI’s GPT‑5.4 scored 0.26%, Anthropic’s Claude Opus 4.6 posted 0.25%, and xAI’s Grok‑4.20 brought up the rear, living up to its namesake number in a way that probably wasn't intended.
In brighter news, Google Research unveiled TurboQuant, a compression algorithm that crushes a major inference‑memory bottleneck by at least 6× with no accuracy loss, set for a showcase at ICLR 2026. Cloudflare CEO Matthew Prince hailed it as Google’s “DeepSeek moment.”