GasCope
OpenAI's GPT-5.4 Mini & Nano: When Waiting 45 Seconds for Three Lines of Code Feels Like a Bear Market

OpenAI just unleashed GPT-5.4 Mini and GPT-5.4 Nano, because apparently waiting for an AI to process is the new watching paint dry. This drop lands less than two weeks after GPT-5.4 itself, which debuted a mere two days after GPT-5.3. At this pace, their release cadence is starting to resemble a meme coin's "Q2-Q4 2024" roadmap—aggressively optimistic and slightly confusing.

These aren't just diet models; they're specialized engines for when you need speed, not a virtual professor. Picture a customer service bot fielding its millionth "reset my password" query. You don't need the model that can debate quantum mechanics; you need the one that replies before the user rage-quits and costs a fraction of a penny to run.

But let's not mistake speed for stupidity. On coding tests, GPT-5.4 Mini nailed 54.4% on SWE-Bench Pro (which asks it to fix real GitHub bugs), up from the old Mini's 45.7% and not far from the flagship's 57.7%. On OSWorld-Verified (making it operate a desktop via screenshots), Mini hit 72.1%, a hair shy of the human baseline of 72.4%, while the flagship's 75.0% casually beats us meatbags outright, which is mildly concerning.

GPT-5.4 Nano posted 52.4% on SWE-Bench Pro and 39.0% on OSWorld. Lower scores than its Mini sibling, but still a generational glow-up from previous Nano models. As Perplexity Deputy CTO Jerry Ma observed: "Mini delivers strong reasoning, while Nano is responsive and efficient for live conversational workflows." Or, in degen terms: one's for the complex trades, the other's for spamming the chat.

The real genius play here? Instead of funneling every single request through the expensive flagship model—a bit like using a Lamborghini for grocery runs—you can architect systems where the big brain makes the plan and the smaller, cheaper models execute the tedious tasks in parallel, like searching code or parsing documents.
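That planner/worker split is easier to see in code. Below is a minimal sketch of the pattern, with stub functions standing in for the real API calls: `flagship_plan` plays the expensive big-brain model that decomposes the task, and `cheap_execute` plays a Mini/Nano-class worker. The function names and subtask strings are illustrative assumptions, not anyone's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub model calls -- in a real system these would be API requests to a
# flagship model (for planning) and a cheaper mini/nano model (for execution).
def flagship_plan(task: str) -> list[str]:
    """Expensive model: break the task into independent subtasks."""
    return [f"search code for '{task}'", f"parse docs for '{task}'"]

def cheap_execute(subtask: str) -> str:
    """Cheap, fast model: grind through one tedious subtask."""
    return f"done: {subtask}"

def orchestrate(task: str) -> list[str]:
    subtasks = flagship_plan(task)      # one expensive planning call
    with ThreadPoolExecutor() as pool:  # fan subtasks out to cheap workers
        return list(pool.map(cheap_execute, subtasks))

results = orchestrate("fix login bug")
```

The point of the pattern: you pay flagship prices for one planning call, then fan the grunt work out to workers that cost a fraction as much and run in parallel.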

On to the numbers that make CFOs smile: GPT-5.4 Mini costs $0.75 per million input tokens and $4.50 per million output tokens via API. GPT-5.4 Nano is even more budget-friendly at $0.20 per million input and $1.25 per million output. Nano is roughly four times cheaper than Mini on inputs, turning "massive daily query volume" from a startup-killer into a manageable line item.
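To make those per-million rates concrete, here's a quick cost calculation using the prices listed above. The `request_cost` helper and the 500-in/150-out token counts are illustrative assumptions for a typical short support exchange.

```python
# API prices from the announcement, in dollars per million tokens.
PRICES = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the published per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A short support exchange: 500 tokens in, 150 tokens out.
mini_cost = request_cost("gpt-5.4-mini", 500, 150)  # 0.000375 + 0.000675
nano_cost = request_cost("gpt-5.4-nano", 500, 150)  # 0.0001 + 0.0001875
```

At these rates the Mini exchange runs about a tenth of a cent and the Nano one under three hundredths of a cent, which is how "massive daily query volume" stops being a startup-killer.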

For the normies on ChatGPT: GPT-5.4 Mini is live right now for Free and Go users via the "Thinking" option in the plus menu. Paid subscribers who bump into their GPT-5.4 rate limits will automatically downgrade to Mini. GPT-5.4 Nano, however, is API-only for the moment—clearly built for devs to tinker with, not for consumers to argue with about philosophy.

Publisher: gascope.com
Updated: Mar 17, 2026, 23:51 UTC

Disclaimer: This content is for information and entertainment purposes only. It does not constitute financial, investment, legal, or tax advice. Always do your own research and consult with qualified professionals before making any financial decisions.

See our Terms of Service, Privacy Policy, and Editorial Policy.