OpenAI's 'Trust No One' Cyber Product: Because Letting Everyone Access a Code-Breaking AI Would Be Totally Fine
OpenAI is building a new cybersecurity product that won't be available to the general public—because apparently, giving everyone access to a powerful AI hacking tool didn't sound like a great idea. The product will roll out exclusively through OpenAI's "Trusted Access for Cyber" program, which was announced back in February as a controlled release mechanism designed to keep certain tools out of the wrong hands. In a stunning display of "we definitely thought this through," the company decided maybe, just maybe, handing out code-breaking capabilities to anyone with a credit card wasn't the move.
The timing makes sense: OpenAI dropped GPT-5.3-Codex, its most capable cybersecurity model yet, and is backing participant access with $10 million in API credits. Meanwhile, the industry is still processing what Anthropic's Claude Mythos did to everyone's nerves. OpenAI is basically saying "our turn" while security researchers are still working through the existential crises the last AI handed them.
For those who missed it, Anthropic's Mythos turned out to be so good at finding security vulnerabilities that it identified zero-days in every major operating system and browser—basically the cybersecurity equivalent of walking into a room and pointing out every hidden trap. The model is described as "extremely autonomous" and reasons like a senior security researcher. When you're that good at breaking things, maybe don't give API keys to just anyone. Actually, maybe don't give them to anyone at all. But money, so... limited release it is.
Anthropic's solution was Project Glasswing—a restricted access program that hands Mythos Preview only to vetted organizations including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, Palo Alto Networks, and roughly 40 other critical infrastructure players. The company also committed $100 million in usage credits and $4 million in donations to open-source security groups. Nothing says "we made something scary" quite like handing it exclusively to every major tech company and financial institution simultaneously. Good luck, humanity.
The benchmark situation is almost funny. Anthropic admitted that Cybench, the tool used to measure whether an AI poses serious cyber risk, "is no longer sufficiently informative of current frontier model capabilities"—because Mythos aced it completely. The test designed to flag danger now just shrugs and says "good luck." When your safety test gets a perfect score from the thing it's supposed to be warning people about, maybe that's a you problem.
OpenAI appears to be trying to get ahead of regulatory pressure by voluntarily restricting access before the government forces its hand. That's in contrast to Anthropic, which is currently in a legal battle after the Pentagon designated it a supply chain risk following its refusal to lift usage restrictions on Claude for surveillance and autonomous weapons applications. Federal agencies have been scrutinizing AI safety protocols with increasing intensity since early April. Nothing like a Pentagon lawsuit to really crystallize your priorities.
Meanwhile, Florida Attorney General James Uthmeier announced Thursday that his office is launching an investigation into OpenAI to examine whether its AI systems pose risks related to national security, criminal misuse, and child safety. Subpoenas are coming. Nothing says "welcome to the future" like getting investigated by the Florida AG. But hey, at least they're being thorough.
On a completely different note, Google is letting YouTube Shorts creators generate videos using digital avatars of themselves through a new feature called "Make a video with my avatar," powered by Veo 3.1. It's rolling out now. Meanwhile, AI companies are out here breaking security benchmarks and triggering federal investigations, and Google's like "yeah but can you make TikToks of yourself." The vibes are immaculate.
And in case you were wondering if any of this is making money: enterprise revenue now accounts for more than 40% of OpenAI's total revenue, with parity to consumer revenue expected by the end of 2026. OpenAI hit $25 billion in annualized revenue in February, up from $20 billion at the end of 2025. The AI safety race continues, but hey, at least the checks are clearing. Nothing like existential risk and record revenue in the same quarterly update.