
Safety? We Shipped That: OpenAI's $200K Fellowship Funds External Research Without Internal Teams

OpenAI dropped a Safety Fellowship this week, offering $3,850 weekly stipends to external researchers studying AI risks — announced hours after The New Yorker reported the company had dissolved its internal safety teams and quietly dropped "safely" from its mission statement. It's giving "thoughts and prayers" vibes with extra steps, funding outside researchers while its own safety infrastructure looks like a rug that got pulled so hard it took three internal teams with it.

The fellowship launched April 6 as "a pilot program to support independent safety and alignment research." It pays over $200,000 annually, plus roughly $15,000 a month in compute credits, along with mentorship. Fellows work from Constellation's Berkeley space or remotely, with applications closing May 3. The program isn't limited to AI specialists — OpenAI is recruiting from cybersecurity, social science, and human-computer interaction. Apparently they're casting a wide net because the internal talent pipeline has more holes than a blockchain bridge after a hack.

The timing writes the headline itself. Ronan Farrow's investigation, published the same day, documented that OpenAI dissolved three consecutive internal safety organizations over 22 months. You can't make this up — it's giving "we definitely meant to do that" energy while the timeline screams "oopsie."

The Superalignment team was cut in May 2024 after co-leads Ilya Sutskever and Jan Leike left. Leike noted on exit that "safety culture and processes have taken a backseat to shiny products." The AGI Readiness team followed in October 2024. The Mission Alignment team was disbanded in February 2026 after just 16 months. Three safety teams in 22 months — that's not a reorg, that's a degen trade gone wrong, except instead of money, they're liquidating accountability.

The New Yorker also reported that when a journalist requested interviews with OpenAI's existential safety researchers, a company rep responded: "What do you mean by existential safety? That's not, like, a thing." And there it is — the most nonchalant gaslight since SBF explained his risk management. Cool, cool, totally not concerning that the people building god-tier AI aren't sure what existential safety means. WAGMI though.

The fellowship explicitly doesn't replace internal infrastructure. Fellows get API credits and compute but no system access — arm's-length research funding rather than a rebuilt safety operation. It's like hiring consultants to check your car for engine problems while not letting them open the hood. Very "trust me bro" vibes, but with a $200K stipend attached.

Seven priority areas define the research agenda: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. Each fellow must deliver a paper.

Publisher: gascope.com
Updated: Apr 11, 2026, 21:20 UTC

Disclaimer: This content is for information and entertainment purposes only. It does not constitute financial, investment, legal, or tax advice. Always do your own research and consult with qualified professionals before making any financial decisions.
