
Safety? Delegated: OpenAI's $200K/Year External Fellowship While Its Internal Teams Vanish Into the Void

OpenAI just dropped a Safety Fellowship offering $3,850 weekly to external researchers studying what could go wrong with advanced AI — delivered mere hours after a New Yorker investigation revealed the company dissolved its internal safety teams and quietly dropped "safely" from its IRS mission statement. It's the Web2 equivalent of hiring bodyguards after posting your address on 4chan. The optics are... something.

The fellowship launched April 6 as "a pilot program to support independent safety and alignment research." At more than $200,000 annualized, plus roughly $15,000 a month in compute credits and access to mentorship, it's generous by academic standards. Fellows operate from Constellation's Berkeley workspace or remotely, with applications closing May 3. The program isn't limited to AI specialists: OpenAI is recruiting across cybersecurity, social science, and human-computer interaction. Basically, they're casting a wide net and hoping someone catches a safety narrative they can point to when regulators come knocking.

The timing tells the real story. If you squint, this looks less like responsible governance and more like a carefully timed PR minting — fresh legitimacy printed right after some inconvenient journalism landed. Nothing says "we care about safety" like announcing a fellowship the same day your internal safety apparatus gets exposed as thoroughly as a rug pull.

Ronan Farrow's New Yorker investigation documented that OpenAI dismantled three consecutive internal safety organizations over 22 months. The superalignment team shuttered in May 2024 after co-leads Ilya Sutskever and Jan Leike exited. Leike's parting shot: "Safety culture and processes have taken a backseat to shiny products." The AGI Readiness team followed in October 2024. The Mission Alignment team lasted just 16 months before disbanding in February 2026. For a company that keeps insisting safety is job one, they've sure had a lot of re-orgs that accidentally deleted all the people doing that job.

When a journalist asked to speak with OpenAI's existential safety researchers, a company representative reportedly responded: "What do you mean by existential safety? That's not, like, a thing." Ah yes, the classic move: when you can't win the argument, just insist the premise is unintelligible. Very helpful. Very reassuring. We should all sleep better knowing the people building superintelligence aren't quite sure what "existential" means in this context.

This fellowship explicitly doesn't fill the internal void. An external pilot program, however generous, is not a replacement for the three in-house safety organizations OpenAI dissolved, and nothing in the announcement suggests it's meant to be one.

Publisher: gascope.com
Updated: Apr 11, 2026, 21:20 UTC
