OpenAI's 'Please Don't' Button: A Blueprint for Telling AI to Behave Around Kids
OpenAI on Wednesday published a policy blueprint aimed at addressing the rise of AI-enabled child sexual exploitation, outlining new safety measures the industry can take to help curb the use of AI in creating child sexual abuse material. Think of it as OpenAI trying to build a better lock before the complaints get loud enough to bring a raid on the whole neighborhood, a proactive approach that would've made a lot of sense about three firmware updates ago.
The framework lists legal, operational, and technical measures designed to strengthen protections against AI-enabled abuse and improve coordination between technology companies and investigators. It's the kind of compliance wishlist that sounds impressive in a shareholder meeting but somehow still manages to miss the part where, you know, people actually use the tools.
"Child sexual exploitation is one of the most urgent challenges of the digital age," the company wrote. "AI is rapidly changing both how these harms emerge across the industry and how they can be addressed at scale." In crypto terms, this is the equivalent of admitting the code had a bug the whole time, but hey, at least now there's a GitHub issue for it.
The proposal incorporates feedback from organizations including the National Center for Missing & Exploited Children and the Attorney General Alliance and its AI task force. Because nothing says "we take this seriously" quite like hosting a few Zoom calls with advocacy groups and then publishing a Medium post about it.
"Generative AI is accelerating the crime of online child sexual exploitation in deeply troubling ways—lowering barriers, increasing scale, and enabling new forms of harm," said Michelle DeLaune, President & CEO of the National Center for Missing & Exploited Children. "But at the same time, the National Center is encouraged to see companies like OpenAI reflect on how these tools can be designed more responsibly, with safeguards built in from the start." It's the digital equivalent of closing the barn door after the horse spent six months doing donuts in the cornfield, but progress is progress.
The framework combines legal standards, industry reporting systems, and technical safeguards within AI models. These measures aim to help identify exploitation risks earlier and improve accountability across online platforms. Imagine if Web3 had spent half as much energy on child safety as it did arguing about gas fees—that's basically what this looks like.
The blueprint identifies areas for action, including updating laws to address AI-generated or altered child sexual abuse material, improving how online providers report abuse signals and coordinate with investigators, and building safeguards into AI systems designed to prevent misuse. It's a solid to-do list, if your definition of "solid" includes things that should've been built before the feature shipped.
"No single intervention can address this challenge alone," the company wrote. "This framework brings together legal, operational, and technical approaches to better identify risks, accelerate responses, and support accountability, while ensuring that enforcement authorities remain strong as technology evolves." Translation: we're throwing spaghetti at the