OpenAI's 'Safety' Token Fails Its Own KYC Check: Coalition Demands a Rug Pull
A coalition of more than two dozen advocacy groups—including Encode AI, the Center for Humane Technology, and the Electronic Privacy Information Center—has just sent a formal letter to OpenAI. The demand? To execute a full-scale rug pull on the California “Parents & Kids Safe AI Act” ballot initiative. It seems the project's whitepaper isn't passing the community audit.
The groups argue the proposal, which is backed by OpenAI and Common Sense Media, would mint a narrow, immutable definition of child safety into the state's constitution. This would effectively limit families' ability to sue and would hard-fork the state's future AI-lawmaking powers onto a dead chain. A major bug in the code: the measure's definition of “severe harm” only validates transactions for physical injury tied to suicide or violence, completely ignoring the massive mempool of mental-health impacts flagged by researchers and parents.
Other critical vulnerabilities in the proposal's smart contract include: a function that bans parents and children from filing claims under the act; a governance model with limited enforcement tools for state officials; a suspiciously broad definition of “encrypted user content” that could block families from presenting chatbot logs as evidence in court; and an amendment process requiring a two-thirds legislative vote and tying any changes to vague “economic progress” oracles, effectively making the rules non-upgradeable.
“OpenAI has the admin keys to withdraw it or keep feeding liquidity into the signature drive,” said Adam Billen, co-executive director of Encode AI. The coalition notes OpenAI has already provided a $10 million seed round to the ballot committee, deploying a classic California governance attack: fund the proposal, and threaten to launch it on-chain unless the validators (legislators) soft-fork to your demands.
Billen warns this is a copy-paste of a broader tech lobbying playbook, citing similar governance attacks from Google, Meta, and Amazon on AI regulation. He stresses, in what should be obvious to anyone not completely diluted, that companies should not be the ones writing the smart contracts that govern their own products—that’s called a pre-mine.
OpenAI has not yet responded to the request for comment, likely busy checking its node. The coalition says its immediate alpha is simple: get the company to withdraw the measure so lawmakers can handle the issue through the normal, messy, but crucially public, governance process.