GasCope
San Fran's AI Grifters Get a Side of Street Theater: 200 Protesters Demand a Ctrl+Alt+Delete on the AGI Arms Race

On Saturday, a crew of roughly 200 doomers, academics, and professional worriers descended on the San Francisco offices of Anthropic, OpenAI, and xAI. Their mission? To beg the labs to please, for the love of all that is good, hit the pause button on the mad dash for god-like AI before someone accidentally prompts the apocalypse. The crowd—a coalition of groups with names straight out of a dystopian B-movie, like MIRI, PauseAI, and QuitGPT—marched from Anthropic at high noon, making stops at OpenAI and finally xAI for a round of solemn speeches. It was less Woodstock, more a support group for those who’ve read too much sci-fi.

“There are a lot of people who care about this risk from advanced AI systems,” Stop the AI Race founder Michael Trazzi told Decrypt, probably while staring meaningfully into the middle distance. “Having everyone marching together shows people are not isolated in thinking about this by themselves.” Nothing builds community like shared existential dread, after all.

Trazzi’s group isn't asking for a full shutdown—just a highly conditional one, like a crypto dev promising a token unlock "soon." Their plan: labs would stop building frontier models and pivot to safety, but only if other major labs pinky-swear to do the same. He dreams of a US‑China agreement to freeze dangerous model development, which he claims could free up resources for actually useful stuff, like medical AI that doesn't secretly want to turn us into paperclips.

This sidewalk seminar is just the latest in a growing genre of AI-safety performance art. The trend kicked off in March 2023, when the Future of Life Institute dropped an open letter calling for a six-month pause on training systems more powerful than GPT-4; signatories ranged from Elon Musk to Ripple co‑founder Chris Larsen, because when you're talking existential risk, you want the guy from XRP in the mix. That letter now boasts over 33,000 signatures. Not to be outdone, Trazzi himself previously staged a week‑long hunger strike outside DeepMind’s London office, a move that screams "commitment" but probably just made him really hangry.

Of course, government suits have a different take, warning that slowing U.S. AI research is a fantastic way to hand the digital crown to foreign competitors. Last week, the Trump camp unveiled an AI framework pledging to “win the AI race,” while the White House mumbled something about governance standards. The message is clear: America must dominate in AI, even if that dominance ends with a superintelligence using the nuclear codes for a particularly spicy meme.

The great "will it save us or skin us?" debate also got an airing on a Humanity+ panel. There, AI doomer-in-chief Eliezer Yudkowsky shared a stage with philosopher Max More and other transhumanists to argue whether AGI is humanity's final boss or its ultimate power-up. It's the intellectual equivalent of watching two factions debate the halving while the block reward ticks down.

Trazzi admits that even a voluntary pause would be tougher to verify than a shitcoin's anonymous dev team. His practical suggestion? Put a hard cap on the compute power a lab can use for training. “If you limit how much compute a company can use to build these systems, then you’re pretty much limiting developing new models,” he said. It's a bit like trying to stop a degen by limiting their gas fees—crude, but it might just work.

Looking ahead, Trazzi hinted that more demonstrations could materialize wherever major AI labs set up shop. “We want to show up where

Mentioned Coins: $XRP
Publisher: gascope.com
Updated: Mar 24, 2026, 06:27 UTC

Disclaimer: This content is for information and entertainment purposes only. It does not constitute financial, investment, legal, or tax advice. Always do your own research and consult with qualified professionals before making any financial decisions.
