Altman Tells Washington to Buckle Up Before AI Goes Full 'World-Shaking' (and Drains Your Wallet)
OpenAI CEO Sam Altman is sounding the alarm in Washington, telling policymakers they need to get moving on preparing for advanced artificial intelligence, because the tech is no longer theoretical; it's showing up in everyday economic use. Basically, the future isn't coming—it's already here and it's asking for your wallet.
In a chat with Axios, Altman pointed out that AI systems are already handling coding and research tasks that once required whole teams of programmers. The next generation of models will go even further, he said—helping scientists make major discoveries and letting individuals do the work of entire organizations. Translation: your startup might soon need exactly one person and a subscription to ChatGPT Enterprise.
That shift is already showing up in cybersecurity, where some industry leaders say AI is tipping the scales toward attackers. Charles Guillemet, CTO of hardware wallet maker Ledger, told CoinDesk that AI tools are lowering both the cost and the skill needed to find and exploit software vulnerabilities. Tasks that used to take months—like reverse-engineering code or chaining multiple weaknesses together—can now be knocked out in seconds with the right prompts. The bar for 0-day hunting just got lowered to "can use Claude properly."
The crypto world saw over $1.4 billion in assets stolen or lost to attacks last year, and that number could keep climbing, Guillemet warned. On top of that, developers are leaning harder on AI-generated code, which could introduce new bugs at scale. His prescription: stronger defenses like formally verified code, hardware devices that keep private keys offline, and a broader acknowledgment that systems can and will fail. Basically, trust no code, keep your keys cold, and maybe pray.
Altman also flagged that while AI could speed up drug discovery or materials science, it could also enable nastier cyberattacks and make harmful biological research easier to pull off. He thinks these threats could emerge within a year, making coordination between government, tech firms, and security groups urgent. Nothing says "fun regulatory discussion" like debating whether AI should help cure cancer or accidentally create a bioweapon.
"We're not that far away from a world where there are incredibly capable open-source models that are very good at biology," he said. "The need for society to be resilient to terrorist groups using these models to try to create novel pathogens is no longer a theoretical thing." So yeah, that's happening.
Another possibility he raised: a "world-shaking cyberattack" that could hit as early as this year. Avoiding that will require a "tremendous amount of work." Mark your calendars, set a reminder, maybe buy some more hardware wallets.
Altman framed OpenAI's policy ideas as a starting point for debate on managing systems that learn fast and operate across many domains. Using AI to help defend against these potential attacks is also crucial, he noted. The solution to AI problems? More AI. Classic.
On the idea of nationalizing OpenAI, Altman said the main argument against it is that the U.S. needs to achieve "superintelligence" before rival nations do—aligned with democratic values.
"The biggest case against nationalization would be that we need the U.S. to succeed at building superintelligence in a way that is aligned with the democratic values of the United States before somebody else does," he said. "That probably wouldn't work as a government project, I think that's a sad thing."
That said, Altman believes AI companies need to work closely with the U.S. government. Worth noting: Altman has a financial stake in how the sector develops, which could shape how he frames both the urgency of regulation and the role of private companies like OpenAI in managing emerging risks, and by extension the firm's competitive position. Conflict of interest? Perhaps. But that's crypto for you.
On the energy front, Altman expects rapid progress, since more processing power could keep costs down as AI demand ramps up. He's also noticing early signs of labor shifts: a programmer in 2026 already works differently than one a year earlier. Your bootcamp diploma is aging like milk.
AI will become a kind of utility, like electricity, embedded across devices while the cost of basic intelligence drops and top-tier systems stay pricey. The future is tiered subscriptions for brain cells.
"You will have this personal super assistant running in the cloud," Altman said. "If you use it a lot or use it at high levels of intelligence you'll have a higher bill one month and if you use it less, you'll have a lower bill." So basically, AI is the new electricity. And just like electricity, you'll get a shock if you don't pay your bill.
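If you squint, the pricing model Altman sketches is just a metered utility bill with tiers. Here's a minimal, purely hypothetical Python sketch of what that could look like; the tier names and per-token rates are made up for illustration, not anything OpenAI has announced:

```python
# Hypothetical usage-metered, tier-priced AI billing.
# Tier names and rates are invented for illustration only.

TIER_RATES = {  # dollars per 1,000 tokens, by "intelligence" tier
    "basic": 0.001,
    "pro": 0.01,
    "frontier": 0.10,
}

def monthly_bill(usage: dict) -> float:
    """usage maps tier name -> tokens consumed that month."""
    return sum(tokens / 1000 * TIER_RATES[tier] for tier, tokens in usage.items())

# A heavy month at high intelligence levels means a bigger bill...
heavy = monthly_bill({"basic": 500_000, "frontier": 200_000})
# ...and a light month on the cheap tier means a smaller one.
light = monthly_bill({"basic": 100_000})
```

Same meter-reading logic as your power company: the bill scales with how much you draw and from which grid.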
"It's incredibly important that people building AI are high integrity, trustworthy people," Altman said. No kidding. We're all just trusting that the people building our robot overlords aren't secretly into rug pulls.
Disclaimer: This content is for information and entertainment purposes only. It does not constitute financial, investment, legal, or tax advice. Always do your own research and consult with qualified professionals before making any financial decisions.