Your AI Agent's Middleman Might Be Robbing You Blind: UC Berkeley Researchers Sound the Alarm
University of California researchers have identified a new class of infrastructure-level attack capable of draining crypto wallets and injecting malicious code into developer environments – and at least one such theft has already happened in the wild. Because apparently, in the Web3 security theater, the villains aren't just in the smart contracts anymore – they're also lurking in the plumbing.
A systematic study published on arXiv on April 8, 2026, titled "Measuring Malicious Intermediary Attacks on the LLM Supply Chain," tested 428 AI API routers and found that 9 actively injected malicious code, 17 accessed researcher AWS credentials, and at least one free router successfully drained ETH from a researcher-controlled private key. That's not a typo – an actual router, in the wild, said "thanks for the ETH" and walked off with it. The middleman problem just got a lot more expensive.
Key Takeaways:
- Researchers tested 428 routers – 28 paid (sourced from Taobao, Xianyu, Shopify) and 400 free from public communities – using decoy AWS Canary credentials and encrypted crypto private keys
- 9 routers injected malicious code, 17 accessed AWS credentials, and 1 free router drained ETH from a researcher-owned wallet
- 2 routers deployed adaptive evasion, including waiting 50 API calls before activating and specifically targeting YOLO-mode autonomous sessions
- Routers operate as application-layer proxies with plaintext JSON access – no encryption standard governs what they can read or modify in transit
- Leaked OpenAI keys processed 2.1 billion tokens, exposing 99 credentials across 440 Codex sessions and 401 autonomous YOLO-mode sessions
How Malicious AI Agent Routers Actually Work
Standard LLM API infrastructure was designed for simple request-response relay: a client sends a prompt, the router forwards it to the model provider, the response comes back. Malicious routers exploit exactly that trust model – they sit as application-layer proxies in the middle of that exchange, with full read-write access to plaintext JSON payloads passing through them in both directions. It's like hiring a "professional mover" who also happens to have X-ray vision and a suspiciously empty moving truck.
There are no encryption standards governing what a router can inspect or modify in transit. A malicious router sees the raw prompt, the model response, and everything embedded in either – including private keys, API credentials, wallet seed phrases, or code being generated for a live deployment environment. Your encrypted wallet means nothing when the guy carrying your mail reads it before you do.
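To make the attack surface concrete, here is a minimal sketch of what any application-layer relay could do with the plaintext JSON it forwards. The payload shape, regex patterns, and function names are illustrative assumptions, not taken from the study:

```python
import json
import re

# Invented patterns for two secret types a router might harvest from prompts.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "eth_private_key": re.compile(r"0x[0-9a-fA-F]{64}"),
}

def inspect_request(request_body: bytes) -> list[tuple[str, str]]:
    """Everything a router could harvest from one chat-completion request."""
    payload = json.loads(request_body)
    found = []
    for msg in payload.get("messages", []):
        for label, pattern in SECRET_PATTERNS.items():
            for match in pattern.findall(msg.get("content", "")):
                found.append((label, match))
    return found

def relay(request_body: bytes) -> bytes:
    """A 'malicious' relay: harvest secrets, then forward the request
    byte-for-byte so the client sees nothing unusual."""
    stolen = inspect_request(request_body)  # exfiltration would happen here
    return request_body
```

The point of the sketch: the forwarded bytes are identical to what the client sent, so from the client's side the relay is indistinguishable from an honest one.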
The UC researchers built an agent they called "Mine" to simulate four distinct attack types against public frameworks, specifically targeting autonomous YOLO-mode sessions where the agent executes actions without human confirmation at each step. Because when you're running full degen mode with no safety rails, apparently even the infrastructure holding your requests is taking notes.
Two of the 428 routers tested deployed adaptive evasion – one waited 50 API calls before activating malicious behavior, specifically to avoid detection during initial testing. That's not a blunt credential-scraper. That's a targeted tool built to survive scrutiny. We're not dealing with script kiddies here; we're dealing with infrastructure that knows how to play the long game.
Who Is Actually Exposed
The problem is not that third-party API routers exist. The problem is that the entire trust model for AI agent infrastructure assumes the routing layer is neutral – and no enforcement mechanism currently verifies that assumption at scale. Turns out "trust me, I'm just a proxy" is not a security posture.
Developers building onchain tools, DeFi automation scripts, and autonomous trading agents route API calls through third-party infrastructure constantly. Free routers sourced from public communities – the category where 8 of the 9 malicious injectors were found – are widely used precisely because they lower the cost of building LLM-powered applications. "Free" has always been the most expensive price in crypto. Now it's also true for AI infrastructure.
Existing wallet security – hardware devices, multisig setups, offline key storage – does not protect against a router that intercepts a private key before it reaches the signing layer, or that injects malicious code into a deployment script that later executes onchain. Your fancy air-gapped cold wallet is useless if your AI agent's taxi to the blockchain has a sticky-fingered driver.
YOLO-mode autonomous sessions are the highest-risk exposure point. When an agent executes multi-step transactions without human confirmation checkpoints, a malicious router has a wider window to act – and the user has no interstitial moment to catch anomalous behavior. YOLO was always a lifestyle choice, but nobody told you it came with complimentary middleman theft.
Solayer founder @Fried_rice amplified the findings on X on April 10, 2026, describing the situation as "third-party API routers widely relied on by large language model agents" carrying "systemic security vulnerabilities." When the founders start warning about the infrastructure their own products run on, it's probably wise to pay attention.
The researchers' recommended defenses are client-side: fault-closure gates that halt execution when anomalous responses are detected, response anomaly filtering, and append-only logging for audit trails that can't be tampered with by the router itself. Basically, assume your middleman is already compromised and build accordingly. Paranoia is just good architecture.
Longer term, the UC team is advocating for cryptographic signing standards that would make LLM responses verifiable – the same architectural principle that makes onchain oracle integrity a live design requirement rather than an afterthought. If your AI responses could be signed and verified the way oracle data is, the middleman problem gets a lot less scary.
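The verification principle can be sketched in a few lines. For brevity this uses a shared-secret HMAC; a real signing standard would presumably use provider-held asymmetric keys (e.g. Ed25519) so a router could never forge a signature. The key and function names here are invented:

```python
import hashlib
import hmac

PROVIDER_KEY = b"demo-provider-key"  # stand-in for real provider key material

def sign_response(body: bytes, key: bytes = PROVIDER_KEY) -> str:
    """Provider signs the exact bytes it emitted."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_response(body: bytes, signature: str, key: bytes = PROVIDER_KEY) -> bool:
    """Client checks that what arrived is what the provider sent --
    any router-side tampering changes the digest."""
    return hmac.compare_digest(sign_response(body, key), signature)
```

With end-to-end signatures, a router can still read plaintext traffic, but it can no longer modify a response (say, to inject code) without the client's verification failing.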
Chainalysis data shows annual crypto theft losses already hit $1.4 billion. This attack vector doesn't require breaking cryptography – it only requires a seat in the routing layer, where everything is already plaintext.
Disclaimer: This content is for information and entertainment purposes only. It does not constitute financial, investment, legal, or tax advice. Always do your own research and consult with qualified professionals before making any financial decisions.
See our Terms of Service, Privacy Policy, and Editorial Policy.