GasCope
Anthropic Scores a $5B Pad While the Pentagon Gets Told to Cool It on the Claude Ban

Google is apparently betting big on Anthropic's insatiable appetite for compute, preparing to back a multibillion-dollar data center project in Texas leased to the AI upstart. Think of it as Google saying "here's some land, build your electricity-guzzling empire on it." The project, operated by Nexus Data Centers, could exceed $5 billion in its initial phase. Google is expected to provide construction loans, with a consortium of banks racing to arrange financing by mid-year—like a financial arms race, but make it boring and corporate. Anthropic recently signed a lease for the 2,800-acre campus, which forms part of its broader infrastructure tie-up with Google. Construction is already underway, supported by early-stage debt financing from Eagle Point. The site is expected to deliver around 500 megawatts of capacity by late 2026—roughly enough to power 500,000 homes—with potential expansion to 7.7 gigawatts. Its location near major gas pipelines operated by Enterprise Products Partners, Energy Transfer, and Atmos Energy allows the project to rely on on-site gas turbines. Because nothing says "advanced AI" quite like being propped up by good old-fashioned fossil fuels.

Judge Blocks Pentagon Ban on Anthropic

On Thursday, a US federal judge in San Francisco temporarily blocked the Pentagon from labeling Anthropic a national security risk and from halting government use of its AI tools. It's like the courtroom equivalent of a rug pull, except the government got rugged. Judge Rita Lin granted a preliminary injunction, pausing a directive backed by President Donald Trump that sought to cut off federal use of Anthropic's chatbot, Claude. The ruling came in a lawsuit filed by Anthropic, which argued that Defense Secretary Pete Hegseth overstepped his authority by designating the company a supply chain risk. The judge described the government's actions as "arbitrary" and warned against branding a US company as a threat without a clear legal basis. Ouch. That's legal speak for "y'all didn't read the fine print."

The dispute followed a breakdown in negotiations between Anthropic and the Pentagon over the military use of its AI. Anthropic resisted allowing its models to be used for lethal autonomous weapons or mass surveillance, leading to a broader standoff with the administration. In her decision, Lin said the government may have retaliated against Anthropic for its public stance, calling the measures a likely violation of First Amendment protections. Apparently, having ethical boundaries is now grounds for government beef. Web3 developers everywhere are taking notes on how to get canceled by the Pentagon for not being evil enough.

US Military Used Anthropic AI in Iran Strike

US military units reportedly used Anthropic's Claude AI model during a major airstrike on Iran, even after Trump's ban order. Military commands, including US Central Command in the Middle East, reportedly relied on the AI model for operational support. Because nothing says "ethical AI company" quite like your product being used in a geopolitical event that makes your compliance policy look like a

Publisher: gascope.com
Updated: Mar 28, 2026, 17:35 UTC

Disclaimer: This content is for information and entertainment purposes only. It does not constitute financial, investment, legal, or tax advice. Always do your own research and consult with qualified professionals before making any financial decisions.
