Terafab: Musk’s Plan to Turn Orbit Into a Terawatt AI Chip Mint
Elon Musk has revealed Terafab, a hyper-scale chip foundry designed to pump out a full terawatt of AI compute annually – a figure that makes the current global output look like a proof-of-work miner's space heater. The venture is a co-production of Tesla, SpaceX, and xAI, all now conveniently living under Musk's aerospace umbrella.
Musk framed the ambition as a civilization-scale moonshot: “To scale civilization we must scale power in space, because we capture a tiny fraction of the sun’s energy on Earth.” He envisions a galaxy-spanning society where ships can go full send anywhere, anytime, powered by solar-fed AI satellites.
One-stop shop for the ultimate ASIC
Terafab will cram the entire chip pipeline – mask lithography, wafer fab, testing, and redesign – into a single Texas facility with state backing. Musk claims this integrated approach will slash iteration times compared to today’s fragmented, multi-continent supply chain, which moves slower than a congested Ethereum mainnet.
Two chip families for two frontiers
- Edge inference chips destined for Tesla’s Optimus robots, its autonomous fleet, and the upcoming Cybercab. Musk predicts robot production could hit 1-10 billion units yearly, a number that utterly dwarfs the ~100 million cars built today and suggests we'll all be renting out robot CPU cycles soon.
- Space-hardened chips engineered to survive cosmic particle bombardment and higher temps, trimming radiator mass on orbiting platforms because in space, no one can hear your cooling fans scream.
Why the launchpad?
A terawatt of compute simply can’t live on Earth – total US electricity generation is only ~0.5 TW. Instead, the bulk of the hardware would orbit the planet on solar-powered AI satellites, basically turning the sky into a massive, distributed GPU farm. A prototype “mini-satellite” is slated for 100 kW output, with later versions scaling to the megawatt range, because going big is the only degen play Musk knows.
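To get a feel for what those satellite figures imply, here is a back-of-envelope sketch. The 1 TW target, the 100 kW prototype, and the megawatt-class follow-ons are from the article; the satellite counts are derived here by simple division and are not quoted figures.

```python
# Back-of-envelope: how many satellites would 1 TW of orbital compute take?
# Input figures are from the article; the counts below are derived, not quoted.
TARGET_W = 1e12     # 1 TW annual compute target
PROTO_W = 100e3     # 100 kW prototype "mini-satellite"
SCALED_W = 1e6      # later megawatt-class versions

sats_at_proto = TARGET_W / PROTO_W    # satellites needed at prototype power
sats_at_scaled = TARGET_W / SCALED_W  # satellites needed at 1 MW each

print(f"{sats_at_proto:,.0f} prototype-class satellites")  # 10,000,000
print(f"{sats_at_scaled:,.0f} megawatt-class satellites")  # 1,000,000
```

Even at a megawatt apiece, that is a constellation three orders of magnitude larger than everything humanity has launched to date.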
To hit that 1 TW goal, Musk estimates roughly 10 million tons of material must be launched annually at 100 kW per ton. Current Starship V3 can lift ~100 tons per flight; V4 aims for ~200 tons. Launch costs have cratered from >$65,000/kg (Shuttle era) to $1,000-$2,000/kg today, and Musk targets $100-$200/kg with Starship optimization – a price point he believes will make space-based AI cheaper than ground-based rigs within 2-3 years, finally making "cloud" computing literal.
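The tonnage, payload, and cost numbers above can be sanity-checked in a few lines. All inputs come from the article; the flights-per-day and total launch spend are derived here (using the $100–$200/kg midpoint as an assumption) and are not claims Musk made.

```python
# Sanity-check the launch arithmetic. Inputs are the article's figures;
# flight cadence and annual launch bill are back-of-envelope derivations.
TONS_PER_YEAR = 10_000_000   # Musk's estimated annual mass to orbit
KW_PER_TON = 100             # stated power density of the payloads
V3_TONS, V4_TONS = 100, 200  # Starship payload per flight, V3 vs. V4
COST_PER_KG = 150            # assumed midpoint of the $100-$200/kg target

power_tw = TONS_PER_YEAR * KW_PER_TON / 1e9   # 1.0 TW -- the math checks out
flights_v4 = TONS_PER_YEAR / V4_TONS          # 50,000 flights/year on V4
per_day = flights_v4 / 365                    # ~137 launches per day
launch_bill = TONS_PER_YEAR * 1_000 * COST_PER_KG  # kg * $/kg

print(f"{power_tw} TW, {flights_v4:,.0f} flights/yr (~{per_day:.0f}/day)")
print(f"annual launch spend: ${launch_bill / 1e12:.1f} trillion")
```

Roughly 137 launches a day and a ~$1.5 trillion annual launch tab, even at the optimistic price point, puts the scale of the bet in perspective.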
Reusable launchers like Starship are “critical” to moving these payloads. Musk also floated long-term ideas like lunar manufacturing and mass-driver logistics to further cut orbital deployment costs, because if you're going to build a galactic civilization, you might as well go full resource extraction.
The credibility gap
Global AI compute output today sits at about 20 GW per year. All existing semiconductor fabs combined account for roughly 2% of what Terafab would need to reach its terawatt target. While existing foundries remain important, Musk delivered a classic ultimatum: “We either build the Terafab or we don’t have the chips. And we need the chips, so we build the Terafab.” It's the "we're gonna need a bigger boat" of compute infrastructure.
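That 2% figure follows directly from the two numbers in the paragraph above; a one-line check, with both inputs taken from the article:

```python
# The "2%" gap, spelled out: today's annual AI compute additions
# versus the 1 TW (= 1,000 GW) annual target.
CURRENT_GW_PER_YEAR = 20    # global AI compute output today
TARGET_GW_PER_YEAR = 1_000  # Terafab's terawatt goal

share = CURRENT_GW_PER_YEAR / TARGET_GW_PER_YEAR
print(f"{share:.0%}")  # 2%
```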