Jensen Huang's Nod to Bittensor: The 72B Parameter Beast Gets a Greenlight from the GPU God
The decentralized AI scene just got its most credible signal yet—a public, if somewhat measured, thumbs-up from Nvidia's CEO Jensen Huang. It seems peer-to-peer model training is no longer just a crypto fever dream but is inching toward the mainstream's peripheral vision.
On the All-In Podcast, Chamath Palihapitiya held up Bittensor's Covenant-72B as a real-world example of decentralized AI that's actually doing the work. He framed Bittensor as a blockchain-based bazaar where machine-learning models and raw compute are traded, all fueled by a token-incentivized mob of independent operators.
Breaking it down for the normies, Palihapitiya described it as a massive language model trained without a single centralized server farm, powered instead by a globally distributed network of degen GPUs. He highlighted that the crew also managed to train a 4-billion-parameter LLaMA model in a fully distributed way, calling it “a pretty crazy technical accomplishment.” He drew a comparison to the early days of distributed computing projects that vacuumed up idle processing power from around the globe.
Huang, the man who sells the shovels in this gold rush, didn't laugh it off. Instead, he painted the AI market as a two-lane highway: “These two things are not A or B; it’s A and B,” he stated, emphasizing that both decentralized and proprietary approaches can run in parallel without one necessarily crashing into the other.
The GPU baron clarified that models themselves are merely a technology, not a finished product for the masses. He conceded that most users will stick with the polished, all-purpose chatbots like ChatGPT or Claude, but argued that specific industries need specialized, open models they can actually own. “There are all these industries where their domain expertise… has to be captured in a way that they can control,” Huang added, nodding to the sovereignty narrative crypto loves.
Covenant-72B, built on Bittensor’s Subnet 3 (Templar), stands as one of the largest decentralized training runs ever attempted. Over 70 contributors coordinated their GPUs over bog-standard internet, with no central commander. The model packs 72 billion parameters, was trained on roughly 1.1 trillion tokens, and uses clever compression and distributed data tricks to avoid needing a traditional data center. Its benchmark scores are now nipping at the heels of established centralized models, which is why the hype is leaking out of crypto Twitter.
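For a flavor of why those compression tricks matter, here is a minimal, purely illustrative sketch of top-k gradient sparsification, a generic technique for squeezing gradient exchanges down to something a home uplink can carry. This is not Templar's actual code or protocol; the function names and the 1% keep-ratio are assumptions made up for the example.

```python
# Illustrative sketch only: generic top-k gradient compression, NOT Templar's
# actual protocol. Shows how peers can trade gradients over consumer internet
# by shipping only the largest-magnitude entries of each layer's gradient.
import math
import torch

def compress_topk(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)      # positions of the biggest entries
    return indices, flat[indices]                # sparse payload: indices + values

def decompress_topk(indices, values, shape):
    """Rebuild a dense (mostly zero) gradient from the sparse payload."""
    flat = torch.zeros(math.prod(shape), dtype=values.dtype)
    flat[indices] = values
    return flat.reshape(shape)

if __name__ == "__main__":
    grad = torch.randn(4096, 4096)               # one layer's worth of gradient
    idx, vals = compress_topk(grad, ratio=0.01)
    restored = decompress_topk(idx, vals, grad.shape)
    dense_mb = grad.numel() * 4 / 1e6            # fp32 bytes if sent uncompressed
    sparse_mb = (idx.numel() * 8 + vals.numel() * 4) / 1e6
    print(f"dense: {dense_mb:.1f} MB vs compressed: {sparse_mb:.1f} MB per sync")
```

The point is the ratio: shipping roughly 1% of a layer's gradient turns a multi-gigabyte exchange into something a residential connection can handle each step, at the cost of some training efficiency, which is the basic trade any over-the-internet training run has to make.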
The market, of course, did what it does best: react with its wallet. The project's native token TAO pumped 24% after the clip of Palihapitiya and Huang started doing the rounds on social media.
Huang’s perspective ultimately sketches a future of coexistence, not a winner-takes-all war. Proprietary AI will likely rule the consumer-facing roost, while open and decentralized models will dig into specialized, budget-conscious, or control-obsessed niches. For builders, his advice cuts both ways: start open-source to build cred, then layer on proprietary sauce to make rent.
The takeaway? The AI frontier won't be claimed by a single dogma. It'll belong to the players who can code in the open-source trenches one day and navigate the polished, closed-door labs the next, knowing exactly when to deploy each strategy.