Bittensor is the hope of the entire crypto community
In the grand debate of “Does Crypto Still Have a Purpose,” Bittensor is providing the most compelling answer in the entire industry.
Author: 0xai
Special thanks to @DistStateAndMe and their team for their contributions to open-source AI models, as well as for their valuable advice and support for this article.
Why You Should Read This Report
If “decentralized AI training” has gone from impossible to possible, how underestimated is Bittensor?
In early 2026, the entire crypto community was gripped by a sense of fatigue.
The afterglow of the last bull run had long since faded, and talent was flowing rapidly into the AI industry. People who once talked about “the next 100x” were now discussing Claude Code and Openclaw. “Crypto is a waste of time”: you have probably heard this more than once.
But on March 10, 2026, a subnet called Templar within Bittensor quietly announced something.
Over 70 independent participants from around the world, with no central servers and no big corporations coordinating, relying only on crypto incentives, collaboratively trained a 72-billion-parameter AI large model.
The model and related papers have been published on HuggingFace and arXiv, with data openly verifiable.
More importantly: In multiple key tests, this model outperformed Meta’s similarly sized models trained with heavy investment.
After the announcement, TAO’s price stayed silent for nearly 2 days. It only began to surge on the third day, and within 6 days it had risen by about 40%. Why the 2-day delay?
The core argument of this report: crypto investors see “another open-source model” and assume it is not as good as the GPT or Claude they use daily; AI researchers, meanwhile, ignore crypto. The gap between these two communities is creating a cognitive arbitrage window.
Reading Framework
This report is divided into two logical parts:
Part I — Technological Breakthrough: Explains what SN3 Templar actually achieved and why this matters in AI and crypto history.
Part II — Industry Significance: Explains why this event indicates that the Bittensor ecosystem is systematically underestimated and why Bittensor is the hope of the entire crypto community.
Part I: Breakthrough in Decentralized AI Training
1. What does SN3 do?
What is needed to train a large language model?
Traditional answer: Build a massive data center, buy thousands of top GPUs, spend hundreds of millions of dollars, coordinated by a company’s engineering team. That’s the approach of Meta, Google, OpenAI.
SN3 Templar’s approach: Let scattered individuals around the world each contribute one or several GPUs, like puzzle pieces, to combine computing power and jointly train a complete large model.
But here is a fundamental challenge: if participants are globally distributed, distrust each other, and face unstable network latency, how can you ensure the training results are valid? How do you prevent laziness and cheating? How do you motivate continuous contribution?
Bittensor’s answer: Use TAO tokens as incentives. The more effective a participant’s gradient (understood as “contribution to model improvement”), the more TAO they earn. The system automatically scores and settles rewards without any centralized organization.
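The settlement logic described above can be sketched in a few lines. This is a hypothetical toy model, not the actual Subtensor implementation: validators assign each miner a contribution score, and the round's TAO emission is split in proportion to those scores. The function name, miner names, and numbers are all illustrative.

```python
# Toy sketch of score-proportional reward settlement (hypothetical; the real
# Bittensor emission mechanism is more involved).

def settle_round(scores: dict[str, float], emission_tao: float) -> dict[str, float]:
    """Split a round's TAO emission in proportion to contribution scores."""
    total = sum(scores.values())
    if total == 0:
        # No useful gradients this round: nothing is paid out.
        return {miner: 0.0 for miner in scores}
    return {miner: emission_tao * s / total for miner, s in scores.items()}

# Example: three miners; one submitted a useless (zero-score) gradient and earns nothing.
rewards = settle_round({"miner_a": 0.6, "miner_b": 0.3, "lazy_miner": 0.0}, emission_tao=1.0)
```

The key property is that rewards depend only on scored contribution, so the scheme needs no centralized payroll, which is exactly the coordination role the text assigns to TAO.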
This is Bittensor’s SN3 (Subnet #3), code-named Templar.
If Bitcoin proved that decentralized “money” is possible, SN3 is proving that decentralized “AI training” is also possible.
2. What has SN3 achieved?
On March 10, 2026, SN3 Templar announced the completion of training a large language model called Covenant-72B.
What does “72B” mean?: 72 billion parameters. Parameters are the “knowledge storage units” of AI models; more parameters usually mean a smarter model. GPT-3 has 175 billion, Meta’s open-source LLaMA-2 has 70 billion. Covenant-72B is in the same size range as LLaMA-2.
Training scale?: About 1.1 trillion tokens (~5.5 million books, assuming 200,000 words per book).
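The book-equivalence figure above is simple arithmetic. A quick sanity check, treating one token as roughly one word (a deliberate simplification; real tokenizers produce somewhat more tokens than words):

```python
# Back-of-envelope check of the "~5.5 million books" claim.
tokens_trained = 1.1e12      # about 1.1 trillion tokens, per the report
words_per_book = 200_000     # assumption stated in the text
equivalent_books = tokens_trained / words_per_book  # about 5.5 million
```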
Who participated?: Over 70 independent miners, contributing compute power sequentially (up to about 20 nodes per sync round). Training started on September 12, 2025, lasting about 6 months. No central server, no unified organization.
Model performance?: Using mainstream AI benchmarks:
Data source: the HuggingFace Covenant/Covenant-72B-Chat model card
Open source: Apache 2.0 license. Anyone can download, use, and commercialize without restrictions.
Academic backing: Paper submitted to arXiv (2603.08163); core techniques (the SparseLoCo optimizer and the Gauntlet anti-cheat mechanism) were presented at the NeurIPS Optimization Workshop.
3. What does this achievement mean?
For the open-source AI community: Previously, due to funding and compute barriers, training 70B-scale models was the domain of a few big companies. Covenant-72B proves for the first time: A community can train models of comparable size without any centralized funding. This changes the boundaries of who can participate in foundational AI model development.
For AI power structures: The current landscape of foundational models is highly centralized—OpenAI, Google, Meta, Anthropic control the most powerful models. The emergence of decentralized training means this moat is no longer invulnerable. The premise that “only big companies can do foundational models” is being fundamentally challenged.
For the crypto industry: This is the first time crypto projects have made real technical contributions to AI, rather than just “riding the trend.” Covenant-72B comes with HuggingFace models, arXiv papers, and open benchmark data. It sets a precedent: Crypto incentives can serve as a serious infrastructure for AI research.
For Bittensor itself: The success of SN3 transforms Bittensor from a “theoretically feasible decentralized AI protocol” into “a practically validated decentralized AI infrastructure.” This is a qualitative leap from 0 to 1.
4. SN3’s historical significance
Decentralized AI training wasn’t pioneered solely by SN3, but SN3 reached places others hadn’t.
Evolution of decentralized training:
In 4 years, parameters grew from 6B to 72B, a 12x increase. But more important than size is quality: earlier projects mainly demonstrated that decentralized training “can run”; Covenant-72B is the first decentralized model to outperform centralized models on mainstream benchmarks.
Key technological breakthroughs:
5. Is decentralized training underestimated?
Let’s look at data before judging.
Evidence of underestimation
Decentralized models have surpassed Meta’s heavy-investment LLaMA-2-70B.
Gaps compared to top open models (stated honestly): around 20 to 30 percentage points.
But the significance isn’t in beating SOTA; it’s in proving decentralized training is feasible. Behind Qwen2.5 / LLaMA-3.1 are hundreds of millions to billions of dollars, thousands of GPUs, and professional teams; Covenant-72B involves 70+ independent miners with no central coordination.
Trends matter more than snapshots:
In 4 years, decentralized training has moved from “concept experiment” to “performance comparable to centralized training.” The slope of this curve is more meaningful than any single benchmark number.
Moreover, the gap in deep reasoning performance already has a planned remedy: SN81 Grail will handle post-training reinforcement learning (RLHF) to align and enhance the model’s capabilities. This is the same kind of post-training step that turned raw GPT-3-class base models into ChatGPT.
Heterogeneous SparseLoCo is the next milestone. Currently, SN3 requires all miners to use the same GPU model; the next breakthrough, Heterogeneous SparseLoCo, would allow mixed hardware (B200 + A100 + consumer GPUs) to participate in the same training run. Once achieved, the available compute pool expands dramatically.
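The report does not spell out SparseLoCo's internals, but its name points at the generic idea behind communication-efficient distributed optimizers: each miner transmits only the largest-magnitude gradient entries instead of the full vector. A minimal sketch of that top-k compression idea, with purely illustrative numbers (the actual SparseLoCo algorithm may differ):

```python
# Illustrative top-k gradient sparsification: send only the k entries with the
# largest magnitude, cutting communication cost for slow or mixed hardware.
# This is a generic sketch, not the SparseLoCo implementation itself.

def sparsify_topk(gradient: list[float], k: int) -> dict[int, float]:
    """Keep the k largest-magnitude entries as {index: value}; the rest are implicitly zero."""
    ranked = sorted(range(len(gradient)), key=lambda i: abs(gradient[i]), reverse=True)
    return {i: gradient[i] for i in ranked[:k]}

# A bandwidth-limited miner sends 2 of 6 values instead of the full vector.
sparse = sparsify_topk([0.01, -0.9, 0.05, 0.4, -0.02, 0.0], k=2)  # -> {1: -0.9, 3: 0.4}
```

Because only indices and values travel over the network, the compression ratio is independent of the sender's hardware, which is one reason sparse communication pairs naturally with heterogeneous GPU fleets.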
Decentralized training has crossed the feasibility threshold. The performance gap on benchmarks is an engineering problem, not a fundamental barrier.
Part II: The Market Still Doesn’t Understand This
TAO Price Timeline
After SN3’s announcement, TAO’s price movement reveals this cognitive lag:
Note the silence during these 2 days (3/10 → 3/12): the announcement was made, but the price hardly moved.
Why the delay?
Crypto investors see “Bittensor SN3 completed training an AI model”—but they may not grasp the technical significance of “72B decentralized training surpassing Meta on MMLU.”
AI researchers understand the technical significance but ignore crypto.
The cognitive gap between these two communities creates a 2-3 day price lag window.
Most crypto investors’ understanding of Bittensor remains stuck in the previous cycle. Today, over 79 active subnets on Bittensor cover AI agents, compute power, AI training, AI trading, robotics, and more, spanning completely different fields. When the market revalues Bittensor’s ecosystem breadth, this cognitive gap will close, and the correction often manifests as a sharp price surge.
Valuation mismatch of Bittensor
Placing Bittensor in the larger industry context:
SN3 has proven: Bittensor can accomplish decentralized large-model training.
If future AI requires open, permissionless training networks, the only practically validated infrastructure candidate is Bittensor.
The market is pricing an AI infrastructure-level network based on application-layer project valuations.
Even within crypto, Bitcoin’s market share has long been 50-60%; Bittensor’s share in the crypto AI track is only about 11.5%.
As the market re-understands Bittensor’s role in AI infrastructure, this misvaluation will inevitably correct.
Conclusion: Bittensor is the hope of the entire crypto community
If SN3 Templar’s Covenant-72B proves anything, it’s that:
Decentralized networks can coordinate not just capital but also compute and cutting-edge AI R&D.
In recent years, crypto’s role in AI narratives has been mostly peripheral. Many projects rely on conceptual hype, emotional speculation, or capital narratives, lacking verifiable technical output. SN3 is a clear exception.
It didn’t introduce a new token narrative or package an “AI + Web3” application-layer product; instead, it achieved something more fundamental and more difficult:
Training a 72B-scale model without centralized coordination.
Participants come from all over the world and do not need to trust one another; the system relies on on-chain incentives and verification to automatically coordinate training contributions and reward distribution.
Crypto mechanisms have, for the first time, organized real productivity in AI.
Many still don’t grasp SN3’s historical significance. Just as many didn’t realize that Bitcoin proved not just “better payments” but the value of trustless consensus, today many see only benchmarks, model releases, or a price rally.
But the real change is that Bittensor is demonstrating:
Open-source communities can contribute code, and academia can publish papers, but when it comes to super-large-scale training, long-term collaboration, cross-region scheduling, anti-cheat, and profit sharing, good intentions and reputation systems are far from enough.
Therefore, is Bittensor underestimated? The answer is not “possibly,” but “significantly and systematically underestimated.”
In the grand debate of “Does Crypto Still Have a Purpose,” Bittensor is providing the industry’s most powerful answer.
And for this reason: Bittensor is the hope of the entire crypto community.