{"id":"emplar-sn3","title":"τemplar (SN3)","content":"**Templar**, often stylized as **τemplar**, is a decentralized artificial intelligence (AI) protocol that operates as Subnet 3 (SN3) on the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network. The project is designed to create an incentivized, internet-wide marketplace for computational resources, primarily for the distributed training of large-scale AI models. It aims to provide a more democratic, cost-effective, and scalable alternative for AI development compared to centralized cloud providers. ​​ [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)​\n\n## Overview\n\nTemplar's core mission is to democratize access to the immense computational power required for training advanced AI, particularly Large Language Models (LLMs). The protocol functions as a Decentralized Physical Infrastructure Network ([DePIN](https://iq.wiki/wiki/depin)), connecting participants who contribute their hardware (Miners) with the network's need for processing power. In return for contributing GPU and CPU resources to collaborative training tasks, miners are rewarded with the project's native [cryptocurrency](https://iq.wiki/wiki/cryptocurrency). [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)​\n\nThe project leverages the underlying architecture and [consensus mechanism](https://iq.wiki/wiki/consensus-mechanism) of the [Bittensor](https://iq.wiki/wiki/bittensor-tao) (τ) [blockchain](https://iq.wiki/wiki/blockchain). As Subnet 3, Templar competes for a share of the Bittensor network's token emissions, which are allocated based on the value and intelligence demonstrated by the subnet. The protocol's incentive system is designed to evaluate and reward the quality of computational work, ensuring that contributions genuinely improve the collective AI model being trained. [\\[2\\]](#cite-id-orESm01zj7f5z7q1)​\n\nTemplar's technology focuses on overcoming the significant barriers associated with distributed computing, such as high communication costs and a lack of coordination. It employs communication-efficient algorithms and a [blockchain](https://iq.wiki/wiki/blockchain)-based incentive system to organize a global, permissionless network of contributors. [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa) The project's vision also includes the future development of a decentralized marketplace where users could hire autonomous [AI agents](https://iq.wiki/wiki/ai-agents) developed on the network for various digital tasks. [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)​\n\n## History\n\nThe foundational research and whitepaper for Templar were published in the second and third quarters of 2024, outlining the project's vision for a decentralized autonomous agent network and a framework for distributed LLM training. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN) Following this, the project's native token, TPLR, was introduced through an Initial DEX Offering (IDO) in August 2024. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)​\n\nIn September 2024, Templar successfully secured a slot on the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network and officially launched its [mainnet](https://iq.wiki/wiki/mainnet) as Subnet 3 (SN3), becoming one of the first ten subnets in the ecosystem. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN) In the first quarter of 2025, the project team created and distributed the SN3 subnet token via a fair launch mechanism, airdropping it to early network participants and [liquidity providers](https://iq.wiki/wiki/liquidity-providers). [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)​\n\nA significant milestone occurred in the first quarter of 2025 with the initiation of the first \"Crusade\" campaign. This large-scale, coordinated training event focused on developing a specialized Large Language Model for code generation, serving as a major proof-of-concept for the network's capabilities. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) Later in 2025, the team released \"Templar v2,\" an update to the subnet's incentive mechanism aimed at better rewarding agent creativity and problem-solving abilities. [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)​\n\nBy the fourth quarter of 2025, the network's participation had grown to surpass 1,000 active, concurrent miners. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) In early 2026, the Templar research team published a paper detailing a comparative analysis that showed up to a 30% cost-efficiency improvement for specific AI workloads compared to traditional cloud providers. During the same period, the SN3 token experienced a surge in interest amid broader growth in the [Bittensor](https://iq.wiki/wiki/bittensor-tao) ecosystem, reaching its all-time high price. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)​\n\n## Technology and Architecture\n\nTemplar's architecture is built on the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network and is designed as a [peer-to-peer](https://iq.wiki/wiki/peer-to-peer-trading-p2p) system that coordinates two primary roles: Miners and [Validators](https://iq.wiki/wiki/validator). The system uses specialized algorithms and a robust incentive mechanism to facilitate distributed machine learning. [\\[2\\]](#cite-id-orESm01zj7f5z7q1)​\n\n### Core Components\n\n* **Miners (Compute Providers):** Miners are network participants who connect their hardware (GPUs and CPUs) to perform the core model training tasks. They receive and execute \"Training Runs,\" which are discrete AI training jobs. A miner's workflow involves loading a portion of the dataset, running forward and backward passes on the model to compute a gradient, compressing the gradient, and uploading it to distributed storage. They are rewarded in proportion to the quality and effectiveness of their contributions. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[2\\]](#cite-id-orESm01zj7f5z7q1)\n* **Validators:** Validators are network participants who stake tokens to secure the network, orchestrate the training process, and ensure quality control. Their primary function is to evaluate the work submitted by miners. They aggregate the gradients from all active miners, measure the performance improvement (loss reduction) each gradient provides, and calculate a performance score for each miner. These scores are used to assign weights on the Bittensor blockchain, which in turn dictates the distribution of rewards. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[2\\]](#cite-id-orESm01zj7f5z7q1)\n\n### Incentive Mechanism\n\nTemplar’s incentive structure is based on [Bittensor’s](https://iq.wiki/wiki/bittensor-tao) [Proof-of-Intelligence](https://iq.wiki/wiki/proof-of-intelligence-poi) consensus, rewarding participants based on the value they add to the collective. The project developed a proprietary system named Gauntlet to manage this process. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa)​\n\nGauntlet assesses participants using a two-stage mechanism:\n\n1. **Uptime and Reliability:** It measures the availability and consistency of a contributor's hardware.\n2. **Contribution Value:** It evaluates the quality and impact of the computational work provided by the participant. [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa)\n\nTo score and rank miners against each other, the system uses the OpenSkill (PlackettLuce) rating algorithm, a method designed for multi-player rating environments. The resulting ratings are translated into on-chain weights, and miners receive rewards from the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network in direct proportion to their assigned weights. This ensures that high-quality contributions earn greater rewards. [\\[2\\]](#cite-id-orESm01zj7f5z7q1)​\n\n### Distributed Training and Communication\n\nA core technical challenge in distributed training is the high communication cost of exchanging large amounts of data between [nodes](https://iq.wiki/wiki/node). Templar addresses this with its SparseLoCo algorithm and a multi-step gradient compression technique. [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa) [\\[2\\]](#cite-id-orESm01zj7f5z7q1)​\n\n#### SparseLoCo Algorithm\n\nSparseLoCo is a communication-efficient training algorithm designed for training LLMs in low-bandwidth environments like the public internet. It combines two key techniques:\n\n* **TOP-k Sparsification:** Selects only the most significant gradients for communication, achieving a sparsity of 1-3% (meaning 97-99% of gradient data is not transmitted).\n* **Quantization:** Reduces the numerical precision of the data being sent, using as low as 2-bit quantization.\n\nThis combination allows for extreme compression, reducing communication costs while reportedly improving model performance compared to alternative methods. [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa)​\n\n#### Gradient Compression and Exchange\n\nThe technical process for compressing and exchanging gradients involves several steps:\n\n1. **Discrete Cosine Transform (DCT):** [Gradients](https://iq.wiki/wiki/gradients) are converted into their frequency domain representation.\n2. **Top-K Selection:** Only the most significant coefficients are retained, with a `topk_compression` ratio set to 32.\n3. **Momentum Tracking:** Gradient momentum is maintained across updates to preserve the learning trajectory during training.\n4. **Data Exchange:** Compressed gradients are uploaded and downloaded from Cloudflare R2, which serves as the distributed storage layer for gradients, datasets, and model checkpoints.\n\nThis entire process is managed by a communication system built with Python's `asyncio` to handle concurrent operations. [\\[2\\]](#cite-id-orESm01zj7f5z7q1)​\n\n### Key Network Concepts\n\n* **Training Runs:** The fundamental, discrete units of work on the network, consisting of processing a data batch and updating model weights. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)\n* **Crusades:** Large-scale, community-focused training events designed to harness the full power of the network for an ambitious AI model, often with enhanced rewards. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)\n* **Templar Agents:** A concept for autonomous [AI agents](https://iq.wiki/wiki/ai-agents) capable of performing complex digital tasks. The long-term vision includes a marketplace where users can hire these agents, with payment settled in the network's token. [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)\n\n## Key Projects and Models\n\nTemplar has undertaken several significant projects to demonstrate and validate its technology for large-scale, distributed AI training. [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa)​\n\n### 1.2B Parameter LLM Training\n\nThis project served as an early proof-of-concept, involving the training of a 1.2 billion parameter Large Language Model. It was the first major demonstration of the Gauntlet incentive system being deployed on the [Bittensor](https://iq.wiki/wiki/bittensor-tao) [blockchain](https://iq.wiki/wiki/blockchain) to orchestrate a training effort with completely permissionless contributions from a global network of participants. The project successfully validated the use of token-based incentives for organizing distributed AI training. [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa)​\n\n### Covenant-72B Model Pre-training\n\nThe pre-training of Covenant-72B, a 72-billion parameter LLM, was described by the project as the largest collaborative and globally distributed pre-training run of its kind. This project showcased the SparseLoCo algorithm at a massive scale, enabling open and permissionless participation from contributors worldwide. The training was managed by a live [blockchain](https://iq.wiki/wiki/blockchain) protocol and was conducted on a dataset of approximately 1.1 trillion tokens. [\\[3\\]](#cite-id-s62dVyrlIjoyXCOa)​\n\n## Tokenomics\n\nThe Templar ecosystem appears to involve two distinct tokens, based on information from different sources: the main project token, TPLR, and a subnet-specific token, SN3. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)​\n\n### TPLR Token\n\nThe TPLR token is presented as the primary token for the Templar protocol.\n\n* **Ticker:** TPLR\n* **Maximum Supply:** 1,000,000,000 $TPLR\n* **Utility:**\n\n * **Staking:** Used by validators to participate in consensus and by delegators to earn a share of rewards.\n * **Incentives:** The primary mechanism for rewarding miners for computational contributions.\n * **Governance:** The project's roadmap includes plans for on-chain governance where TPLR holders can vote on protocol upgrades.\n* **Initial Distribution (Estimated):**\n\n * [Mining](https://iq.wiki/wiki/mining) & Validation Rewards: ~50%\n * Treasury/Ecosystem Fund: ~20%\n * Team: ~15% (subject to a vesting schedule)\n * Early Investors/Partners: ~10%\n * Public Sale/IDO: ~5%\n* **Market Data (As of April 10, 2026):**\n\n * **Market Capitalization:** Approx. $150 Million\n * **Circulating Supply:** Approx. 300,000,000 $TPLR\n * **All-Time High:** $0.95 (November 15, 2025)\n * **All-Time Low:** $0.10 (September 20, 2024)\n * **Key Exchanges:** [Binance](https://iq.wiki/wiki/binance), [KuCoin](https://iq.wiki/wiki/kucoin), Gate.io, [Uniswap](https://iq.wiki/wiki/uniswap)\n\nThe information in this subsection is based on the project's official website. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)​\n\n### SN3 Subnet Token\n\nThe SN3 token is specific to Templar's operations as Subnet 3 within the [Bittensor](https://iq.wiki/wiki/bittensor-tao) ecosystem and is primarily traded on [decentralized exchanges](https://iq.wiki/wiki/decentralized-exchange) native to that environment.\n\n* **Ticker:** SN3\n* **Asset Type:** Bittensor Subnet Token\n* **Max Supply:** 21,000,000 SN3\n* **Circulating Supply (As of April 10, 2026):** 4,268,617 SN3\n* **Utility:**\n\n * **Governance:** SN3 holders can participate in governance related to Subnet 3's parameters and development.\n * **Staking & Access:** [Staking](https://iq.wiki/wiki/staking) SN3 may be required to access premium network features or deploy custom tasks.\n * **Medium of Exchange:** Planned as the primary payment method in the future autonomous agent marketplace.\n* **Distribution:** The SN3 token followed a \"fair launch\" model with no pre-mine or venture capital allocation, primarily distributed to early participants.\n* **Market Data (As of April 10, 2026):**\n\n * **Market Capitalization:** $41,173,252\n * **All-Time High:** $44.47\n * **All-Time Low:** $4.83\n * **Key Exchanges:** Primarily traded on [Bittensor](https://iq.wiki/wiki/bittensor-tao) ecosystem DEXs like Subnet Tokens.\n\nThe information in this subsection is based on data provided by [CoinGecko](https://iq.wiki/wiki/coingecko). [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)​\n\n## Team\n\nThe development team behind Templar operates under pseudonyms. The project is led by a lead developer known as 'Helios,' who is responsible for the protocol's architecture. The research division is headed by a figure identified as 'Aethel' on the project's website and as 'Aethelred' in other ecosystem data sources; this individual focuses on AI training and validation methodologies. [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g) [\\[4\\]](#cite-id-bwDzHJrVBpgdBoUN)​\n\n## Quotes\n\n> The centralization of AI training poses a systemic risk to innovation. τemplar aims to democratize access to hyperscale AI by creating a transparent, permissionless, and incentivized market for computation.\n> — From the Templar [Whitepaper](https://iq.wiki/wiki/white-paper) [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)\n\n> Every GPU that joins τemplar is a vote for an open and decentralized AI future. We are not just building a network; we are forging a collective intelligence.\n> — 'Helios', Lead Developer [\\[1\\]](#cite-id-75ABTKAEQ8yOAC2g)","summary":"τemplar (SN3) is a decentralized AI protocol on the Bittensor network. It functions as an incentivized marketplace for computational resources, allowing participants to contribute GPU/CPU power for collaborative AI model training.","images":[{"id":"QmaMoAdzzKgEtz4Sv1mPuYjREVFaPqpCatf2CzkHd2jJNZ","type":"image/jpeg, image/png"}],"categories":[{"id":"dapps","title":"dapps"}],"tags":[{"id":"AI"},{"id":"Protocols"},{"id":"Blockchains"},{"id":"Developers"}],"media":[{"id":"QmQWc3Y5xApuWeqDkJvb6eALWhCqTW85yAmiXSA7HoTFPU","name":"Cópia de Design sem nome (2).png","caption":"","thumbnail":"QmQWc3Y5xApuWeqDkJvb6eALWhCqTW85yAmiXSA7HoTFPU","source":"IPFS_IMG"},{"id":"QmZsDATazVG8pb1AczTnERkXYDNZkS8tS8vgmZyzXsys31","name":"9ETxbzHK_400x400.png","caption":"","thumbnail":"QmZsDATazVG8pb1AczTnERkXYDNZkS8tS8vgmZyzXsys31","source":"IPFS_IMG"}],"metadata":[{"id":"references","value":"[\n {\n \"id\": \"75ABTKAEQ8yOAC2g\",\n \"url\": \"https://www.tplr.ai/\",\n \"description\": \"Templar homepage overview\",\n \"timestamp\": 1775787354772\n },\n {\n \"id\": \"orESm01zj7f5z7q1\",\n \"url\": \"https://docs.tplr.ai/\",\n \"description\": \"Templar technical documentation overview\",\n \"timestamp\": 1775787354772\n },\n {\n \"id\": \"s62dVyrlIjoyXCOa\",\n \"url\": \"https://www.tplr.ai/research\",\n \"description\": \"Templar research paper on distributed training\",\n \"timestamp\": 1775787354772\n },\n {\n \"id\": \"bwDzHJrVBpgdBoUN\",\n \"url\": \"https://www.coingecko.com/en/coins/templar\",\n \"description\": \"CoinGecko overview of Templar's AI agent marketplace goal\",\n \"timestamp\": 1775787354772\n }\n]"},{"id":"website","value":"https://www.tplr.ai/"},{"id":"coingecko_profile","value":"https://www.coingecko.com/en/coins/templar"},{"id":"coinmarketcap_url","value":"https://coinmarketcap.com/currencies/templar/"},{"id":"references","value":"https://docs.tplr.ai/"},{"id":"github_profile","value":"https://github.com/tplr-ai/templar"},{"id":"contract_url","value":"https://taostats.io/subnets/3/chart"},{"id":"commit-message","value":"\"Added Templar wiki page\""}],"events":[{"id":"256e5ba7-6746-4557-9125-5aa758922be4","date":"2024-09","title":"Templar Launches on Bittensor Mainnet","type":"CREATED","description":"Templar successfully launched on the Bittensor mainnet after winning a registration slot for Subnet 3, establishing itself as a decentralized AI training protocol.","link":null,"multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null},{"id":"82af99c3-4ed8-4e75-b78b-a01d5ba62ac3","date":"2025-01","title":"First 'Crusade' Campaign Initiated","type":"DEFAULT","description":"The first 'Crusade' campaign was initiated, focusing on training a specialized Large Language Model for code generation, serving as a major proof-of-concept for the network's capabilities.","link":null,"multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null},{"id":"5266f0d3-3ca8-4a12-a216-a2416f663ec6","date":"2025-10","title":"Network Surpasses 1,000 Active Miners","type":"DEFAULT","description":"The network grew significantly, surpassing 1,000 active, concurrent miners contributing computational power to the protocol.","link":null,"multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null},{"id":"2c5c746b-bfbd-4060-a8a2-82440a093e86","date":"2026-02","title":"Covenant-72B Model Pre-training","type":"DEFAULT","description":"Completed the pre-training of Covenant-72B, a 72-billion parameter LLM, which was the largest collaborative, globally distributed pre-training run at the time.","link":null,"multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null},{"id":"776bc2c1-57c7-42c2-adf2-a5412a27eb62","date":"2026-01","title":"Published Research on Cost-Efficiency","type":"DEFAULT","description":"The research team published a paper demonstrating up to a 30% cost-efficiency improvement for specific AI workloads compared to traditional cloud providers.","link":null,"multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null}],"user":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"author":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"operator":{"id":"0xd5893989b9952c6568a99F854795AcC5Ae480D56"},"language":"en","version":1,"linkedWikis":{"blockchains":[],"founders":["anon"],"speakers":[]}}