{"id":"sam-dare","title":"Sam Dare","content":"**Samuel \"Sam\" B. Dare** is an artificial intelligence (AI) researcher, entrepreneur, and prominent advocate for decentralized AI development. He is the founder of Covenant AI and Templar, affiliated organizations focused on creating open-source, decentralized foundation models and the infrastructure to support them. Dare is known for his work within the [Bittensor](https://iq.wiki/wiki/bittensor-tao) ecosystem, his contributions to decentralized training protocols, and his philosophical arguments concerning the \"political economy of foundation models,\" which posit that decentralization is a necessary counterweight to the concentration of power in the hands of large technology corporations.\n\n## Education\n\nSam Dare graduated from the Saïd Business School at the University of Oxford in 2018 and holds a significant number of professional certifications in finance, investment, and [blockchain](https://iq.wiki/wiki/blockchain) technology, including Passed [Level](https://iq.wiki/wiki/level) 2 of the CFA Program, Passed Level 1 of the Financial Risk Manager (FRM), and Certified [Bitcoin](https://iq.wiki/wiki/bitcoin) Professional [\\[2\\]](#cite-id-m2y061hnppttlsDO). \n\n## Career\n\nDare's career is centered on the development and promotion of decentralized artificial intelligence. He is described as a \"[blockchain](https://iq.wiki/wiki/blockchain) veteran\" with a background that merges experience in enterprise software with decentralized systems [\\[4\\]](#cite-id-yC9telWFRTHnwVag). 
After a period as an independent researcher focusing on mechanism design for distributed systems and the political economy of AI, he began formalizing his work through a series of ventures [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR).\n\nIn September 2023, Dare founded Templar, an AI research and development lab focused on studying the geopolitical and economic power structures surrounding foundation models [\\[3\\]](#cite-id-ZYtQTAjXZcqBGBgO). This was followed in January 2024 by the founding of Covenant AI, which acts as the operational arm to Templar's research, coordinating the practical development of open-source, decentralized AI models [\\[3\\]](#cite-id-ZYtQTAjXZcqBGBgO) [\\[2\\]](#cite-id-m2y061hnppttlsDO). These entities, often referred to interchangeably as Templar AI, Templar Covenant, or Covenant.ai, are at the core of Dare's work as a prominent builder within the [Bittensor](https://iq.wiki/wiki/bittensor-tao) decentralized AI network. Some accounts list him as having previously worked as an AI Researcher at the Defense Advanced Research Projects Agency (DARPA) and as a Postdoctoral Researcher at the University of Cambridge [\\[3\\]](#cite-id-ZYtQTAjXZcqBGBgO).\n\n## Major Works and Projects\n\nDare's work primarily involves the creation of protocols, models, and networks designed to make the development of advanced AI accessible and permissionless.\n\n### Templar and Covenant\n\nTemplar and Covenant are the two main entities founded by Dare to pursue his vision of decentralized AI.\n\n* **Templar:** Founded in September 2023, Templar functions as the research and philosophical core of the operation. It is a self-described \"bootstrapped R\\&D\" lab that investigates the political economy of foundation models, arguing that control over AI is a new form of geopolitical power [\\[3\\]](#cite-id-ZYtQTAjXZcqBGBgO).\n* **Covenant AI:** Founded in January 2024, Covenant AI is the operational organization that puts Templar's research into practice. 
It coordinates the engineering efforts required to build and train large-scale, open-source AI models using a globally distributed network of compute providers [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR).\n\n### Covenant-72B Model\n\nUnder Dare's leadership, Covenant AI successfully coordinated the training of Covenant-72B, a 72-billion parameter large language model (LLM). The project was publicized as the \"largest decentralized LLM training run\" at the time of its completion. The training was distributed across a network of approximately 160 GPUs operated by 20 anonymous peers, leveraging the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network among other resources. The project served as a significant proof-of-concept, demonstrating that state-of-the-art AI models could be trained without exclusive reliance on centralized hyperscale data centers controlled by large corporations [\\[6\\]](#cite-id-E9JB6tQVZNR1T1dC) [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR).\n\n### Incentive Mechanisms and Protocols\n\nA core component of Dare's work is the design of economic incentives to coordinate permissionless network participants.\n\n#### Templar Training Protocol and Gauntlet System\n\nDare is the architect of the Templar Training Protocol, designed to manage the complexities of decentralized model training across standard internet connections [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR). A key innovation within this work is the \"Gauntlet\" incentive system, detailed in a May 2025 research paper co-authored by Dare. Gauntlet is a system for coordinating and verifying the contributions of anonymous participants in a distributed LLM training process.\n\nThe system uses a two-stage evaluation process where network peers assess each other's computational work. It leverages the OpenSkill algorithm to rate the reliability of each node and an optimizer (DeMo) suited for asynchronous distributed environments. 
The viability of the Gauntlet system was demonstrated through the successful training of Templar-1B, a 1.2-billion parameter LLM that served as a proof-of-concept [\\[1\\]](#cite-id-bGNPmOlY7s5HECBJ).\n\n### Bittensor Subnets\n\nDare and his team at Templar AI create and operate foundational subnets on the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network, which are specialized networks with their own incentive mechanisms.\n\n* **Templar (Subnet 3):** This was the first distributed, permissionless LLM training subnet on the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network. It established a framework where miners could collaboratively train a model and be rewarded for their contributions, serving as a blueprint for later projects [\\[4\\]](#cite-id-yC9telWFRTHnwVag).\n* **Grail (Subnet 81):** Described as a direct evolution of Subnet 3, Grail is a decentralized AI pre-training network. It aims to democratize the creation of foundation models by allowing a permissionless network of miners to collectively train a model from scratch. Validators on the subnet score weight updates proposed by miners based on how much they reduce the model's loss, creating an incentive-driven environment for collaborative development [\\[4\\]](#cite-id-yC9telWFRTHnwVag) [\\[1\\]](#cite-id-bGNPmOlY7s5HECBJ).\n* **Basilica (Subnet 39):** This subnet was designed to function as a trustless, decentralized marketplace for GPU compute. It connects users needing computational power with providers of GPU resources, using the Bittensor network to facilitate the exchange securely and permissionlessly [\\[4\\]](#cite-id-yC9telWFRTHnwVag).\n\n## Philosophy and Public Commentary\n\nDare is a vocal proponent of a specific philosophical vision for AI, which he articulates through writings, social media, and public appearances. 
His arguments center on the distribution of power over information technology.\n\n### The Political Economy of Foundation Models\n\nDare's core thesis, outlined in writings such as \"The Political Economy of Foundation Models,\" is that an \"AI oligopoly\" is forming due to the immense concentration of capital and computational resources required to train foundation models [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR) [\\[4\\]](#cite-id-yC9telWFRTHnwVag). He is critical of the dominant role played by large technology corporations, questioning their suitability as stewards of foundational AI. In a podcast appearance, he stated, \"Mark Zuckerberg stole democracy. Let us never forget that. Google has committed a litany of privacy and human rights violations. Are these the people you want to give Prometheus's fire to?\" [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR).\n\nHe argues that true control over AI lies not with those who use or fine-tune models (\"consumption\"), but with those who can train them from scratch (\"creation\"). He considers reliance on open-weight models released by centralized companies a strategic vulnerability, stating, \"The problem with 'open-weight' releases from centralized entities is that their commercial interests may change and the soil can be salted at any time, breaking the supply chain for every single developer downstream\" [\\[6\\]](#cite-id-E9JB6tQVZNR1T1dC).\n\n### Sovereign AI\n\nDare promotes the concept of \"sovereign AI,\" where communities, organizations, and nations can pool their resources to build AI models aligned with their own cultural values and economic interests. This provides an alternative to technological dependence on a few, primarily U.S.-based, providers. 
In commentary for *Mint*, he identified India as a nation well-suited to this approach: \"With a rapidly growing developer ecosystem and strong government support, India is uniquely positioned to cultivate sovereign AI capabilities\" [\\[7\\]](#cite-id-JG7pvGOJb483J1vE) [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR).\n\n### \"Iterations vs. Themes\" Framework\n\nDare frames the push for decentralized technology within a historical context he calls the \"Iterations vs. Themes\" framework. In this view, the overarching \"theme\" is the persistent goal of distributing power over information technology. Specific technological movements—like the early internet or [peer-to-peer](https://iq.wiki/wiki/peer-to-peer-trading-p2p) file sharing—are \"iterations\" in service of this theme. He argues that while individual iterations can fail, the underlying theme of decentralization endures and eventually finds a successful form. He described the current effort as critical, stating, \"Decentralized training is, I think, humanity's last stand... Because an iteration fails, iterations can fail, but ultimately the theme survives\" [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR).\n\n## Publications\n\n* **\"Incentivizing Permissionless Distributed Learning of LLMs\"** (May 2025): A research paper submitted to the preprint server arXiv, co-authored by Dare. It details the \"Gauntlet\" incentive system and its successful use in training the Templar-1B model on the Bittensor network [\\[1\\]](#cite-id-bGNPmOlY7s5HECBJ).\n* **\"The Political Economy of Foundation Models\"** (March 2026): A post outlining his core philosophical arguments regarding the centralization of AI power and the strategic importance of decentralized training [\\[5\\]](#cite-id-09qTXBC2F3iDfBNR).\n\n## Public Appearances and Media\n\nDare maintains an active public presence to communicate project progress and share his views on the AI industry. 
He uses the X (formerly Twitter) handle `@DistStateAndMe` to provide regular, detailed updates, including weekly \"TGIF\" (Thank God It's Friday) community roundups about projects like Templar and Grail [\\[4\\]](#cite-id-yC9telWFRTHnwVag). He has provided expert commentary for media outlets like *Mint* on the global AI market. As of April 2026, he is scheduled to be a featured guest and speaker at the 2027 [Bittensor](https://iq.wiki/wiki/bittensor-tao) Subnet Ideathon during the Sankalp Africa Summit in Nairobi, Kenya [\\[4\\]](#cite-id-yC9telWFRTHnwVag) [\\[7\\]](#cite-id-JG7pvGOJb483J1vE).\n\n## Interviews\n\n### Covenant Subnet Architecture #01\n\nIn an interview published on November 17, 2025, on the YouTube channel Hash Rate Podcast (Episode 145), Sam Dare discussed the structure and intended function of Covenant’s subnet system, consisting of Templar (3), Basilica (39), and Grail (81).\n\n[YOUTUBE@VID](https://youtube.com/watch?v=sDBDv0WPyDQ)\n\nDare describes Covenant as an initiative focused on distributing different stages of AI model development across separate subnets. In his explanation, Templar is associated with pre-training processes using geographically distributed compute resources, Grail is associated with post-training activities such as model refinement, and Basilica is designed to coordinate the allocation of compute through an incentive-based system.\n\nHe states that this structure is intended to operate as an alternative to centralized AI training environments. According to his account, the use of distributed GPU networks can reduce training costs while maintaining comparable processing performance, despite challenges related to coordination and latency.\n\nDare indicates that models trained through Templar currently correspond to mid-range performance levels when compared to centralized systems, estimating them at approximately 60% of leading model capabilities. 
He notes that further improvements involve increasing complexity and resource requirements.\n\nHe also explains that Basilica applies an incentive mechanism in which participants are evaluated based on the efficiency and quality of compute provided, rather than on availability alone. This model is described as a method for allocating resources within the network.\n\nThe interview includes references to potential use cases involving organizations such as companies, public institutions, and academic groups, which, according to Dare, could utilize such infrastructure for repeated model training. He presents this as a change in how access to AI training resources may be structured.\n\nThe discussion also references token-related mechanisms within the [Bittensor](https://iq.wiki/wiki/bittensor-tao) ecosystem, including plans to integrate value flows across the different subnets. [\\[8\\]](#cite-id-m9MCWEeCgiFJw0gN) \n\n### Decentralized AI and Templar #02\n\nIn an interview published on June 4, 2025, on the YouTube channel Ventura Labs, Samuel Dare discussed his involvement with Templar (Subnet 3) within the [Bittensor](https://iq.wiki/wiki/bittensor-tao) network and described his views on decentralized AI training.\n\n[YOUTUBE@VID](https://youtube.com/watch?v=EkOJIluzOsI)\n\nDare states that Templar operates as a permissionless platform focused on the decentralized pretraining of large-scale machine learning models. He presents decentralized AI development as an alternative to systems associated with large technology companies, referencing Google as a primary point of comparison, while distinguishing it from organizations such as OpenAI.\n\nHe describes a shift from earlier work in [blockchain](https://iq.wiki/wiki/blockchain) toward a focus on decentralized AI systems. In this context, Templar is presented as a network in which participants, referred to as miners, contribute computational resources to model training and are evaluated through competitive mechanisms. 
Dare notes that the design of such systems depends on incentive structures that encourage consistent participation and limit adversarial behavior.\n\nRegarding technical aspects, Dare identifies challenges related to scaling distributed training processes, including communication overhead and coordination between nodes. He references the use of gradient compression methods and synchronous training approaches to address bandwidth and efficiency constraints. He also discusses the role of hardware infrastructure, including the potential relevance of open-source hardware in relation to proprietary systems used by large-scale AI providers.\n\nDare also outlines a model in which contributors may hold a form of stake in trained models through token-based structures. He describes Templar as a system intended to distribute control and participation across its network rather than concentrating ownership.\n\nThe interview presents Dare’s perspective that decentralized AI training systems may function as an alternative organizational model for developing machine learning infrastructure, with an emphasis on distributed participation and incentive-based coordination. [\\[9\\]](#cite-id-tGyVfhEHpy9Wa2mX) ","summary":"Sam Dare is an AI researcher and entrepreneur, founder of Covenant AI. 
He is a key figure in decentralized AI, particularly on the Bittensor network, and is known for his work on the political economy of foundation models and creating sovereign AI.","images":[{"id":"QmPrSztt8RaXCy1ukNgb7KpLHBjnGssEEbzKfPkELBXFAs","type":"image/jpeg, image/png"}],"categories":[{"id":"people","title":"people"}],"tags":[{"id":"PeopleInDeFi"},{"id":"Founders"},{"id":"AI"},{"id":"Protocols"},{"id":"Developers"}],"media":[{"id":"QmRhuvfDB9vdaybaeavDxN3SamKA6px5J2pXBk4h2zvpPq","type":"GALLERY","source":"IPFS_IMG"},{"id":"QmVPm6vnk8PpnqMwRT9a6znbNWcDQqcbKda1WT2XXcPoKv","type":"GALLERY","source":"IPFS_IMG"},{"id":"https://www.youtube.com/watch?v=sDBDv0WPyDQ","name":"sDBDv0WPyDQ","caption":"","thumbnail":"https://www.youtube.com/watch?v=sDBDv0WPyDQ","source":"YOUTUBE"},{"id":"QmSokv8SyqZHBdBCCkjNzT8TdJNbJsqScQjrsrAMELVn9b","name":"Cópia de Design sem nome.png","caption":"","thumbnail":"QmSokv8SyqZHBdBCCkjNzT8TdJNbJsqScQjrsrAMELVn9b","source":"IPFS_IMG"},{"id":"https://www.youtube.com/watch?v=EkOJIluzOsI","name":"EkOJIluzOsI","caption":"","thumbnail":"https://www.youtube.com/watch?v=EkOJIluzOsI","source":"YOUTUBE"}],"metadata":[{"id":"references","value":"[{\"id\":\"bGNPmOlY7s5HECBJ\",\"url\":\"https://arxiv.org/html/2505.21684v1\",\"description\":\"Analysis of Sam Dare's arXiv paper and background\",\"timestamp\":1775677630829},{\"id\":\"m2y061hnppttlsDO\",\"url\":\"https://www.linkedin.com/in/samuel-b-dare/\",\"description\":\"Samuel Dare's LinkedIn profile\",\"timestamp\":1775677630829},{\"id\":\"ZYtQTAjXZcqBGBgO\",\"url\":\"https://scholar.google.com/citations?user=Lsv7p88AAAAJ\\\\&hl=en\",\"description\":\"Analysis based on Samuel Dare's research profile\",\"timestamp\":1775677630829},{\"id\":\"yC9telWFRTHnwVag\",\"url\":\"https://subnetalpha.ai/subnet/grail/\",\"description\":\"Profile of Sam Dare on 
SubnetAlpha\",\"timestamp\":1775677630829},{\"id\":\"09qTXBC2F3iDfBNR\",\"url\":\"https://www.synapz.org/posts/2026-03-22-templar-political-economy-of-foundation-models\",\"description\":\"Analysis of Sam Dare and Templar's philosophy\",\"timestamp\":1775677630829},{\"id\":\"E9JB6tQVZNR1T1dC\",\"url\":\"https://simplytao.ai/blog/covenant-72b-the-largest-decentralized-llm-training-run\",\"description\":\"Blog post on the Covenant-72B training run\",\"timestamp\":1775677630830},{\"id\":\"JG7pvGOJb483J1vE\",\"url\":\"https://m.dailyhunt.in/news/india/english/mint+english-epaper-minten/anthropics+next+big+opportunity+experts+see+india+as+a+potential+growth+market+if+us+shuns+the+ai+firm-newsid-n704289701\",\"description\":\"Mint article featuring commentary from Sam Dare\",\"timestamp\":1775677630830},{\"id\":\"m9MCWEeCgiFJw0gN\",\"description\":\"Hash Rate - Ep 145 - TAO Subnets Templar (3), Basilica (39), Grail (81)\\n\",\"timestamp\":1775678234786,\"url\":\"https://www.youtube.com/watch?v=sDBDv0WPyDQ\"},{\"id\":\"tGyVfhEHpy9Wa2mX\",\"description\":\"Sam Dare: Templar Bittensor Subnet 3, Decentralized Pretraining Models, Open-Source AI | Ep. 46\\n\",\"timestamp\":1775678546050,\"url\":\"https://www.youtube.com/watch?v=EkOJIluzOsI\"}]"},{"id":"twitter_profile","value":"https://x.com/DistStateAndMe"},{"id":"linkedin_profile","value":"https://www.linkedin.com/in/samuel-b-dare/"},{"id":"website","value":"https://covenant.ai/"},{"id":"commit-message","value":"\"Added wiki page for Samuel B. 
Dare\""}],"events":[{"id":"3115710f-f530-446e-9c1a-ffee2961f5cf","date":"2023-09","title":"Founded Templar AI","type":"CREATED","description":"Founded Templar, a research lab focused on the political economy of foundation models, which served as the research and philosophical arm of Covenant AI.","link":"https://www.f6s.com/company/templar-covenant","multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null},{"id":"929f65e6-ee94-4477-87fb-f9246ecf8f45","date":"2019-12","title":"Graduated with PhD in Computer Science","type":"DEFAULT","description":"Sam Dare completed his PhD in Computer Science from the University of Cambridge, with a focus on Distributed Systems and AI.","link":null,"multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null},{"id":"13ac1a67-d4e5-4cc5-9181-ea73058d40ce","date":"2025-05","title":"Published the Grail Protocol Paper","type":"DEFAULT","description":"Co-authored and published the research paper 'Grail: A Decentralized Network for Pre-training Foundation Models' on arXiv.","link":"https://arxiv.org/html/2505.21684v1","multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null},{"id":"24c1c3f3-0ae3-4d1e-9a77-ff266b879fe5","date":"2026-01","title":"Led Covenant-72B Training","type":"DEFAULT","description":"Led the team at Covenant AI in what was described as the largest decentralized Large Language Model (LLM) training run for the Covenant-72B model.","link":"https://simplytao.ai/blog/covenant-72b-the-largest-decentralized-llm-training-run","multiDateStart":null,"multiDateEnd":null,"continent":null,"country":null}],"user":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"author":{"id":"0x8af7a19a26d8fbc48defb35aefb15ec8c407f889"},"operator":{"id":"0xd5893989b9952c6568a99F854795AcC5Ae480D56"},"language":"en","version":1,"linkedWikis":{"blockchains":[],"founders":[],"speakers":[]}}