Running Bittensor

Note: This article was originally published by Ronan Broadhead and Derek Edwards. Highly recommend giving them a follow for more insightful research, and check out the original article right here.

In 2008, Satoshi Nakamoto published the Bitcoin whitepaper.

The paper laid out a new, gamified accounting system where computers could work together over time and space to agree on a shared truth, namely the state of Bitcoin’s ledger.

Bitcoin’s first product? A digital money called BTC. Since Bitcoin began running in 2009, distributed computers all over the world have been incentivized to provide computation to validate and secure this digital money system.

In that time, it has proven tamper-resistant (immutable), auditable (transparent), and outside the control of any single entity (distributed).

And as time has passed, Bitcoin’s architecture has become progressively more decentralized.

Hal Finney / January 2009

Today, BTC is the world’s most valuable digital commodity money, with a market cap of $1 trillion at the time of writing.

In many ways, BTC’s early success was achieved by leveraging four economic design principles:

Fair Launch. Bitcoin’s initial distribution was characterized by an open network that any individual with a computer could participate in. This played a pivotal role in shaping the narrative of a credibly neutral system.

Proof of Work. In Proof-of-Work (PoW) systems, decentralized miners all over the world solve complex mathematical problems to validate network transactions. The computation these miners contribute lays the security for a global, tamper-resistant, auditable, digital money to securely exist on top.

Fixed Supply. Bitcoin’s capped supply of 21 million BTC leads to sustained cycles of price discovery when aggregate demand to hold the asset outweighs the aggregate demand to sell.

Programmatic Halvings. Bitcoin’s programmatic halvings occur roughly every four years. They are transparent, well-understood, controlled scarcity events in the digital commodity’s supply schedule.

Taken together, these early economic features solved the biggest problem in creating a new type of digital money:

The coordination problem.

Jack Butcher

The State of AI

In February 2023, Meta released an open-source, seven-billion-parameter large language model called LLaMA. Later that May, an anonymous memo reportedly written by a Google AI researcher argued that open source software was “quietly eating our lunch.”

“The impact on the community cannot be overstated,” the leaked memo described. “Suddenly anyone [can] experiment.”

Outside of its growing role in the emerging AI space, the benefits of open source development today are well understood:

Collaborative Innovation. Open-source systems facilitate peer-to-peer collaboration and knowledge sharing, accelerating the pace (and often quality) of innovation.

Freedom and Flexibility. Developers are not locked into the single agenda of a corporation. They can make customizations to the network in real time.

Transparency. Users have the ability to inspect, modify and understand the software. This often fosters higher levels of trust and collaboration, removing skepticism around an author’s motives.

To understand the upper bound of open-source outcomes, look no further than the rise of Linux.

In the 1990s, Linux successfully harnessed the collective social and intellectual capital required to develop an operating system that matched (and in some dimensions, surpassed) centralized OS competitors in performance. In 2022, the Linux operating system market was valued at $15 billion, with projections to grow to $66 billion by 2030.

Eric Raymond’s “The Cathedral and The Bazaar” detailed how Linux was able to thrive and succeed while simultaneously ignoring commonly understood rules around software development (i.e. small project teams, closely managed objectives, and narrowly scoped complexity).

Attempting to understand how Linux thrived outside these rules, Raymond classified and analyzed the two competing development approaches as The Cathedral (closed source) and The Bazaar (open source).

Closed Source (Cathedral) vs. Open Source (Bazaar)

The Cathedral (Closed Source). The conventional and closed software development model. This model has strict guidelines and goal setting within small project teams, operating under a hierarchical chain of command.

The Bazaar (Open Source). An open and peer-to-peer software development model, ‘Bazaar’ meaning an open market. This model is characterized by short release cycles, with constant feedback and contribution from users and developers alike.

Fast forward to 2024. Today, these same concepts apply to the debates happening across the emerging AI stack. The question on everyone’s mind — can open-source AI compete?

We can look to the open source success stories of the past to attempt a prediction of the future. Though, it’s important to note that one key difference exists that makes these efforts difficult.

Unlike previous open source software development:

  • The modern AI movement requires massive amounts of data.

  • This data requires massive amounts of computation.

  • This computation requires massive amounts of capital.

Today, the machine power required to train and optimize AI is extremely expensive. Conservative estimates suggest that training GPT-3 cost roughly $4.6M for a single training run.

Given the steep computational requirements to train proprietary, centralized AI — emerging giants like OpenAI are enjoying capital-related AI moats around computation, speed, and innovation.

And these capital moats appear to be only getting larger.

Wall Street Journal — February 2024

Taking a step back, centralization and decentralization have long been in dance with one another — a kind of ebb and flow across history.

When new innovation is introduced, markets trend toward verticalization out of a natural desire for greater efficiency and lower costs. Early winners favor systems that can best leverage scale to meet consumer demands.

As time progresses, the market’s needs refine, a new category of players emerges, and component parts of the vertical stack get unbundled. Often, if the technology is truly disruptive, governments step in to enact geography-specific regulations constructed to temper centralized power.

This opens the door for decentralized upstarts to unbundle layer-specific innovations.

And round and round we go.

Although the history of technology is just a series of bundling and unbundling, we believe over the long arc of time, one thing remains constant:

Progress bends towards decentralization.

Jack Butcher

Today, the open-source AI movement is only starting to take shape.

HuggingFace is an open AI community and platform that facilitates collaboration on models, datasets, and applications within the machine learning community. The project provides open-access tools and resources for researchers, developers, and organizations to collaborate, share, and build upon the latest advancements in AI and natural language processing.

Today, over 516k models exist on HuggingFace.

The HuggingFace LLM Leaderboard is also the internet’s town hall for tracking, ranking, and evaluating open-source LLMs; however, a number of recent issues have plagued the community’s ability to effectively rank these open source efforts. Performance metrics are gameable, encouraging optimization for the highest benchmark scores rather than genuine improvements in open source performance.

As a result, the leaderboard ends up with more highly ranked — but not commercially better — open source models.

To truly unleash the benefits of open source AI and machine learning, the co-authors of this article believe Eric Raymond’s design space needs an upgrade.

Today, a technology exists that has the potential to combine the capital formation of the Cathedral (Closed Source), with the collaborative and open nature of the Bazaar (Open Source).

The cryptonomic commons.

Over the last 15 years, the design principles of a decentralized, trust-minimized, digital money have come to be understood by our global markets. The U.S. Bitcoin spot ETF, approved by the SEC in January 2024, has already seen flows of over $5.5B+ in its first month of existence.

Now standing at $1 trillion in market capitalization, it is clear across both retail and institutional audiences that Bitcoin is a valuable digital commodity used to store value in the same way as scarce physical commodities like gold ($8T), real estate ($325T), or art ($1.7T).

No new digital economy can perfectly replicate the genesis creation event of Bitcoin. However, to solve the current limitations of open source AI, we must orient towards Bitcoin’s foundational principles.

Only by solving these same coordination problems, can we fully operationalize the power of the open source market for intelligence.

Meet Bittensor

Bittensor is an open-source network and protocol founded in 2019 by two AI researchers, Jacob Steeves and Ala Shaabana. The whitepaper was written by pseudonymous author, Yuma Rao.

The public network was founded to deliver on a simple mission — use programmable incentives to accelerate development for open-source intelligence markets. Intelligence can include — but is certainly not limited to — text/image/audio/3D generation, data scraping, price predictions, decentralized data storage, x-rays and medical diagnostics, model fine tuning, and a virtually limitless number of other categories.

What may be non-obvious at first look is that the project obviates the need to directly tackle the most complex challenges in AI research. For the Bittensor network to thrive over the coming decades, the core Bittensor protocol does not:

  • Rely on “currently unsolved” AI/ML research problems;

  • Require outside technological innovation (e.g. Zero-Knowledge development).

Simply, Bittensor is working right now.

The network and protocol function together as a coordination layer for intelligence, much in the same way that Bitcoin functions as a coordination layer for commodity money.

As we’ll discuss later in this article, the Bittensor network is agnostic to the types of markets that form on top of it over time. The system has been devised to naturally rotate out weak-performing markets based on pre-defined, objective criteria.

To solve its initial market coordination problem, Bittensor’s system programmatically emits a native TAO token at an inflation rate of roughly 7,200 per day.

Notably, Bittensor is built from the same four economic design principles that laid the foundation for Bitcoin:

(1) Fair Launch. In January 2021, the Bittensor network was openly marketed and discussed widely before activation. On its first day, any miner or validator anywhere on the internet could begin earning TAO by contributing to the system.

(2) Proof of Work. Instead of solving for random strings of characters like Bitcoin, miners on Bittensor solve machine learning problems by competing across any number of intelligence games. Today, intelligence games on Bittensor include the creation of latent representations of images, sentences, 3D objects, health data, storage, training, tuning, or other commodity markets. Miners are rewarded by validators when their representations are similar to those of others. Importantly, scoring rules are uniquely constructed by each game creator, so that validation of miner scores is narrowly scoped to the “fuzzy work” accomplished in the subnet game.

(3) Fixed Supply. Bittensor’s capped supply of 21 million TAO facilitates sustained cycles of price discovery for the monetary system when aggregate demand to hold outweighs the aggregate demand to sell.

(4) Programmatic Halvings. Bittensor’s programmatic halvings, occurring roughly every four years, control Bittensor’s inflation rate. By halving the reward for mining new blocks, they introduce transparent, understood, controlled scarcity events into the supply schedule. As subnet slots are added to the system (starting today with 32 unique subnets, then 64, then 128), more miners will enter the system to compete for the same 7,200 TAO distributed daily across all subnets. Similar to how aggregate miner costs on Bitcoin create a floor for Bitcoin’s price, we believe an increasing aggregate miner cost may function similarly in the Bittensor ecosystem as the subnets double.

By optimizing for the coordination layer, Bittensor created an open source network these co-authors believe meets the intelligence market as it exists today — and can efficiently scale to any size over time.

We see three benefits to this approach:

Liveness. Focusing on the coordination layer empowers intelligence markets to go from inert to active now, instead of waiting on prerequisite advancements in new technologies. As research and development in new technologies improve, they can be cleanly incorporated into Bittensor as additions to the coordination layer.

Neutrality. Because the tech stack is largely unopinionated, projects building on Bittensor can create wildly differentiated intelligence markets that draw on the same economic substrate; yet, be uniquely positioned toward the intelligence space desired.

Opportunity. Simply, Bittensor incentivizes participants to deploy, iterate, and earn from their creations in a way that’s not possible through centralized services and platforms.

We discuss these applications later in the article. For now, to project out the implications of Bittensor’s cryptoeconomic model, one only needs to look to its economic predecessor — the Bitcoin network.

Today, the largest supercomputer in the entire world is Bitcoin — a network orders of magnitude larger than the combined network sizes of Amazon, Google, and Microsoft.

Wielding the same economic design at its foundation, Bittensor’s ambitions are equally large; though, instead of optimizing for community-owned commodity-money, it optimizes for community-owned synthetic intelligence.

And it’s already working.

Bittensor’s annual inflation value — a programmatic commodity waterfall used to attract leading, mission-driven, AI/ML talent — is already comparable to the largest centralized AI players in the world.

Bootstrapped only by the value of a market-priced TAO.

In December 2023, OpenAI’s annual run rate (one month’s revenue multiplied by 12) hit the $2 billion mark. While not perfectly analogous, at current TAO prices (~$580 per TAO), Bittensor’s emissions have the capacity to pay ecosystem participants a comparable $1.5B+ annually to contribute to the decentralized open network.
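The comparison above is simple arithmetic. A quick back-of-envelope sketch, using the emission rate and price cited in this article (both of which move with the market):

```python
# Back-of-envelope run rate implied by TAO emissions.
# Figures are the ones cited in the text and will drift with the market.
DAILY_EMISSION_TAO = 7_200   # ~1 TAO per ~12-second block
TAO_PRICE_USD = 580          # approximate price at the time of writing

daily_usd = DAILY_EMISSION_TAO * TAO_PRICE_USD
annual_usd = daily_usd * 365

print(f"daily:  ${daily_usd:,}")    # daily:  $4,176,000
print(f"annual: ${annual_usd:,}")   # annual: $1,524,240,000 (~$1.5B)
```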

Moreover, the economic upside generated by OpenAI’s success can only be enjoyed by a select few, namely, large OpenAI shareholders. Conversely, and in the true spirit of open-source technology, Bittensor’s success can be enjoyed freely by the world — AI/ML engineers, developers, and market participants of all types across the internet can hold and be rewarded for contributions to the open-source intelligence project through Bittensor’s native TAO token.

All this from a fair launched, leaderless, credibly neutral protocol for open source intelligence.

Over the rest of this article, we’ll unpack the inner workings of the Bittensor system, across the following four sections:

I. The Subnets

II. The Applications

III. The Proposal

IV. The North Star

I. The Subnets

At its foundation, the Bittensor project consists of three main layers:

(1) Blockchain. Bittensor is a Substrate-based, Proof-of-Authority (PoA) blockchain. At this layer, we store (a) the aggregate validator evaluations of miner outputs; and (b) the associated calculation of both miner and validator rewards.

(2) API. The Bittensor API is the communication interface from the blockchain to the 32 subnets.

(3) Subnets. Bittensor’s incentives are delivered across 32 unique subnets. Each subnet is its own unique digital commodity market, where subnet owners can curate “game-like” constraints in order to extract desired intelligence from the market.

Subnets are the lifeblood of the ecosystem and are broken down into three incentivized participants:

Subnet Owners. These individuals each register and manage one of the 32 different subnets. Each of the 32 subnet owners determines the type of commodity market they wish to create, and designs the associated incentive mechanism for miners (and grading rubric for validators).

Subnet Miners. These individuals are tasked with producing intelligence narrowly defined by the specific subnet’s rules.

Subnet Validators. These individuals are responsible for independently evaluating the intelligence that miners produce, according to the grading rubric established by the subnet owner.

Within each subnet, it can be tricky to determine how validators reach consensus on otherwise “fuzzy” miner-produced intelligence. Evaluating intelligence is fundamentally indeterministic, as intelligence cannot be measured directly in binary terms. As a result, Bittensor created a stake-weighted mechanism known as Yuma Consensus (YC).

Yuma Consensus ensures that Subnet Validators are making accurate evaluations and not colluding with Subnet Miners. Subnet Validators evaluate and express miners’ performance on a 0 to 1 scale based on intelligence, speed, and diversity (defined by the incentive mechanism of the given Subnet), through a set of weights (wᵢ).

Next, Subnet Validators transmit their evaluations of Subnet Miners’ performance to Subtensor via the Bittensor API. Validators’ weights are then aggregated onchain to produce a Weight Matrix (W). This Matrix is a function of the stake-weighted evaluations (scores) from Subnet Validators, and it determines the distribution of rewards to Subnet Miners and Subnet Validators.

Finally, Subnet Miners are rewarded with TAO based on their 0 to 1 score, and Subnet Validators are rewarded with TAO for producing evaluations that are in agreement with the evaluations produced by other independent Subnet Validators.
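The scoring flow above can be sketched as a stake-weighted average. This is a deliberately simplified illustration, not the full Yuma Consensus (which additionally clips outlier weights to resist collusion); the function name, stakes, and scores below are all assumptions for the example:

```python
# Simplified sketch of stake-weighted aggregation of validator scores.
# NOT the full Yuma Consensus (which also clips outlier weights to resist
# collusion); names, stakes, and scores here are illustrative.

def aggregate_weights(stakes, weights):
    """stakes: TAO stake per validator; weights: per-validator lists of
    miner scores in [0, 1]. Returns the stake-weighted consensus score
    per miner (stakes are normalized to sum to 1)."""
    total_stake = sum(stakes)
    n_miners = len(weights[0])
    consensus = [0.0] * n_miners
    for stake, scores in zip(stakes, weights):
        for j, score in enumerate(scores):
            consensus[j] += (stake / total_stake) * score
    return consensus

# Three validators (stakes 100, 50, 50) each score two miners:
stakes = [100, 50, 50]
weights = [
    [0.9, 0.1],  # validator A's scores for miners 1 and 2
    [0.8, 0.2],  # validator B
    [0.7, 0.3],  # validator C
]
print([round(c, 3) for c in aggregate_weights(stakes, weights)])  # [0.825, 0.175]
```

A validator whose scores diverge from this consensus earns less, which is the incentive for honest, independent grading.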

A common critique questions whether Subnet Validators are qualified to grade Subnet Miner outputs. Between the stake-weighted consensus and the narrow grading rules created by individual Subnet Owners, we believe an elegant system has been devised for grading “fuzzy” intelligence across the Bittensor network.

Further, the Subnet Owner has ongoing autonomy over the design of this grading process. Owners can and must be constantly updating their incentive mechanism in order to extract the desired intelligence from the market.

Validators and miners are in a constant push and pull in this way.

If a Subnet is not providing a valuable service, or its grading rubric is not scoped narrowly enough for reliable Validator grading, that Subnet will not see economic weight flow in. In time, the Subnet will be de-registered out of the network, and a new Subnet will replace it.

DREAD BONGO — Twitter

To properly align incentives across the network — all rewards are distributed in Bittensor’s native token, TAO.

As mentioned previously, TAO tokenomics are heavily inspired by Bitcoin: a fixed supply of 21 million units, with a roughly four-year “halvening” of emissions. Unlike Bitcoin, however, the halvings are a function of current supply rather than block height, owing to Bittensor’s unique recycling feature, which recycles miner/validator registration fees back into the protocol’s unissued supply.
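A minimal sketch of that supply-based rule, assuming the reward halves each time issued supply crosses the next halfway point toward the 21M cap. This illustrates the idea described above, not the exact Subtensor implementation:

```python
# Supply-based halving sketch: because registration fees are recycled back
# into unissued supply, halvings key off current supply, not block height.
MAX_SUPPLY = 21_000_000

def block_reward(current_supply, initial_reward=1.0):
    """Reward halves each time supply crosses the next halfway point
    toward the cap (10.5M, 15.75M, 18.375M, ...)."""
    reward = initial_reward
    threshold = MAX_SUPPLY / 2
    while current_supply >= threshold:
        reward /= 2
        threshold += (MAX_SUPPLY - threshold) / 2
    return reward

print(block_reward(6_000_000))    # 1.0   (before the first halving)
print(block_reward(10_500_000))   # 0.5   (first halving crossed)
print(block_reward(16_000_000))   # 0.25  (second halving crossed)
```

One consequence of keying off supply: recycling slows supply growth, so halvings can arrive later than a pure block count would predict.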

Currently, Bittensor emits one (1) TAO per block, with a block produced roughly every ~12 seconds (or 7,200 TAO per day). Of this, 41% goes to Validators, 41% to Miners, and 18% to Subnet Owners across the entire Bittensor network.
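The daily split is straightforward to compute from those figures:

```python
# Daily TAO emission split across network roles (figures from the text).
DAILY_EMISSION = 7_200  # ~1 TAO per ~12-second block

split_pct = {"validators": 41, "miners": 41, "subnet_owners": 18}
daily = {role: DAILY_EMISSION * pct // 100 for role, pct in split_pct.items()}

print(daily)  # {'validators': 2952, 'miners': 2952, 'subnet_owners': 1296}
```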

Bittensor’s Daily Splits

With the first halving slated for roughly September 2025, the market cap of TAO today sits at $3.9 billion, with ~30% of all TAO in circulation. Given Bittensor’s fair launch distribution, known supply emissions, and programmatic halvings, the co-authors believe the outstanding, not-yet-circulating supply is far more forgiving and should not be compared to that of a typical venture-backed network project.

It bears repeating that the Bittensor protocol and network are completely unopinionated about the types of intelligence they produce; the system’s constraints only serve to create the rails for digital commodity markets to form.

Over time, the highest performing Subnet Owners, Subnet Validators, and Subnet Miners are rewarded with a greater proportion of TAO, and the weakest performing Subnets are de-registered out of the system.

II. The Applications

Today we have already seen a wide variety of use cases and applications across the subnets, including text/image/audio generation, asset price prediction, X-Ray diagnostics, data storage, and more.

A sample of live Subnets include:

  • Subnet 4 — Text-Based Inference (Manifold Labs)

  • Subnet 6 — Fine Tuning (Nous Research)

  • Subnet 8 — Time-Series Prediction (Taoshi)

  • Subnet 9 — Pre-Training (Const)

  • Subnet 21 — Storage (FileTAO)

  • Subnet 31 — Healthcare Diagnostics (btHealthcare)


Because the desired intelligence differs greatly across the Subnets, so too must the incentive mechanisms Subnet Owners utilize.

A spectrum exists in the competition level and associated reward distribution across the Subnets:

On one side of the spectrum is maximum competition leading to a top heavy rewards distribution. This approach enhances the utility of intelligence by only rewarding improvements, ensuring that each new output surpasses the previous one.

On the other side of the spectrum is minimal to no competition, leading to a linear rewards distribution. This approach optimizes for continuous uptime of intelligence and maximum bandwidth.
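The two ends of that spectrum can be sketched as reward curves. The shapes and the decay factor below are hypothetical illustrations, not any subnet's actual incentive code:

```python
# Illustrative sketch of the reward-distribution spectrum: a top-heavy
# (winner-take-most) curve vs. a flat (uptime-oriented) curve.
# The decay factor and both shapes are hypothetical, not real subnet code.

def top_heavy(ranked_miners, decay=0.5):
    """Rank-based rewards: each rank earns `decay` times the rank above it."""
    raw = [decay ** rank for rank in range(len(ranked_miners))]
    total = sum(raw)
    return {m: r / total for m, r in zip(ranked_miners, raw)}

def flat(miners):
    """Uniform rewards for every miner meeting the uptime bar."""
    return {m: 1.0 / len(miners) for m in miners}

miners = ["m1", "m2", "m3", "m4"]
print({m: round(v, 3) for m, v in top_heavy(miners).items()})
# {'m1': 0.533, 'm2': 0.267, 'm3': 0.133, 'm4': 0.067}
print(flat(miners))
# {'m1': 0.25, 'm2': 0.25, 'm3': 0.25, 'm4': 0.25}
```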

Below are examples of two subnets that have created intelligence games on opposite sides of the spectrum.

Subnet 9 (Pretraining, SN 9) exists on the most competitive end of the spectrum — rewarding miners for creating pre-trained models with a comparative performance-based incentive mechanism.

The subnet rewards favor early and more efficient models, encouraging innovation over imitation. The mechanism promotes iteration upon competing models to enhance the overall quality.

This competition based mechanism has a very top heavy and infrequent rewards profile for its miners.

Subnet 9: Miner Incentive Distribution (via Taostats)

On the other side of the spectrum exists Subnet 18 (Cortext_t, SN 18). Cortext_t is designed to provide high-quality text and image responses, utilizing synthetic data to overcome data collection hurdles.

Cortext_t prioritizes uptime and availability, resulting in a more even reward distribution for its miners. This approach gives developers access to a diverse set of applications in need of continuous synthetic data without the need for expensive plans.

Subnet 18: Miner Incentive Distribution (via Taostats)

This flexibility in the design of the incentive mechanism is representative of the expressiveness of the Bittensor network. Across the spectrum, we see real value being created across the different subnets, supercharging open source development with market incentives to ultimately compete with closed source development.

Another unique feature of Bittensor is the ability for Subnets to reference and intersect the work of other subnets.

Subnet 6 (Fine Tuning) was recently launched by Nous Research, a leading collective of ML engineers focused on the development of open source AI — most well known for their fine tuned model Nous-hermes-13b, one of the most used open source models in production today.

Uniquely familiar with the problems that arose from the HuggingFace Open LLM Leaderboard, Nous Research looked to Bittensor to solve these limitations and registered Subnet 6. Nous Research knew that if it could use Bittensor to incentivize the creation of an open source model at performance parity with GPT-4, it would be a strong first step on the path toward continual open source AI.

To start, Nous Research (Subnet 6) reviewed the work of Subnet 9, the subnet discussed above that demonstrated how open source models could be trained, shared, updated, and graded on the Bittensor network. Next, Nous Research (Subnet 6) combined a similar flow with the generated outputs of Subnet 18, which emitted a continuous stream of synthetic data that Subnet 18 miners and validators were incentivized to pull from GPT-4.

Typically, producing synthetic prompt-response pairs is very expensive, but because Subnet 18 was already in production performing the service on the Bittensor network, Nous Research (Subnet 6) could leverage Subnet 18 to engage in this data synthesis in real time.

Further, Nous Research (Subnet 6) modified Subnet 9’s evaluation framework (evaluating loss on some subset of the dataset), creating subnet rules that evaluate open source models only against Subnet 18’s latest (~15 minutes old) synthetic data.

HuggingFace: Subnet 6 Leaderboard

As a result, the only way to score well on Nous Research’s Subnet 6 is to actually train a better model than your peers, because the synthetic test data from Subnet 18 is completely unknown and changes every fifteen minutes. Open source model uploaders cannot overfit the subset of questions within the benchmarks, a problem that continues to plague the HuggingFace LLM Leaderboard.
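The rolling-benchmark idea can be sketched in a few lines: models are scored only against synthetic pairs generated within the last ~15 minutes, so there is no static test set to memorize. Every name below is illustrative; this is not Subnet 6's actual evaluation code:

```python
# Rolling-benchmark sketch: score models only on freshly generated
# synthetic data, so a static test set can never be memorized.
# All names and the data format are illustrative assumptions.
import time
import random

MAX_AGE_SECONDS = 15 * 60  # only score against data this fresh

def synthetic_pair():
    """Stand-in for Subnet 18's stream of synthetic prompt/response pairs."""
    return {"prompt": f"prompt-{random.random():.6f}",
            "reference": "reference answer",
            "created_at": time.time()}

def evaluate(model_loss_fn, batch, now=None):
    """Average loss over only the pairs young enough to count."""
    now = time.time() if now is None else now
    fresh = [p for p in batch if now - p["created_at"] < MAX_AGE_SECONDS]
    losses = [model_loss_fn(p["prompt"], p["reference"]) for p in fresh]
    return sum(losses) / len(losses)

batch = [synthetic_pair() for _ in range(8)]
score = evaluate(lambda prompt, ref: random.random(), batch)
print(0.0 <= score < 1.0)  # True
```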

By tying in the incentive mechanisms of Subnet 9 and Subnet 18, Nous Research (Subnet 6) created a sustainable, continuously generating dataset benchmark and corresponding evaluation framework for open source models. We’ve already seen new talent enter the ecosystem to compete on Subnet 6, with some top miners receiving $101k+ in incentives over single day periods.

It should be noted that while innovating on the mechanical workings of model creation is a prominent use case of Bittensor, it is only a subsection of what the network can accomplish with its unique incentive architecture.

Take pseudonymous team 404.xyz, an EU-grant funded group of researchers that spent the last three years building AI-generated 3D tools for the gaming industry. They are now building on Bittensor as an upcoming Subnet.

The creation of virtual environments relies on the highly specialized and time-consuming process of manually modeling, sculpting, or procedurally scripting 3D digital assets. The costs in both financial and human capital are highly restrictive. Further, computing power is growing at an exponential rate, and with it, consumers’ expectations regarding the size, density, and visual fidelity of virtual worlds. The demand extends far beyond gaming, into entertainment more broadly (film and VFX), as well as consumer and retail applications. Recent advances in consumer hardware and end-user devices mean this demand will multiply exponentially as AR, VR, and XR manifestations become mainstream.

The result is a bottleneck in which creatives who lack the necessary capital are unable to fulfill consumers’ growing demand. 404.xyz’s Subnet aims to relieve that bottleneck.

To start, 404.xyz’s Subnet will be oriented around onboarding miners and validators with a clear goal to generate 3D synthetic datasets that are organized into game type and style categories so that results can be used as asset packs for immediate applications.

404.xyz — New “3D Generation” Bittensor Subnet

This initial setup will build asset packs that over time may dwarf even the largest online 3D asset stores (Sketchfab, the Unity marketplace, etc.). As a foundation, this Bittensor-incentivized 3D asset marketplace will serve as a bridge to a larger platform oriented to game developers, creative artists, and other builders through web2 web apps and partnerships. One of the first platform partnerships will pair 404.xyz’s Subnet with Monaverse, a leading blockchain and browser-based immersive environment.

In addition, the Bittensor project is still nascent, with room to grow both inside and outside the AI and blockchain verticals. The wide dissemination of TAO across blockchains is one natural opportunity, particularly as interest in the Bittensor network continues to thrive.

As a result, the idea of building a Liquid Staking Token (LST) network for Bittensor has started to pick up steam. We can look to Ethereum’s LST dynamics as a data point for Bittensor’s evolution in this direction over the next year.

Given their liquidity profile and ease of use, LSTs are the primary gateway to network rewards on Ethereum today. However, only 25% of all ETH is staked. This can be attributed to the market’s difficulty accessing Ethereum’s staking layer, which carries high monetary (a 32 ETH threshold) and intellectual capital (running validator software) requirements, and offers no in-protocol delegation.

While Bittensor does support in-protocol delegation, the environment is not built to support a robust DeFi ecosystem given its current lack of smart contract functionality.

Enter liquid-staked TAO, a new LST initiative led by Tensorplex Labs. By transporting a liquid staked version of TAO to new environments, users will be able to enjoy the DeFi use cases we see across crypto today.

Further, LSTs are likely to enable greater capital access to the Bittensor ecosystem, and its associated rewards, in a secure, auditable, modular fashion.

Tensorplex — AI Intelligence Infrastructure

III. The Proposal

On January 9, 2024, the OpenTensor Foundation unveiled a new Bittensor Improvement Template (BIT001) titled ‘Dynamic TAO’.

The ‘Dynamic TAO’ proposal took aim at two core ideas:

(1) decentralizing control even further to network stakeholders; and

(2) fostering competition among subnet owners and validators.

Though still open for community feedback, the proposal suggests eliminating the Root Network, where the top 64 validators currently decide the emissions distribution to the 32 different subnets. The current system, in which a small group of validators wields significant influence based on their TAO holdings, would be replaced by a market-driven approach involving all TAO stakeholders.

Put differently, the Dynamic TAO proposal would eliminate the privileges of the Root Network and transfer its power to all $TAO holders.

To achieve this, the proposal introduces a new, non-transferable, subnet-specific token for each subnet (collectively called $dTAO). If the proposal were approved by the network, there would be 32 unique $dTAO tokens that could be exchanged (unstaked or swapped) for $TAO through subnet-specific liquidity pools.

Each Subnet would have its own liquidity pool, which contains a certain amount of $TAO and the corresponding subnet’s $dTAO. The pricing mechanism for exchanging $TAO and a subnet-specific $dTAO would follow Uniswap V2’s constant product formula (x * y = k).

Unlike Uniswap V2, however, user liquidity cannot be added to the $dTAO liquidity pool. When $TAO holders stake, these stakers would effectively purchase (selling TAO) a corresponding amount of $dTAO; when $dTAO holders unstake, they effectively purchase $TAO (selling dTAO) and can exit the subnet’s economics, relative to the new price in $TAO.

All newly issued $TAO that gets allocated to a Subnet would no longer be directly distributed to a Subnet Validator, Subnet Miner, or Subnet Owner. Instead, all newly issued $TAO would be injected into the subnet’s liquidity pool for backing. From there, 50% of the newly issued $dTAO will remain in the liquidity pool, with the remaining 50% distributed to Validator/Miner/Owner performing services according to the Subnet’s designated incentive mechanism.
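The pool mechanics described above can be sketched as a toy constant-product market. The initial reserves, the fee-free swap, and the class design below are simplifying assumptions, not the proposal's exact parameters:

```python
# Toy constant-product (x * y = k) subnet pool sketching the Dynamic TAO
# mechanics described above. Numbers and the fee-free swap are simplifying
# assumptions, not the proposal's exact figures.

class SubnetPool:
    def __init__(self, tao_reserve, dtao_reserve):
        self.tao = tao_reserve
        self.dtao = dtao_reserve

    def price_dtao_in_tao(self):
        return self.tao / self.dtao

    def stake(self, tao_in):
        """Sell TAO into the pool, receive subnet dTAO (x * y = k)."""
        k = self.tao * self.dtao
        self.tao += tao_in
        dtao_out = self.dtao - k / self.tao
        self.dtao -= dtao_out
        return dtao_out

    def unstake(self, dtao_in):
        """Sell dTAO back to the pool, receive TAO."""
        k = self.tao * self.dtao
        self.dtao += dtao_in
        tao_out = self.tao - k / self.dtao
        self.tao -= tao_out
        return tao_out

    def inject_emission(self, tao_emitted, dtao_emitted):
        """Newly issued TAO backs the pool; half of new dTAO stays in it,
        the other half goes to validators/miners/owner."""
        self.tao += tao_emitted
        self.dtao += dtao_emitted / 2
        return dtao_emitted / 2  # the participants' share

pool = SubnetPool(tao_reserve=1_000.0, dtao_reserve=1_000.0)
got = pool.stake(100.0)  # buying dTAO pushes its TAO price up
print(round(got, 1), round(pool.price_dtao_in_tao(), 3))  # 90.9 1.21
```

Because stake demand moves the pool price, each subnet's $dTAO price becomes a real-time market signal of how stakers value that subnet.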

To summarize, what is the proposed net effect of ‘Dynamic TAO’ on the Bittensor economy?

First, the Root Network will no longer determine emissions. Moving forward, TAO holders themselves determine how emissions are distributed across the subnets. That said, given time and resource constraints at the individual level, many TAO holders will likely delegate to validators who can outperform on the relevant metrics: research, risk, yield, and governance.

Now that subnet economies can be priced in real time, the most efficient and well-researched validators who can price these subnet economies accurately will rise to the top.

We believe this competition among validators will lead to more accurate subnet valuations and dispersion in performance. We also expect yields across validators to diverge, encouraging new forms of incentives for nominating stakeholders, such as yield kickbacks, structured products, and other financial products and services.

In a hyper-competitive subnet architecture where only 32 subnets can exist at any given moment, Subnet Owners will also need to demonstrate their value to attract stake. As a result, we believe there will be meaningful Subnet Owner innovation: Subnet Owners will create front ends, novel token economies above and beyond $TAO, and other value-additive products and services. Doing so serves to (1) attract stake; and (2) sustain high-quality validator and miner performance. In a ‘Dynamic TAO’ paradigm, underperforming Subnet Owners will be rotated out of the Bittensor economy quickly.

One issue we foresee with this design is that TAO issuance to each subnet is directed by the value of each subnet’s liquidity pool rather than by the subnet’s underlying fundamentals. This could lead to an over-reliance on marketing efforts and optimization for short-term success.

Refinements to the proposal are actively being discussed, potentially giving way to BIT2.

IV. The North Star

Ultimately, true success for Bittensor will be evaluated based on its ability to transcend its own incentive network, and provide meaningful “real world” value by creating products, services, and new forms of revenue-generating applications.

Today, we’re starting to see these opportunities emerge across the network.

One example is the role of Bittensor’s machine-generated intelligence pointed toward financial predictions. Over the last 20 years, ML-driven trading has penetrated our financial markets, optimized to digest the intersectional implications of factors like time, dates, weather, event outcomes — in concert with the velocity and volatility of new market movements. Implementing and sustaining high-performing strategies requires significant financial and intellectual investment, alongside substantial infrastructure development.

Subnet 8, developed by Taoshi, is a Time Series Prediction Subnet that currently focuses on predicting financial market prices using ML models. Taoshi equips the mining pool with a foundational ML model and preprocessing features, solicits price predictions at specific intervals, and encourages miners to enhance the model with their own custom configurations.

Afterward, Subnet 8 evaluates the miners’ outcomes, identifies factors impacting performance, and integrates these insights to refine the model it provides back to the mining pool. This creates a cycle of continuous improvement and competition within Subnet 8, with the lowest-performing quartile of miners constantly replaced by newcomers.
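The churn described above can be illustrated with a simple replacement step. The function name, scoring, and starting score for newcomers are hypothetical; the real subnet's registration and scoring logic is considerably more involved:

```python
def replace_bottom_quartile(scores: dict, newcomers: list) -> dict:
    """Illustrative sketch: drop the lowest-scoring quartile of miners
    and admit newcomers in their place.
    """
    ranked = sorted(scores, key=scores.get)       # worst performers first
    cut = max(1, len(scores) // 4)                # bottom 25% are evicted
    survivors = {m: scores[m] for m in ranked[cut:]}
    for miner in newcomers[:cut]:                 # newcomers start unscored
        survivors[miner] = 0.0
    return survivors


scores = {"miner_a": 0.9, "miner_b": 0.1, "miner_c": 0.5, "miner_d": 0.7}
print(replace_bottom_quartile(scores, ["miner_e"]))  # miner_b is evicted
```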

In the future, Subnet 8 will allow for the creation of publicly accessible, revenue-generating applications on top of this network. To start this process, the subnet’s owner (Taoshi) has begun building a ‘Request Layer’ on top of Subnet 8. This layer will facilitate communication between client applications and validators, allowing the latter to store predictions for easy access by applications, like hedge funds or web applications, without needing to interact with miners directly. This setup enables clients to request specific insights, which validators can then provide and monetize. Subnet 8’s request network will feature validators running individual ‘request nodes’, creating a competitive marketplace for requests based on speed, quality, and price. Eventually, this layer will be entirely open-sourced, allowing any subnet to integrate it.

Ultimately, the co-authors of this piece believe that Bittensor as a substrate will power all types of consumer applications in the future. Commercial applications will be supercharged by Bittensor’s web of subnets — across valuable categories like gaming, entertainment media, art/collectibles, finance, markets, social applications, and many more.

In closing: much of this article discussed how Bittensor works, because a view into the technical architecture is paramount to understanding the complete picture.

Yet, at their core, both crypto and AI are very much social technologies. Together, they will change the very nature of how humans coordinate, interact, and mobilize alongside each other over the coming decades.

Specifically, we believe that the AI revolution represents a technological step-change for society similar to, and in some ways more important than, the rise of the internet.

Without proper open and decentralized access to this technology, unintended outcomes may be realized — including privacy violations, deepening societal distrust, widening economic inequality, and the gatekeeping and rent-seeking of synthetic intelligence.

As we move into a new future, the true unlock is pushing forth on the ideals of censorship resistance, credible neutrality, and data transparency. These values are a natural counterbalance to the gravitational pull of centralized technology and money. We have seen this story play out on the biggest stages, both in our global financial systems and technology monopolies.

Projects like Bittensor ensure the next digital frontier remains in the hands of the many — and not the few.

And that is a world worth fighting for.

For more insights on the intersection of AI x crypto, follow Collab+Currency and the article’s co-authors on Twitter/X —

Ronan Broadhead: https://twitter.com/Ronangmi

Derek Edwards: https://twitter.com/derekedws

Collab+Currency: https://twitter.com/Collab_Currency

A special thanks to those who contributed conversation, review, insights, or design during the construction of this piece, including Ala Shaabana (Opentensor), James Woodman (SN4), Emozilla, Tom Shaughnessy (Delphi), Sami (Messari), Xponent and CK (Tensorplex), Anand Iyer (Canonical), Jasmine (A&T), Keith (Bittensor Guru), Ben Roy (Seed Club), Jack Butcher (Visualize Value / Checks), ChrisF (Tribute DAO Overlord), Johnny (Distributed Global), Gmoney (9dcc), Jerry (Synergis Capital), Jason Choi (Tangent), toptickcrypto, James (IDTheory), Carl V (6th Man Ventures), Atley Kasky (Collaborative Fund), Andrew Jiang (Curated), and the Collab+Currency team.

Disclosure / Disclaimer: At time of publication, Collab+Currency or its members may have exposure to some of the networks and projects described in this piece. The co-authors and Collab+Currency do not endorse or recommend ownership of any project, digital asset or collection described in this article.

While attempts have been made to verify the accuracy of the information provided we cannot make any guarantees. Investors should be aware that investing in digital assets involves a high level of risk and should be undertaken only by individuals prepared to endure such risks. Any forward-looking statements made are based on certain assumptions and analyses of historical trends, current conditions, and expected future developments, as well as other factors deemed appropriate under the circumstances. Such statements are not guarantees of future performance and are subject to certain risks, uncertainties, and assumptions that are difficult to predict. Past performance is not necessarily indicative of future results.
