How Crypto Solves the Problem of Public Goods (Whitepaper)

Toward a Decentralized Abundance Economy


Abstract: A value-preserving coin issuance mechanism, supported by an on-chain domain-specific reputation system, would allow contributors to public goods projects to be compensated in proportion to the economic impact of their work. All participants in the ecosystem have an economic incentive to preserve the value of their currency while maximizing economic growth derived from public goods. When contributors are compensated through new coin issuance, economic growth is maximized and the value of the currency is preserved when the compensation value is equal to the realized economic impact of the public good. Accurately estimating the economic impact of public goods in a decentralized system requires reliable data, credible validation techniques, and mechanisms to counteract potential fraud and collusion — all of which emerge from the dynamics produced by the protocol’s incentive structure and built-in mechanisms.

Table of Contents

  1. Introduction

  2. Background
    - The function of money
    - The problem of public goods
    - Regenerative economics

  3. Theoretical Framework
    - The value of public goods
    - Value-preserving coin inflation
    - Game-theoretic equilibrium

  4. Mechanism
    4.1. Incentive alignment for accuracy
    4.2. Modular protocol
    4.3. Bad actors
    4.4. Expertise Categories
    4.5. Investing in Public Goods

  5. Protocol
    5.1. Step 1: Project Post
    5.2. Step 2: Impact Estimate Post
    - 5.2.1. Project Hash
    - 5.2.2. Timestamp
    - 5.2.3. Estimated Impact Score
    - 5.2.4. Credibility Score
    - 5.2.5. Project Categories
    - 5.2.6. Validation Urgency
    - 5.2.7. Validation Effort Level
    - 5.2.8. Comments
    5.3. Step 3: Waiting Lists
    5.4. Step 4: Validators Selection
    5.5. Step 5: Periodic Validation
    5.6. Step 6: Coin Issuance

  6. Implications
    6.1. Incentivizing Innovation & Collaboration
    6.2. Decentralized Economy
    6.3. Building Capacity
    6.4. Currency Sell Pressure
    6.5. Regional & Community Currencies

  7. Conclusion

1. Introduction

In the absence of an effective economic mechanism for public goods, innovation has come to rely almost exclusively on the market mechanism. While the market works well enough for some forms of innovation, it still falls far short of producing economic efficiency. Meanwhile, little to no funding goes toward forms of innovation whose value cannot be directly captured within the market — regardless of the magnitude of impact such innovation may have on the economy.

Government funding for public goods, too, is both inefficient and inadequate. It is inefficient because funding is allocated through bureaucratic processes; simply put, the people in charge lack the necessary incentives (and/or expertise) to fund the most beneficial projects, and at times funding decisions are based on politics rather than merit. Funding is inadequate because it is difficult to explain bureaucratic decisions to constituents, or to justify the free-rider effect: why, for example, other countries should benefit from public goods produced at domestic taxpayers’ expense. It is therefore more politically expedient to reduce funding for public goods and avoid alienating voters.

What is needed is an economic mechanism where the incentive to produce public goods is comparable to that of producing private goods and services in the market, and where the reward for producing public goods corresponds to their impact on the economy. The mechanism needs to resolve the free-rider problem while avoiding the inefficiency and incentive misalignments of a centralized authority. Such a mechanism would result in greater economic efficiency, sustained economic growth, and benefit everyone in society — an abundance economy.

In this paper, we propose a solution to the problem of public goods using blockchain-based smart contracts to create a decentralized economic system where public goods are funded through a user-validated coin inflation mechanism that is designed to preserve the value of the coin. The protocol uses several mechanisms to maintain the integrity of validations, including: random selection of validators, weighted validations based on on-chain domain-specific reputation scores (expertise), periodic reviews, and coin locking challenge periods for validators and proposers. The protocol is also designed to achieve all of the above while remaining permissionless and preserving user pseudonymity.

2. Background

The function of money

To show the motivation behind our proposed solution, let us consider how money functions in an economy:

In the absence of money, individuals in the economy can exchange what they produce for what others produce only when there is a “double coincidence of wants” — when each party has the product the other party wants. This makes commerce exceedingly inefficient and results in low economic output and growth. Money, therefore, acts to facilitate commerce and economic growth. To do so, money must act simultaneously as a unit of account, a store of value, and a medium of exchange.

Unit of Account: assuming any two scarce products can have an exchange rate between them (e.g., 3 apples for 7 potatoes, 20,000 apples for a car, and so on), we can conceive of a common denominator against which all products and services in the economy can be measured, and then use it as a unit of account throughout the economy. Since resources in the economy are scarce, the supply of money must be scarce and stable (or at least change in a predictable manner) for it to be an effective unit of account.

Store of Value: for money to be a good store of value it must maintain its value over time. For fiat currencies, this would partly mean being able to credibly demonstrate political stability and a disciplined monetary policy over the long term. For cryptocurrencies this would also mean demonstrating sustained demand for the coin.

Medium of Exchange: for money to be a good medium of exchange it must be divisible, widely accepted, and convenient to transact with at scale. At the time of writing, the greatest challenge for cryptocurrencies is scaling the number of transactions per second (while maintaining the security of the network) — yet the scalability problem appears solvable in the near future.

The problem of public goods

Public goods are goods that are both non-excludable and non-rivalrous. This means that the use of such a good by any individual does not prevent others from using it and does not reduce the amount of the good available in the economy. Given this definition, it becomes evident why the market mechanism doesn’t work for public goods: the market only works well for goods that have an exchange value. Since public goods are not scarce, it is impossible to establish an exchange value for “1 unit” of a public good in relation to other goods in the economy. But as we’ll later see, it is possible to determine the impact a public good has on the economy as a whole or on specific sectors of the economy — possible, though not necessarily simple or easy.

There are several strategies to try and capture the value of public goods within the market mechanism, but as we shall see, all existing solutions either result in economic inefficiency or produce perverse incentives:

Patents, copyright, subscription and licensing: the primary strategy to capture the value of public goods in the market is by giving the creator of the public good monopoly power over who gets to use or copy the good, thus turning a non-excludable and abundant good into an excludable and scarce good. Once the good is scarce it can be valued within the market. This strategy allows the creator of the good to capture some value from the good but it comes at significant costs and is limited in its application and enforceability.

  • Economic efficiency: while this strategy allows a creator to be compensated for their work, it comes at the cost of reduced economic efficiency, since a good that could have been accessible to everyone at virtually no additional cost — an abundant resource — is now only accessible by some. This strategy also stifles further innovation in the area as the original creator can restrict others from capturing value from derived work.

    Not only does this strategy reduce economic efficiency by turning an abundant good into a scarce good, but it also requires ongoing expenditure of resources to preserve the scarcity of the good (and maintain the monopoly power of the good’s creator). In the case of software licenses, for example, various techniques must be implemented to prevent unauthorized individuals from using or copying the software. This is often a never-ending cat-and-mouse game (and thus a never-ending drain on resources), where developers continually design new security schemes that are then promptly foiled by hobbyists and hackers. In addition to private resources, public resources must also be continually expended within the legal system and on law enforcement to preserve the creators’ monopoly power.

  • Limited application: in addition to its economic inefficiency, the strategy is only effective for a subset of public goods (and in a subset of regions). Consider, for example, a team of researchers that has been working on a cure for a particular disease. After a few years of work the team finds a cure that can extend the lives of 5 million people. Now consider two scenarios: in both, the amount of effort by the team is the same and the impact on people’s lives is the same. The only difference is that in Scenario 1 the cure is in the form of a pill, and in Scenario 2 the cure is a combination of carefully measured ingredients that are readily available in stores.

    Based on this difference alone, in Scenario 1 the team could be making billions, while in Scenario 2 the team may struggle to cover the costs of conducting the research. The reason for this disparity has nothing to do with the additional effort to produce the pill — it could have been a simple (but non-obvious) tweak to an existing pill. It has everything to do with enforceability — it’s relatively easy to penalize a pharmaceutical company that illegally manufactures a pill; it is, however, impractical to track and penalize millions of people who “illegally” buy ingredients in specific amounts from stores.

    Now you may say that the team in Scenario 2 can still make money by writing books about their discovery and getting well-paying jobs thanks to their research. This is true, but these options are all equally available to the team in Scenario 1!

    Just as the strategy can be practically enforced only in some cases, it may also not be uniformly enforced everywhere. Countries may have different IP laws, or may not recognize IP claims from another country. Similarly, countries may have limited capacity to enforce such laws, and pursuing IP claims in such countries may be futile.

Advertising: while the advertising model should produce economic efficiency in the distribution of public goods (since no one is excluded from consuming any content), in practice it creates inefficiency in the production of public goods due to the perverse incentives created by this model.

  • Perverse incentives: since the market mechanism cannot monetize abundant resources, the advertising model monetizes a scarce resource instead: people’s attention. The problem is that there is no obvious relation between the quality of content and its popularity; a scientific breakthrough may have incredible value but may only catch the attention of a few dozen people. On the other hand, an inane tweet by a celebrity may generate millions of views.

    Content that grabs the most attention tends to be emotionally charged; surprising, outrageous, divisive, or hateful content tends to generate a lot more attention than emotionally neutral or factual content. It also takes a lot less effort to generate factually-inaccurate outrageous content than well-researched quality content — making such content easier to monetize. At the same time, social media platforms design their algorithms to maximize profitability; they direct audiences toward content that would make them stay on the platform longer to watch more ads.

    Since everyone in the Attention Economy is competing for limited advertising money, everyone has the incentive to produce (and in the case of platforms, promote) low-quality, attention-grabbing content — not high-quality content.

    The result is that while content creators in the Attention Economy are technically able to monetize content that is available for all to access as a public good, they are in fact economically incentivized to create toxic and socially polarizing content — the exact opposite of a public good!

Regenerative economics

The advent of blockchain-based smart contracts, and their success in spinning off new forms of financial and social organization, sparked interest in applying the new technology to creating new funding mechanisms, solving coordination failures, and redirecting some of the great wealth generated in the crypto space toward public goods. These efforts brought much innovation and excitement to the otherwise stagnant field of public goods funding. Some of the most notable progress has been made in developing network effects to boost crowdfunding of public goods, in decentralizing and democratizing funding decisions, and in creating potential investment funnels for public goods based on retroactive funding.

Quadratic Funding: QF is an application of quadratic voting designed to optimize the distribution of matching funds according to the preferences of the community. This is achieved by giving more weight to the number of people who support a cause than to the total monetary amount going toward it. By democratizing the fund-matching process, QF incentivizes small donors to participate and gives them outsized influence over which projects receive more funding. Meanwhile, large donors get social capital for funding the projects that the community wants to support.
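
For concreteness, here is a minimal sketch of the canonical QF matching rule from Buterin, Hitzig, and Weyl; the function name and the choice to split an insufficient matching pool proportionally are our assumptions, not part of any particular QF deployment:

```python
from math import sqrt

def qf_match(projects: dict[str, list[float]], pool: float) -> dict[str, float]:
    """Split a matching pool using the canonical QF rule: a project's ideal
    match is (sum of sqrt of its contributions)^2 minus the direct total;
    the pool is then divided in proportion to those ideal amounts."""
    ideal = {}
    for name, contribs in projects.items():
        root_sum = sum(sqrt(c) for c in contribs)
        ideal[name] = max(root_sum ** 2 - sum(contribs), 0.0)
    total = sum(ideal.values())
    return {name: pool * v / total if total else 0.0 for name, v in ideal.items()}

# 100 donors giving 1 each outweigh a single donor giving 100:
print(qf_match({"A": [1.0] * 100, "B": [100.0]}, pool=1000.0))
# {'A': 1000.0, 'B': 0.0}
```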

Retroactive Funding: since it’s much easier to determine the impact of a public good after the fact instead of predicting expected impact, the idea of RF is to guarantee funding for successful public goods projects retroactively — once the impact is already assessed. By guaranteeing funding, an organization can create a market for VCs and individuals to invest in public goods based on their expected impact (instead of expected profitability).

Impact Certificates: while both Quadratic Funding and Retroactive Funding require an external source of funding that can later create network effects around public goods funding, impact certificates attempt to create a market for public goods through speculation on the expected value of an NFT representing the impact of a public good. At the time of writing, this mechanism is still in early stages of development, and questions remain regarding the demand for such certificates (and whether market forces will drive investment based on actual impact, instead of distorting that market) but it shows how far the thinking in the field of regenerative economics has advanced compared to traditional funding models.

Our paper builds on some of the successes in the field, while offering a model for self-sustaining, decentralized funding of public goods — at scale.

3. Theoretical Framework

The value of public goods

The amount of resources on the planet has not changed over the past fifty, a hundred or even ten thousand years (perhaps with the minor exception of accumulated asteroid dust over time). And yet, with the same material resources as were at the disposal of cavemen, we are fantastically more prosperous and have exponentially more goods and services available to us.

The amount of resources didn’t change; what changed was our understanding of science, engineering knowledge, extraction and refining techniques, logistics, infrastructure, organizational paradigms, financial models, labor practices, sociological and psychological methods, and so on. All these innovations produced the economic growth and material wealth we have today.

What all these innovations have in common is that they are all resources that can be made accessible to everyone as public goods! Allowing access to such knowledge without restrictions would also be the maximally economically efficient condition (provided that people are properly compensated for producing these public goods), since the maximal number of people would be able to apply the knowledge to produce goods and services more efficiently.

If public goods produce real economic growth, and people need to be compensated for creating public goods, what should be the proper compensation level? We argue that the proper (and optimal) compensation level should be equivalent to the monetary value associated with the resulting economic growth. We further argue that, given the inability of markets to price public goods, and the inefficacy of centralized authority to fund such goods, what is needed to solve the problem of public goods is a decentralized economic model that can credibly estimate the economic value of public goods and compensate individuals through a value-preserving coin inflation mechanism.

Value-preserving coin inflation

The first protocol to propose funding public goods through coin inflation was Bitcoin, where miners are issued new BTC as compensation for their computational work — work that secures the network and thus benefits all users. In the case of Bitcoin, the coin inflation is for a specific public good and the issuance is preprogrammed by the Bitcoin protocol itself. We are proposing a generalized protocol for public goods where the issuance of new coins is variable (based on the economic value of public goods) and user-validated.

Now how can coin inflation be value-preserving? The proposition seems to be a contradiction in terms since inflation results in the devaluation of a currency. The answer is that, while coin inflation does indeed devalue the currency, the economic growth resulting from the production of the public good appreciates the value of the currency. Thus, the result is that the coin preserves its value, people are properly compensated for the public goods they create, while everyone in society benefits from real economic growth and from unrestricted access to public goods.
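
To see the claim in symbols, here is a back-of-the-envelope quantity-theory illustration (ours, not part of the protocol specification), holding velocity $V$ constant:

$$
MV = PY \quad\Rightarrow\quad \text{value per coin} \propto \frac{1}{P} = \frac{Y}{MV} \propto \frac{Y}{M}
$$

$$
\frac{Y + \Delta Y}{M + \Delta M} = \frac{Y}{M} \quad\Longleftrightarrow\quad \frac{\Delta M}{M} = \frac{\Delta Y}{Y}
$$

That is, issuing $\Delta M$ new coins for a public good that raises real output by $\Delta Y$ leaves the value per coin unchanged exactly when the issuance rate equals the growth rate the public good produces.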

But why should people agree to the devaluation of their coins instead of benefitting from an increase in the value of the currency due to public goods-induced economic growth? The simple answer is that without properly compensating people for public goods, we’re unlikely to have as much economic growth. It is therefore much better to have a currency that maintains its value in an economy experiencing sustained growth than an appreciating currency in a stagnant economy.

Game-theoretic equilibrium

Even if we accept the concept of compensating people for public goods through new coin issuance, what should this amount be? As we shall see, the game-theoretic equilibrium would be at the value where compensation is equal to the economic growth created by the public good (i.e., maintaining the value of the currency).

The rationale for this equilibrium is as follows: if there were just one event where an individual published their work as a public good, then the game-theoretic equilibrium compensation would be 0. That is because the work is already published; coin holders get to keep the currency appreciation associated with the public good and have no economic incentive to see their money devalued through new issuance. However, we are dealing with infinitely repeated games, where individuals continually publish and propose public goods for compensation. In this case the compensation evidently has to be a positive value, since otherwise individuals would have no economic incentive to produce public goods in the economy.

We also have to consider the role of the currency as a store of value; while the scarcity of money is important for it to be an effective store of value, it is no less important for the monetary policy to be disciplined and predictable, since at the end of the day what matters is public trust in the economy’s long-term monetary policy. For example, if the Bitcoin protocol were suddenly changed to cut new BTC issuance and make the currency more scarce, that would have severe adverse effects on the protocol: even though the currency would become scarcer, it would also become much harder to predict future changes to the policy (and to issuance).

The same logic applies to the Abundance Protocol — it is much better (for the value of the currency) to have coin issuance that is consistent with the economic value of public goods than if individuals were trying to preserve the value of the currency by shortchanging public goods producers. For the same reason, compensating public goods in a consistent manner would help everyone throughout the economy, and particularly those active in the decentralized public goods sector, to better estimate their expected returns, thus reducing structural risks in the sector. Notice also that individuals in the economy don’t have fixed roles — in other words, any individual can be proposing a public good on Monday, and then be a user, validator or investor on Tuesday. It is therefore in the interest of everyone within the economy for compensation to be both fair and predictable. That still doesn’t say what the optimal compensation is, but it helps us understand people’s incentives a bit better.

So far, we’ve established that the compensation has to be fair, consistent, predictable, and correspond to the economic growth resulting from the public good, but we have yet to establish the actual equilibrium value. Suppose, for example, that the compensation amount is equal to half the economic growth produced by the public good. In that case there will be less incentive to create public goods, while the currency will be deflationary. A deflationary currency would also mean that people prefer not to spend money that is appreciating in value, which means less production and an economic slowdown. Moreover, since we’re dealing with a blockchain-based economy, contributors would prefer to move to protocols that offer better compensation for producing public goods, which also means fewer improvements specific to the ecosystem and its related technologies, while other protocols would grow at a much faster pace. This suggests that the protocol should compensate public goods contributors more generously. But to what extent? If the compensation is higher than the economic value generated by the public good, users would not want to participate in such an economy, since their buying power would erode over time; they too would want to switch to a different, less inflationary protocol — thus also resulting in less activity in the economy.
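
The trade-off can be made concrete with a toy simulation (illustrative only: the 2% growth rate, 50 rounds, and constant-velocity assumption are ours, not protocol parameters):

```python
def simulate(rounds: int, comp_ratio: float, growth: float = 0.02) -> float:
    """Cumulative change in purchasing power per coin (~ Y/M, velocity held
    constant) when each round's public good grows real output by `growth`
    and contributors are issued comp_ratio times the monetary value of that
    growth in new coins."""
    m, y = 1.0, 1.0
    v0 = y / m
    for _ in range(rounds):
        dy = y * growth                 # real growth from this round's public goods
        m += comp_ratio * dy * (m / y)  # issuance worth comp_ratio * growth at current prices
        y += dy
    return (y / m) / v0 - 1.0

for ratio in (0.5, 1.0, 1.5):
    print(f"compensation ratio {ratio}: 50-round value drift {simulate(50, ratio):+.1%}")
# 0.5 -> coin appreciates (deflationary), 1.0 -> ~0% (value preserved), 1.5 -> coin erodes
```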

We therefore arrive at the game-theoretic equilibrium value where public goods contributors are compensated the monetary equivalent of the economic growth associated with the public good. This is the equilibrium value where neither public goods contributors nor other participants have an incentive to defect from the ecosystem, and where every participant has the incentive to contribute to the economy. It is also where public goods-induced economic growth is maximized, and where compensation is both fair and predictable, thus resulting in greater public trust in the long-term viability of the economy.

4. Mechanism

Our general approach was to design the protocol in such a way that people’s incentives align as much as possible with our stated goal of solving the problem of public goods (through credible estimation of the economic impact of public goods and proper compensation of contributors). Whenever participants’ incentives are expected to inevitably clash, we put mechanisms in place that reduce or eliminate the effects of incentive misalignment.

4.1. Incentive alignment for accuracy

The design approach starts with the decision to compensate public goods contributors through coin inflation (and not through NFTs or project tokens, for example). As explained previously, this incentive structure creates a dynamic where it is in the self-interest of every participant in the ecosystem to have an accurate estimate of the economic impact of public goods, since inaccurate estimates (both overestimates and underestimates) hurt the credibility, and long-term viability, of the currency as a store of value.

Each validator in the protocol therefore has an incentive to represent data as accurately as possible, and even indicate her own limitations and biases in coming up with the estimate. All other participants have an incentive to share as much relevant data as is necessary (as well as not hide any relevant data) for validators to make an accurate estimate. Everyone also has the incentive to challenge incorrect or misleading estimates, thus adding another layer of integrity to estimates.

Contrast this with the dynamics in speculative or financial markets; there, the information dynamics follow Warren Buffett’s old adage: “be fearful when others are greedy, and greedy when others are fearful.” In other words, traders in a market benefit from information asymmetry: if they know something that others don’t, they can profit from that knowledge and would rather not share it with others. Moreover, individuals tend to share the kind of information that is likely to benefit them financially, often making investment moves in direct opposition to their stated claims. For example, if a trader wants to sell a stock, she would extol the company and say that it has a bright future — hoping others buy in so that she can sell at a higher price. Similarly, if a trader believes a stock will outperform, and would therefore like to buy more of the stock at a lower price, she would spread FUD about the company so that others sell. It is therefore nearly impossible to know who is presenting factual information and who is making claims merely to improve their portfolio position. Even when traders agree on the data itself, they may interpret (or “spin”) it completely differently — based on how the interpretation may benefit them financially.

Of course, the same logic that applies to the stock market also applies to prediction markets, NFTs, project tokens, and so on. Since all these involve scarcity (of stocks, tokens, etc.), the interaction between traders is adversarial — with winners and losers. Our protocol, on the other hand, is designed to create dynamics where everyone in the ecosystem benefits and therefore everyone’s incentives are aligned — everyone can agree on the data, share information freely, and each would interpret it with the goal of getting the most accurate estimate. That of course doesn’t mean that everyone will always agree on what the estimate should be, but they would have an incentive to figure out why their views diverge, and seek to get to the truth.

4.2. Modular Protocol

While the overarching framework of the protocol — value-preserving coin inflation supported by an on-chain reputation system — is essential for the protocol to work, other components may be interchangeable. To encourage active research and development of the protocol — and avoid unnecessary fragmentation of the ecosystem — we are proposing a modular design, where various implementations can operate simultaneously.

In this design the Project Posting and Coin Issuance mechanisms are fixed, but the estimation and validation mechanisms work as a module that can be “swapped.” Each module would then have an associated correction coefficient (relative to the baseline of the current proposed design), based on the accuracy of the impact scores it produces (determined through an ecosystem-wide validation process).

This design would allow simultaneous live testing of different configurations, parameters, and strategies for the estimation and validation module, and promote continuous improvement of the protocol.

The mechanisms and strategies proposed in this paper are therefore just one possible implementation of the protocol. They are presented to showcase how a systematic approach to resolving conflicting incentives among participants in the ecosystem can be implemented. Other strategies — or similar strategies with different parameters — can be implemented to achieve similar results.

4.3. Bad actors

While the incentives of participants in the ecosystem (contributors, users, estimators, validators and investors) generally align, this is not necessarily the case for public goods contributors whose project is under review. In that situation, while other participants benefit from an accurate impact estimate, the contributor benefits from overestimation of the project. Note that this is not the case for contributors as a group; contributors don’t benefit from impact estimates being generally overestimated, since that hurts the value of the currency. They only benefit if project estimates are generally accurate, while their particular project is overestimated.

What, then, are some strategies a bad-actor contributor may employ to get an overestimate? Since we’re dealing with a permissionless system, the contributor may create countless accounts and post multiple projects for review — even if 1 in 100 gets some funding, the contributor could turn a profit. The contributor may provide a fraudulent impact estimate, or collude with validators to produce fraudulent estimates. She may also try to create multiple fake accounts to validate her own project. Or, she may try to produce fake data (through bots, for example) to give the impression that her project has more impact than it really does. Finally, if she is successful in getting overcompensated for her project, she may withdraw her funds quickly so that, even if exposed later, she’d be able to get away with the money.

Now let’s consider how the protocol systematically addresses these fraudulent strategies:

  • Funding validations: to maintain a permissionless system while avoiding sybil attacks, contributors (or Estimators) will be required to provide funds for the validation process. These funds should be in proportion to the expected impact of the project (since higher impact projects need to be scrutinized more carefully), and go toward validating the project. If a contributor overestimates the expected impact of the project, she simply loses the extra funds. If she significantly overestimates the impact of the project (or if the project has no value), she is likely to lose more than she stands to make. It is therefore in her economic interest to accurately estimate the expected impact of the project. There is also no benefit to creating multiple accounts in such a system, since the contributor’s expenditure will always be in proportion to the expected impact. The protocol will also have a mechanism for contributors to get funding for submitting projects, so that no one is restricted from submitting legitimate projects due to financial difficulties.

  • Random selection: to prevent bribing or collusion with validators, the protocol will select validators at random, which means that a contributor will not have control over who validates the project.

  • Merit-based validations: now what about creating multiple accounts to increase the chance of ending up on the list of validators? While it’s true that in a permissionless system any user can create as many accounts as she wants, creating multiple accounts in the Abundance Protocol brings no benefit, since validators are selected based on merit: their domain-specific Impact Score (or “expertise”) in a relevant category (for assessing a project’s credibility and impact), and their general Impact Score (for assessing overall impact). Validations are then weighted by the Impact Score of each validator. These Impact Scores are non-transferable between users, which means the only way to acquire them is by earning them: either by creating public goods or by reviewing public goods projects as a Validator. Having one account or three accounts therefore makes no difference, since the amount of effort needed to generate an Impact Score remains the same. Similarly, since Validators are selected based on their Impact Score, 3 accounts with an Impact Score of 100 would have the same combined chance of being selected to validate as 1 account with an Impact Score of 300 (see the selection sketch following this list).

Though the purpose of this approach is to prevent sybil attacks and low-quality validations, it creates a problem for honest validators who don’t already have an expertise score in the category or are new to the protocol. These individuals can still earn an expertise score by contributing to a public goods project, creating an estimate post, and so on; however, these processes take a long time and create unnecessary hardship for new validators. For that reason, we are proposing two additional paths to participate in a category’s validation. The first path is for validators who have expertise scores in non-related categories: these can be selected at random into the first validation tier, with a probability that corresponds to their overall expertise score (from the pool of those who opted in). The second path is for new users of the protocol who don’t have any expertise: these can be selected at random. Validators from both paths will have a small number of slots available in each validation, to avoid burdening validators with having to review low-quality validations.

  • Time-locked funds: the protocol allows a certain time for anyone to challenge validations after the validation process is concluded. A challenger must provide funding for a new randomized set of validators, as well as present sources for the challenge (merely not liking the results is not enough). During the challenge period funds will be locked in the contract to prevent contributors from “cashing out” their funds.

  • Periodic reviews: following the initial estimation of a project’s impact, there are periodic reviews of the actual impact of the project during that time period. This provides an additional level of integrity to the protocol and allows validators to further refine their estimates based on new data.

  • Ecosystem dynamics: so far we’ve discussed protocol-based solutions to bad actors; equally important, however, are the ecosystem dynamics that stem from the incentives the protocol creates. Since all participants in the ecosystem have an incentive to maintain the integrity of estimates, and people are compensated for creating public goods, participants have an incentive to create AI, ML, and other powerful tools to detect fraudulent activity throughout the ecosystem. This means that even if bad actors develop their own tools to attack the ecosystem, there will always be greater economic incentive to develop public countermeasures.

    Another ecosystem dynamic has to do with collusion among bad actors; the ecosystem creates a strong disincentive to collude because there is a clear incentive in the ecosystem to expose fraud (this too is a public good after all). Since anyone can expose fraud, and be rewarded for doing so by the ecosystem, anyone who participates in a conspiracy can defect and expose all the other bad actors. This makes collusion in the ecosystem very risky and impractical.
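
The following sketch (referenced from the merit-based validations bullet above) shows why splitting expertise across accounts gains nothing: selection probability is proportional to Impact Score, and the weights of split accounts simply add back up. The names and the use of Python’s PRNG are illustrative; a production protocol would draw randomness from an unbiasable on-chain source such as a VRF:

```python
import random

def select_validators(impact_scores: dict[str, float], seats: int,
                      rng: random.Random) -> list[str]:
    """Draw validators without replacement, each pick weighted by the
    validator's non-transferable Impact Score."""
    pool = dict(impact_scores)
    chosen = []
    for _ in range(min(seats, len(pool))):
        names, weights = zip(*pool.items())
        pick = rng.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]  # no validator is selected twice for the same post
    return chosen

# One account of 300 has the same total weight as the same score split three ways:
rng = random.Random(42)  # illustrative seed; real deployments need unbiasable randomness
print(select_validators(
    {"alice": 300.0, "sybil-1": 100.0, "sybil-2": 100.0, "sybil-3": 100.0},
    seats=2, rng=rng))
```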

4.4. Expertise Categories

How can the average person know if, for example, research in biochemistry is impactful or not? To expect the average person to have any insight into the intricacies of any subject matter that they’re not familiar with is unrealistic. How then does the protocol maintain the integrity of the validation process? To solve this problem, we introduce the concept of expertise categories. In other words, if we want projects to be reviewed rigorously, they must be reviewed by people with expertise in the knowledge domain(s) of the project, and the reviews of individuals with greater expertise in a specific category should have more weight — corresponding to their level of expertise.

So far everything seems reasonable, but assigning individuals a domain-specific expertise score — one that actually corresponds to something meaningful in the real world — in a decentralized system is not so straightforward and involves multiple challenges: how are these expertise scores determined or assigned? Who determines the scores? And what ensures that these scores are meaningful in reality? Also, how can a category expertise score be determined ex nihilo — in other words, if a new category is created in the ecosystem, how can anyone assign expertise scores to individuals in that category if no one has any expertise in it to begin with?

Non-transferable tokens: in our approach, category expertise scores are in the form of non-transferable tokens (NTTs/SBTs) that are directly related to the impact a user makes in the category. Users can attain category expertise scores by contributing to public goods projects, by creating Estimates, or by validating them.

For users who contribute to public goods projects, the expertise score is determined through the project’s impact estimation, validation, and challenge processes. For Estimators, the expertise score is determined by the validation and challenge processes, while for validators the score is determined through a three-tier validation process followed by a challenge process; in the first tier a group of validators with expertise in project-related categories reviews the project and each validator assigns the project a Credibility Score and a relative impact score (per category), and provides supporting sources and justifications to back their review. The second tier of validators — also with expertise in project-related categories — validates the reviews of the first tier and determines the weight of first tier validators through Quadratic Voting.

While the first and second tiers assign the project a Credibility Score and relative impact score within categories, the third tier of validators — with expertise in categories from across the ecosystem — modulates the overall impact score of the project based on the input they receive from the other two tiers.

Of the three tiers, the first tier receives the greatest portion of the category-specific expertise score allocated to Validators in the Estimate post (due to the difficulty of their task and the associated monetary and reputational risk involved). The second and third tiers receive a smaller portion of the allocated Expertise Score, each validator in proportion to her existing Expertise Score (for the third tier, the Expertise Score is in proportion to the Validator’s overall expertise score, though at a smaller proportion than for second tier validators).

The challenge period is designed to ensure the integrity of the validation. Anyone can challenge a Validator (though the challenger is required to provide supporting sources to justify the challenge), and, if the challenge is successful, the Validator will lose both funds and expertise related to the validation.

Thus, users in the ecosystem have a decentralized mechanism through which Expertise Scores are assigned to contributors, estimators, and validators. These expertise scores are category-specific and are determined by the impact of each user. There are multiple layers of validation and challenges that help ensure the integrity of the process.

Related categories: in addition to a user’s expertise in the category itself, the protocol also calculates users’ expertise in a category based on the expertise the user has in related categories. The “relatedness” coefficient between two categories is calculated based on how frequently projects share those categories, adjusted for the overall impact of each project and its relative impact in each of the two categories. This means that more impactful projects have a greater effect on the coefficients, as do projects in which both categories account for a significant share of the project’s relative impact.

If a user has only contributed to or validated Physics projects, and has expertise in that category, the protocol would show that the user has some expertise in Math (User’s Math expertise score = User’s Physics expertise score * Math<>Physics relatedness coefficient).
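
A minimal sketch of this calculation, directly transcribing the prose formula above; combining direct and inherited scores with max() is our assumption, as the paper specifies only the single-pair formula:

```python
def effective_expertise(user_scores: dict[str, float],
                        relatedness: dict[tuple[str, str], float],
                        category: str) -> float:
    """Expertise in `category`, counting expertise inherited from related
    categories via relatedness coefficients."""
    direct = user_scores.get(category, 0.0)
    inherited = (score * relatedness.get((cat, category), 0.0)
                 for cat, score in user_scores.items() if cat != category)
    return max([direct, *inherited])

scores = {"Physics": 200.0}
coeffs = {("Physics", "Math"): 0.6}
print(effective_expertise(scores, coeffs, "Math"))  # 120.0 = 200 * 0.6
```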

“Grafting” new categories: for newly created categories (generally created by Estimators while making an estimate for a project) no user would have expertise in the category from the start, and there is no data the protocol can work with to calculate a relatedness coefficient. Therefore, for Validators to be able to review a post in a newly-created category the Estimator must “graft” the new category onto existing categories. When creating the new category the Estimator would specify the related categories, and estimate the expected relatedness coefficient for these categories. Then, during the initial review of the Estimate post Validators (randomly chosen from the related categories and other specified existing categories) can adjust the relatedness coefficient or propose other related categories — thus preventing any manipulation of the system by the Estimator. After several posts where the new category is specified, the protocol will have sufficient data to automatically generate a relatedness coefficient for the new category, and the new category will have some users with expertise in the category — thus successfully grafting the new category onto the protocol.

4.5. Investing in Public Goods

While we have outlined the process of estimating the impact of existing or newly created public goods, there is certainly a need for a mechanism that allows investment in the development of public goods. This can be achieved by adding to the existing protocol a Funding Request process that works as follows:

  1. A proposer (or team) creates a funding request specifying (a) the proposed public goods project (with a detailed description of the work involved), (b) the expected impact of the project once completed, (c) the timeline, (d) project categories, (e) contributors, (f) influencing sources and their expected share of the project, and (g) other project-relevant details. The proposer also specifies (h) the requested funding amount and (i) the maximum percent contribution of the project that the investor will receive. Additional terms may be stipulated regarding milestones and payments.

  2. The Funding Request is submitted to a decentralized validation mechanism (that works similarly to the validation process on Estimate posts) along with a validation fee. Validators then estimate the Credibility Score and Expected Impact Score (along with a risk factor) for the project.

  3. Investors can then use the input of validators, along with the on-chain track record (Credibility Scores and Impact Scores) of the proposer (or team), to bid on the project by offering to receive an equal or lower percent of the final project.

The project can also be crowdfunded, with investors offering a portion of the requested funding amount (along with a proportioned expected return) and proposers accepting a group of the lowest bids that would total the funding requested.
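
A sketch of this crowdfunding acceptance rule, assuming “lowest bids” means the lowest asked return per unit of funding (the Bid fields and the greedy selection are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    investor: str
    amount: float       # portion of the requested funding the investor offers
    return_pct: float   # percent of the project the investor asks for in return

def accept_lowest_bids(bids: list[Bid], requested: float) -> list[Bid]:
    """Greedily accept the cheapest bids (lowest asked return per unit of
    funding) until the requested amount is covered."""
    accepted, raised = [], 0.0
    for bid in sorted(bids, key=lambda b: b.return_pct / b.amount):
        if raised >= requested:
            break
        accepted.append(bid)
        raised += bid.amount
    return accepted

bids = [Bid("ann", 400, 2.0), Bid("bo", 600, 4.5), Bid("cy", 500, 3.0)]
print([b.investor for b in accept_lowest_bids(bids, requested=1000)])
# ['ann', 'cy', 'bo']: 'bo' is still needed to reach the requested total
```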

Once the bid is accepted, the funding received, and the project completed, a Project post is created with the relevant contributors (including the investor(s)) and influencing sources specified, along with their respective percent contributions. This is followed by an Estimate post and a validation process that determines the actual impact of the public goods project.

Risk: obviously, not all projects end up being successful, and it is entirely possible that a project (for example, a scientific research project) fails. Yet this is the nature of investing — it is true for any venture, and public goods projects are no different in that regard. For that reason, investors need to know the risk involved in the projects they’re investing in, and this is what the process outlined above seeks to provide. Once investors can credibly assess the risk involved and the expected return, and evaluate the track record of the individuals involved in the project, they can make informed decisions about their investments. This process results in an entirely new business paradigm — investment in public goods.

Effective Altruism: it’s important to note here that this process works equally well for investors and philanthropists alike. In the latter case the philanthropist can simply specify a 0% return in the bid. Philanthropists would still benefit from this mechanism, since they would be able to more easily assess how to maximize the impact of their donations.

5. Protocol

The steps to run the protocol are as follows:

  1. New public goods project is posted to the protocol.

  2. Each estimate for a project is posted to the protocol.

  3. Estimates are sorted by credibility, highest expected impact, and required expertise.

  4. Validators are selected at random to review the estimate.

  5. Validators are periodically selected at random to review realized project impact.

  6. Coins are issued to public goods contributors following each validation (and challenge) period, and based on realized impact.

Let us now consider each of these steps in greater detail:

5.1. Step 1: Project Post

Any project that meets the standard of being non-excludable, non-rivalrous, and having a positive impact can be considered a public goods project in the protocol. Projects are submitted to the protocol permissionlessly. The project itself needs to be stored in decentralized storage; a content identifier (hash) is stored on-chain, along with a timestamp of when the project was posted. The project should also include the address(es) of contributors, with a breakdown of the percent contribution of each contributor to the project. Similarly, the project should include the content identifiers of influencing projects or sources, with a breakdown of the percent influence of each.
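
The contents of a Project Post can be summarized as a simple record. The schema below is illustrative (the field names and the 100% consistency check are our assumptions; the paper specifies the content, not a format):

```python
from dataclasses import dataclass, field

@dataclass
class ProjectPost:
    """On-chain record for a public goods project (section 5.1)."""
    content_hash: str                 # content identifier from decentralized storage
    timestamp: int                    # when the project was posted
    contributors: dict[str, float]    # contributor address -> percent contribution
    influences: dict[str, float] = field(default_factory=dict)  # source hash -> percent influence

    def __post_init__(self) -> None:
        # contributor shares are a breakdown of the whole project
        assert abs(sum(self.contributors.values()) - 100.0) < 1e-9
```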

While estimating project impact is a public good, the distribution of funding is not (since it mostly affects the parties involved), so it should not be part of the impact validation. Still, coin issuance funds will remain locked in the project contract until there is a consensus on fund distribution.

Project contributors and influencing sources (respectively) can come to an internal consensus on the appropriate breakdown of percent contribution of each individual or source, to avoid the need for a costly decentralized review of the distribution.

Since anyone can challenge the specified distribution (permissionlessly) — which would trigger a review — every contributor (and influencer) has an incentive to reach a fair and appropriate internal consensus. Such a consensus could allocate a certain percentage of the funds to “others,” which could help avoid a challenge to the consensus while allowing distribution of funds to existing contributors and influencing sources. The “others” allocation should still be appropriate to avoid a broader challenge.

Though challenges are permissionless, challengers need to deposit funds (proportionate to the amount claimed), along with any evidence supporting their claim, to initiate a decentralized review. If the claim is found to be fraudulent, the challenger loses the deposit. This mechanism is meant to ensure that challenges are credible and to disincentivize bad actors from making fraudulent claims.

5.2. Step 2: Impact Estimate Post

Once a project is posted to the protocol, anyone can create an Impact Estimate for the project. The Impact Estimate initiates the validation process for the project.

Just like the project itself, the Impact Estimate too is a public good, since it provides the ecosystem with a validated impact estimate of a public good. The value of the Impact Estimate is proportional to the value of the project itself — yet the effort it takes to estimate a project varies with the complexity of the review, access to credible data, the difficulty of accurately determining economic impact, and so on. Each project also requires a certain amount of expertise to review; higher-impact projects require more total expertise, but since a baseline of expertise is required even for a project with minimal impact, the required expertise increases at a decreasing rate — thus making higher-impact projects more profitable for Estimators.

In the Impact Estimate post the Estimator specifies the following information:

  1. Project’s on-chain hash.

  2. Timestamp of impact estimate post.

  3. Overall estimated economic impact score of the project.

  4. Project credibility score.

  5. Categories that are most relevant to the project, and the relative importance of the category to the project.

  6. Urgency of validation.

  7. Validation effort level.

  8. Comments

Based on the information provided in the post, the protocol calculates the validation fee that the Estimator will be required to submit to initiate the validation process (this fee only needs to be paid once an Estimate post clears the waiting list and validation can proceed).

Not all of the information above is required to successfully submit the Estimate post, but the more information provided, the less effort is required from validators to do their task — which means that the Estimator can pay a smaller validation fee (and end up receiving a greater portion of the Impact Estimate value). Estimators therefore have the incentive to look for high impact projects (such projects should also have lots of credible data and low effort to validate), and to provide as much information as possible in the Impact Estimate post.

The required fields are: the project’s on-chain transaction hash, the timestamp of the impact estimate post (automatically generated by the contract), the overall estimated economic impact score of the project (with supporting sources), relevant categories and their relative importance, estimation difficulty, and validation effort level. These fields are sufficient for the protocol to calculate the amount of expertise required for validation and the associated validation fee.
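
Gathering the fields of sections 5.2.1 through 5.2.8 into one illustrative record (the names and types are our assumptions, not a normative schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImpactEstimatePost:
    """Fields of an Impact Estimate post (sections 5.2.1-5.2.8)."""
    project_hash: str                          # 5.2.1, required
    estimated_impact_score: float              # 5.2.3, required (with supporting sources)
    categories: dict[str, float]               # 5.2.5, required: category hash -> relative impact
    effort_level: int                          # 5.2.7, required
    credibility_score: Optional[float] = None  # 5.2.4, optional but helpful to validators
    urgent: bool = False                       # 5.2.6, defaults to not urgent
    comments: str = ""                         # 5.2.8, optional (e.g., flag a future re-estimate)
    timestamp: Optional[int] = None            # 5.2.2, assigned by the contract on submission
```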

5.2.1. Project Hash

The project’s on-chain hash is the content identifier for the project from decentralized storage. The hash maps to the Project Post on-chain. This hash is required information in the Impact Estimate post.

5.2.2. Timestamp

The Impact Estimate post timestamp is generated by the protocol once the post is successfully submitted.

5.2.3. Estimated Impact Score

The overall estimated economic impact score is required for the Estimate post; it allows the protocol to calculate the expertise necessary to validate the estimate and factors into the associated validation fee. Projects with higher estimates require more expertise to validate (although the required expertise increases at a decreasing rate for higher estimates). The impact score should be backed by cited sources (not required, but doing so reduces the effort of validators).
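
One concave schedule consistent with this description (the square-root form and the parameters are our assumptions; the paper only requires that required expertise grow at a decreasing rate above a baseline):

```python
from math import sqrt

def required_expertise(impact: float, base: float = 50.0, k: float = 10.0) -> float:
    """Expertise needed to validate an estimate: a fixed baseline plus a term
    that grows at a decreasing rate with the estimated impact."""
    return base + k * sqrt(impact)

for impact in (100, 400, 1600):
    print(impact, required_expertise(impact))  # 150.0, 250.0, 450.0
# quadrupling the impact only doubles the added expertise requirement
```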

5.2.4. Credibility Score

We propose two tiers of validation: in the first tier, validators with expertise in a subject matter are asked to assess both the credibility and the relative impact of the public good on their field. In the second tier, validators from across the ecosystem determine the overall impact of the project based on the credibility and impact input from domain-specific validators. Estimators are not required to provide a Credibility Score for the project, but such a score helps validators in their review process.

5.2.5. Project Categories

To review the credibility of a project, Validators need to be selected based on their expertise in the domain (or domains) of knowledge relevant to the project. The Estimator needs to specify the relative impact of the project within relevant categories so that sufficient expertise can be deployed to review the project. The estimated relative impacts of a project across all its relevant categories sum to the estimated overall impact of the project on the economy.

The data for Project Categories should include the hash identifying the category, the relative impact score in the category, and a brief summary justifying the decision (the latter is optional). Project Categories are a required field that helps the protocol assign validators to the estimate.

5.2.6. Validation Urgency

While the validation process should run its course to achieve reliable and accurate results, at times it may be necessary to accelerate the initial credibility-score portion of the review. For urgent credibility reviews, validators receive greater compensation to move the estimate up the waiting list.

Validation Urgency defaults to not urgent unless otherwise specified. Marking the Estimate post as urgent only accelerates the initial credibility score of the post, while the other sections are validated at the same rate as any other post. Therefore, this option should only be selected if a credibility score needs to be obtained quickly. It may be useful for a decentralized news service, for example, where an early credibility score may be necessary for the service to be useful.

5.2.7. Validation Effort Level

Projects with the same expected impact may require different amounts of effort from validators — whether due to the complexity of the project or other factors. Yet more required effort does not mean higher compensation for validators (as compensation is tied to impact, not effort). Since validators will prefer projects that require less effort, contributors are motivated to produce more straightforward content. Similarly, Estimators may share a greater portion of their expected return with validators to advance their position in the waiting list. Finally, developers have an incentive to build tools that simplify and speed up the validation process.

Validation effort level is a required field, and helps Validators determine whether they want to review a project.

5.2.8. Comments

The comments field allows the Estimator to provide any additional information or context for the project. This field may be especially useful for projects that are difficult to estimate (based on limited data, uncertainty, ineffective estimation tools, and so on). If a project estimate has a high level of difficulty it carries with it an economic and reputational risk for the Estimator, for validators, and for the ecosystem more broadly (the risk that such projects would hurt the currency as an effective store of value). It is therefore preferable for the Estimator to choose a lower impact estimate that she can justify with a higher degree of confidence, and flag the project as requiring another round of estimation at a later time (when more data is available or better estimation tools are developed, for example). By doing so, the public goods contributor can still receive some compensation while everyone else (Estimator, validators, and the ecosystem as a whole) would benefit from reviewing lower risk estimates.

Estimators are not required to fill out the comments field, but it may be beneficial to flag if a project may require a future estimate.

5.3. Step 3: Waiting Lists

Initial Review

Once an Impact Estimate post is successfully submitted to the protocol it has to undergo an initial review — a sort of quality control where a relatively small set of validators checks the parameters of the post. These validators are selected at random and have expertise in the categories specified in the Estimate. The combined expertise score of these validators is a small fraction of the total expertise required for a full validation.

The initial review process is a layer of defense for the protocol against Estimators manipulating the system with misleading information (for example, by setting a low effort level for a project that requires more effort from validators, or by choosing the wrong categories for a project). Validators in the initial review can adjust the project categories and effort required fields, as well as flag any issues in other fields. These validators assign a Credibility Score to the Estimate post (separate from the Credibility Score for the project itself, which is provided in the full validation). Validators in the initial review cannot adjust the estimated impact score (or the associated validator fee). The Estimate post’s Credibility Score is then used to prioritize the post on relevant Waiting Lists.

While it should generally be advantageous for an Estimator to post an Estimate as quickly as possible, both the Estimate post’s Credibility Score and the risk of losing funds due to overestimation should make Estimators careful to come up with an accurate estimate. If the Estimator underestimates a project, another Estimator can later amend the Estimate post and collect the difference in the post’s impact score.

Category Waiting Lists

There is no single Waiting List for all Estimate posts. Instead, each category has its own Waiting List, on which Estimates are prioritized by their Credibility Score. Since the labor of Validators is a limited resource, the expected compensation (per Expertise Credit) changes with the capacity of the category's Waiting List: more total expertise lowers expected compensation, while more Estimate posts awaiting validation raises it. Compensation per Expertise Credit also varies across categories, since different categories may have a different overall Impact Score in the ecosystem; for example, a lot of expertise in Celebrity Gossip may be associated with less overall Impact (and therefore less compensation per Expertise Credit) than some expertise in Biochemistry. The expected compensation is adjusted for every added post and does not affect previously added posts.
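As a rough illustration of this pricing dynamic, consider the following sketch; the linear pricing rule and all names are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class CategoryWaitingList:
    """Illustrative waiting list for one expertise category."""
    category: str
    impact_pool: float      # overall Impact Score attributed to this category
    total_expertise: float  # Expertise Credits opted in to this category
    posts: list = field(default_factory=list)  # (credibility_score, fee)

    def expected_comp_per_credit(self) -> float:
        # More pending validation work raises expected compensation;
        # more competing expertise lowers it (hypothetical linear rule).
        pending_work = sum(fee for _, fee in self.posts)
        return self.impact_pool * pending_work / max(self.total_expertise, 1.0)

    def enqueue(self, credibility_score: float, validation_fee: float):
        # Posts are prioritized by the Credibility Score assigned in the
        # initial review; higher-credibility posts are validated first.
        self.posts.append((credibility_score, validation_fee))
        self.posts.sort(key=lambda p: -p[0])
```

The exact pricing function is left open by the protocol; any rule that rises with pending validation work and falls with total opted-in expertise would produce the dynamics described above.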

This mechanism helps Validators track which categories offer a greater return, so they can choose to specialize in those categories. It also allows validation capacity to be maintained effectively across the ecosystem, and gives public goods contributors (and Estimators) the tools to predict what is likely to have greater impact in the economy. The mechanism also lets participants in the ecosystem focus on building tools to increase capacity where it is needed most. For example, they can develop automation tools that reliably substitute for validator labor; they can build tools that make validations faster, allowing existing Validators to perform more validations in a given time period; or they can develop tools and resources that make it easier for more people to become proficient in a category, increasing the number of Validators in that category.

The mechanism also avoids the pitfalls of an open market for validation labor, since such a system may allow bad actors to significantly underbid other Validators (potentially even offering free validation) and thus increase their chance of validating posts.

5.4. Step 4: Validators Selection

Validators can opt in to any category they wish. Once an Estimate post reaches the top of a Waiting List, Validators in related categories are randomly selected for the post. Each selected Validator can then look over the project and decide whether they want to validate it, and whether to validate it fully or partially. Validators must then accept the validation task and deposit funds in the contract in proportion to how much of the project they expect to validate. Since not every Validator is expected to fully validate the project, the protocol keeps randomly selecting Validators until either the required expertise amount for the project is met or the category runs out of Validators (including Validators with expertise in related categories) within the allotted time frame.

The purpose of the validator deposit (set at 1/3 of the expected standard payout) is to keep validators honest, since they stand to lose the funds during the challenge period if they provide a fraudulent review. It also helps prevent a Sybil attack on the protocol, in which users create multiple accounts, post many low-quality reviews, and profit from whatever fraction of those reviews goes unchallenged; without deposits at stake, far more resources would have to go toward challenging low-quality reviews.
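The deposit arithmetic might look as follows. The 1/3 rate is from the text above; the settlement rule for partially completed reviews (described later in this step) is rendered here as a proportional forfeit, which is an assumption:

```python
DEPOSIT_RATE = 1 / 3  # deposit is one third of the expected standard payout

def required_deposit(expected_payout: float, portion: float) -> float:
    """Funds a validator locks when accepting a validation task,
    scaled by the portion of the project they commit to review."""
    return expected_payout * portion * DEPOSIT_RATE

def settle(deposit: float, committed: float, reviewed: float,
           challenge_succeeded: bool) -> float:
    """Amount refunded to the validator after the challenge period."""
    if challenge_succeeded:
        return 0.0  # fraudulent review: the deposit is lost
    if reviewed < committed:
        # Reviewing less than committed forfeits the deposit covering
        # the unreviewed portion (hypothetical proportional rule).
        return deposit * (reviewed / committed)
    return deposit
```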

The task acceptance process allows the protocol to select additional validators beyond the original selection without having to wait for validators to first submit their reviews. Once validators are selected for a post, the protocol can move on to the next post. This creates a separate time period for reviewing, which allows validators to focus on the review process without having to rush, contributing to the accuracy of reviews.

If a Validator reviews a smaller portion than they initially agreed to, they lose the deposited funds covering the difference. This incentivizes validators to be precise when specifying the portion they expect to review.

Since maximum coin issuance for a project depends on the total expertise provided for validation, reducing the scope validated would result in a lower potential issuance (as well as lower return for Estimators). Project contributors would still be able to apply for additional funding, which essentially means creating a new Estimate post (for the remaining funds only) and going through the validation process from the start.
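In other words, the issuance ceiling scales with the share of required expertise actually provided. A hypothetical proportional rule:

```python
def max_issuance(estimated_impact: float,
                 expertise_provided: float,
                 expertise_required: float) -> float:
    """Maximum coins issuable for a project, capped by the share of
    required expertise that was actually provided for validation."""
    coverage = min(expertise_provided / expertise_required, 1.0)
    return estimated_impact * coverage
```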

Validation Tiers

Each Estimate has three tiers of validation:

First Tier: validators in the first tier are randomly selected based on their expertise in project-related categories. These validators perform the most difficult task in validating the project: they assign the project a Credibility Score and category-related expertise scores, and must provide sources and justifications to back up their reviews.

Validators may also indicate their confidence level in the review. A lower confidence level would reduce the weight of their review (which is based on their category-related expertise score), and their expected compensation. However, it would also reduce the risk of being challenged or penalized for providing an inaccurate review. As the monetary and reputational penalty can outweigh any benefit of misrepresentation, validators always have the incentive to truthfully represent their level of expertise.

First-tier validators have an allotted time to complete their review. Since their task is the most difficult of all the validation tiers, they receive the majority of the validation fee (and the corresponding impact score).

Second Tier: validators in the second tier are randomly selected based on their expertise in project-related categories. After the allotted time for first-tier validation passes, second-tier validators review the work of the first-tier validators and determine how each validator's review should be weighed relative to the others (based on the quality and accuracy of the review) through Quadratic Voting, weighted by the expertise scores of the second-tier validators. If a review is inaccurate or fraudulent, validators can also vote to penalize the validator (and provide justification for the decision). Second-tier validators are compensated for their work.

To help ensure the quality of reviews, after the first stage of voting is done, second-tier validators are split into two groups (with roughly equal expertise scores), and each group reviews the work of each Validator from the other group for quality and accuracy, again through Quadratic Voting. Each Validator then receives compensation and expertise scores (from the totals allotted to the second tier) based on the votes from this second stage, and the weight of each validation is also modulated by the second-stage vote.

Third Tier: validators in the third tier are randomly selected with expertise in categories from across the ecosystem. These validators determine the impact score of the project through Quadratic Voting, based on the input they receive from the first- and second-tier validators. As in the second tier, a second-stage vote modulates the initial vote and distributes compensation and expertise scores.
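A sketch of how expertise-weighted Quadratic Voting across the tiers could aggregate votes; the square-root rule is standard Quadratic Voting and the expertise weighting follows the text, while the data shapes and numbers are assumed:

```python
import math
from collections import defaultdict

def quadratic_tally(votes):
    """votes: list of (voter_expertise, option, credits_spent).
    Each voter's effective voice is sqrt(credits), scaled by expertise."""
    tally = defaultdict(float)
    for expertise, option, credits in votes:
        tally[option] += expertise * math.sqrt(credits)
    return dict(tally)

# Example: second-tier validators weighing two first-tier reviews.
votes = [
    (40.0, "review_A", 9),  # expertise 40, 9 credits -> weight 120
    (25.0, "review_B", 4),  # expertise 25, 4 credits -> weight 50
    (10.0, "review_A", 1),  # expertise 10, 1 credit  -> weight 10
]
weights = quadratic_tally(votes)  # {'review_A': 130.0, 'review_B': 50.0}
total = sum(weights.values())
relative = {k: v / total for k, v in weights.items()}  # normalized weights
```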

5.5. Step 5: Periodic Validation

While the Estimate validation determines the total expected impact of the project (once the economy is “saturated” with the public good), periodic validations assess the realized impact of the project. They follow a similar progression to the Estimate validation but require a much smaller amount of expertise. Another major difference is that no coin issuance follows the Estimate validation; only once Periodic Validations are completed, followed by a Challenge Period, can coins be issued to the project's contract based on the realized impact of the project.

5.6. Step 6: Coin Issuance

Following each Periodic Validation and Challenge Period, the protocol issues coins to the project's contract based on the realized impact of the project. However, contributors cannot withdraw these coins until there is consensus on how the funds will be distributed among contributors and influencing projects and sources.
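One simple way to picture the issuance schedule; this is purely illustrative, since the text only specifies that cumulative issuance tracks realized impact:

```python
def periodic_issuance(realized_impact_to_date: float,
                      issued_to_date: float) -> float:
    """Coins issued after a Periodic Validation and its Challenge Period:
    the realized impact confirmed so far, minus what was already issued.
    Cumulative issuance never exceeds confirmed realized impact, which is
    what keeps the issuance value-preserving."""
    return max(realized_impact_to_date - issued_to_date, 0.0)
```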

Contributors and influencers can agree among themselves on the proper distribution of funds and post it in the project contract. If no one disputes the distribution, the coins are unlocked at the end of the Challenge Period. If consensus cannot be reached, contributors can mediate the distribution of funds through a decentralized review process in which each contributor presents their claimed contribution and backs it up with supporting evidence. The decentralized review works much like the validation process, except that no new coins are issued, since the dispute is a private matter rather than a public good. The dispute can cover the entire project funding or a portion of it, and the cost of the mediation varies accordingly; it is therefore always preferable to reach consensus internally.
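A minimal sketch of the unlock logic, assuming a contract-like structure; the names and timing details are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Distribution:
    shares: dict    # contributor/influencer address -> fraction of coins
    posted_at: int  # block height or timestamp when the split was posted
    disputed: bool = False

def can_withdraw(dist: Distribution, now: int, challenge_period: int) -> bool:
    """Coins unlock only if a posted distribution survives the Challenge
    Period undisputed; otherwise the split goes to decentralized mediation."""
    if dist.disputed:
        return False  # resolved through the (privately funded) review process
    if abs(sum(dist.shares.values()) - 1.0) > 1e-9:
        return False  # shares must account for exactly all issued coins
    return now >= dist.posted_at + challenge_period
```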

6. Implications

What sets the Abundance Protocol apart is an incentive structure that aligns the interests of all participants in the ecosystem; it drives contributors to maximize their impact (as well as openly collaborate with others) and incentivizes validators to objectively and accurately review contributors’ impact. All other participants also have an interest in the accuracy of reviews (since it affects the value of the currency) and will act to challenge any fraud in the system.

Participants in the ecosystem and non-participants alike benefit from the production of public goods that the protocol enables. Yet compensating contributors does not come at the expense of participants, since the protocol is designed to maintain the value of the currency; this is how the protocol solves the free-rider problem in public goods.

Let us now consider the major implications — and challenges — stemming from the design of the Abundance Protocol:

6.1. Incentivizing Innovation & Collaboration

Since public goods are open for anyone to use and benefit from, properly compensating public goods contributors requires mapping out the projects and sources that influenced a public goods project, and determining the degree to which each of these contributed to it.

For example, if an open source project is being evaluated, we can go over the project’s dependencies and estimate how much each of the dependencies contributed to the project’s impact score. Once an impact score is determined for the project itself, the dependencies should receive a share of the return based on their degree of influence.
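For example, a one-level influence split might look like this; the flat split and the names are illustrative, and a real implementation would presumably recurse through the full dependency graph:

```python
def split_impact(project_impact: float, influence: dict) -> dict:
    """influence: dependency -> fraction of the project's impact attributed
    to it (as judged by validators). The remainder stays with the project."""
    assert sum(influence.values()) <= 1.0
    payouts = {dep: project_impact * share for dep, share in influence.items()}
    payouts["project"] = project_impact * (1.0 - sum(influence.values()))
    return payouts

# Example: two libraries judged to have driven 5% and 15% of the impact.
split_impact(1000.0, {"left-pad": 0.05, "crypto-lib": 0.15})
# -> {'left-pad': 50.0, 'crypto-lib': 150.0, 'project': 800.0}
```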

This process ensures that public goods contributors are properly compensated when others use their work, while allowing anyone to use public goods permissionlessly. It also incentivizes people to create public goods openly and without fear that others might steal their ideas or profit from them without crediting the original contributors.

Contributors may include the team working on the project as well as people providing data and feedback that contributes to the success of the project. This means that, unlike with Big Tech, people can choose to provide data for a platform and expect to be compensated for that service.

Compensating contributors (and influencing sources) based on their level of contribution to a project incentivizes everyone to collaborate more openly for the success of the project as well as to maximize their contribution.

6.2. Decentralized Economy

Impact maximization = Profit maximization: a major benefit that stems from the protocol’s design is the creation of a decentralized public sector where individuals can work (permissionlessly) for the benefit of the public and be compensated based on the impact they make.

Since contributors do not need to rely on any central authority (or organization) for their compensation, they are essentially free to work on any project, at any time, and based on their interests — a truly decentralized economy! Contributors’ only consideration for maximizing their profit needs to be: “how do I maximize my impact?”

Decentralized science: since the protocol enables researchers to get funding for their work through a decentralized, transparent and impartial public mechanism — and without the need for funding from special interest groups, corporations or governments — it allows researchers to have complete scientific independence, and increases the public’s trust in the scientific process itself.

The protocol enables researchers to focus on the work that they believe would have the greatest impact on society, and not worry about any extraneous considerations (how the research would affect tenure, government grants, journal publications, and so on).

Since the research is a public good, all related experiments and data (aside from personally identifiable information) will be in the public domain as well. This would vastly improve both public access to knowledge and the potential for collaboration between scientists around the world. The public would also be able to evaluate the reliability of scientific articles in any field, since research projects would have a Credibility Score (supported by sources) based on the review of validators with expertise in the field.

Decentralized media: in the current ratings-based social media environment the most extreme, controversial, polarizing and outrageous posts tend to grab the most attention — and therefore dominate the conversation. The protocol offers an alternative paradigm where content creators can focus on making a positive impact and promoting civil discourse. Instead of focusing on getting the most ‘likes,’ people will consider what they can contribute to the conversation. The protocol would also reduce the spread of misinformation on social media (without the need for social media platforms to censor or suspend anyone); misleading posts would simply get a low credibility score, while online trolls and bots would have little to no influence in the protocol due to their low (or negative) credibility.

Independent media will be able to make money by creating quality content and investigative journalism, and to truly work in the public interest, instead of pushing outrageous and divisive content to chase ratings in the Attention Economy.

6.3. Building Capacity

Accurate estimation of the economic impact of public goods is difficult enough, but in a decentralized system estimates must also be made transparently, and the data they rely on must be verifiable by anyone. Though this is perhaps the most serious challenge facing the protocol, it was central to the protocol's design and should not compromise its success.

Essentially, both accurate estimation and verifiable data are capacity issues for the protocol, and both capacities are expected to grow over time due to the protocol's economic dynamics. Let us examine these dynamics to understand why. It is in the interest of everyone in the ecosystem for impact estimates to be as accurate as possible, and for data to be verifiable, because the value of the currency depends on credible estimates of the economic value of public goods.

What this means in practice is that, initially, validators will want to review public goods projects that have sufficient verifiable data and whose economic impact is relatively easy to estimate; at first these are likely to be mostly digital public goods projects, since far more data is available for them. Validators will also prefer projects with greater economic impact, both because that incentivizes greater innovation and attracts talent to the ecosystem, and because their estimate is then likely to have greater economic value as well. Yet they will not want to review projects where the estimate is likely to be inaccurate or highly variable, even high-impact ones, since validators are likely to lose money if they get the validation wrong.

This does not mean, however, that a project that is relatively difficult to review will receive no compensation in the ecosystem. What is more likely to happen is that, given a range of values for the project's expected economic impact, validators will simply choose a value at the low end of the range, where they can be more confident of the project's minimal impact, and postpone a full review until the ecosystem has the tools to assess the project's full impact. This compromise still gives contributors some compensation for their project while preserving the value of the currency.

While validators may initially prioritize projects that are both high-impact and relatively easy to review, this is unlikely to last long. In such an ecosystem there is a very strong economic incentive to develop powerful tools that simplify and improve the estimation process, and an incentive for all participants to provide the data needed for more credible estimates. The more projects are validated in the protocol, the more data people will have for further refining impact estimation technology (which will necessarily be open source and available for all to use). Over time, participants will also develop high-quality decentralized oracles that can provide credible real-world data to the protocol, allowing it to work effectively and credibly beyond the digital realm.

This means that as people develop ever more sophisticated tools to model the impact of public goods, and as the amount and quality of data increase, the quality of estimates is likely to improve, and the sphere of projects deemed “easy to review” will expand progressively. Meanwhile, the validation time, labor, and resources needed per project are likely to decrease, increasing the efficiency and capacity of the protocol over time.

6.4. Currency Sell Pressure

Imagine an economy where people’s income is denominated in one currency, but all bills, rent, goods, services and so on are denominated in another currency. You can expect a strong and consistent selling pressure for the first currency in such an economy. Yet, this is the reality for all cryptocurrencies. There are two ways to address this problem: one is to counteract the sell pressure with demand pressure and value accrual for the currency, the other is to promote the use of the currency as a medium of exchange for goods and services. While nearly all protocols and web3 projects focus on the former, we believe both methods are needed.

Demand pressure: three main use categories in the protocol drive demand for the currency. The first is investment: investors need the protocol's native currency to invest in public goods projects, or in estimators, validators, or challengers. Since coins are timelocked while estimates are under review, this too reduces sell pressure.

The second use category is estimators: estimators must provide a validation fee for their Estimate post to be reviewed (a measure to prevent Sybil attacks and spamming of the protocol). They can either use their own funds or rely on investors (who will want a return on their investment).

The third use category is validators: validators must deposit funds during the review of an Estimate post, in proportion to the expected return from the validation, as a measure to disincentivize fraudulent reviews. They too can use their own funds or rely on investors. For both of the use cases above, investors can review the track record (and expertise scores) of estimators or validators to assess the risk of investing in their work. Such investment can only go toward the Estimate's validation fee or the validation deposit, and cannot be otherwise used, which reduces the risk for investors (and promotes more investment).

Additionally, the protocol’s transparency and abundance of data for how impact is estimated for each public goods project will provide users with meaningful benchmarks to assess how they should value the currency in relation to other currencies.

Medium of exchange: people are unlikely to use cryptocurrencies as a medium of exchange until transactions are cheap, convenient, and can be done at scale. While these are the necessary conditions, they are far from the optimal conditions to make a cryptocurrency a medium of exchange — for that the currency needs to have a clear advantage over the alternatives.

Since much of the work to scale a blockchain is open source, the Abundance Protocol can help accelerate this process through the business and investment opportunities it creates around public goods projects. It can also facilitate the development of user interfaces that simplify mainstream adoption of the technology in commerce (both online and in the real world).

While these are the minimal requirements for making the currency a potential medium of exchange, they are hardly sufficient to drive the mass adoption that would make it a medium of exchange in practice. Note, however, that a business need not use the protocol's currency exclusively for it to be an effective medium of exchange; merely offering it as an option to customers would be sufficient, since doing so reduces selling pressure on the currency. What would drive adoption is the benefit of being part of a vibrant ecosystem that stimulates economic growth:

  • Business reputation: the protocol’s integrated on-chain reputation system can facilitate commerce by allowing credible reviews of products — unlike the easily-manipulated reviews we have in online commerce today. The protocol would also allow the development of decentralized dispute resolution mechanisms to minimize fraud, and provide potential customers with data on the proportion of disputed transactions for businesses.

  • Tooling: participants in the ecosystem can expect consistent improvements in freely-provided open-source tools, products and services that facilitate commercial activity, and an active community of developers working to improve these tools.

  • Attracting innovation: since products and services using the protocol are more likely to have credible data available, it would also make it more likely that public goods are developed through the protocol to improve these products and services (since validators will prioritize public goods projects that have more data and are easier to review). This creates an incentive for more and more products to be denominated in the protocol’s native currency.

  • Native currency: since the protocol creates a sustainable business model for open-source projects, it is likely that many of the open-source projects funded through the protocol would integrate its currency into their products, or may even use the currency by default.

6.5. Regional & Community Currencies

When a project has mostly localized impact — such as street lighting in a particular town, for example — it may be impractical to fund it through the generalized currency model, since validators who are selected at random from across the ecosystem (and likely from around the globe) may not value the project as highly as the local community, which would result in a minimal impact score (and minimal funding). Similarly, such a project may be of low priority in the wider ecosystem, and take a long time to get reviewed. How then can the protocol solve the problem of public goods for projects whose impact is mostly localized?

In the case of public goods projects with localized impact (“localized” could refer to a community sharing a geographic location or a set of interests), a localized currency may be preferable for coin issuance. This currency would work similarly to the protocol’s native (generalized) currency — value-preserving coin issuance — except the rules to get localized expertise and level of localized influence (for example, based on years of residence in a location) are set by the community.

To get funding for a (mostly) localized public good, an Estimator creates an Estimate post and sets the currency (or currencies) for compensation. Suppose a project has mostly local impact, some regional impact, and a bit of impact on the overall ecosystem: the Estimator would indicate the expected impact in each of the corresponding currencies. First- and second-tier validators are then randomly selected from the relevant categories within the wider ecosystem, while third-tier validators are randomly selected from each currency pool.
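Such an Estimate post might carry per-currency impact fields; the shape below and the currency names are purely hypothetical:

```python
estimate_post = {
    "project_hash": "0xabc...",  # hash of the underlying project post
    "impact_by_currency": {
        "TOWN": 900.0,    # local currency: most of the impact
        "REGION": 250.0,  # regional currency: some impact
        "ABUND": 50.0,    # native currency: a small ecosystem-wide impact
    },
    # First- and second-tier validators come from the wider ecosystem's
    # expertise categories; third-tier validators are drawn per currency pool.
}
```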

To drive demand for the local currency, as well as reduce expected sell pressures, each community can determine the economics associated with their currency and utility; for example, a town may decide to use its currency for municipal services and public transportation. A gaming community may decide to use its currency in games, and so on.

7. Conclusion

We have proposed a protocol to compensate contributors to public goods based on the impact of their work through a value-preserving coin-issuance mechanism. The protocol aligns the incentives of all participants in the ecosystem to accurately estimate the impact of public goods and properly compensate contributors. It creates a decentralized economy where people can work in the public interest — permissionlessly. The protocol employs multiple layers of protection to ensure the integrity of the process and counteract attacks on the protocol. It solves the free-rider problem in public goods by preserving the value of the currency, while maximizing economic growth derived from public goods — thus creating a decentralized economy of abundance.
