Scaling Gitcoin grant reviews

Right now grants are reviewed by a small set of highly trusted individuals within Gitcoin who have built knowledge and mutual trust through experience and discussion. This optimizes for accuracy, but at the cost of centralization, high expense and low resilience to reviewers getting hit by buses; and because only a small number of individuals meet the trust threshold, this model cannot scale to large numbers of grants. The challenge is to build a protocol that allows grant reviewing at scale to be very fast, very cheap and very accurate. Achieving any two of these is easy (to be cheap and fast, automate completely; to be accurate and fast, pay high fees to trusted reviewers; to be cheap and accurate, allow well-paid trusted reviewers to work slowly), but optimizing across all three requires more sophisticated protocol engineering.

In the broadest terms, there are two levers available for reducing the cost, increasing the speed and maintaining the accuracy of grant reviews: a) decentralization and b) automation. Decentralization grows the pool of reviewers to keep pace with the growing pool of grants, moving responsibility from a small set of trusted reviewers to a larger community - i.e. it makes more humans available to do reviewing work. Automation takes mechanistic tasks out of the hands of humans and does them quickly with a computer instead. Both approaches offer important improvements but come with limitations that must be managed. For example, decentralizing naively by simply opening up the reviewer pool risks reducing the overall trustability of the pool, because adversarial reviewers may participate to skew grant reviews in their favour, or low-competence reviewers may degrade review quality. Similarly, automation can deliver efficiency gains to reviewers, but too much automation crowds out the nuanced, subjective grant decisions that can only really be made by skilled human reviewers.

A perfect grant reviewing protocol finds the right balance between these two levers, enabling grants to be reviewed with the optimal mix of speed, accuracy and cost. The system also needs to be configurable and manageable by individual grant owners with minimal reliance upon any central provider. There has been substantial R&D within Gitcoin into how such a system could be built. This post provides an overview of that R&D and the current roadmap.

What does a good system look like?

A good system for reviewing Gitcoin grants must:

  • Benefit from economies of scale - the cost must not scale proportionally to the number of grants

  • Maintain review quality even when the number of grants increases

  • Enable all grants in a round to be reviewed rapidly - the time taken to complete a round must not scale proportionally to the number of grants

  • Have a UX that encourages project leads to take ownership of their own review system configuration and management.

  • Avoid creating game-able incentives and Sybil vulnerabilities.

If there is one concept that has outsized influence on our ability to build such a system, it is composability. For Gitcoin grant reviews this means creating a set of tools that can be switched on/off and tuned specifically to the needs of individual grants. A composable set of grant review and Sybil defense tools would enable grant/round owners to configure the reviewing environment to their specific needs and the priorities of their users, while encouraging them to take ownership of their own review system. Allowing grant/round owners with domain expertise to make their own decisions about the context of their own grant round decentralizes the rule-making and distributes it across the actual user-community rather than having it centralized under Gitcoin control. It also likely promotes effective scaling to large numbers of grants because reviewing is handled by sub-communities, effectively parallelizing the work. In the composable paradigm, larger numbers of grants could be handled by more subcommunities, equivalent to a computer that dynamically adds more processing cores as the size of some parallelizable task increases.

How can we build this system?

Outsourcing work to computers reduces the cost of reviews. This has to be balanced against the nuanced, subjective decision-making that can only be done by humans, so the challenge is to incorporate the right amount of automation without compromising review quality. The question then becomes: which tasks can safely be automated?

Objective reviews

One task that can be automated is a simple check that grants meet the basic eligibility requirements. This can be achieved using a simple, standardized questionnaire given to grant owners at the time of submission, with the answers evaluated against a set of conditions, for example:

Is the grant VC funded?: yes/no
Does the grant have a token?: yes/no
Is the owner account at least 100 days old?: yes/no
...
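
As a rough illustration, this automated sift could be little more than a rules engine that evaluates the questionnaire answers against the round's eligibility conditions. The sketch below assumes hypothetical field names and thresholds, not Gitcoin's actual criteria:

```python
# Minimal sketch of an automated eligibility sift.
# Field names and rules are hypothetical examples, not Gitcoin's actual criteria.

ELIGIBILITY_RULES = {
    "is_vc_funded": lambda v: v is False,          # reject VC-funded grants
    "has_token": lambda v: v is False,             # reject grants that already have a token
    "owner_account_age_days": lambda v: v >= 100,  # owner account must be at least 100 days old
}

def passes_objective_review(answers: dict) -> bool:
    """Return True only if every eligibility rule is satisfied by the questionnaire answers."""
    return all(rule(answers.get(field)) for field, rule in ELIGIBILITY_RULES.items())

grant = {"is_vc_funded": False, "has_token": False, "owner_account_age_days": 240}
print(passes_objective_review(grant))  # True -> grant proceeds to subjective human review
```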

Some of these answers can be verified easily using Gitcoin data. The problem is how to determine whether the grant owner has lied about less easily verifiable questions. One option is to manually review the answers, but this reinstates the human work that the automated eligibility checks were supposed to remove. Another option is a whistleblower reward, so that grant owners within a round are incentivized to call out peers who have dishonestly characterized their projects, with the equivalent of "slashing" for dishonesty being removal from the round or a cap on matching pool rewards. It must be irrational for a grant owner to lie about their project's properties - that is, the expected penalty from being caught must exceed the expected gain from misrepresentation.

An initial sift that eliminates grants not meeting basic eligibility would cut the number of grants that make it to the review stage by at least 50% (based on data from previous rounds), translating into large cost savings. This eligibility check can be thought of as an initial "objective review" - it asks whether a grant meets a set of defined criteria and ejects those that do not from the round before human reviewers have been involved, with additional protection coming from incentivized whistleblowing. After the objective review the grant undergoes a subjective human review, which requires human reviewers to be assigned to the grant.

Subjective reviews

Reviewer selection

It is also possible to automate the selection of reviewers assigned to a particular grant. Review software like Ethelo can pick reviewers from a central pool and assign them to grants that match their preferences, expertise etc., relieving round owners of this responsibility. This seems straightforward, but there must also be a trust layer built into the process to prevent the reviewer pool being populated by low-competence or adversarial reviewers. This can also be automated to a large extent using Gitcoin Passport. Gitcoin Passport is a collection of "stamps" that act as proof of personhood and evidence of expertise or experience in a certain domain. An automated scan of Gitcoin Passport stamps can be used to create a reviewer profile that can then be used to eject suspicious reviewers from the pool or reduce their trust score. Conversely, metrics derived from Gitcoin Passport stamps can be used to verify reviewers as non-Sybil and give them scores that demonstrate their suitability to review certain grants. Gitcoin Passport can be integrated into Ethelo in this way to automate some important and laborious parts of the reviewer selection process, reducing the cost per grant. Not only should the accuracy not diminish, it may well improve because of the substantial data science underpinning Gitcoin Passport's stamps and trust-weighting functions.
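
A very simplified version of that stamp-weighting step might look like the following. The stamp names, weights and threshold are assumptions made for illustration; they are not Passport's real scoring model:

```python
# Hypothetical stamp weights -- Gitcoin Passport's real scoring model is more sophisticated.
STAMP_WEIGHTS = {
    "BrightID": 2.0,
    "ENS": 1.0,
    "Twitter": 0.5,
    "GitcoinGrantee": 1.5,
    "POAP:GR14-reviewer": 2.5,
}

def trust_score(stamps: set[str]) -> float:
    """Sum the weights of the stamps a reviewer holds."""
    return sum(STAMP_WEIGHTS.get(stamp, 0.0) for stamp in stamps)

def filter_reviewer_pool(pool: dict[str, set[str]], min_score: float) -> dict[str, float]:
    """Keep only reviewers whose Passport-derived score clears the round's threshold."""
    scores = {addr: trust_score(stamps) for addr, stamps in pool.items()}
    return {addr: s for addr, s in scores.items() if s >= min_score}

pool = {
    "0xaaa...": {"BrightID", "ENS", "POAP:GR14-reviewer"},
    "0xbbb...": {"Twitter"},
}
print(filter_reviewer_pool(pool, min_score=3.0))  # only 0xaaa... remains in the pool
```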

This automation can also provide composability and configurability to grant owners by allowing them to tune the Gitcoin Passport stamps that are of interest to them and the criteria they think could be used to identify good grant reviewers. The UX for this could be very simple - a one-click option to activate Gitcoin Passport and then some simple input fields to define some terms of interest. Behind the scenes, these inputs can then link to specific stamps. For example:

GRANT OWNER: Reviewer configuration:

- Sybil threshold: 0-85 (how sure do you want to be that your reviewers are honest humans?)
- Web3 score: 0-85 (how much web3 experience should your reviewers have?)
- Domain expertise: [ text ] (what keywords should we look for across the reviewer pool?)
- How many reviewers should see each grant?: 2-4 (cost increases per additional reviewer. Default==3)
- etc...
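
Behind that form, the owner's settings could simply be collected into a configuration object that the selection logic consumes. The field names below mirror the mock-up above and are purely illustrative:

```python
from dataclasses import dataclass, field

# Illustrative configuration object mirroring the mock-up above; field names are hypothetical.
@dataclass
class ReviewerConfig:
    sybil_threshold: int = 50      # 0-85: how sure we must be that reviewers are honest humans
    web3_score: int = 40           # 0-85: how much web3 experience reviewers should have
    domain_keywords: list[str] = field(default_factory=list)  # matched against reviewer profiles
    reviewers_per_grant: int = 3   # 2-4, default 3

def matches_config(reviewer: dict, cfg: ReviewerConfig) -> bool:
    """Check a reviewer profile (e.g. derived from Passport stamps) against a grant owner's settings."""
    return (
        reviewer["sybil_score"] >= cfg.sybil_threshold
        and reviewer["web3_score"] >= cfg.web3_score
        and (not cfg.domain_keywords
             or any(k in reviewer["domains"] for k in cfg.domain_keywords))
    )

cfg = ReviewerConfig(sybil_threshold=60, web3_score=30, domain_keywords=["zk", "privacy"])
reviewer = {"sybil_score": 72, "web3_score": 55, "domains": ["zk", "defi"]}
print(matches_config(reviewer, cfg))  # True
```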

There could therefore be a continuum of automation from entirely manual reviewer selection to heavily automated. Each round owner could configure their round with some lower limit on the amount of automation (e.g. "grants in this round must turn Ethelo ON"). Within the bounds set by the round owner, the grant owners could then tune their reviewer settings to their own specific grant. This approach removes a lot of the friction, and cost, associated with reviewer assignment and trust assessment.

Reviewing

The previous section showed how the reviewer assignment could be improved using Gitcoin Passport and Ethelo. There are also enhancements that can be made to the reviewing itself. Currently, the actual reviews are a particular bottleneck because they are conducted by a small number of known, trusted individuals with experience reviewing across multiple rounds. However, to scale to more grants, and to demonstrate credible neutrality, we need more reviewers. This raises two important questions:

  1. How can we incentivize users to become reviewers?

  2. How can we ensure the reviewers in the pool are trustworthy and competent?

Reviewer incentives can be monetary or non-monetary. In GR14, FDD created a model that explored how reviewers could be optimally organized and remunerated for their work. One outcome was a scheme that randomly assigned three eligible reviewers with different trust levels to each grant, always ensuring that their total trust level exceeded some threshold. The reviewers can then be paid in $GTC or another community token, with the value awarded scaling with their trust level. This incentivizes reviewers to participate in order to gain the experience and Gitcoin Passport stamps that raise their trust scores and unlock greater rewards.
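
A toy version of that assignment-and-payment scheme might look like the sketch below. The trust threshold, pool and pay rate are assumptions for illustration rather than the actual GR14 FDD model:

```python
import random

# Toy version of the GR14 FDD idea: randomly pick three reviewers whose combined trust level
# clears a threshold, then pay each in proportion to their trust. All numbers are illustrative.

def assign_reviewers(pool: dict[str, float], n: int = 3,
                     min_total_trust: float = 6.0, max_tries: int = 1000) -> list[str]:
    """Randomly sample n reviewers until their combined trust exceeds the threshold."""
    for _ in range(max_tries):
        picked = random.sample(list(pool), n)
        if sum(pool[r] for r in picked) >= min_total_trust:
            return picked
    raise RuntimeError("No assignment met the trust threshold")

def rewards(picked: list[str], pool: dict[str, float],
            gtc_per_trust_point: float = 5.0) -> dict[str, float]:
    """Pay each reviewer in proportion to their trust level."""
    return {r: pool[r] * gtc_per_trust_point for r in picked}

pool = {"alice": 4.0, "bob": 1.5, "carol": 2.5, "dave": 0.5}
picked = assign_reviewers(pool)
print(picked, rewards(picked, pool))
```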

Non-monetary incentives have also been shown to be powerful in many communities, especially achievement-gated POAPs and NFTs, which could themselves factor into Gitcoin Passport trust scores or be used directly as Passport stamps. An empirical experiment is currently underway that will test the performance of reviewers incentivized in different ways in an Ethelo mini-round, and also explore what data is most valuable to collect from reviewers and how that data can best be managed and mined.

Gitcoin Passport provides the initial trust score that can be used to give confidence that the reviewers in the pool are non-Sybil, and also to partition the reviewer pool into trust levels and knowledge domains. This information can be used to assign reviewers to particular grants according to the preferences of individual grant owners. At the same time, participation increases the trust score of reviewers, especially when they are rewarded with POAPs and NFTs that can be factored into their Gitcoin Passport. This flywheel effect could rapidly grow a community of trusted reviewers for individual communities, for Gitcoin and across web3 as a whole, because a reviewer's credentials would travel with the individual as a composable, transferable set of attributes. Since getting started as a reviewer is permissionless, anyone can jump onto this flywheel and become a trusted individual - the only criterion is active, honest participation.

The transferable social reputation and activity-unlocks offered by the POAPs and NFTs might turn out to be more valuable to users than the monetary rewards, which could lead to further cost efficiencies. The positive feedback loops and community-building that can emerge from non-monetary rewards are also desirable because they circumvent, to some extent, the risk of inadvertently embedding a plutocracy where the barrier to Sybil attacks and dishonest review practices is just some threshold of wealth. Non-monetary rewards also encourage individuals to take the actions needed to increase their skill and trustability over time while attracting new entrants to the reviewer pool, creating a flywheel of positive behaviours that can diffuse across the web3 space.

The idea of credential transferability is very interesting and draws upon some of the core ethos of web3. One could imagine interesting symbioses where platforms that benefit from Gitcoin funding encourage their users to participate in reviews, granting privileges to users who have a certain trust level or a particular set of Gitcoin Passport stamps; reciprocally, provably honest usage of the platform could be recognized by Gitcoin Passport in the form of stamps or trust score.

Appeals, approvals and QC

The proposed grants system is ultimately a consensus protocol - it uses a set of embedded rules to create a situation where the community can agree that grants have been honestly reviewed by human reviewers to an acceptable degree of confidence. However, the protocol requires a social coordination layer underpinning it that can act as a fallback when a) the protocol misjudges a review or b) an attack successfully breaches the protocol's built-in defenses. The simplest model for this is to have some trusted arbiter that polices the grant review process and settles any disputes. However, this is a centralized model, which raises the risk of that arbiter being hit by a bus, being evil, or being bribed. Much better is a decentralized mechanism where a community can collectively settle disputes and whistleblow on dishonest behaviour.

One idea is an iterative mechanism where disputed grants are reviewed by another random selection of reviewers, perhaps with a higher overall trust score than in the previous round, this time with access to the dispute details submitted by the grant owner. The cost of this could be covered by the grant owner if the dispute fails and by Gitcoin if the dispute succeeds, creating an economic disincentive for dishonest grants and an incentive for honest grants to appeal an unfair result. Other ideas include settling appeals on an open forum, with a Gitcoin Passport trust-score weighted vote determining the outcome after a discussion period. These mechanisms need not apply only to appeals from grant owners; they could handle any dispute, including dishonest grants or reviewers identified by other members of the community (whistleblowers).
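
Under those assumptions, the escalation path could be as simple as the sketch below: a disputed grant is re-reviewed by a fresh, higher-trust panel, and the review cost is charged to whichever side the second review rules against. All names, the trust multiplier and the numbers are hypothetical:

```python
# Hypothetical appeal flow: a disputed grant is re-reviewed by a higher-trust panel,
# and the review cost is charged to whichever side the second review rules against.

def handle_appeal(grant_id: str, dispute_details: str, review_cost: float,
                  rereview, previous_min_trust: float) -> dict:
    """Escalate a dispute to a second review with a higher combined-trust requirement."""
    upheld = rereview(grant_id, dispute_details, min_total_trust=previous_min_trust * 1.5)
    payer = "gitcoin" if upheld else "grant_owner"  # successful dispute -> Gitcoin covers the cost
    return {"grant_id": grant_id, "appeal_upheld": upheld,
            "cost": review_cost, "cost_paid_by": payer}

# e.g. with a stub second review that sides with the grant owner:
print(handle_appeal("grant-42", "eligibility criteria misapplied", review_cost=150.0,
                    rereview=lambda g, d, min_total_trust: True, previous_min_trust=6.0))
```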

Retrospective insights and round-on-round learning

Incorporating more software, such as Ethelo, into the grant reviewing process also enables more data capture, which supports round-on-round learning and refinement of the system. A challenge for Gitcoin right now is making its data openly available and accessible while keeping it sufficiently anonymized. In a previous post, we outlined plans for decentralizing Gitcoin data and empowering the community to access it and use it in creative ways. This is likely to become increasingly important as Gitcoin Grants scales to more users, as we cannot assume that insights from smaller rounds under the original centralized grants system will hold true for larger grant rounds managed under Grants 2.0. It makes good sense that as the number of grants, grant owners, reviewers and the size of the data captured all grow, so should the data science community tasked with mining knowledge and refining the system.

A few examples of how this data analysis could feed back into the grants system are a) identification of metrics and indicators that could act as early warning systems for "bad" grants or reviewers, and b) indicators that can aid Sybil detection and refine Gitcoin Passport trust scores.

Some composability risks

One could imagine a system where the owner of each grant submitted has freedom to easily configure and manage how their project is reviewed. However, offering complete freedom to customize opens the door to nefarious actions by grant owners. For example, a grant owner could remove any quality control and encourage Sybil-type donations to their grant. On the other hand, there could also be cartel risk, where round owners tune their eligibility requirements to be extremely high or very specific to favour certain individuals, creating exclusivity that can then be gamed.

The ideal situation would be an incentive structure that encourages grant owners to tune their review configuration to be more stringent rather than less. One could imagine incentivized inter-community moderation services that cross-calibrate review quality. Pragmatically, at least at the start, protection from composability-gaming would more likely come in the form of simple binary eligibility criteria that enable a round owner to programmatically filter out ineligible grants. This would free up human reviewers for assessing valid grants that need subjective, nuanced human consideration.

Summary

Gitcoin is growing and it needs a grant reviewing system that can scale. At the moment, grant reviews are too centralized and rely too heavily on the work of a few trusted individuals. To scale, we must increase the number of individuals participating in grant reviews while also outsourcing as much of each individual's work to computers as possible. At the same time, we need the system to be adaptable to the needs of individual groups of users so that grant reviewing can be community-tailored at the most granular level possible. The vision is that a grant owner can submit their grant to a round that has been configured to meet their specific needs, and that they have agency in determining precisely what that configuration looks like. Their grant is then reviewed by a set of reviewers from the community pool with the requisite knowledge and experience. Grant owners can then be satisfied that their round was safe, fair, aligned with their community's values and completed quickly. Delivering this vision requires a composable stack of accessible reviewing tools and systems of incentives that encourage those tools to be adopted and used honestly. This article has outlined what some of those tools might be and how they might be deployed in Grants 2.0, and also what attack vectors might need to be managed when composability is introduced into the reviewing process.

Moving forwards from here requires weighing up the options for composable reviewing tools, making decisions about what to implement in the coming rounds and how to monitor their performance. There are experiments currently underway with test populations of reviewers that should inform these discussions.
