Across 13 rounds, Gitcoin has given almost $60 million to public goods. Projects that demonstrably create positive externalities bid for portions of the total funding pool. Like any substantial pool of money, Gitcoin grants attract diverse attacks from bad actors aiming to divert a portion of that money away from public goods and into their own wallets. The role of Gitcoin's Fraud Detection and Defense (FDD) squad is to protect the Gitcoin community - a diverse group that includes users, $GTC holders, grant recipients, donors and stakeholders in funded projects - from these attacks. From FDD emerges a protective layer that filters out attackers, enables partnerships with people and projects that have genuinely good intentions and delivers a trustworthy set of grant decisions. In doing so, FDD minimizes financial spillage to dishonesty and incompetence and thereby maximizes the public goods that can be supported by a given pool of funds. This article explores the various components of FDD and explains how they operate together to form a community "trust function" that protects public goods.
FDD aims to deliver grant decisions that can be trusted by the community.
In this context, trust can be defined as belief that the grant evaluation system is effective at eliminating dishonesty and wastefulness. To foster this belief the community must perceive the process to be transparent and well-aligned with its values. This requires an open system that ensures grant applicants, grant reviewers and voters act honestly. Trust can be thought of as the synthesis of five core concepts:
- Shared Success
Maximizing these core values in the FDD protective layer enhances the legitimacy of grant decisions. This requires FDD to defend against attacks by dishonest grant applicants, voters and reviewers while simultaneously implementing an introspective process of self-refinement.
There are many ways a grant-hacker can try to game the system. A single user might submit many proposals without necessarily intending to deliver on them, relying on pseudonymity to avoid reputational damage. A user might also divide their donations to specific grants across many wallets, gaming the funding mechanism by dividing themselves into multiple virtual humans and hoping that the system gives each of their avatars a real person's voting weight. This is known as a "Sybil attack" - effectively one person pretending to be many. There are several other attack vectors, such as providing false or ambiguous information that makes fraudulent grants appear to meet the eligibility requirements - this is not always easy for a grant reviewer to detect. The grant reviewers themselves, and the tools they use to evaluate grants, must also be prevented from becoming dishonest.
FDD is effectively a function that takes grant applications as inputs and outputs judgments. It is actually a nonlinear composite function with many input features and interoperable modules. By optimizing the FDD function against health metrics and feedback from the community, we hope to adaptively converge towards true legitimacy.
The FDD function can be divided into three broad components (shown in three colours in Figure 1):
At a high level, the GIA sets and enforces the criteria for grant eligibility. The community votes on how much funding each grant should receive from the matching pool. The current mechanism for choosing the amount awarded to each grant is Quadratic Funding; however, this is subject to change with the development of Grants 2.0 (https://gov.gitcoin.co/t/gitcoin-grants-2-0/9981) and the mechanism design initiative, Catalyst. The Sybil Defenders identify voters that are dishonestly gaming the voting system and remove them. Then Evolution examines the overall grant-giving metrics and makes improvements for the next round. This is an accountability system that ensures the FDD squad itself is honest, transparent and efficient. Each of these top-level components is composed of several sub-components that will be explained in the next sections.
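The composition of these components can be sketched in miniature. Everything below is a hypothetical stub - each function stands in for an entire squad's work, and the Evolution feedback loop is omitted - but it shows the shape of the pipeline: an application and its votes go in, and a filtered, fundable decision (or a rejection) comes out.

```python
# Illustrative composition of the FDD components; every function here
# is a trivial stub standing in for a whole squad's work.

def gia_review(application):
    """Grants Intelligence Agency: pass only eligible applications."""
    return application if application.get("eligible") else None

def sybil_filter(votes):
    """Sybil Defenders: drop votes flagged as dishonest."""
    return [vote for vote in votes if not vote.get("sybil")]

def fdd(application, votes):
    """End to end: either a filtered decision or None comes out."""
    grant = gia_review(application)
    if grant is None:
        return None
    return {"grant": grant["name"], "votes": sybil_filter(votes)}
```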
The task of GIA is to assess the validity of each grant. This breaks down into several distinct tasks:
Each of these tasks has a team dedicated to it.
At the moment, grant applications are reviewed by a relatively small set of highly trusted individuals who have built up their working knowledge and mutual trust within the Gitcoin DAO over several grant rounds. This works quite well, but having grant outcomes decided by a small pool of trusted individuals runs contrary to the base ethos of decentralization, permissionlessness and transparency that defines the DAO. To put the decentralization into context: whereas all reviews were done by the Gitcoin core team up to the 10th Gitcoin grants round, subsequent rounds have distributed the evaluations across more individuals managed by the DAO. To decentralize further, the grant reviewing process needs to continue to expand to include more human reviewers; however, opening up the review process also creates vulnerabilities that can be exploited intentionally by thieves and saboteurs, and unintentionally by less competent participants.
Some system must be put in place that incentivizes honest and diligent behaviour while also widening participation across the community. This is the domain of the Grant Intelligence Agency Rewards Team. Their remit is to design and implement a reviewer-incentivization scheme that attracts, trains and retains trustworthy reviewers. For each grant, the system must maximize the trustworthiness of the decision while minimizing the cost per review. Correctly incentivizing reviewers is a route to increasing the trustworthiness of the humans reviewing grants - one critical part of the overall grant review process that also includes Sybil defense.
Incentivization can take many forms; it includes both financial and non-financial rewards. Optimizing the incentivization model means finding the right reward criteria, as well as the tempo, value and method of payment, with the aim of maximizing trust and minimizing cost. The right model widens participation, diminishing the need to trust a small group of known reviewers, and also maximizes learning opportunities, communication efficacy and reviewer retention as positive externalities.
Several potential models have been proposed during GR13. The simplest is a hierarchical model where lower-trust reviewers complete an initial evaluation which is then passed to a higher-trust reviewer to confirm. Alternatives include a pooled system where the final decision comes from a majority vote across multiple reviewers. The leading concept at the moment seems to be a random assignment model that assigns grants to pools of reviewers that can come from multiple "trust levels". A more in-depth exploration of these conceptual models is available here. In GR14 the aim is to move from conceptual models to numerical simulations. At the same time, there are plans to incorporate natural language processing into the review process so that a degree of intelligent automation can support the humans in the system.
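To make the random assignment idea concrete, here is a minimal sketch in which each grant receives one randomly chosen reviewer from each trust level. The reviewer names, trust levels and pool structure are all hypothetical illustrations, not FDD's actual design.

```python
import random

def assign_reviewer_pools(grants, reviewers_by_trust, seed=0):
    """Assign each grant a pool containing one randomly selected
    reviewer per trust level. Seeded for reproducibility; a real
    scheme would also balance reviewer workloads."""
    rng = random.Random(seed)
    return {
        grant: {level: rng.choice(members)
                for level, members in reviewers_by_trust.items()}
        for grant in grants
    }

# Hypothetical reviewer roster across two trust levels.
reviewers = {
    "high-trust": ["alice", "bob"],
    "low-trust": ["carol", "dave", "erin"],
}
pools = assign_reviewer_pools(["grant-1", "grant-2"], reviewers)
```

A numerical simulation of this kind of model, as planned for GR14, would then vary pool sizes and trust mixes and measure the effect on decision quality and cost.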
Once grants have been determined to be eligible, high quality and value-aligned, they can progress to funding. Funds are allocated using a quadratic voting mechanism in which the number of donations made to a particular grant influences the total amount awarded much more than the monetary value of each donation, because each donation is matched from a central "matching pool". Smaller contributions attract proportionally larger matches, up to a cap. This means a grant receives more when many people contribute small amounts than when a few people contribute large amounts. It also creates an inherent vulnerability to Sybil attackers, since dividing a single donation into many smaller ones amplifies the contribution from the matching pool. Defending against these Sybil attacks is predominantly the domain of the Sybil Defenders, but there is also a team within GIA that addresses this specific issue: the Trust Bonus team.
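The amplification effect can be seen in a toy version of the matching formula. In the standard quadratic funding formulation, the match grows with the square of the sum of the square roots of individual contributions; this sketch omits the caps and corrections used in practice.

```python
from math import sqrt

def qf_match(contributions):
    """Toy quadratic-funding match: square of the sum of square roots
    of the contributions, minus the contributions themselves. The live
    mechanism adds per-grant caps and other corrections omitted here."""
    return sum(sqrt(c) for c in contributions) ** 2 - sum(contributions)

honest = qf_match([100])     # one donor giving 100 units
sybil = qf_match([1] * 100)  # the same 100 units split across 100 "donors"
```

Splitting a single 100-unit donation into a hundred 1-unit donations raises the match from 0 to 9,900 - precisely the asymmetry a Sybil attacker exploits.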
Trust Bonus is a system that incentivizes Gitcoin users with extra influence in the matching pool in exchange for proof that they are real individual humans. Valid evidence of personhood, including Proof of Humanity, BrightID, Idena, SMS, ENS, POAP, Twitter, Facebook and Google accounts, enables a person to boost their weighting in the matching pool.
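A minimal sketch of how such a scheme could work is below. The stamp weights and cap are invented for illustration and do not match Gitcoin's actual Trust Bonus percentages.

```python
# Hypothetical stamp weights and cap -- the live Trust Bonus values differ.
STAMP_WEIGHTS = {
    "ProofOfHumanity": 0.25, "BrightID": 0.25, "Idena": 0.25,
    "SMS": 0.15, "ENS": 0.15, "POAP": 0.15,
    "Twitter": 0.15, "Facebook": 0.15, "Google": 0.15,
}

def trust_bonus(verified_stamps, cap=1.5):
    """Scale a donor's matching weight up from 1.0, capped, as they
    verify more identity providers."""
    boost = sum(STAMP_WEIGHTS.get(stamp, 0.0) for stamp in verified_stamps)
    return min(1.0 + boost, cap)
```

Because the bonus is capped, stacking many cheap verifications cannot push a single account's weight arbitrarily high.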
Grant applicants are also able to appeal against decisions after-the-fact. This is managed by the Policy squad, who are also responsible for creating and maintaining the various policies that govern the use of the Gitcoin platform and participation in grants rounds. Historically, the appeals process has involved an initial review by the FDD squad before being made available to a team of Stewards for comments. Support from at least 5 Stewards leads to a successful appeal and update to the grants policy if ratified by GTC holders (Figure 2). However, during GR13 there was extensive discussion on the Gitcoin Governance Forum about a particular grant that will most likely lead to refinements to the appeals process in GR14, especially around widening access to appeals decisions to other DAO workstreams and the broader community.
The whole grant review process is overseen by a trusted group, "Round Management", who ensure that things progress at the right tempo and meet quality benchmarks. They act as an oversight panel that maintains alignment with the Gitcoin ethos and amplifies the community voice in decision making. To do so, they connect with all the various subgroups within GIA and externally.
Recently, GIA has started to incorporate Ethelo as a grant management tool. This standardizes grant management and offers enhanced metricization so the community can more easily digest details about every grant round, supporting transparency and decision-decentralization.
In summary, taking applications as inputs, the Grant Intelligence Agency returns vetted, eligible grants that Gitcoin DAO can fund confidently. The next challenge for FDD is to ensure the voting procedure that determines the allocated amount proceeds fairly and transparently. As explained above, the Grant Intelligence Agency contributes to that using the Trust Bonus scheme, but primary responsibility is handed over to the Sybil Defenders.
Arguably the Gitcoin grant system's greatest vulnerability is dishonest gaming of the votes, because Sybil vulnerability is inherent to the quadratic voting model. Because the system attempts to assign funds on the basis of how many humans support a grant rather than the capital deployed in support of it, dividing a donation into many small "votes" is a cheap and effective way to boost a particular grant's funding allocation. This is a form of Sybil attack, since a single human has divided themselves into multiple virtual humans to increase their voting efficacy. This happens at an alarming scale: of all the Gitcoin users making donations to the most recent grants round, about 12% were estimated to be Sybil attackers, although bot detection is notoriously difficult and the uncertainty on that estimate is probably large. These Sybil attackers also evolve new, more complex strategies in every round, making them harder to detect. The Sybil Defenders are the blue team in a deeply adversarial arms race. Their role is to amplify honest voices and suppress adversarial ones.
The primary tool used to detect Sybil accounts is a semi-supervised machine learning algorithm, reinforced by human feedback. Its purpose is to detect Sybil-like behaviours in votes for specific grant applications. To do this, there is a well-defined pipeline connecting data about the grant applicants to a Sybil/non-Sybil judgment. The data sources are the grant application and information scraped from the applicant's Github profile. From these sources, a set of characteristics is extracted. These characteristics are those assumed to be predictive of Sybil behaviours, and they are used as model inputs ("features"). The model itself is a "random forest classifier", which divides the data into successively more specific categories until eventually giving each data point a label - in this case 'sybil' or 'not sybil'. This model (known as ASOP, the Anti-Sybil Operationalized Process) was developed and operated by a contracted team (BlockScience) but has now been handed over to the community. In parallel, the FDD squad has also developed a "community model" that will now run alongside ASOP, probably in an ensemble that also includes human evaluators. Gitcoin DAO FDD-stream contributors now run both models end-to-end, boosting the system's decentralization.
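To illustrate the idea behind the classifier stage, here is a deliberately tiny stand-in: an ensemble of threshold "stumps" that votes sybil / not-sybil on hand-picked features. The real model is a full random forest trained on extracted features; the feature names and thresholds below are invented for illustration only.

```python
# Toy stand-in for the random-forest stage of the pipeline.
# Each stump votes "sybil" when its condition holds for an account.
# Feature names and thresholds are hypothetical, not ASOP's.
STUMPS = [
    ("wallets_per_user", 2, "gt"),   # many wallets looks suspicious
    ("donation_splits", 5, "gt"),    # finely split donations
    ("account_age_days", 7, "lt"),   # very new accounts
]

def classify(row, stumps=STUMPS):
    """Majority vote across stumps, mimicking how trees in a forest
    vote on a label."""
    votes = sum(
        1 for feature, threshold, op in stumps
        if (row[feature] > threshold) == (op == "gt")
    )
    return "sybil" if votes > len(stumps) / 2 else "not sybil"
```

A real random forest learns its splits from labelled data rather than using hand-set thresholds, which is why the human-labelled "ground truth" described below matters so much.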
The models used by Sybil Defense are "human-in-the-loop" algorithms which means that the sybil detection is not entirely automated, but requires human input at several points in the pipeline. There are many reasons why this is critical. At the highest level, the model must align with the values and aims of the community, rather than rigidly following the rules written into it by its developers. This includes coming to consensus on a set of behaviours the community considers to be "sybil-like" that the model can seek out. At a lower level, humans ('sybil defenders') are required to evaluate grants manually to provide critical "ground truth" data that can be used to assess and tune the model performance. The machine-learning model is used in combination with two other pieces of evidence (survey answers provided by human evaluators and a set of heuristics) to generate an "aggregate" score that is used as a diagnostic tool for identifying sybils. The self-similar, quasi-fractal nature of the DAO raises its head here, as human-machine interactions generate a ML model that itself is aggregated with human evaluations to generate a diagnostic tool that is analysed in an automated data-science pipeline overseen by human analysts. "Human-in-the-loop" is a descriptor that applies across scales from FDD subcomponents, their parents, FDD and the DAO. In the end, the model is a tool for strengthening human coordination, not replacing it. Each grant outcome is a subjective decision but that decision is more trustworthy when backed by increasingly robust data analysis tools such as ASOP. Combining humans-in-the-loop machine learning with optimized incentives for evaluators encourages the decentralization of the system and ensures the system stays ethically aligned to the community.
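The aggregation step might be sketched as a weighted blend of the three evidence sources named above. The weights and the normalization of the heuristic count are hypothetical assumptions for illustration, not FDD's actual formula.

```python
def aggregate_score(model_prob, heuristic_flags, human_score,
                    weights=(0.4, 0.3, 0.3)):
    """Blend three evidence sources into one diagnostic score in [0, 1]:
    the ML model's sybil probability, a count of tripped heuristics
    (normalized against an assumed maximum of 3), and a human
    evaluator's score. Weights here are hypothetical."""
    w_model, w_heuristics, w_human = weights
    heuristics = min(heuristic_flags / 3, 1.0)
    return w_model * model_prob + w_heuristics * heuristics + w_human * human_score
```

Accounts with high aggregate scores would then be surfaced to human Sybil defenders for final judgment, keeping the human in the loop.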
Votes that the Sybil Defenders identify as dishonest are removed from the pool. At this point, the grants and the votes have both been through a series of semi-automated processes that have trimmed away dishonesty and malfeasance, leaving a core of legitimate public-goods projects that can be supported according to an approximation of community sentiment reached through quadratic voting. The grants can therefore be funded - this happens after ratification by a team of Gitcoin Stewards and release of matching pool funds held in escrow in a multisig contract on Ethereum.
Substantial work is put into ensuring that each grant round improves upon the last. There are many facets to this. At the core is a process of reflection on the technical and human aspects of the round. For example, fraud detection analytics are used to quantify the efficacy of Sybil Defenders and GIA in removing dishonest participants from the process. The analytics answer the questions: how many Sybils were removed and how many snuck through? How many grants turned out to be fraudulent? How much did each review cost?
Evaluating the number of Sybils identified by the machine learning models compared to human evaluators provides an opportunity to tune the models for better performance. This process should lead to more accurate model outcomes in successive rounds. Combined with human evaluations, this will lead to an ever-improving attrition rate for Sybils. For example, comparison between the automated and human evaluations for GR13 indicates that the model is more lenient than human evaluators, raising only about 84% of the warning flags that humans do and therefore letting more Sybils through undetected. In GR13, the Matrix squad developed a classification of diagnostic Sybil behaviours and provided analysis of past Sybils that challenged current assumptions about how Sybils can be detected. This information can now be used to refine the Sybil detection process for the upcoming season. Over time this can also reduce the cost of defending the grants against Sybils by reducing the necessary person-hours invested in it. The cost per grant review should also decrease in successive rounds as the incentive and trust models are better optimized by implementing the findings from continued research and development. Between GR12 and GR13 the cost per grant was reduced roughly threefold as a result of streamlining across FDD.
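This kind of model-versus-human comparison reduces to simple set arithmetic over flagged accounts. The addresses and the resulting coverage figure below are invented for illustration; they mirror, rather than reproduce, the GR13 analysis described above.

```python
def flag_coverage(model_flags, human_flags):
    """Fraction of human-raised Sybil flags that the model also raises.
    A value below 1.0 means the model is more lenient than the humans."""
    if not human_flags:
        return 1.0
    return len(model_flags & human_flags) / len(human_flags)

# Hypothetical flagged-account sets for one round.
human = {"0xaa", "0xab", "0xac", "0xad", "0xae"}
model = {"0xaa", "0xab", "0xac", "0xad", "0xff"}
coverage = flag_coverage(model, human)  # model catches 4 of 5 human flags
```

Tracking this coverage metric across rounds is one way to quantify whether model retraining is actually closing the gap with human evaluators.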
As well as quantitative analysis of the grants and votes, it is critical for FDD to be introspective. For example: how trustworthy were the reviewers? Did the various teams meet their objectives for the season? Are the DAO contributors happy and productive? Where were the bottlenecks? These qualitative investigations are managed by the Mandate Delivery Squad and FDD governance. They span everything from informal conversations and group meetings to surveys and formal feedback, collectively providing an FDD health check. Issues arising are discussed transparently, leading to refinements to FDD's functioning in advance of the next round. In upcoming rounds, a new squad, "xOS", will be introduced to improve the user experience (UX) for contributors, in particular making the decision-making processes within FDD easier to understand. Also, a dedicated team of "Storytellers" will create accessible materials that keep the community (both internal and external to FDD) well informed about FDD's operations, further adding to transparency and accountability.
In the end, the entire grant evaluation process is revised, refined and optimized in a continuous loop. As information flows through the FDD function from grant application to grant outcome, insights about the function itself are extracted at every stage, like a self-monitoring diagnostic system. The Source Council, team leads and contributors use these insights to learn about inefficiencies and pinch points and minimize them in future rounds. The tools and expertise developed by FDD are also available to other external groups who can benefit from data analysis and modelling services (as managed by FDD Participatory Data Services), distributing the benefits of FDD across Gitcoin DAO.
FDD is a trust-generating function that takes in grant applications and returns decisions that maximize the public goods that can be supported by a given pool of funds. This means defending against fraudulent grants and Sybil attackers, and requires constant introspection about the health of the system. This is managed by three broad groups of subsystems, known as GIA, Sybil Defenders and Evolution. Together, these systems ensure the grants funded by Gitcoin can be trusted by the community to be fair, transparent and representative of the community's priorities. FDD's introspection layer has identified several major upgrades that will be implemented in upcoming rounds, including the integration of Ethelo for review management, new reviewer incentivization models, enhanced Sybil defense schemes, improved contributor UX and the creation of a storytelling team - all of which will further optimize the FDD function and level up the service it offers to the Gitcoin community.