Do Androids Dream of Electric Sheep? tells the story of a detective tasked with "retiring" humanoid androids (the "replicants" of the film adaptation, Blade Runner) that are almost indistinguishable from natural humans. He does this using a system of tests, including an instrumented interview that looks for subtle "tells" such as a limited ability to make complex moral judgments in hypothetical scenarios. Sybil defenders are similarly tasked with distinguishing real humans from virtual ones in a mixed population where the two are difficult to tell apart. They too look for subtle "tells" that give Sybils away. Sometimes the Sybil signals are obvious and unambiguous; sometimes they are not. The additional complication for Sybil hunters is that the entire population exists in a digital space where a human's physical existence cannot be proven by their presence: it can only be demonstrated using forgeable proxies. Reliably linking actions to identities is therefore a subtle science that pieces together multiple lines of evidence to build a personhood profile.
One such line of evidence is proof that a person has participated in certain activities that would be difficult, time-consuming or expensive for someone to fake. Gitcoin Passport is used to collect approved participation "stamps" and combine them into a score that acts as a continuous measure of an entity's personhood. Another line of evidence is the extent to which an individual's behaviour matches that of a typical Sybil. There are many telltale actions that, when taken together, can be used as powerful diagnostic tools for identifying Sybils. Machine learning algorithms can quickly match an individual's behaviour against that of known Sybils to determine their trustability, like an automated Voight-Kampff test. A high degree of automation can be achieved by ensuring Gitcoin grant voters, reviewers and grant owners meet thresholds of trustability, tested proactively using Gitcoin Passport evidence and retrospectively using machine learning behaviour analysis. An adversary is then forced to expend a sufficient amount of time, effort and/or capital to create virtual humans that fool the system into thinking they are real. As more effective detection methods are created, adversaries are forced to invest in ever more human-like replicants.
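The stamp-aggregation idea can be sketched as a weighted sum checked against a threshold. The stamp names, weights and threshold below are invented for illustration; they are not Gitcoin Passport's actual scoring model.

```python
# Hypothetical stamp weights: harder-to-forge credentials count for more.
# These names and numbers are illustrative only.
STAMP_WEIGHTS = {
    "github": 1.5,
    "twitter": 0.5,
    "ens": 1.0,
    "proof_of_humanity": 3.0,
}

TRUST_THRESHOLD = 3.0  # assumed cut-off for "trustable"

def personhood_score(stamps):
    """Combine an entity's verified stamps into a continuous personhood score."""
    return sum(STAMP_WEIGHTS.get(s, 0.0) for s in stamps)

def is_trusted(stamps):
    """Proactive check: does this entity clear the trust threshold?"""
    return personhood_score(stamps) >= TRUST_THRESHOLD
```

The key property is that the score is continuous rather than binary, so a community can tune the threshold to trade off inclusivity against Sybil resistance.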
Cost-of-forgery is a concept aiming to create an economic environment where rational actors are disincentivized from attacking a system. One way to manipulate the environment is simply to raise the cost of attack to some unobtainable level without excluding honest participants. The problem is that simply raising the cost reduces to wealth-gating. This creates a two-tier community: people who can afford to attack and people who can't. There is also a risk that the concept bleeds into wealth-gating participation, not just attacks, which would unfairly eliminate an honest but less-wealthy portion of the community (i.e. increasing the cost of demonstrating personhood for honest users as a side effect of increasing the cost of forgery for attackers). To some extent, this is also the case with proof-of-stake: attackers are required to accumulate and then risk losing a large amount of capital in the form of staked crypto in order to unfairly influence the past or future contents of a blockchain. For Ethereum proof-of-stake the thresholds are 34%, 51% and 66% of the total staked ether on the network for various attacks on liveness or security - tens of billions of dollars for even the cheapest attack. The amount of ether staked makes the pool of potential attackers small - it is probably mostly populated by nation states and crypto deca-billionaires.
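The "tens of billions" figure follows from simple arithmetic on the staked total. The staked-ether amount and price below are placeholder assumptions, not live network data:

```python
# Illustrative cost-of-attack calculation for Ethereum proof-of-stake.
# Both figures are assumptions for the sake of the arithmetic.
TOTAL_STAKED_ETH = 30_000_000   # assumed total ETH staked on the network
ETH_PRICE_USD = 2_000           # assumed ETH price in USD

def attack_cost_usd(threshold):
    """Capital needed to control `threshold` of the total staked ether."""
    return TOTAL_STAKED_ETH * threshold * ETH_PRICE_USD

# Thresholds for the various liveness/security attacks mentioned above.
costs = {t: attack_cost_usd(t) for t in (0.34, 0.51, 0.66)}
```

Under these assumptions even the cheapest (34%) attack requires roughly $20B of capital, before accounting for the price impact of acquiring that much ether.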
For a proof-of-stake or cost-of-forgery system to be anything other than a plutocracy there must be additional mechanisms in place other than raising the cost of attack. An attack has to be irrational, even for rich adversaries. One way an attack can be irrational is to ensure the cost of attack is greater than the potential return, so that an attacker can only become poorer even if their attack is successful. Ethereum's proof-of-stake includes penalties for lazy and dishonest behaviour. In the more severe cases individuals lose their staked coins and are also ejected from the network. When more validators collude, the punishments scale quadratically.
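The quadratic scaling of punishments can be sketched as follows. This mirrors the shape of Ethereum's correlation penalty (a lone offender loses little; coordinated offenders approach losing their whole stake), but the constants and function names are simplified assumptions, not the actual spec:

```python
# Simplified model of correlation-scaled slashing (shape only, not
# Ethereum's exact parameters).

def individual_penalty(stake, colluding_fraction):
    """Each offender's loss grows with the fraction of validators
    slashed together, capped at their full stake."""
    return stake * min(1.0, 3 * colluding_fraction)

def total_penalty(stake, colluding_fraction, total_validators):
    """Summed across all colluders, the loss grows quadratically:
    both the number of offenders and the per-offender penalty scale
    with the colluding fraction."""
    n_colluders = int(colluding_fraction * total_validators)
    return n_colluders * individual_penalty(stake, colluding_fraction)
```

Doubling the colluding fraction quadruples the total penalty, which is what makes large-scale collusion disproportionately expensive.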
There are also scenarios where rich adversaries might attack irrationally, i.e. despite knowing that they will be economically punished - either because they are motivated by chaos more than by enrichment, or because the factors that make their behaviour rational are non-monetary or somehow hidden (political motives, secret short positions, competitive edge, etc). These scenarios can overcome any defenses built into the protocol, because it only really makes sense to define unambiguous coded rules for rational adversaries.
The two primary lines of defense in Gitcoin grants are retrospective squelching and Gitcoin Passport. Users prove themselves beyond reasonable doubt to be a real human using a set of credentials a community agrees are trustworthy. They are then more likely to survive the squelching because they behave more like humans than Sybils. The problem, however skillful the modelling becomes, is that being provably human does not equate to being trustable, nor is a community of real humans immune from plutocratic control - rich adversaries could bribe or use their capital to coerce verifiable human individuals to act in a certain way. An example of this is airdrop farming - a suitably wealthy attacker could promise to retrospectively reward users who vote in favour of their Gitcoin grant in order to falsely inflate the active user-base and popularity of the grant in the eyes of the matching pool. A simpler example is a wealthy adversary simply paying users directly to verify their credentials and then vote in a particular way.
It is impossible to encode every plausible attack vector into a set of rules that can be implemented as a protocol, not least because what the community considers to be an attack might be somewhat vague and will probably change over time (see debates on "exploits" vs "hacks" in DeFi: when does capitalizing on a quirk become a hack, and where should the line be between unethical and illegal?). This, along with the potential for attackers to outpace Sybil defenders and overcome protocol defenses, necessitates the protocol being wrapped in a protective social layer.
There has to be some kind of more ambiguous, catch-all defense that can rescue the system when an edge-case adversary fools or overwhelms the protocol's built-in defenses. For Ethereum, this function is fulfilled by the social layer - a coordinated response from the community to recognize a minority honest fork of the blockchain as canonical. This means the community can rescue an honest network from an attacker that is rich enough to buy out the majority stake or finds a clever way to defraud the protocol.
For a Sybil resistance system, the backstop would probably have to be a social layer too, because only humans have the subjective decision-making powers to deal with the kind of "unknown unknown" attacks that can't be anticipated. By definition, protocol defenses close known attack vectors, not the hidden zero-day bugs that spring up later. For a Sybil-defense system the backstop would be manual squelching of users or projects that have acted dishonestly in ways that have not been detected by the protocol but are in some way offensive to the community as a whole.
The danger here is that even with a perfectly decentralized and unambiguous protocol, power can centralize in the social layer, creating opportunities for corruption. For example, if only a few individuals are able to overrule the protocol and squelch certain users while promoting others, those individuals naturally become targets of bribery or intimidation, or they could themselves manipulate the round to their advantage.
Therefore, there needs to be some way to impose checks and balances that keep the social coordination fair. There is a delicate balance to strike between sufficiently decentralizing the social layer and exposing it to its own Sybil attacks, where the logic could become an infinite regress: to protect against attacks that circumvent the protocol defenses we fall back to social coordination, which itself needs protecting from Sybils using protocol rules that Sybils can circumvent, meaning we fall back to social action, which itself needs protecting... ad infinitum.
It is still not completely clear how a social rescue would take place on Ethereum, although there have been calls to define the process more clearly and even undertake "fire-drill" practices so that rapid and decisive action can be taken when needed. Anti-Sybil systems and grant review systems such as Gitcoin's could explore something similar.
The pragmatic approach to Sybil defense is to create an efficient protocol that can deal with the majority of cases quickly and cheaply, then wrap that protocol in an outer layer of social coordination. This social layer should be flexible enough to quickly and skillfully handle unexpected scenarios. However, to keep the social coordination layer honest, it needs to be wrapped in its own loose protocol.
From the inner core to the outer layers, the protocols become more loosely defined and subjective; closer to the core, the protocol should be sufficiently precise to be defined in computer code. Some optimum number of layers will emerge organically to produce a system that is sufficiently robust to all kinds of adversarial behaviour.
To make this less abstract, the core in-protocol defenses could include automated eligibility checks, retroactive squelching of users identified as Sybils by data modelling, and proactive proof-of-humanity checks against carefully tuned thresholds. This alone creates a community of reviewers, owners and donors that are quite trustable. The social wrapper should be a trusted community that can handle war-rooms and rapid-response decision making for scenarios that are not well handled by the core protocol. One way to do this while protecting against centralized control is to use delegated stake so that the community "votes in" stewards to an emergency response squad they trust to act on their behalf. This will be self-correcting because the community will add and remove stake from individual stewards based on their behaviour. These stewards need a standard-operating procedure so that they can spring to action immediately when an attack is detected, which can be crowdsourced - effectively adding a second protocol and social layer to the anti-Sybil onion.
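The "voting in" of stewards by delegated stake can be sketched as a ranking over delegated totals. The steward names, stake amounts and squad size below are invented for illustration; the self-correcting property comes from delegators freely moving stake between stewards:

```python
# Toy delegated-stake steward election. All names and numbers are
# hypothetical examples, not a real Gitcoin mechanism.
delegations = {
    "steward_a": {"alice": 120, "bob": 40},
    "steward_b": {"carol": 90},
    "steward_c": {"dave": 15, "erin": 10},
}

def elect_squad(delegations, size):
    """Rank stewards by total delegated stake; the top `size`
    form the emergency response squad."""
    totals = {s: sum(d.values()) for s, d in delegations.items()}
    return sorted(totals, key=totals.get, reverse=True)[:size]

squad = elect_squad(delegations, size=2)
```

If a steward misbehaves, delegators withdraw stake and the steward drops out of the squad at the next election, which is the self-correction described above.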
The benefit of this onion approach is that it allows the great majority of attacks to be neutralized efficiently by the in-protocol defenses, but allows for subjective responses to edge-case attacks. It is impossible to defend against the entire attack space, but this approach offers a community-approved route to pragmatic out-of-band decision making when in-protocol defenses are breached or some edge case arises.
Tackling Sybil defense across Gitcoin grants using a monolithic "global" system will necessarily bump up against these issues. One option is to break Sybil defense down into composable units that can be tuned and deployed by individual sub-communities instead of trying to construct a Sybil panopticon that works well for everyone. It will be much easier to construct layered anti-Sybil onion systems for individual sub-communities than to tune a single monolithic system for everyone. This is the approach Gitcoin intends to take in Grants 2.0. A well-defined, easily accessible set of tools and APIs that can be used to invoke Sybil defense within the context of a single project not only allows the Sybil-defense tuning to be optimized for a specific set of users but also empowers the community to control their own security. The challenge then becomes how to share tools, knowledge and experience across users so they don't continually re-invent the wheel. We discussed this in some detail in our Data Empowerment post. Decentralizing Sybil defense via a composable set of tools is also an opportunity to crowd-source a stronger defensive layer via community knowledge sharing and parallel experimentation.