A Synthesis Toward Equitable Group Decision Making
This is the first half of a longer article, split to allow a more thorough explanation of the topics in each part. The cover art was generated using content from the full article. Special thanks to Seth Benton for feedback and review.
Below are the main points for which I will attempt to outline a conceptual framework in part 1. In part 2, I explore the potential implications of this framework for DAO governance.
The power of an individual in an organization is equivalent to their reputation;
Reputation is equivalent to contributions;
Contributions can be enumerated, measured, and quantified;
A holistic quantification of contributions to a DAO enables a fully transparent, trustless, and equitable governance system.
Each person’s reputation is composed of a variety of factors. Which factors we choose to observe and how we choose to measure them can have a dramatic impact on the decisions we make regarding a person. We also need to be wary of subjective measurements involving moral and ethical considerations. Subjective measurements such as peer-reward circles (e.g. Coordinape) can capture contributions that aren’t fully enumerated, while also incentivizing emergent growth through new efforts. However, I’ll do my best to stick with objective measurements in this article and approach questions from logical first principles.
If a new hire at an organization already has a reputation, it’s because of contributions they made elsewhere, in their personal or professional lives. Those contributions conferred a transferable reputation: one that’s difficult to quantify and even more difficult to verify.
In the world of web3, where pseudonymity, privacy, censorship resistance, trustlessness, and permissionless systems rightly reign supreme, we often have little more than reputation to rely on when evaluating a new organization or person. Therefore, reputation is extremely important when determining how much power someone has in an organization. Good thing we can track actions, and therefore contributions, on-chain.
DAO contributors can build a reputation by participating in chats, forums, and polls, writing proposals, voting, attending meetings, completing bounties, and myriad other methods. These actions can be quantified in various ways and rewarded using tokens. Non-transferable NFTs, aka “soulbound” tokens (SBTs), have been popularized over the last year as the gold standard for issuing reputation credentials on-chain. However, this approach is inadequate to represent the complexity of human reputation, particularly in a digital environment.
Some of the approaches to creating digital representations of human reputation do incorporate a diverse set of input data sources. Reputation models can be based on contributions drawn from a variety of data sources, both on-chain and off-chain, then aggregated or compared to calculate a given metric and eligibility for a credential (a “merit badge”). A couple of examples of this are the Gitcoin Passport and Orange Protocol’s reputation NFTs.
Using models allows a decision maker to fine-tune their choice of inputs. At one end of the spectrum is a complete lack of authentication and at the other end is a comprehensive evaluation of all enumerated variables. Compared to the singular qualitative data point provided by SBTs, a quantitative model of reputation can be much more meaningful.
KPIs are a means to measure performance toward a goal. Improving KPIs should not be the primary goal of any initiative. That’s like a dog chasing its tail. Optimizing for KPIs does not solve the original problem and results in “training the model” on an irrelevant dataset. Humans are inherently biased and subjective. Humans lie not just to others, but to ourselves, often unintentionally. The only way to keep ourselves in check is with objective data & analysis.
Proof of work with infinite design space. A contribution consists of:
An action (work)
An outcome (value)
Documentation (proof of work)
Governance: number of polls, proposals, votes, operations, moderation
Financial: investments, grant acquisition
Effort: merit and deliverable-based activities, time spent in meetings, and tracked working on projects
Social: chat and forum ideation, feedback, meetings, promotion
By tracking and quantifying contributions in a variety of ways, it’s possible to create an algorithmic and holistic system that’s robust to fraud as well as more representative, equitable, and inclusive than with fewer inputs.
Discrete levels of authentication (credentials), and rules for acquiring them, need to be established to distinguish between individual contributors and the community. Each individual action needs to be enumerated. This can seem like a daunting task until we impose some structure on the process.
Individual contributions can be normalized using transformation functions, then time-filtered to adjust their relative weight. The weight of any particular contribution can then be aggregated according to a meta-function that scales the categories of contribution types according to a distribution specific to each community (described further below). The end result is to generate a single summary value we describe as a “contribution score”.
Linear: x = n

“a function whose graph is a straight line, that is, a polynomial function of degree zero or one.”

Quadratic: x = n²

“a polynomial function with one or more variables in which the highest-degree term is of the second degree.”

Logarithmic: x = log₂(n)

“the inverse function to exponentiation […] the exponent to which another fixed number […] must be raised.”
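As a minimal sketch, the three conversion functions described above (linear, quadratic per the quoted degree-two definition, and logarithmic) might look like this; the zero-count guard on the logarithm is my assumption, since the article doesn’t specify how log₂(0) should be handled:

```python
import math

def linear(n):
    """Degree-one polynomial: every contribution counts equally."""
    return n

def quadratic(n):
    """Degree-two polynomial: high contribution counts are amplified."""
    return n ** 2

def logarithmic(n):
    """Inverse of exponentiation: diminishing returns at high counts."""
    # Assumption: define the value at n = 0 as 0 to avoid a domain error.
    return math.log2(n) if n > 0 else 0.0
```

Which function fits depends on the metric: a quadratic curve rewards prolific contributors disproportionately, while a logarithmic curve dampens them.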
Each input can be adjusted to focus on a relevant timeframe, or weighted timeframe, to determine which contributions are included in the count sent to the conversion function. This section and concept were initially inspired by the model used in SourceCred grain distributions.
SourceCred has three policies for how a project distributes rewards (“grain”) for contributions - recent, immediate, and balanced. The model presented here extends this to include conviction - a strategy to increase the weight of input(s) over a given period of time.
The concept of time filters can be applied to calculating contribution scores at any given snapshot in time.
Immediate: “This splits rewards evenly based on contributions from each participant over the last week. (This policy ignores all contributions from previous weeks, and is intended to give fast rewards to active participants).”
Balanced: “This splits rewards based both on lifetime contributions and on lifetime rewards earnings. A balanced time filter tries to ensure that everyone in the project receives a total rewards payment which is consistent with their total contributions over the entire duration of their participation.
For example, suppose that a contributor used to have a low number of contributions, and as such received a small amount of rewards. However, the community recently changed its weights, or added a new plugin, such that the contributor now has a larger amount of contributions recorded.
The balanced policy sees that this contributor is underpaid, so it will pay them extra to ‘catch them up’ to others in the project. Conversely, contributors might be ‘overpaid’ and they'll receive less rewards until the payouts have been equalized.”
Recent: “This splits rewards based on recent contributions using an exponential decay to prioritize more recent cred. The recentWeeklyDecayRate parameter determines to what degree you want to focus on recent contributions. If recentWeeklyDecayRate is set to 0.5 (i.e. 50% discount), as in the above example, the policy will count 100% of the contributions generated in the last week, 50% of the contributions generated in the week before, 25% from the week before that, 12.5% from the week before that, and so on.”
There’s a fourth possible modality, popularized as a governance modifier, that is also applicable as a temporal contribution filter. Conviction Voting incorporates the concept of increasing decision commitment continuously over time. The same concept can be extended to calculate contribution scores based on weights that decay continuously over time instead of stepping down in fixed periods.
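The two decay styles can be sketched side by side: a stepped weekly discount shaped like SourceCred’s recent policy, and a continuous half-life curve in the spirit of the conviction modality. Function names and the 7-day half-life are illustrative assumptions, not part of SourceCred or any existing protocol:

```python
def weekly_decay_weight(counts_by_week, decay_rate=0.5):
    """Stepped decay, shaped like SourceCred's 'recent' policy:
    counts_by_week[0] is the current week and counts fully; each earlier
    week is discounted by decay_rate (0.5 -> 100%, 50%, 25%, ...)."""
    return sum(c * (1 - decay_rate) ** i for i, c in enumerate(counts_by_week))

def continuous_weight(age_days, half_life_days=7.0):
    """Continuous decay: a contribution's weight halves every
    half_life_days instead of dropping once per week."""
    return 0.5 ** (age_days / half_life_days)
```

Inverting the exponent in `continuous_weight` would instead grow a weight over time, matching the conviction idea of commitment accumulating the longer it is held.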
Well-defined data processing methods can be used to combine any chosen set of random variables to make them comparable.
Conversion functions account for bias in the metric itself. For example, quadratic funding accounts for bias toward donors who give large amounts.
The time filter is self-explanatory: it corrects for a subjective time bias, made objective by codifying the process into a transparent formula.
However, neither of these operations makes our various metrics numerically comparable in any logical sense. Each metric still needs to be scaled relative to a constant minimum and maximum; this scaling and normalization should be applied after the conversion function and time filter.
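A common way to do this is min-max scaling, which maps every metric onto the same fixed range. This is a generic sketch, not a prescription from the article; the handling of the degenerate all-equal case is my assumption:

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Rescale a list of metric values onto a shared [lo, hi] range so
    that different metrics become numerically comparable."""
    v_min, v_max = min(values), max(values)
    if v_max == v_min:
        # Assumption: if all values are identical, map them to the minimum.
        return [lo] * len(values)
    return [lo + (v - v_min) * (hi - lo) / (v_max - v_min) for v in values]
```

For example, raw vote counts of 2, 4, and 6 across three contributors scale to 0.0, 0.5, and 1.0, regardless of the original units.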
The dependent variables result from passing an input variable through the appropriate time-filtered conversion function. Table 1 (shown in two parts) illustrates an example list of inputs (X) and pairs each with a hypothetical means to calculate an output value (Y).
Table 1. Input sources, categories, descriptions, conversion functions, and calculated outputs are shown for a hypothetical contribution score model.
A meta-function is defined here as a composite function made up of set functions; a set function reduces the dimensionality of the output relative to its set of input functions. Meta-functions can be used to calculate a standardized contribution score that reasonably approximates reputation-based participation in a community. Three steps are required to calculate a standardized contribution score from a set of output values (Y), each representing the scaled and normalized quantification of an individual contribution’s input source.
Determine the weight of each contribution type.
Governance: XX%
Financial: XX%
Effort: XX%
Social: XX%
Note: If contributions are assigned to multiple types they must be calculated separately.
Determine which conversion functions to use for combining contribution types.
(a) + (b) + (c) + (d) = Total Weight
Note: It may be simpler to use linear conversions for all meta-functions and avoid superfluous data transformations. Finer control can be gained by transforming each individual input with an appropriate conversion function.
Normalize the final contribution score by taking the average of the total weight.
Total Weight/# of Contribution Types = Normalized Contribution Score
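The steps above can be sketched end to end. The category scores and weights below are hypothetical placeholders standing in for the XX% values a community would actually choose:

```python
# Hypothetical per-category scores, each already converted, time-filtered,
# and scaled to [0, 1] as described earlier.
scores = {"governance": 0.8, "financial": 0.2, "effort": 0.6, "social": 0.4}

# Step 1: community-specific weight for each contribution type.
weights = {"governance": 0.4, "financial": 0.1, "effort": 0.3, "social": 0.2}

# Step 2: linear combination -> (a) + (b) + (c) + (d) = Total Weight.
total_weight = sum(scores[k] * weights[k] for k in scores)

# Step 3: Total Weight / # of Contribution Types = Normalized Contribution Score.
normalized_score = total_weight / len(scores)
```

As the note above suggests, using a linear combination at the meta-function level keeps the aggregation simple; any finer shaping is better done per input with the conversion functions.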
In part 2, I explore how a contribution score model can be applied to DAO governance, caveats and limitations, and the broader context of potential impacts.
If you find value in this work please consider collecting this article for 0.002 ETH and subscribing to receive updates when I publish new articles.
Thanks for reading.