Aligning on Impact: Creating Tools for Impact Evaluation and Measurement

How do we address the challenges of inconsistent evaluation and measuring impact effectively in Grant Programs in the web3 ecosystem?

In this post, I explore how the Impact Evaluation Framework and the Metrics Garden, two innovative tools I developed to address these challenges, were used during RetroPGF3’s voting phase. Far from being just tools, they represent a paradigm shift in how decisions are made and how funding is collaboratively and effectively allocated in the web3 ecosystem.

Background

Measuring the impact generated by Public Goods in the web3 ecosystem is HARD.

This obstacle was evident during Optimism's RetroPGF Round 2, where measuring impact was a challenge both for Applicants, who didn't know how to measure their impact for others to evaluate, and for Badgeholders, who didn't know how to assess the impact a project had made.

And while it’s hard, it’s not impossible.
I believe it is one of the most novel and exciting challenges in Governance, which is why I have been researching and experimenting with it in various formats over the last year.

Applying it to RetroPGF3

The journey to create these tools for RetroPGF3 began with two core hypotheses:

Hypothesis 1:
The creation of an Impact Evaluation Framework will enable Badgeholders to build an informed intuition of what Impact means in the Optimism Ecosystem and conduct a more comprehensive and aligned evaluation of projects.

Extract from RetroPGF2: Learnings & Reflections

Hypothesis 2 (to be tested):
The Metrics Garden will help projects measure their impact using relevant metrics, thereby improving their future RetroPGF applications.

The groundwork for testing the second hypothesis in RetroPGF4 was established during this round.

Expected behavior:

Both the Impact Evaluation Framework and the Metrics Garden were developed to serve as additional elements in a Badgeholder’s “Tool Box” during RetroPGF3.

  1. Badgeholders were encouraged to read the Impact Evaluation Framework to gain succinct information on definitions of impact for each of the categories and expand their knowledge beyond their niches of expertise.
Each Category had its own "Definition of the category" and "Definition of impact for it"

  2. Badgeholders would then use this new information to inform their evaluation process. With this knowledge, Badgeholders could refine their definition of Impact and more easily build robust evaluation criteria.

Feedback during RetroPGF2

  3. As Badgeholders became familiar with the Metrics Garden for each category, they would leverage these metrics in their RetroPGF3 evaluation process. Using shared metrics would allow for better comparison between projects and create an additional incentive for projects to adopt them to measure their contributions in upcoming rounds.

A Metric Garden was created for each one of the Categories

How were these tools used?

The Impact Evaluation Framework and the Metrics Garden served to inform and support Badgeholders in their evaluation process. Some examples of how they were used:

  1. Learn and understand Impact within each category in the Optimism Ecosystem.

  2. Leverage the definitions and keywords to build evaluation criteria for voting.

  3. Use the Metrics Garden to build evaluation criteria that feed into Lists.
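To make the third example concrete, below is a minimal sketch, assuming a hypothetical weighting scheme, of how metrics drawn from the Metrics Garden could be combined into evaluation criteria and then turned into a suggested OP allocation for a List. None of this is the actual RetroPGF3 tooling; every interface, field, and weight is illustrative.

```typescript
// Hypothetical sketch only: combining Metrics Garden-style metrics into a
// weighted score per project, then converting scores into a proportional
// OP allocation for a List. All names and weights are assumptions.

interface MetricScore {
  metric: string;  // a metric drawn from the Metrics Garden for the category
  weight: number;  // badgeholder-chosen importance, e.g. 0..1
  value: number;   // the project's normalized result for that metric, 0..1
}

interface ProjectEvaluation {
  project: string;
  scores: MetricScore[];
}

// Weighted average of the chosen metrics for one project.
function scoreProject(evaluation: ProjectEvaluation): number {
  const totalWeight = evaluation.scores.reduce((sum, m) => sum + m.weight, 0);
  const weighted = evaluation.scores.reduce((sum, m) => sum + m.weight * m.value, 0);
  return totalWeight > 0 ? weighted / totalWeight : 0;
}

// Split a total OP amount across projects in proportion to their scores.
function buildList(evaluations: ProjectEvaluation[], totalOP: number) {
  const scored = evaluations.map((e) => ({ project: e.project, score: scoreProject(e) }));
  const sum = scored.reduce((acc, p) => acc + p.score, 0);
  return scored.map((p) => ({
    project: p.project,
    allocation: sum > 0 ? (p.score / sum) * totalOP : 0,
  }));
}
```

The design choice here is deliberately simple, a weighted average normalized into a proportional split; actual Badgeholder criteria were free to be far more nuanced.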

Additionally, the Impact Evaluation Framework was used to onboard Badgeholders who were new to the Optimism ecosystem by providing a quick rundown of meaningful information within each category.

Feedback and Reflections

Initial positive results

Overall, both tools were very useful resources, particularly for new participants, who made up 50% of Badgeholders. Profiles included people new to the Optimism Ecosystem, people new to being Badgeholders, or both. These resources provided clarity and direction, aiding in the creation of personalized evaluation rubrics and voting criteria and making project evaluation more straightforward.

In addition to this, the Metrics Garden continues to grow as more projects in the ecosystem leverage the metrics and measuring tools described in it.

The main feedback so far has been collected through public interactions and through Badgeholders mentioning the tools as they described their voting processes. More targeted surveys will provide additional information on usage and interactions, and alignment will be better understood once the anonymized voting data is analyzed alongside the survey results.

Food for thought

  • The Impact Evaluation Framework and Metrics Garden weren’t easy to find. Both were nested several layers deep in the Badgeholder Manual, which made discovering them nearly impossible without a direct link. How can the visibility and discoverability of both tools be improved?

  • Increase awareness of these resources: Awareness built up slowly and has so far been targeted at Badgeholders, but the Metrics Garden also needs to reach reapplicants and new applicants if it is to be widely used. How can awareness of these resources be increased outside of niche spaces?

  • While the goal of the Impact Evaluation Framework and the Metrics Garden was to serve as resources for building an underlying intuition of impact, the question “Where’s the rubric?” came up in one of the user interviews, meaning there was an underlying expectation for the tools to include pre-generated evaluation rubrics for people to follow.

In this sense, the Impact Evaluation Framework and Metrics Garden emerge as Impact Evaluation Building Blocks that anyone can leverage to build their own end-user-facing evaluation applications.

The experimentation continues

These are some ideas to test that address the points mentioned above:

  1. Generate a set of rubrics that leverage the data points and information in the Impact Evaluation Framework and Metrics Garden, which people can use “as is” or with modifications (see the sketch after this list).

  2. Make SunnyGPT available for public use. This will enable people to query the data easily and as needed, and it will also give me more insight into the kinds of questions people get stuck on.

  3. Build audience-based and stage-based approaches to the use of these resources. I had an initial hypothesis about Badgeholder behavior: there would be three types of Badgeholders, with high, medium, and low involvement. While these resources were initially built for people in the middle, they ended up being leveraged mainly by highly involved individuals, with a spillover effect on moderately involved Badgeholders. How each audience uses these tools will vary based on their needs; along the same lines, the stage at which the tools are used will differ, from informing applicants on how to create an application to informing the evaluation process of Badgeholders as they allocate funds. Tailoring the user experience to these two variables can therefore increase the impact of these tools.
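For the first idea above, here is a hedged sketch of what a shareable rubric format could look like, assuming a structure that pairs a category's impact definition from the Impact Evaluation Framework with criteria backed by Metrics Garden entries. Every interface, field, and example value is a hypothetical illustration rather than an existing schema.

```typescript
// Hypothetical rubric format: not an existing schema, just an illustration of
// how Impact Evaluation Framework definitions and Metrics Garden metrics
// could be packaged into rubrics people adopt "as is" or modify.

interface RubricCriterion {
  question: string;           // what the evaluator asks about the project
  suggestedMetrics: string[]; // Metrics Garden entries that can answer it
  maxPoints: number;          // weight of this criterion in the rubric
}

interface CategoryRubric {
  category: string;          // one of the round's categories
  impactDefinition: string;  // taken from the Impact Evaluation Framework
  criteria: RubricCriterion[];
}

// An evaluator could publish a rubric like this for others to reuse or tweak.
const exampleRubric: CategoryRubric = {
  category: "…",             // category name goes here
  impactDefinition: "…",     // copied from the framework
  criteria: [
    {
      question: "Does the project show measurable, sustained usage on Optimism?",
      suggestedMetrics: ["…"], // drawn from the Metrics Garden for the category
      maxPoints: 10,
    },
  ],
};
```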

Why this matters

As we wrap up this retrospective, let's remember the core essence of these efforts: the vital importance of evaluating projects based on their tangible impact on the Optimism Ecosystem. Failing to do so risks rewarding misaligned projects, demotivating genuine contributors, and inadvertently encouraging value extraction without contribution. As we move forward and build upon our lessons learned for upcoming rounds, let's commit to fostering an ecosystem that thrives on genuine impact and collective growth.

You may wonder, why are you starting with a retrospective? Where’s the rest? Worry not, there's much more to come on how these tools were designed and developed and their ongoing impact on the Optimism Ecosystem and beyond.
