Summary & Thoughts on the Grant Ships Whitepaper
February 27th, 2024

The Grant Ships Whitepaper was published this week, on February 19th, presenting a vision to level the playing field for funding mechanisms. It opens a decentralized competition designed to be more efficient than today's opaque and centralized funding mechanisms.

→ Access the Whitepaper
→ Launch a Grant Ship on Arbitrum
→ Follow on X
→ Join the Discord

A competitive Grants Meta-Framework

At its core, Grant Ships is a DAO game where a set number of "Ships" simultaneously operate their own onchain grants programs […] When a round ends, the DAO is presented with each Ship's portfolio and votes […] on the performance of each Grant Ship. In the following round, each ship receives funds in proportion to the votes they received.

Abstract from Grant Ship WP

In other words, DAO Masons can be seen as an incubator of web3 funding programs. It provides funding allocation and frameworks to empower communities to support what matters, powering 'ships' with business models based on funding. Ships are autonomous grant systems with a tailored accountability system. Grant Ships aims to bring to life a plurality of efficient funding mechanisms, allocating the right amount of resources to what matters. The framework is stable and converges toward efficiency thanks to a global competition between ships.
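To make the core mechanic concrete, here is a minimal sketch of the "funds in proportion to votes" allocation described in the abstract. The function name and the figures are mine for illustration; only the proportional split itself comes from the whitepaper.

```typescript
// Minimal sketch: split a round's budget across ships in proportion to the
// votes each one received. Names and figures are illustrative only.
type ShipVotes = { ship: string; votes: number };

function allocateByVotes(budget: number, results: ShipVotes[]): Map<string, number> {
  const totalVotes = results.reduce((sum, r) => sum + r.votes, 0);
  const allocation = new Map<string, number>();
  for (const r of results) {
    allocation.set(r.ship, totalVotes === 0 ? 0 : (budget * r.votes) / totalVotes);
  }
  return allocation;
}

// Example: a 100k budget with votes of 50 / 30 / 20 yields 50k / 30k / 20k.
console.log(allocateByVotes(100_000, [
  { ship: "ShipA", votes: 50 },
  { ship: "ShipB", votes: 30 },
  { ship: "ShipC", votes: 20 },
]));
```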

Here are my personal thoughts and remarks on the Whitepaper, mostly concerning:

My skepticism about the variable-isolation method used to compare 'Ship Portfolios', fed by the lack of a clear definition of 'Performance Metrics'.

Isolation of variation method proposed by GrantShip

Lots of unknowns: With this in mind, I express my doubts about the

“A/B testing […] to compare allocation practices” approach.

As the whitepaper itself says, grant ships are "working in the dark": not all variables are identified or known, which makes it very difficult for the reporting process to pin down direct and indirect causes and consequences. Even this isolated environment still contains many variables: different countries, different teams, different ecosystems. The test sample is small and might not be representative.

This is very restrictive: Ships should be evaluated on their impact and on whether they honored their commitments. How do you compare ships that take different approaches (short, medium or long term) to tackle an issue? The fact that one company earns more money than another does not mean the second company should not exist. Judging performance relative to competitors is tricky and hard to keep objective. Given the difficulty of providing meaningful objective metrics, how does Grant Ships differ from a grant committee?

It hurts Grant Ships' competitiveness: The effort to set up a scientific approach to understand which funding mechanisms work seems to decrease Grant Ships' efficiency, bringing uncertain results that would be difficult to replicate, slowing the go-to-market and increasing complexity for players.

Proposed solution: An evolutionary approach that selects the best ships, acknowledges good ship behaviours and iterates on them regardless of their sector is faster and more impactful. A deep analysis of the success factors can be done retroactively, setting guidelines for the next ships.

Lack of Definition of 'Performance' & Propositions

“votes […] on the performance of each Grant Ship”

Even though the word 'Performance' is used 17 times in the Whitepaper, it gives neither clear guidelines on how performance is defined nor on how it will be evaluated. It is therefore difficult to judge the efficiency of the Ship competition without clear performance criteria. I am in favor of an evolutionary competition to create a fair playground that stays efficient over a long period. Establishing 'objective' performance is complex and should not be the main criterion used to compare ships; performance should be evaluated individually.

Here are some propositions to brainstorm on evaluation frameworks. My first proposition relies on subjective preferences expressed through funding allocation (e.g. a Gitcoin round): opening the competition externally to a public ready to bet on, or allocate more funds to, their favorite ship(s). In addition, an objective evaluation of the obligation of means (milestone based) should be put in place: Ship operators should be held accountable for the amount of resources invested to achieve the goal.

Last, the size of the impact compared to the amount of funds invested could be a useful indicator to monitor, alert on and spotlight outliers. One of the next steps would be to develop a suite of indicators to follow and evaluate ships; a rough sketch follows below.
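As a brainstorm only, here is one possible shape for such indicators: an impact-to-funds ratio with a simple threshold to flag outliers, plus a milestone completion ratio for the obligation of means. The types, fields and thresholds are my own assumptions; in particular, `impactScore` presumes an agreed impact measurement exists, which is exactly the open question raised above.

```typescript
// Hypothetical indicator sketch (not from the whitepaper).
type ShipReport = {
  ship: string;
  fundsInvested: number;     // capital deployed during the round
  impactScore: number;       // placeholder for an agreed impact measurement
  milestonesMet: number;     // obligation-of-means accounting
  milestonesPlanned: number;
};

// Impact generated per unit of funds invested.
function impactPerFund(report: ShipReport): number {
  return report.fundsInvested === 0 ? 0 : report.impactScore / report.fundsInvested;
}

// Share of planned milestones actually delivered (obligation of means).
function milestoneCompletion(report: ShipReport): number {
  return report.milestonesPlanned === 0 ? 1 : report.milestonesMet / report.milestonesPlanned;
}

// Flag ships whose impact-per-fund ratio deviates strongly from the portfolio
// average, either to alert on underperformance or to spotlight outstanding ships.
function flagOutliers(reports: ShipReport[], tolerance = 0.5): string[] {
  const ratios = reports.map(impactPerFund);
  const avg = ratios.reduce((a, b) => a + b, 0) / ratios.length;
  return reports
    .filter((_, i) => Math.abs(ratios[i] - avg) > tolerance * avg)
    .map((r) => r.ship);
}
```

Such a suite would not replace the vote; it would simply give voters a shared baseline before judging each Ship's portfolio.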

Remarks on "Funds in proportion"

“each ship receives funds in proportion to the votes they received.”

The competition is interesting and relevant when resources are limited and a large sample of ships compete. As a first step, to make the competition relevant, many ships need to propose competitive alternatives; otherwise, making a small number of high-impact ships compete against each other can be counterproductive. The scalability of ships should also be acknowledged: impact does not scale linearly with the capital employed, and funding a ship more does not mean its performance will increase proportionally, as illustrated in the sketch below.
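Purely as a thought experiment (not a proposal from the whitepaper), a variant of the allocation sketch above could dampen vote weights, acknowledging that twice the capital rarely produces twice the impact:

```typescript
// Thought experiment: square-root dampening of vote weights, so funding does not
// grow strictly in proportion to votes. Illustrative only, not the Grant Ships design.
function allocateWithDampenedVotes(
  budget: number,
  results: { ship: string; votes: number }[],
): Map<string, number> {
  const weights = results.map((r) => Math.sqrt(r.votes));
  const totalWeight = weights.reduce((a, b) => a + b, 0);
  const allocation = new Map<string, number>();
  results.forEach((r, i) => {
    allocation.set(r.ship, totalWeight === 0 ? 0 : (budget * weights[i]) / totalWeight);
  });
  return allocation;
}

// With votes of 50 / 30 / 20 and a 100k budget, the split moves from
// 50k / 30k / 20k (strictly proportional) to roughly 41.5k / 32.2k / 26.3k,
// narrowing the gap between large and small ships.
```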

Composable with external funding: The ability of Ships to seek their own funding is independent of the impact of the system provided. I therefore advocate opening the framework so it is not strictly limited to exclusive funding from one unique source. The DAO Masons allocation should be an incentive, not a unilateral sentence of life or death for ships. Ships should be able to grow at their own rate.

Conclusion:

A pluralism of funding mechanisms is the most efficient way to support what matters at a global scale. I support Grant Ships' bet to design an efficient mechanism as a competition between independent funding mechanisms. I hope to see more clarification on the performance criteria, and I advocate for a more ambitious go-to-market approach that could be composable with external funding.

Disclaimer: I do not represent and am not affiliated with Grant Ships; this view is my own. I welcome any remarks, critiques and debates. This is a personal and public review of the Whitepaper written by Matt Davis (ui369.eth), Chris Wylde (boilerrat.eth) and Jordan Lesich (jord.eth).

Innovation takes time, but we are on the way ⭐ From JulesFoa.eth - Building Kryptosphere Accelerator, a community-led accelerator program
