🌐 Website
📘 Documentation
👐 Discord
💽 GitHub
Rated is an experiment in coordination. The experiment starts with the Beacon Chain validator community. At the moment, there is a strong undercurrent of disparity in how operators, researchers and other eth2 stakeholders measure the performance of individual validator indices and groups of indices.
We believe that this is a market failure, as we are all effectively looking at the same thing and yet we see it differently. Solving that coordination problem should unlock a lot of value for all involved.
We’ve published a front-end at rated.network that we hope serves as another point of reference for transparency on the Beacon Chain. The front-end is powered by chaind and a Lighthouse node. On top of that we have done work on a v0 model of validator performance that appears on the front-end as “effectiveness rating”; you can find out more about how the model works in the documentation we have published.
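For a flavour of the kind of raw data this pipeline sits on, here is a minimal sketch of reading a validator's state from a Lighthouse node over the standard Beacon Node API. The local node URL and the example validator index are assumptions, and this is purely illustrative; it is not how the rated.network pipeline itself is built (that sits on chaind).

```python
# Minimal sketch: fetching a validator's balance and status from a
# Lighthouse node via the standard Beacon Node API.
# Assumes a node listening on localhost:5052 (Lighthouse's default
# HTTP port); the validator index below is just an example.
import requests

BEACON_NODE = "http://localhost:5052"  # assumed local Lighthouse node

def get_validator(index: int) -> dict:
    """Fetch the on-chain record for a single validator index at head."""
    resp = requests.get(
        f"{BEACON_NODE}/eth/v1/beacon/states/head/validators/{index}",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]

v = get_validator(12345)  # hypothetical validator index
print(v["status"], int(v["balance"]) / 1e9, "ETH")  # balance is in Gwei
```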
In the coming weeks we will be releasing analyses we have worked on in support of the v0 effectiveness model's descriptive and predictive capacity. Pretty soon we are also planning to release a free API, so that the community can access effectiveness scores and various other useful parameters around validator performance. We will also be open-sourcing a number of the elements that make up Rated.
The v0 model we have published on rated.network is a good, robust model. But given that there is a fair amount of subjectivity in how performance on the Beacon Chain is measured, v0 is still our view. Our goal is to arrive at a v1 that bakes in the broader community's view, and at the end of it all, to offer the model, website and API as public-good resources for anyone interested to tap into.
We believe that taking the pain to go through the motions of coordination ex-ante will pay dividends for all of us down the line. Given that the protocol is, and will continue to be, subject to change at the consensus level, the way we define performance will naturally also change over time. Having a forum to discuss upcoming changes, and an agreed-upon path to upgrading the very definition of performance, will, we believe, help us achieve several higher-order goals.
More pragmatically, we believe that alignment on how we measure validator performance, and standardising this metric, will lead to better insurance products, validator pool reward mechanisms and new financial products, as well as helping move the cogs on client diversity.
Our immediate goal is to start a conversation around the v0 model: whether folks like the approach or not, what to add, what to remove, what to consider that we haven't considered, and so on. To take part, hop into our Discord here. After we go through those motions, we'll look to kick a POAP-powered signalling vote into gear, inviting relevant POAP holders (e.g. holders of the Eth2.0 Serenity Launch POAP) to take part.
If after reading the above you find yourself scoring anywhere on the scale of intrigued to mission-aligned, we would love for you to be involved! There are a few ways to do that: join the conversation on Discord, dig into the documentation, or follow the work on GitHub.
We look forward to having as many of you involved as possible.
Let’s Rate! 🍬
Rated v0 was created by @eliasimos and @ariskkol, with invaluable help and input from @0xjack_