Elias Simos
September 8th, 2022

London, Sep. 8th, 2022 – Today we are in the privileged position to announce that Rated Labs, the company behind Rated, has raised a $2.5 million seed round to accelerate its mission of bringing greater transparency and richer context to Web3 infrastructure data.

The financing round was led by 1confirmation, with Semantic and Placeholder close behind, and rounded out by participation from Cherry.xyz, Decentral Park Capital, and a strong roster of angels from the Web3 infrastructure space.

We believe there is a valuable data layer to unearth and contextualize in what ultimately bubbles up to reputation for nodes and their operators. That data layer is the missing piece that will enable a host of new products, as well as increased transparency and accountability for Web3 infrastructure operator sets, and thereby the industry as a whole.

Rated is currently in v0, offering a network explorer for Ethereum Beacon Chain and Prater validators and validator operators, where users can browse the various entities that make up the set and compare and contrast their performance at a granular level. Rated also supports an API that allows application developers to integrate contextual data on validators and validator operators into their workflows and products downstream of Rated. The API is currently used by the likes of Lido, Nexus Mutual, Kiln, and a series of other node operators of all sizes, as well as applications that relate to Ethereum infrastructure.
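
To give a concrete sense of how such an API might be consumed, here is a minimal sketch in Python. The base URL, endpoint path, parameters, and response fields are illustrative assumptions for demonstration only; the authoritative schema lives in the Rated documentation.

```python
# Minimal sketch of consuming a contextual-data API over HTTP.
# NOTE: the base URL, endpoint path, parameters, and response fields below are
# illustrative assumptions, not the documented Rated API schema.
import requests

RATED_API_BASE = "https://api.rated.network"  # assumed base URL


def get_operator_summary(operator_id: str, network: str = "mainnet") -> dict:
    """Fetch a contextual performance summary for a validator operator (hypothetical endpoint)."""
    url = f"{RATED_API_BASE}/v0/eth/operators/{operator_id}/summary"
    response = requests.get(url, params={"network": network}, timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # The operator identifier is a placeholder; real identifiers come from the explorer and docs.
    print(get_operator_summary("example-operator"))
```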

Context

In this post, we present a series of post-hoc analyses we ran on the efficacy of the Rated_v0 model as a descriptor, and later a predictor, of validator performance. We segment the analyses into several cohorts in order to test the robustness of the approach across different frames of reference.
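
As a rough illustration of what the cohorting step can look like, the sketch below assumes a hypothetical per-validator dataframe with `validator_index`, `operator`, and `effectiveness` columns; the column names and size cut-offs are placeholders, not the actual Rated_v0 methodology.

```python
# Illustrative cohort segmentation over a hypothetical per-validator dataset.
# Column names ("validator_index", "operator", "effectiveness") and the size
# cut-offs are placeholders, not the actual Rated_v0 methodology.
import pandas as pd


def effectiveness_by_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Bucket operators by how many validator indices they run, then summarise effectiveness per bucket."""
    operator_sizes = df.groupby("operator")["validator_index"].nunique()
    cohorts = pd.cut(
        operator_sizes,
        bins=[0, 100, 1_000, float("inf")],
        labels=["small", "medium", "large"],
    ).rename("cohort")
    df = df.merge(cohorts, left_on="operator", right_index=True)
    # Comparing the distribution of effectiveness across cohorts is one way to
    # check that a model behaves consistently in different frames of reference.
    return df.groupby("cohort", observed=True)["effectiveness"].describe()
```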

February 11th, 2022

Useful resources

🌐 Website
📘 Documentation
👐 Discord
💽 Github

What is Rated?

Rated is an experiment in coordination. The experiment starts with the Beacon Chain validator community. At the moment, there is a strong undercurrent of disparity in how operators, researchers and other eth2 stakeholders measure the performance of individual validator indices and groups of indices.
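
As a simple example of what "measuring performance" can mean at the level of a single validator index or a group of indices, one common yardstick is the share of assigned attestation duties that actually land on chain. The snippet below is purely illustrative and is not Rated's rating methodology.

```python
# Illustrative only: one simple performance yardstick for a validator index,
# not Rated's rating methodology.
def participation_rate(duties_assigned: int, duties_fulfilled: int) -> float:
    """Share of assigned attestation duties that were actually included on chain."""
    if duties_assigned == 0:
        return 0.0
    return duties_fulfilled / duties_assigned


# A validator that fulfilled 2,230 of 2,250 assigned duties scores ~0.991.
print(round(participation_rate(2250, 2230), 3))
```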