From scoring model to interfaces: Web3Privacy Now

We are shipping an educational website to help the general public understand whether a web3 service is private or not. Its core feature is a scoring mechanism validated by the market (reference points: L2BEAT for web3, IMDb for web2).

Useful links

  • MVP description: here

  • Initial idea: here

  • Privacy players' survey: here

  • Latest scoring assessment: here

  • ETHRome prototype: here

Introduction

The last iteration helped us test the latest scoring MVP & expand it. Now it's time to test visual interfaces, the requirements for a smooth step-by-step user flow & additional help via explainers.

This led to the creation of a live demo delivered as part of the ETHRome hackathon.

ETHRome-built prototype

Team

Mobile flow prototyping

But before ETHRome, we used a mobile interface format to play with minimal visual data per screen. This helped us deliver short but actionable instructions for the general public to verify privacy claims in a startup way (an MVP with multiple potential pivots).

Step 1: Validity track

Goal: quick assessment of basic privacy claims via open-source features like an active GitHub repo or documentation.

The validity track helped us test "visual noise pollution", scoring explainers, potential UX/UI features & information completeness for a non-expert audience.

Its goal isn't to prototype 100% of the visual elements but to create a "sandbox" for matching the scoring model with the user flow & additional explainers.
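As a rough illustration of what a validity track could compute, here is a minimal TypeScript sketch. The signal names, weights, and the weighting scheme itself are hypothetical placeholders for this article, not the shipped scoring model:

```typescript
// Hypothetical validity-track score; the signals and weights below
// are illustrative assumptions, not the production scoring model.
interface ValiditySignals {
  activeGithubRepo: boolean; // commits within a recent window
  hasDocumentation: boolean; // public docs exist
  publicTeam: boolean;       // team members are identifiable
  thirdPartyAudit: boolean;  // an external security audit exists
}

// Each signal contributes an assumed weight to a 0..100 score.
const WEIGHTS: Record<keyof ValiditySignals, number> = {
  activeGithubRepo: 30,
  hasDocumentation: 20,
  publicTeam: 20,
  thirdPartyAudit: 30,
};

function validityScore(signals: ValiditySignals): number {
  return (Object.keys(WEIGHTS) as (keyof ValiditySignals)[])
    .reduce((sum, key) => sum + (signals[key] ? WEIGHTS[key] : 0), 0);
}

// Example: an active repo with docs, but an anonymous team
// and no audit, lands at 50/100.
console.log(validityScore({
  activeGithubRepo: true,
  hasDocumentation: true,
  publicTeam: false,
  thirdPartyAudit: false,
}));
```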

Step 2: Checklists

Goal: enable deeper DYOR (do your own research) with simplified guides.

Checklists will play an important role within the user flow because they will cover subjective interpretations of whether a privacy project is legit or not (like reputation).

For now, checklists help shift the responsibility of privacy assessment to the person, while our role is to provide guides and tips for observation. The final decision to use the service or not should rest with the person.

Long-term goal: maximise automation within the DYOR steps, so the person doesn't fall into the dark patterns of privacy.
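To make that division of responsibility concrete, here is a hedged TypeScript sketch of how a checklist item might be modelled, separating what the platform could eventually automate from what stays a human judgment call. All names and example items are illustrative, not our actual schema:

```typescript
// Illustrative checklist model: automated checks can produce evidence,
// but subjective items stay with the person doing DYOR.
type CheckKind = 'automated' | 'manual';

interface ChecklistItem {
  id: string;
  question: string;      // what the user should look at
  kind: CheckKind;       // can the platform verify this itself?
  guide: string;         // our tips for making the observation
  userVerdict?: boolean; // the person's own conclusion
}

const reputationChecklist: ChecklistItem[] = [
  {
    id: 'audit-report',
    question: 'Is a third-party audit report published?',
    kind: 'automated', // long-term goal: verify this automatically
    guide: 'Look for a report from a known security firm.',
  },
  {
    id: 'community-trust',
    question: 'Does the community consider the team trustworthy?',
    kind: 'manual', // inherently subjective, stays with the user
    guide: 'Check long-lived discussions, not just follower counts.',
  },
];
```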

Explainers

Goal: provide the non-technical audience with easy-to-understand explainers covering missing knowledge.

Explainers play an important role in our project's success because they boost knowledge & confidence in DYOR. We acknowledge that not everyone who will use our website is techy or from the web3 industry, so we need to explain basics like "What is GitHub?" or "Why does documentation matter in open source?".

Different explainer examples will be covered in the MVP (a sketch of how they could be wired into the user flow follows the outline):

Step 1: Validity track

  1. GitHub

    1. GitHub is …

    2. Contributions are …

  2. Documents

    1. Documents are …

    2. # of pages represents …

  3. Team

    1. Publicity of the team …

    2. Anonymous team represents …

  4. Third-party audit

    1. Security audit is …

    2. Third-party role is …

Step 2: Checklists

  1. Reputation

    1. ???

  2. Readiness

    1. Testnet is …

    2. Mainnet is …

    3. MVP is …

    4. Beta or Alpha is …

  3. Traction

    1. Publicity of the team …

    2. Anonymous team represents …

  4. Signup

    1. Surveillance capitalism is …

    2. Data aggregation is …
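One way this outline could be wired into the flow is a simple registry mapping each scoring step to its topics and short explainer texts. This is a sketch under assumed naming, not the actual content schema, and the placeholder texts simply mirror the outline above:

```typescript
// Hypothetical explainer registry keyed by scoring step and topic;
// the texts are placeholders mirroring the outline above.
type Step = 'validity' | 'checklists';

const explainers: Record<Step, Record<string, string[]>> = {
  validity: {
    github: ['GitHub is …', 'Contributions are …'],
    documents: ['Documents are …', '# of pages represents …'],
    team: ['Publicity of the team …', 'Anonymous team represents …'],
    audit: ['Security audit is …', 'Third-party role is …'],
  },
  checklists: {
    readiness: ['Testnet is …', 'Mainnet is …', 'MVP is …', 'Beta or Alpha is …'],
    signup: ['Surveillance capitalism is …', 'Data aggregation is …'],
  },
};

// The UI can surface the right explainer next to each step:
function explain(step: Step, topic: string): string[] {
  return explainers[step][topic] ?? [];
}
```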

Example: prototype “field test”

Goal: provide a visual example of testing the scoring model.

Project: a dVPN on Cosmos.

We are using a simplified "semaphore" signal system (mind, not PSE's Semaphore project) to highlight the results of the scoring assessment.

We will test the scoring model on broader categories & projects, which will help create a balanced assessment process. This time we covered a rather "negative" example with a high risk of user data exploitation.
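For illustration, mapping a numeric score onto the semaphore could look like the following TypeScript sketch; the thresholds are assumptions for this article, not calibrated values:

```typescript
// Hypothetical score-to-semaphore mapping; thresholds are illustrative.
type Semaphore = 'green' | 'yellow' | 'red';

function toSemaphore(score: number): Semaphore {
  if (score >= 70) return 'green';  // claims broadly check out
  if (score >= 40) return 'yellow'; // proceed with caution, DYOR
  return 'red';                     // high risk of data exploitation
}

// Example: a "negative" field test like the dVPN above would land
// in the red zone if its validity score came out low.
console.log(toSemaphore(25)); // "red"
```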

Observations

The scoring model is a rabbit hole of infinite iteration. Most privacy experts have their own views on assessment (covered in the useful links section of this article), so they could always add or invent more features to be attested. This conflicts with the startup way: deliver an MVP → then improve it towards Beta.

But we took a moment to test the scoring model one-on-one & apply it to the mobile interface, asking ourselves how the scoring would change if we added more & more "steps" (like "licenses" or "# of contributors").

Moreover, each iteration requires an additional set of explainers & practical examples (like links to a benchmark third-party audit, or explanations of a security audit company's reputation).

Conclusion

We abandoned the idea of shipping a "silver bullet" for privacy because it could take us years of research, experimentation &… black swans. We prefer to focus on a real-deal MVP that the general public can use to decrease the risk of using non-private services that claim to be 100% private.

Here, MVP means a combination of:

  • basic “privacy hygiene” (10 min assessment)

  • DYOR with checklists (1+ hour assessment)

  • practical examples within the 1st category (DeFi)

  • a bottom-up example of "forcing" privacy projects to become more responsible to the public & security audit companies to deliver privacy-centric audits

And do you remember the ETHRome live demo? The asymmetrical approach to the Web3Privacy Now platform helped us create desktop interfaces, expand the privacy dataset, test scraping tooling, engage privacy orgs from Railgun_ to Waku & many more. This will be covered in the next article.

Want to collaborate? Join us on the way to unlock privacy for millions.
