Web3-privacy assessment research: public feedback

We are prototyping an educational website to help the general public understand whether a web3 service is private.

We interviewed 100 privacy players & distilled an MVP vision. Now we are running a series of 1-on-1 feedback sessions to make the scoring model community-validated.

It is important to apply an MVP mindset here: ship the scoring model ASAP & then iterate towards a more efficient & inclusive one. It will evolve over time, from serving non-technical & the most vulnerable people to serving technical professionals.

Example of the reflection sessions

MVP criteria:

  • non-technical, privacy-newbie person

  • simplest manual check-up

  • less time on privacy assessment

MVP description: here
Initial idea: here
Privacy players' survey: here

Feedback loop sessions follow a “proposal” → “reflection” structure. This keeps progress in a unified form & saves future features in one place. Participants: privacy actors from the Ethereum Foundation, Nym, the Monero community & Waku.

Validity track

Fast scoring check-up

These simple actions help non-techie &/or non-web3 people quickly test whether a project is alive, open-source & open to third-party audit.

Journey

  • check GitHub: exists / doesn’t

  • check docs: available / not

  • screen the team: public / missing
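A minimal sketch of how this fast check-up could be recorded as data, assuming a simple yes/no result per step (the field names are illustrative, not a final schema):

```python
from dataclasses import dataclass

@dataclass
class FastCheckup:
    """Result of the three-step fast scoring check-up."""
    github_exists: bool   # is there a public GitHub repo?
    docs_available: bool  # are user or technical docs published?
    team_public: bool     # is the team (or its digital identity) screenable?

    def passed(self) -> int:
        """Number of checks that passed, out of 3."""
        return sum([self.github_exists, self.docs_available, self.team_public])

# Example: a project with a repo & docs but no public team info.
result = FastCheckup(github_exists=True, docs_available=True, team_public=False)
print(f"{result.passed()}/3 fast checks passed")
```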

GitHub repo

proposal

  • Is it a stable release (1.0+), not alpha or untested code?

  • Is the latest release less than 6 months old?

  • Are there many PRs and Issues pending?

  • Are there external contributors outside of the team members?

  • What licenses are in use (free and/or open license)?
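Part of this check-up can be automated with the public GitHub REST API. A minimal sketch, assuming a 6-month freshness threshold & unauthenticated access (the target repo is just an example):

```python
from datetime import datetime, timedelta, timezone
import requests

def repo_checkup(owner: str, repo: str) -> dict:
    """Answer some of the checklist questions via the public GitHub API."""
    base = f"https://api.github.com/repos/{owner}/{repo}"
    meta = requests.get(base).json()
    # /releases/latest already excludes drafts & pre-releases.
    release = requests.get(f"{base}/releases/latest").json()

    published = release.get("published_at")  # e.g. "2024-01-15T12:00:00Z"
    fresh = False
    if published:
        age = datetime.now(timezone.utc) - datetime.fromisoformat(published.replace("Z", "+00:00"))
        fresh = age < timedelta(days=180)

    return {
        "has_stable_release": bool(published),
        "release_within_6_months": fresh,
        "open_issues_and_prs": meta.get("open_issues_count"),  # GitHub counts PRs as issues here
        "license": (meta.get("license") or {}).get("spdx_id"),
    }

print(repo_checkup("monero-project", "monero"))  # illustrative target
```

External contributors are harder to infer automatically: the /contributors endpoint lists accounts, but deciding who is “outside the team” still needs manual screening.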

Prototyping original take x proposals

reflection

The majority of additional check-ups require basic knowledge of potential threat scenarios, even obvious ones like “a dead GitHub repo that no one has updated for a year”. Each 101 should be short & practical (examples, suggestions).

  1. Releases: product versions 101.

  2. Commits: contribution x timing x commits 101.

  3. Contributions: PR, issues 101.

  4. Licenses: 101 description & assessment tips.

Docs

proposal

User docs or technical docs?

reflection

“Read the docs” is a typical techie reflection on how to check the privacy features of a dApp or a protocol. But “read” doesn’t mean “understand”. A “docs validity” 101 could be broken down from “must-haves” in existing docs to types of docs (fake, poor, deep, sufficient).

Public team

proposal

  • Does that mean knowing the real people behind the project?

  • Isn't that against privacy?

  • Check if there are known contributors (reputation 101)

We should be fine knowing the digital identity, but then, we still don't know much.

reflection

Reputation is one of the biggest challenges for the whole web3 space, not just the privacy market. Moreover, anonymous engineering could become a mass phenomenon in the future. So education on the difference between a deliberately missing team & active GitHub contributors should be well articulated, especially where there’s room for anon or pseudo-anon presence: research, essays, text-based interviews & so on.

Here, public/anon/digital-avatar team presence should lead to additional exploration of both public figures & avatars.

We actively check public vs anon team tendencies in our database: link

Third-party audit

proposal

  • was the latest audit completed in the last 6 months?

  • is there more than one audit?

  • are the auditing firms known?
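Even if the answers are gathered manually, they could be stored in a structured form. A minimal sketch, assuming each known audit is recorded with its firm & completion date (all names below are made up):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Audit:
    firm: str
    completed: date
    covers_privacy: bool  # did it look beyond smart-contract security?

def audit_checkup(audits: list[Audit], today: date) -> dict:
    """Turn a list of known audits into the proposal's yes/no answers."""
    return {
        "any_audit": bool(audits),
        "more_than_one": len(audits) > 1,
        "latest_within_6_months": any(today - a.completed < timedelta(days=180) for a in audits),
        "firms": sorted({a.firm for a in audits}),
    }

# Example with a made-up entry:
audits = [Audit("ExampleAuditCo", date(2023, 11, 1), covers_privacy=False)]
print(audit_checkup(audits, today=date(2024, 2, 1)))
```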

reflection

Matching quantitative & qualitative assessment is key to the project’s success. The quantitative assessment (was there any security audit?) is easy to perform (main website, DuckDuckGo, X, GitHub searches), while the qualitative one adds big complexities:

  • there’s no culture of privacy-features-focused audits within security firms

  • security companies could perform smart contract audits ignoring privacy leakages or data aggregation (anti-privacy features)

The Web3Privacy Now team is prototyping a privacy-features-focused security audit with one of the market leaders. The result of the research will be a service description influencing the nature & quality of security research in the market. This should significantly increase the number of security audits within the privacy market & help to protect people in general.

Scoring mechanism

proposal

General scoring mechanism: add more complexity, allowing the scoring to be more granular than four 25% yes/no options.

  1. The main four count for 25% each, but they can be broken down into sub-elements.

  2. More data gives a better scoring distribution overall.

Why are there no score values for the other tracks as well? Quantifying all the variables would be interesting, & more elements improve granularity in data samples after testing platforms/services etc.

reflection

At the moment we don’t have a developed scoring mechanism validated with market players. Our goal is not to impose a single vision of how it should be done, but to propose multiple ways assessment scoring could look, reflect with privacy actors & ship a well-balanced framework.

That’s why

  • “%” is a filter rather than an accurate measurement.

  • “yes / no” approach simplifies assessment for non-technical people

The optimal future model will have “weights” related to threat actors’ impact or vulnerability (critical, minor etc), especially within the “consent” & “data aggregation” fields, measuring the ethical approach to privacy (including usage of marketing slang that obscures understanding of the product).
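As an illustration only, here is one way such severity weights could work: each yes/no sub-check carries a weight, & the score is the weighted share of passed checks. The check names & weights below are made up, not the validated framework:

```python
# Illustrative severity weights; not the validated framework.
WEIGHTS = {"critical": 3, "major": 2, "minor": 1}

# Each check: (name, severity, passed?). Names are hypothetical examples.
checks = [
    ("open_source",         "critical", True),
    ("recent_release",      "major",    True),
    ("third_party_audit",   "critical", False),
    ("public_or_anon_team", "minor",    True),
]

def weighted_score(checks) -> float:
    """Weighted share of passed checks, as a percentage."""
    total = sum(WEIGHTS[sev] for _, sev, _ in checks)
    passed = sum(WEIGHTS[sev] for _, sev, ok in checks if ok)
    return 100 * passed / total

print(f"score: {weighted_score(checks):.0f}%")  # 67% for this example
```

Under such a model a failed critical check (e.g. no audit) drags the score down more than a failed minor one, while the simple “yes / no” answers stay usable for non-technical people.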

Support

proposal

Is some form of support available? (Telegram, Discord, forum)

reflection

Support could be an indication of a lively project &/or active community, although it could easily be faked via obscure avatars & bots. We are not sure yet how to capture it in the assessment - it could be a short checklist: visit the community, check its liveliness & so on.

Checklists

We want to build a simplified privacy-auditing culture that many people can replicate by following simple instructions.

Journey (product-readiness example):

  • read a short definition of open-source product delivery cycles (test-net, mainnet for protocols, versions for dApps etc).

  • find the latest public announcement (website, socials, DuckDuckGo, blog)

  • check year & updates (not just readiness, but support, commits & upgrades etc).

  • have a simple “red flag” matrix within DYOR (no test-net - red flag, no mainnet - 50/50 etc)

Each checklist will have a description of the direction & examples of anti-privacy features &/or non-ethical product delivery (claiming dedication to privacy, but tracking people & aggregating personal data like wallets).
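The “red flag” matrix mentioned above could itself be published as a simple lookup table. A minimal sketch, with illustrative findings & verdicts taken from the journey:

```python
# Illustrative red-flag matrix for the product-readiness checklist.
RED_FLAG_MATRIX = {
    "no_testnet":        "red flag",
    "no_mainnet":        "50/50",
    "no_updates_1_year": "red flag",
    "tracks_users":      "red flag",  # anti-privacy feature despite privacy claims
}

def verdicts(findings: list[str]) -> dict[str, str]:
    """Map a DYOR session's findings to checklist verdicts."""
    return {f: RED_FLAG_MATRIX.get(f, "unknown finding") for f in findings}

print(verdicts(["no_testnet", "no_mainnet"]))  # {'no_testnet': 'red flag', 'no_mainnet': '50/50'}
```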

proposal

  • have they been hacked or had other known technical issues? (review media sources online)

  • funding (VC, donations)?

  • third-party trackers

reflection

The scoring model should allow people to do fast checks within the attention economy & the constant micro-content bombardment in socials & messaging apps. But everyone with more time should have the option of extra work that improves the privacy assessment (cosmetic check-up vs deep dive).

Additional proposals

  • Check the anonymity set - set 101 benchmarks

  • Transaction obfuscation 101 (on-chain analytics methods)

  • Single point of failure 101

  • Check privacy leveling: network-level & application-level privacy

Summary

Many privacy experts understand the complexity behind privacy-claims assessment & the huge gap between technical & non-technical audiences, especially when we talk about the latest research in ZKP, FHE, account abstraction (AA) etc.

This means they don’t patronise non-technical &/or non-web3/crypto people, but try to help build simple & easy tools for privacy assessment.

To be continued… we aim for 30-minute 1-on-1s.
