We’re excited to introduce an open-source transitive trust algorithm designed to calculate and propagate trust within P2P reputation systems. This algorithm ensures that trust is earned fairly and accurately reflected, particularly in complex, high-stakes environments. We invite you to explore the whitepaper and share your feedback.
We’ve all been there before. You’re at a side event. Vibes are good, people are chatting, and you’re trying to figure out who to connect with. Your best friend Alice, whose judgment you trust completely, introduces you to Bob, a builder you know nothing about. Then Bob intros you to Charles, and now you’re left wondering, “Can I trust Charles just because Bob does?”
Traditional trust algorithms like EigenTrust (unrelated to EigenLayer) were originally developed to calculate trust in file-sharing networks, like BitTorrent. However, these algorithms have often been misapplied to social trust scenarios, where they can be easily manipulated. Our new transitive trust algorithm avoids these pitfalls by ensuring that trust is earned and accurately reflected, even in complex and evolving relationships.
Note: This is an open-source developer tool built for the attestation community and beyond. It is not an algorithm built into EAS. The graph’s edges can be attestations or any other kind of trust relationship, although we think attestations make great edges!
The Transitive Trust algorithm evaluates trust between peers based on direct attestations. It ensures that endorsements are genuine, discourages fake endorsements and collusion, and prevents anyone from unfairly inflating their reputation from others’ perspectives.
Attestations are the foundation of the Transitive Trust algorithm and reputation systems because they provide a secure, verifiable, and decentralized way to capture and propagate trust. These onchain or offchain attestations ensure that trust is earned through genuine interactions, verifiably recorded, and resistant to manipulation—key to maintaining the network’s integrity.
Every attestation of trust from one peer to another should result in the recipient’s trust score either increasing or remaining the same from the perspective of every other peer in the network. This ensures that trust scores are stable and reflect genuine interactions.
Non-destructiveness encourages honest endorsements without the risk of reputation loss and prevents trust manipulation through strategic vouching.
Example: Picture Alice introducing you to Bob, and Bob introducing you to Charles. If Bob vouches for Charles, this should either increase or maintain Charles's trustworthiness in your eyes. Non-destructiveness ensures that trust scores only go up or stay the same, keeping relationships stable and reflective of genuine interactions.
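To make this concrete, here is a minimal TypeScript sketch. It is not the whitepaper’s algorithm, just a toy model in which trust flows along chains of endorsements and a peer’s score from your perspective is the strongest chain (the maximum product of edge weights). The names and weights are invented, and negative attestations are left out for brevity.

```ts
// Toy trust graph: g[a][b] is the weight (0, 1] of a's endorsement of b.
type Graph = Record<string, Record<string, number>>;

// Score of `target` from `observer`'s perspective: the strongest endorsement chain.
function score(g: Graph, observer: string, target: string, seen = new Set<string>()): number {
  if (observer === target) return 1;
  seen.add(observer);
  const edges: Record<string, number> = g[observer] ?? {};
  let best = 0;
  for (const [peer, weight] of Object.entries(edges)) {
    if (!seen.has(peer)) best = Math.max(best, weight * score(g, peer, target, new Set(seen)));
  }
  return best;
}

// You trust Alice, Alice trusts Bob. Charles is a stranger to you.
const before: Graph = { you: { alice: 0.9 }, alice: { bob: 0.8 } };
console.log(score(before, "you", "charles")); // 0 (no chain reaches Charles yet)

// Bob vouches for Charles: Charles's score from your perspective can only go up...
const after: Graph = { ...before, bob: { charles: 0.7 } };
console.log(score(after, "you", "charles")); // 0.9 * 0.8 * 0.7 ≈ 0.504

// ...and nobody else's score moved.
console.log(score(before, "you", "bob"), score(after, "you", "bob")); // ≈ 0.72 both times
```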
It’s important to know that transitive trust does allow for negative endorsements. Imagine Alice trusts Bob and vouches for him, which increases his trust score from her perspective. If Alice later discovers that Bob has acted dishonestly, she can attest to that as well, decreasing Bob's trust score from her perspective and those who trust her. This dual ability to provide positive and negative attestations ensures that trust scores are comprehensive and accurately reflect both positive and negative interactions.
No peer should be able to create another peer whose reputation is greater than that of the peer itself, and no collective should be able to create a peer whose reputation is greater than that of the collective’s most reputable member. In other words, no one can overwhelm the system by creating or joining malicious groups.
Example: Imagine Bob wants to be the most popular person at the event, so he starts high-fiving everyone, thinking it will make him more liked. He brings along two fake friends, Charles and James, and they all start high-fiving each other. In real life, this would be cringeworthy, but in a decentralized network, this is a common issue.
Existing systems like EigenTrust are vulnerable here because they assign an initial trust value to every node in the network. That means sybil resistance has to be solved separately, before nodes are ever admitted into the network.
Our Transitive Trust algorithm prevents manipulation by ensuring that trust scores are only influenced by genuine direct endorsements between distinct entities. Each peer's trust score is derived from the trust others have in them, without allowing anyone to inflate their own score through collusion. This keeps the trust scores authentic and reliable.
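Under the same toy model from the earlier sketch (again, an illustration rather than the whitepaper’s math), Bob’s clique of fake friends can never outrank Bob himself from your perspective, because every chain to them has to pass through him first:

```ts
// (Graph and score() repeated from the earlier sketch so this snippet runs on its own.)
type Graph = Record<string, Record<string, number>>;
function score(g: Graph, observer: string, target: string, seen = new Set<string>()): number {
  if (observer === target) return 1;
  seen.add(observer);
  const edges: Record<string, number> = g[observer] ?? {};
  let best = 0;
  for (const [peer, weight] of Object.entries(edges)) {
    if (!seen.has(peer)) best = Math.max(best, weight * score(g, peer, target, new Set(seen)));
  }
  return best;
}

// Bob brings two fake friends, and the clique vouches for each other at full strength.
const g: Graph = {
  you: { alice: 0.9 },
  alice: { bob: 0.8 },
  bob: { charles: 1, james: 1 },
  charles: { bob: 1, james: 1 },
  james: { bob: 1, charles: 1 },
};
console.log(score(g, "you", "bob"));     // ≈ 0.72
console.log(score(g, "you", "charles")); // ≈ 0.72, capped by Bob's own score
console.log(score(g, "you", "james"));   // ≈ 0.72, the clique can't exceed its best member
```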
No peer should be able to raise or lower its own trust score from the perspective of any other peer. Furthermore, no peer should be able to alter its standing relative to another peer from the perspective of a third observer. In other words, nobody should be able to change the amount of trust someone else places in them.
A balanced incentive structure ensures that trust scores are calculated from the perspective of each observer, reflecting their unique reasons for trust. This approach mirrors how trust works in the real world.
For example, if your best friend starts trusting someone new, it doesn’t mean you automatically trust your friend more because of that connection. Trust isn’t transferable in that way, and your own trustworthiness shouldn’t improve just because you endorse someone trustworthy. Existing systems often overlook this nuance, leading to skewed incentive structures.
Example: Here’s how it breaks down in existing systems. If you wanted to increase your score from the perspective of Alice, and you know that Alice likes Bob, then you could attest to Bob to increase your score from Alice’s perspective.
The transitive trust algorithm ensures that your standing with Alice remains unchanged unless Alice herself observes qualities in you that matter to her. This prevents a malicious actor from manipulating their reputation through strategic endorsements.
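The toy model also shows why this attack falls flat: your own outgoing attestations never create new chains that end at you, so attesting to Bob cannot move your score in Alice’s eyes. (As before, this is an illustrative stand-in, not the actual algorithm.)

```ts
// (Graph and score() repeated from the earlier sketches so this snippet runs on its own.)
type Graph = Record<string, Record<string, number>>;
function score(g: Graph, observer: string, target: string, seen = new Set<string>()): number {
  if (observer === target) return 1;
  seen.add(observer);
  const edges: Record<string, number> = g[observer] ?? {};
  let best = 0;
  for (const [peer, weight] of Object.entries(edges)) {
    if (!seen.has(peer)) best = Math.max(best, weight * score(g, peer, target, new Set(seen)));
  }
  return best;
}

// Alice trusts Bob strongly (0.8) but barely knows you (0.3).
const before: Graph = { alice: { bob: 0.8, you: 0.3 } };
console.log(score(before, "alice", "you")); // 0.3

// You attest to Bob, hoping to ride his reputation with Alice...
const after: Graph = { ...before, you: { bob: 1 } };
console.log(score(after, "alice", "you")); // still 0.3: your endorsements don't change how Alice sees you
```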
If your situation involves important decisions based on trust—especially in p2p environments where relationships are complex and evolving—the Transitive Trust algorithm helps ensure trust is genuinely earned and fairly represented. It’s particularly valuable in high-stakes scenarios where misplaced trust could be costly.
Sybil Resistance in Social Graphs: Enhancing proof of personhood and preventing fake identities in decentralized networks.
DAO Peer Discovery: Identifying trustworthy peers within DAOs for better decision-making.
Hiring Decisions: Using attestations to evaluate and hire candidates based on the qualities you care about.
Decentralized Marketplaces: Protecting buyers by accurately reflecting seller reputations.
Access-Based Systems: Managing engagement and access rights based on trust scores.
Crowdsourcing Reliable Information: Filtering and prioritizing relevant projects or knowledge based on community trust.
Oracle Reputation Scoring: Evaluating the trustworthiness of oracle providers.
Content Moderation: Filtering content in social networks by assessing the trustworthiness of creators relative to consumers.
P2P Lending Pools: Assessing borrower trustworthiness to reduce risk in decentralized lending platforms.
Dispute Resolution: Selecting reliable mediators for resolving conflicts in decentralized systems.
Open Source Contributions: Evaluating developer reliability and code quality based on community trust.
Smart Contract Audits: Weighing audits based on auditors’ reputations and your network’s trust in them.
AI Agent Reputation: Calculating reputation scores for AI agents based on the tasks they complete and the endorsements they earn.
We’ve posted the whitepaper below and are looking for your comments. It’s math-heavy, so if you enjoy complex math, dive in! If math isn’t your thing, we’d still love your feedback on the problems you’ve faced or opportunities you see for reputation systems.
Access the whitepaper here: https://attest.org/Transitive-Trust.pdf. Contribute by attesting your feedback or comments (see how below).
After finalizing the paper and reviewing comments, we’ll launch developer tools to make running the algorithm simple to integrate. Our target launch is in September 2024.
Follow these steps to attest your feedback and comments on the paper. Our goal is to create a clear line of provenance from the original paper to the contributions others make to its growth over time.
We’re excited to get comments and general feedback from the community. The current draft of the whitepaper has a file hash of: 0xeacdf34e225304e45770e4e10cc1e39f7462829b83579cd7c2e0e562a467b655
You can validate this by going to the EAS Explorer and uploading the same file downloaded from above.
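If you’d rather cross-check the hash locally before uploading anything, here’s a small sketch. Since we don’t want to assume which hash function the Explorer applies, it prints both SHA-256 and Keccak-256; the Explorer’s own verification remains the source of truth.

```ts
import { readFileSync } from "node:fs";
import { keccak256, sha256 } from "ethers";

// Hash the downloaded PDF locally and compare against the published hash.
const published = "0xeacdf34e225304e45770e4e10cc1e39f7462829b83579cd7c2e0e562a467b655";
const bytes = readFileSync("Transitive-Trust.pdf");

const digests = { sha256: sha256(bytes), keccak256: keccak256(bytes) };
console.log(digests);
console.log("matches published hash:", Object.values(digests).includes(published));
```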
Use the Explorer link below. It will take you to a schema used to attest a contribution as a string; it references the original UID of the contentHash in the Referenced Attestation field. Note: You do not need to add a recipient. Just type in your contribution, feedback, or comment as a string and make sure to attest onchain.
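For developers who would rather attest from a script than the Explorer UI, here’s a hedged sketch using the EAS SDK. The contract address, schema UID, field name, and referenced UID below are placeholders; copy the exact values from the Explorer link and schema page.

```ts
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import { JsonRpcProvider, Wallet } from "ethers";

async function attestFeedback(feedback: string) {
  // Placeholders: use the EAS contract address for your chain.
  const eas = new EAS("0x...EAS_CONTRACT_ADDRESS");
  const signer = new Wallet(process.env.PRIVATE_KEY!, new JsonRpcProvider(process.env.RPC_URL));
  eas.connect(signer);

  // Hypothetical field name: use the exact schema string shown on the schema page.
  const encoder = new SchemaEncoder("string contribution");
  const data = encoder.encodeData([{ name: "contribution", value: feedback, type: "string" }]);

  const tx = await eas.attest({
    schema: "0x...SCHEMA_UID_FROM_THE_EXPLORER",
    data: {
      recipient: "0x0000000000000000000000000000000000000000", // no recipient needed
      expirationTime: 0n,                                      // never expires
      revocable: true,
      refUID: "0x...UID_OF_THE_WHITEPAPER_CONTENT_HASH",       // ties your feedback to the paper
      data,
    },
  });
  console.log("attestation UID:", await tx.wait());
}

attestFeedback("Section 3: consider adding a worked example of the propagation step.");
```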
Keep attesting. Keep building.