On-Chain Reputation, Soulbound Tokens, and the Problem of Deepfakes

Introduction

In Weyl, Ohlhaver, and Buterin’s (2022) Decentralized Society: Finding Web3’s Soul, the authors propose the idea of “soulbound tokens” (hereafter, “SBTs”) and argue that this technology could be instrumental in shaping future forms of sociality and human coordination. (Consider the ease of implementing undercollateralised lending and borrowing in Decentralised Finance [DeFi], or more basic applications like paying instalments or bills in a trustless and permissionless way, once we establish a robust system of on-chain identity and reputation.) SBTs are not limited to financial applications, however. Of particular note for teachers of critical information literacy is the authors’ suggestion that SBTs can constitute a novel solution to the epistemological problem of deepfakes.

Souls, Soulbound Tokens, and the Problem of Deepfakes

Souls and Soulbound Tokens

The authors define “souls” and “soulbound tokens” as follows:

“Our key primitive is accounts, or wallets, that hold publicly visible, non-transferable (but possibly revocable-by-the-issuer) tokens. We refer to the accounts as “Souls” and tokens held by the accounts as “Soulbound Tokens” (SBTs).” (p. 2, emphases Weyl et al.’s)

Put another way, each wallet constitutes a “soul”, and the non-transferable tokens in these wallets, which represent credentials such as educational or employment history, memberships, and art collections, are the aforementioned “SBTs”.
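
As a rough sketch of this primitive (my own illustration, not code from the paper), a soul is simply an account address holding a set of tokens it cannot transfer; the class names and fields below are assumptions made purely for exposition:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SBT:
    """A non-transferable credential bound to a soul."""
    issuer: str       # address of the issuing soul (e.g. a university or employer)
    holder: str       # address of the soul that holds the credential
    claim: str        # what is attested, e.g. "BA Philosophy, 2020"
    revoked: bool = False

@dataclass
class Soul:
    """An account/wallet whose publicly visible SBTs make up its social identity."""
    address: str
    tokens: list[SBT] = field(default_factory=list)

    def issue_to(self, other: "Soul", claim: str) -> SBT:
        """Issue a new SBT to another soul; the token stays bound to that holder."""
        token = SBT(issuer=self.address, holder=other.address, claim=claim)
        other.tokens.append(token)
        return token

    # Deliberately no transfer method: non-transferability is what distinguishes
    # SBTs from ordinary tokens. Revocation, where supported, would be performed
    # by the issuer rather than the holder.
```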

The Problem of Deepfakes

Deepfake technology has been improving rapidly, raising serious epistemological and ethical problems about the veracity of information in our digital age. It is not difficult to imagine morally repugnant applications: doctored images or videos of influential political figures or of our loved ones can compel us to act in rash, unexpected, and irrational ways, potentially harming ourselves or the people around us. We ought to anticipate the challenges that deepfakes may bring to society and theorise ways to address them.

SBTs and the Problem of Deepfakes

How SBTs Can Tackle the Problem of Deepfakes

Prima facie, SBTs seem like an obvious and straightforward answer to the problem of deepfakes. As the authors put it,

Applications [of SBTs] extend beyond art, to services, rentals, and any market built on scarcity, reputation, or authenticity. An example of the latter is verifying the authenticity of purported factual recordings, such as photographs and videos. With advances in deep fake technology, direct inspection by both humans and algorithms will increasingly fail to detect veracity. While blockchain inclusion enables us to trace the time a particular work was made, SBTs would enable us to trace the social provenance, giving us rich social context to the Soul that issued the work — their constellation of memberships, affiliations, credentials — and their social distance to the subject. “Deep fakes” could be readily identified as those artifacts originated outside of time and social context, while trusted artifacts (like photographs) would emerge from the attestation of reputable photographers. Whereas present technology de-contextualizes cultural products (like pictures) and opens them to unchecked, viral attacks lacking social context, SBTs can recontextualize such objects and empower Souls to take advantage of trust relationships already present within communities as a meaningful backstop to protect reputation.

Thus, in the absence of reliable deepfake-detection technology, the rich social context and history that emerge from a sufficiently developed network of souls and their accompanying SBTs could offer a way to track and verify the origins and authenticity of information without resorting to dystopian forms of tracking or social credit systems. Soul hacking (impersonation or wallet-hacking through social engineering) aside, the authors’ proposal seems promising.
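
To make “social provenance” a little more concrete, here is a minimal sketch of the kind of heuristic a verifier might run, reusing the toy Soul and SBT classes above; the scoring rule, the trusted-issuer list, and the notion of social distance are my own illustrative assumptions, not a mechanism specified by Weyl et al.:

```python
def social_provenance_score(attesting_soul: Soul,
                            trusted_issuers: set[str],
                            social_distance_to_subject: int) -> float:
    """Crude heuristic: how much social context backs an attested artifact?

    Counts unrevoked SBTs issued by communities we already trust, then
    discounts by the attesting soul's social distance to the subject of
    the work. All weights and thresholds here are arbitrary placeholders.
    """
    attestations = sum(
        1 for t in attesting_soul.tokens
        if not t.revoked and t.issuer in trusted_issuers
    )
    if attestations == 0:
        # A fresh wallet with no relevant credentials: the "outside of time
        # and social context" case the authors describe.
        return 0.0
    return attestations / (1 + social_distance_to_subject)
```

On this toy model, a brand-new wallet attesting to a sensational video scores zero, while a photographer’s soul holding credentials from press bodies we already trust scores higher the closer it stands to the subject.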

Nevertheless, the long-term success of SBTs in tackling the problem of deepfakes would largely depend on a few factors.

Factors Affecting the Success of SBTs in Addressing the Problem of Deepfakes

(Note that the following list is non-exhaustive; I list only the problems that appear, by my lights, to be the most fundamental or challenging.)

1. Rate (and Ease) of Adoption

First, whether SBTs can successfully address the problem of deepfakes depends very much on how ubiquitous the ecosystem becomes and whether enough content consumers and producers (both individuals and organisations) adopt the system for there to be a rich enough social web from which to make normative claims about the information therein. Without widespread adoption, it is unlikely that SBTs can perform the function of regulating disinformation. Chain-agnostic decentralised alternatives like verifiable credentials therefore look like close competitors that can also mitigate the effects of disinformation, all without requiring mass participation on the blockchain. Regardless, more work needs to be done to determine which system (there could be alternatives yet to be explored) would be more fit for purpose, or whether there is a case for hybrid systems.
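
For contrast, a verifiable-credential-style attestation needs only an issuer’s signature over a claim and no blockchain inclusion at all. The sketch below uses a symmetric HMAC purely for brevity; real verifiable credentials use public-key signatures and a standardised data model, and the identifiers shown are hypothetical:

```python
import hashlib
import hmac
import json

def sign_credential(issuer_key: bytes, credential: dict) -> str:
    """Issuer signs a claim about a subject; the holder can later present the
    credential plus signature to any verifier, no chain required."""
    payload = json.dumps(credential, sort_keys=True).encode()
    return hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()

def verify_credential(issuer_key: bytes, credential: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_credential(issuer_key, credential), signature)

credential = {
    "issuer": "did:example:press-agency",       # hypothetical identifier
    "subject": "did:example:photographer-42",
    "claim": "accredited photojournalist, 2024",
}
issuer_key = b"issuer-secret"                   # stand-in for a real keypair
signature = sign_credential(issuer_key, credential)
assert verify_credential(issuer_key, credential, signature)
```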

2. Speed of Intervention

The main objective of identifying deepfakes is to disrupt their propagation in order to prevent or minimise the harm they can cause. A measure of how successful an anti-disinformation system is, therefore, is how quickly it can flag suspicious content, verify or falsify it, and remove it before further harm is done. It would be a further merit of such a system if it could reach those who have already consumed the disinformation and issue a warning or notice of falsehood. This may be especially important since consumers are unlikely to return to the source of disinformation once they have first consumed the material.
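
One way to make “speed of intervention” measurable is to treat it as a pipeline whose end-to-end latency and notification coverage we can record: flag, verify, remove, then warn prior viewers. The stages and function names below are my own framing rather than any proposed standard:

```python
import time

def handle_suspicious_content(item, flag, verify, remove, notify_prior_viewers):
    """Run the intervention pipeline and report how long it took and how many
    previously exposed consumers were reached with a notice of falsehood."""
    start = time.monotonic()
    flag(item)                                   # surface the item for review
    if verify(item):                             # provenance / social-context check
        reached = 0                              # authentic: nothing to retract
    else:
        remove(item)                             # halt further propagation
        reached = notify_prior_viewers(item)     # warn people already exposed
    return {"latency_seconds": time.monotonic() - start,
            "prior_viewers_reached": reached}
```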

3. Epistemic Injustice and Discrimination

In designing SBT systems that can measure the social context and provenance of a piece of content and its likelihood of being insidiously deceitful, assumptions and correlations are inevitably drawn between certain credentials and the propensity of a soul holding them to behave in certain “desirable” or “undesirable” ways. The problem with measuring such probabilities and making predictions from them is the epistemic injustice, discrimination, and inequality that such an ostensibly objective system can unwittingly perpetuate.

Leaving aside egregious cases where a brand-new “soul” (that is, a wallet) is created for the sole purpose of propagating a particular piece of disinformation, we can see how souls with SBTs that signal membership in historically marginalised or disadvantaged minority groups are at a theoretically higher risk of being flagged or investigated. Thus, unless we intentionally segregate information about a soul’s membership in social categories like race and gender, systemic injustices and prejudices would naturally find their way into our code. This is not to say that stereotyping is never morally or epistemically permissible or justified (see Beeghly’s article for more on this topic); we just have to be mindful that a transition to new technologies does not, ipso facto, mean that existing conundrums in social epistemology will thereby be resolved.
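
One blunt mitigation this paragraph gestures at is to exclude such signals from the feature set before any risk model ever sees them. A hedged sketch, again reusing the toy classes above, with an obviously simplistic keyword filter standing in for a real policy:

```python
# Illustrative only: claims a scorer is not allowed to condition on.
PROTECTED_KEYWORDS = {"race", "gender", "religion", "ethnicity"}

def eligible_features(soul: Soul) -> list[str]:
    """Drop SBTs whose claims signal membership in protected categories, so a
    downstream risk model cannot directly condition on them. Note that proxies
    for these categories can still leak through other credentials."""
    return [
        t.claim for t in soul.tokens
        if not t.revoked
        and not any(keyword in t.claim.lower() for keyword in PROTECTED_KEYWORDS)
    ]
```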

4. Defining Desirability and Undesirability in Volatile Social Environments

In the previous point, I alluded to how we would need to draw correlations between a soul’s ownership of certain SBTs and how likely it is to perform certain “desirable” or “undesirable” epistemic actions. An assumption packed into that claim is that we can define, encode, and calibrate notions of “desirability” and “undesirability” in the first place, and do so fast enough in ultra-fluid, ever-changing social contexts. At first pass, we can rely on some common-sense or folk notion of desirability, though more fine-tuning would be needed for niche or high-stakes cases of disinformation. (I suspect, however, that it is precisely the niche and high-stakes cases that interest us in this discussion.)

5. Philosophical Approach to Remediation

Another key factor that may affect the long-term success of SBTs in tackling the problem of deepfakes is the stance we take towards souls who propagate disinformation. (Here, I assume that we give the benefit of the doubt to souls who spread misinformation, though minor penalties or negative sanctions may still apply.) In particular, it matters whether we adopt a more retributive or a more rehabilitative approach: the former risks precluding the possibility of reform, while the latter risks recidivism. The answer, as is often the case, lies somewhere in the middle, and our task is to seek that balance.
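
The retributive/rehabilitative choice can be expressed as a single design parameter: whether a disinformation penalty on a soul’s reputation persists indefinitely or decays as time passes without reoffending. A minimal sketch, with an arbitrary half-life chosen purely for illustration:

```python
import math

def penalty_weight(days_since_offence: float,
                   rehabilitative: bool,
                   half_life_days: float = 180.0) -> float:
    """Weight of a past disinformation penalty on a soul's reputation.

    Retribution-leaning: the mark never fades (weight stays at 1.0).
    Rehabilitation-leaning: it decays exponentially, allowing reform but
    accepting some risk of recidivism. The half-life is illustrative only.
    """
    if not rehabilitative:
        return 1.0
    return math.exp(-math.log(2) * days_since_offence / half_life_days)
```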

Conclusion

Hitherto, I have considered a few key factors that may affect the efficacy of SBTs in addressing the problem of deepfakes. To summarise, a satisfactory system for regulating disinformation needs to be widely adopted, efficient in identifying and halting the spread of disinformation, and able to minimise harm by eschewing discriminatory vetting processes. On a more fundamental level, we also need to think about (i) how we define desirability in the first place and (ii) whether we adopt a retribution-leaning or rehabilitation-leaning approach towards souls who propagate disinformation.


Featured image by Vackground on Unsplash.
