Towards a blueprint for planetary-scale cryptomedia
Illustration by Merijn Hos.



synopsis² aka "is this worth my time?" (bc attn is increasingly the scarcest resource):

[1] intro → [2] framework → [3.0] combinatorial cryptomedia → [3.1] primordial buzzword soup → [3.2] humanity’s mood ring → [3.3] computational market democracy

[1]: humanity is a collective, planetary-scale force. we need planetary-scale media. this media needs to be decentralized → planetary-scale cryptomedia

[2]: scalability, decentralization, and agency/interactivity (s/d/a) are key attributes of digital media. modern media is attempting to simulmax all three attributes. the platonic ideal of an open metaverse is the simulmax of s/d/a.

[3.0]: s/d/a simulmax for media requires ai/ml and crypto. s/d/a simulmax = open metaverse = digital representation of Jung's concept of "collective unconscious." refik anadol is currently the best example of an artist using ai/ml techniques to scale collective representation within media.

[3.1]: toys become big things. ppl are currently combining ai x nft x crypto into toys like iNFTs and AI-generated cryptomedia. these toys are currently filed under "gaming" and "art" but will be taken more seriously later.

[3.2]: imagine a real-time representation of humanity's collective mood, accomplished through some combination of homomorphic encryption + federated learning + NLP + RTS. precedents for this already exist, and HMR is just an obvious evolution along the s/d/a framework.

[3.3]: iNFTs (or other mechanisms for self-sovereign personal digital agents) might eventually be used for digital democracy. interfacing digital agents with markets (fiat & crypto) is already possible. digital agent interop w/ the State's govt interfaces will require time for widespread discussion about the legitimacy of digital democracy.

Synopsis:

[1] Introduction

The story of the 20th and 21st centuries is a story about humanity's realization and continued reconciliation of its status as an interconnected, planetary-scale force. Benjamin Bratton posits that "Planetary-Scale Computation should be understood as the means of and for the liberation and articulation of public reason, collective intelligence and technical abstraction as collective self-composition." This goal provides an imperative for a collective media (i.e., "any extension of ourselves", in the words of Marshall McLuhan) that uses planetary-scale computation to help us better know (sense, feel, understand, etc.) ourselves; this media must be decentralized in order to scale to 7.9 billion people.

[2] Framework

Media and media platforms can be placed within a framework of scalability, decentralization, and agency analogous to the Scalability Trilemma for blockchains. Media evolution in the digital age of the 21st century has historically tended towards greater scalability and greater user agency/interactivity; only recently has evolution towards greater decentralization taken place. The platonic ideal of the "Open Metaverse" is the simultaneous maximization of scalability, decentralization, and agency, and therefore requires crypto (i.e., cryptoeconomic mechanisms, cryptographic protocols) to serve as the economic membrane for both decentralization and a scale that cannot be achieved in centralized models, which become user-extractive over time. AI and crypto can solve the social scalability problems of our global media ecosystem: Web3's promise is that of social scalability, that is, to increase "the number of people who can beneficially participate in the institution" of our public digital space.

[3.0] Combinatorial Cryptomedia

The simultaneous maximization of scalability, decentralization, and agency for media and media platforms requires further development and adoption of crypto and AI/ML-based innovations in social scalability. AI/ML techniques are already being applied to represent collective psychic processes by artists like Refik Anadol, who is actualizing the concepts of the "collective unconscious" (Carl Jung) and the "collective memory" (Maurice Halbwachs) within his artworks.

[3.1] Primordial Buzzword Soup: Crypto, NFTs, AI, and the Metaverse

The actualization of the Open Metaverse and the representation of our collective psychic processes within digital media turn out to be the same act. The toys that will become the next big things are being created through combinatorial experimentation with primitives within our zeitgeist’s primordial buzzword soup (crypto! NFTs! AI! metaverse!). We will be focusing on the toys emerging at the intersection of cryptomedia (of which I consider NFTs a subcategory) and AI/ML, namely iNFTs and AI-generated cryptomedia. This particular track of media evolution corresponds to a rightward movement in the upper half of my framework, in which already-decentralized cryptomedia are seeking to increase scalability and agency (NFTs → iNFTs and dNFTs → multisig i/dNFTs → fully decentralized cryptomedia). The synthesis of crypto and AI challenges characterizations of these technologies that map them along opposite ends of the centralization-decentralization spectrum. Some continually advancing tech primitives (both crypto and non-crypto) that hold promise in helping achieve S/D/A simulmax include zkSTARKs/zkSNARKs, timelock encryption and VDFs, GPT+, Merkle-CRDTs, further adoption of real-time streaming (RTS) architectures, dynamic bonding curves, and crypto-native serverless.

[3.2] Humanity's Mood Ring

Towards a Blueprint: [real-time streaming (RTS) architecture] + [VQGAN+CLIP OR NLP Sentiment Visualizer] + [federated learning] + [homomorphic encryption] + [DAO] + [Web3 social media/messaging] + [ZKPs] + [anti-Sybil mechanisms: some combination of token-curated registries, Proof-of-Humanity, decentralized trust graphs, "DAO for verifying humans"]

Imagine wearing a mood ring that reflects the current collective mood of humanity — this is Humanity's Mood Ring (HMR). HMR has multiple existing precedents and is close to being, if not already, possible from a technical standpoint (though social adoption will lag as concepts like federated learning, differential privacy, and crypto permeate the Overton Window).

[3.3] Computational Market Democracy

Towards a Blueprint: [self-sovereign digital agents: iNFTs] + [interoperability with interfaces for markets & governance] + [Distributed/Decentralized/Mesh C/N/S] + [continued advances in AI/ML]

A preliminary exploration of the intersection of direct, digital [market] democracy and personal digital agents, along with speculative visions of how CMD could manifest, benchmarked against the ways this prospective future is already being built.


[1] Introduction

from The Red Book by Carl G. Jung

"The decisive question for man is: Is he related to something infinite or not? That is the telling question of his life. Only if we know that the thing which truly matters is the infinite can we avoid fixing our interests upon futilities, and upon all kinds of goals which are not of real importance."
Carl G. Jung, Memories, Dreams, Reflections

“ … in the last analysis what is the fate of great nations but a summation of the psychic changes in individuals?”
Carl G. Jung, The Archetypes and the Collective Unconscious

The story of the 20th and 21st centuries is a story about humanity's realization and continued reconciliation of its status as an interconnected, planetary-scale force. The formalization of this relatively recent shared reality is a continuing process: for example, the Anthropocene Working Group of the International Commission on Stratigraphy is currently seeking to identify a definitive geological marker based on radionuclides from atomic detonations from 1945 to 1963 in order to formalize the beginning of the geologic epoch of the Anthropocene, defined "as the period of time during which human activities have had an environmental impact on the Earth." Although the specific, defining "moment" of this collective realization will remain a topic for retrospective narrativization, the mass articulation of the concept of mutually assured destruction via full-scale nuclear war throughout the Cold War era is clearly a prime candidate. Since the end of the Cold War, our global "Contemporary Period" has tautologically been defined by those happenings that highlight the unmistakably interconnected nature of humanity: global contagion and virality take on financial, ecological, memetic/informatic, and, as has been made evident to us most recently in the form of COVID-19, biomaterialistic forms. As I put the finishing touches on this piece on November 26th, a new coronavirus variant reminds us, yet again, of the inescapably interconnected nature of our modern existence.

From What is Planetary-Scale Computation For? by Benjamin Bratton:

[8:00] Benjamin: The very concept of climate change itself, not the chemical and ecological phenomena, but the concept of climate change (the model of the statistical regularity that we refer to as "climate change") is itself an epistemological accomplishment of planetary-scale computation. Without that sensing, modeling, calculation, simulation apparatus, from satellites to temperature [sensing] to the supercomputing simulations, the very idea of climate change itself would not have been provided. [It] would not have been possible.

What is Planetary-Scale Computation For? — Benjamin Bratton (22:15)

The goal of "liberation and articulation of public reason, collective intelligence and technical abstraction" provides an imperative for media (i.e. "any extension of ourselves", in the words of Marshall McLuhan) that helps the collective better know (sense and understand) itself. Towards a blueprint for planetary-scale cryptomedia is a first swipe at developing a framework for thinking about how the ongoing development of decentralized media through crypto (i.e., cryptoeconomic mechanisms and cryptographic protocols) might enable humanity to better know itself as this class of technologies (i.e., cryptomedia) continues to be developed and adopted. A breakdown of this work is as follows:

  • "Towards ...": I recognize that the scope of this work is extremely ambitious (some might argue hubristic), so I'll be happy to have been directionally correct.
  • "... a blueprint for ...": "Blueprint" because I will point out the actual or near existence of modules that are already making planetary-scale cryptomedia a reality.
  • " ... planetary-scale ...": Borrowed from Benjamin Bratton's concept of planetary-scale computation. For our purposes, that cryptomedia is "planetary-scale" means that it is technically and socially scalable across the 7.9+ billion people living on Earth.
  • " ... cryptomedia": Jacob Horne (ZORA co-founder and from whom I co-opt the term "cryptomedia") has defined cryptomedia as a "medium for anyone on the internet to create universally accessible and individual ownable hypermedia." NFTs are a subset of cryptomedia.

[2] The Framework

The three dimensions of media in this framework are analogous to the three properties of blockchains implied by the Scalability Trilemma.


This framework places media and media platforms along three dimensions:

  • x-axis: Scalability
  • y-axis: Decentralization
  • node shape: Agency / Interactivity / Degrees of Freedom

Broadly speaking, Web 2.0 media companies sought to maximize some combination of the scalability of their media assets and the level of agency offered to the user/consumer/player. While jpeg/mp3/mp4 files are nearly infinitely scalable, in that the same mp4 file can be replicated and distributed at zero marginal cost, the affordances for interaction possible in these mediums are scant. On the other hand, while video games like Minecraft and Fortnite are highly interactive and afford higher degrees of freedom, the scalability of multiplayer video games across large numbers of players is limited by technical constraints in synchronizing interactions and dependencies between distributed clients (players). It was only relatively recently that the idea of another dimension of consideration for media, namely decentralization, became widespread with the cultural rise of NFTs and "Web3".
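As a minimal sketch, the framework can be encoded as a toy data structure; the media entries and scores below are invented for illustration, not taken from the original figure:

```python
from dataclasses import dataclass

@dataclass
class Medium:
    name: str
    scalability: float       # x-axis, 0.0 to 1.0
    decentralization: float  # y-axis, 0.0 to 1.0
    agency: float            # node shape: degrees of freedom, 0.0 to 1.0

# Illustrative, hand-assigned scores.
media = [
    Medium("mp4 file", 0.9, 0.3, 0.1),
    Medium("Minecraft server", 0.3, 0.2, 0.9),
    Medium("NFT", 0.4, 0.8, 0.2),
    Medium("Open Metaverse (platonic ideal)", 1.0, 1.0, 1.0),
]

# "Simulmax" favors the medium whose weakest dimension is strongest,
# rather than one that excels on a single axis.
simulmax = max(media, key=lambda m: min(m.scalability, m.decentralization, m.agency))
print(simulmax.name)
```

The `min`-then-`max` scoring captures the essay's point that maximizing one axis at the expense of the others (a scalable but centralized mp4, an interactive but unscalable game server) falls short of the ideal.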

The platonic ideal of the "Open Metaverse" exists as a circle in the top right corner of my proposed framework, thereby representing media that simultaneously maximizes scalability, decentralization, and agency. That the establishment of a Metaverse requires crypto to serve as a robust economic membrane can be explained through this framework by interpreting Web3 as a reaction to the limitations of scalability achieved through centralization: the centralized nature of Web 2.0 platforms limits both the extensity and intensity of participation in spaces that disproportionately extract from users. Web3's promise is social scalability, that is, to increase "the number of people who can beneficially participate in the institution" of our public digital space.

from Money, blockchains, and social scalability by Nick Szabo:

Social scalability is the ability of an institution –- a relationship or shared endeavor, in which multiple people repeatedly participate, and featuring customs, rules, or other features which constrain or motivate participants’ behaviors -- to overcome shortcomings in human minds and in the motivating or constraining aspects of said institution that limit who or how many can successfully participate. Social scalability is about the ways and extents to which participants can think about and respond to institutions and fellow participants as the variety and numbers of participants in those institutions or relationships grow. It's about human limitations, not about technological limitations or physical resource constraints.

Even though social scalability is about the cognitive limitations and behavior tendencies of minds, not about the physical resource limitations of machines, it makes eminent sense, and indeed is often crucial, to think and talk about the social scalability of a technology that facilitates an institution. The social scalability of an institutional technology depends on how that technology constrains or motivates participation in that institution, including protection of participants and the institution itself from harmful participation or attack. One way to estimate the social scalability of an institutional technology is by the number of people who can beneficially participate in the institution. Another way to estimate social scalability is by the extra benefits and harms an institution bestows or imposes on participants, before, for cognitive or behavioral reasons, the expected costs and other harms of participating in an institution grow faster than its benefits.

The crux of the current iteration of this Web3/crypto/Metaverse buzzword zeitgeist can be boiled down to one simple question: What does a socially scalable planetary-scale media that trustlessly affords individuals agency and expressivity look like? Or, put simply, how do you get 7.9 billion people in the same room together?

If we were to define that "room" as a very large physical building then Tim Urban has shown us that cramming 7+ billion people together is possible, but "beneficial participation" would probably be impossible under such conditions. We could dematerialize our definition of "room" to a single voicecall between all 7.9 billion of us but, as anyone who has ever played COD on Xbox Live knows, beneficial participation would still be impossible. A single chatroom, even if we suspended the laws of physics and assumed perfect synchronization with no latency across 7.9 billion connected devices, wouldn't be socially scalable for similar reasons. These are instances in which affordances for individuals' agency in shared spaces result in chaos.

If we dial back the individual affordances for agency/DOF/interactivity to allowing each person the bare minimum of communicating a "0" or "1" by flipping a single, pre-assigned pixel on a single 88,882 x 88,882 (= ~7.9 billion) grid then, even assuming that we've solved the proof of Humanity problem and that there's sufficient trust in the backend host(s), we still haven't scaled beneficial participation.
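The grid arithmetic above checks out; here is a quick sanity check using the essay's population figure:

```python
import math

world_population = 7_900_000_000

# Side length of the smallest square grid with at least one pixel per person.
side = math.isqrt(world_population)
if side * side < world_population:
    side += 1

print(side, side * side)
```

With 7.9 billion people, the smallest sufficient square is indeed 88,882 pixels on a side (88,881² falls just short of the population).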

We can dial up individual affordances for expression/agency in digital space to its present maximum, in the form of a hypothetically perfectly decentralized and trustless Minecraft server instance with zero latency across 7.9 billion distributed devices, but the experience would probably devolve into a virtual rave at the Schelling point of (x=0, y=0, z=0). Needless to say, neither the addition of a global chatroom nor voicechat (whether proximity-based or global) would enable scalable beneficial participation.


[3.0] Combinatorial Cryptomedia

Innovations in social scalability involve institutional and technological improvements that move function from mind to paper or mind to machine, lowering cognitive costs while increasing the value of information flowing between minds, reducing vulnerability, and/or searching for and discovering new and mutually beneficial participants.
Nick Szabo, Money, blockchains, and social scalability

The manifestation of the "Open Metaverse", abstractly conceptualized as the simultaneous maximization of scalability, decentralization, and agency under my proposed framework, requires further development and adoption of crypto and AI/ML-based "Innovations in social scalability" that "move function from ... mind to machine", thereby "lowering cognitive costs while increasing the value of information flowing between minds, reducing vulnerability" while also "searching for and discovering new and mutually beneficial participants".

"Unsupervised" alludes to the use of unsupervised machine learning algorithms. Refik is known to use the UMAP dimensionality reduction technique to create works such as these. UMAP belongs to the class of unsupervised ML techniques known as dimensionality reduction techniques, a class that also includes PCA and t-SNE.

The new media artworks of Refik Anadol represent the current cutting edge of the application of AI/ML methods to vast amounts of data in order to create art. While no human has the capability to meaningfully process the output of a hypothetical chatroom with 7.9 billion concurrent participants, machine learning algorithms can. Refik's "Data Universe" uses UMAP to reduce the 138,151 records in the MoMA research dataset down to 7 dimensions (x, y, z, r, g, b, t), where (x, y, z) represent coordinates in a 3-dimensional space, (r, g, b) represent red/green/blue, and (t) represents time. The scale at which he's thinking about applying his techniques is made evident by his body of work and in statements such as those made in a recent interview with the MoMA and Feral File, in which they speak about Refik's ongoing MoMA exhibition and corresponding NFT sales.

Refik: The first month of my residency at AMI, I found a wonderful open-source cultural archive in Istanbul, called SALT, with 1.7 million documents. Seeing these documents inspired me to think about how I could use my training in both AI and visual arts to creatively engage with vast archives of human experience. Could we apply AI algorithms to a library that is open to everyone?

Refik: For me, art reflects humanity’s capacity for imagination. And if I push my compass to the edge of imagination, I find myself well connected with the machines, with the archives, with knowledge, and the collective memories of humanity.

While Refik's use of advanced AI/ML methods in his ambition to artistically represent "the collective memories of humanity" is cutting edge, neither the idea of a collective psychic process nor the application of technology to actually represent said process is new.
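The general shape of this pipeline, reducing high-dimensional archive records down to spatial and colour coordinates, can be sketched dependency-free with a random linear projection (a crude stand-in for UMAP, which requires the umap-learn package; the archive here is synthetic):

```python
import random

random.seed(0)

# Stand-in archive: 200 records, each embedded in 64 dimensions.
# (Refik's "Data Universe" reduces 138,151 MoMA records with UMAP.)
records = [[random.gauss(0, 1) for _ in range(64)] for _ in range(200)]

# A random 64x6 projection matrix: each record maps to six numbers,
# read as (x, y, z) position and (r, g, b) colour. The seventh output
# dimension, time (t), comes from animating the projection.
proj = [[random.gauss(0, 1) for _ in range(6)] for _ in range(64)]

def reduce(record):
    """Project one 64-dim record down to 6 dimensions."""
    return [sum(v * row[j] for v, row in zip(record, proj)) for j in range(6)]

coords = [reduce(r) for r in records]
print(len(coords), len(coords[0]))
```

UMAP improves on such a blind projection by preserving neighbourhood structure, so that similar records land near each other in the reduced space; the input/output shapes are the same.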

The underlying idea goes at least as far back as Carl G. Jung's conception of the collective unconscious in a (now lost) 1916 essay, "The Structure of the Unconscious." More notable, however, was Jung's "The Archetypes and the Collective Unconscious" (1959), published two years prior to Jung's death in 1961. Though published decades before the rise of the Internet, his commentary resonates in our digital age now more than ever:

“The mirror does not flatter, it faithfully shows whatever looks into it; namely, the face we never show to the world because we cover it with the persona, the mask of the actor.”
C.G. Jung, The Archetypes and the Collective Unconscious (1959)

The practice of applying technology to represent collective psychic processes goes as far back as Maurizio Bolognini's Collective Intelligence Machines series, which began in the wake of the peak of the Dot-com era in 2000.

From Programmed Machines: Infinity and Identity (Dec 2004) by Maurizio Bolognini:

I would like to clarify this aspect by pointing out the ways in which results that are out of my control (in most cases images) are generated in my works. Delegating this process to a device is possible by adopting two different approaches:

  1. the use of algorithms capable of making random choices (randomisation): any computer can generate pseudo-random numbers starting from a given numerical series which can be activated from various points each defined by a random event (for example, time measured in milliseconds);
  2. the introduction of an evolutionary principle which transfers intelligence to the system; this can be done in two further ways: through the application of artificial intelligence or collective intelligence. In the former case, programming techniques (genetic algorithms, neural networks etc.) are used to develop different possible solutions according to their fitness to given objectives. In the latter case, procedures are applied which enable the public to interact and become part of the device.
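Bolognini's first approach, randomisation activated by a random event, can be sketched in a few lines (a hypothetical miniature, not his actual code):

```python
import random
import time

# Approach (1): a pseudo-random series activated by a random event --
# here, as in Bolognini's example, the current time in milliseconds.
seed = int(time.time() * 1000)
rng = random.Random(seed)

# A small "image" the artist never chose: a 4x4 grid of grey values.
image = [[rng.randrange(256) for _ in range(4)] for _ in range(4)]
for row in image:
    print(row)
```

The artist specifies only the generating procedure; the result, delegated to the device, is out of their control.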

So while neither the idea nor the practice is new, the progress made between 1916, 2000, and 2021 in a suite of technologies that can be composed to simultaneously solve for scalability, decentralization, and agency is making the full manifestation of media that captures humanity's collective psychic processes a real, imminent possibility.

72 hour timelapse of Place (Reddit) (2017); Source: When Pixels Collide

[3.1] Primordial Buzzword Soup: Crypto, NFTs, AI, and the Metaverse

from Artist in the Cloud (Jul 2019) by Gene Kogan

Actualizing the Open Metaverse and representing our collective psychic processes within digital media turn out to be the same act. The underlying technological medium is agnostic to the labels that contemporary society assigns to the message it conveys. So while the Metaverse (including its less articulated crypto-dependent instantiation) is currently associated predominantly with gaming and art, we shouldn't forget that big things start out looking like toys, as evidenced by Facebook (originally a website Mark Zuckerberg built to rate women's looks, now the world's 8th largest public company) and Nvidia (initially focused on rendering 3D graphics for video games before the ML explosion induced GPU demand, now the world's 9th largest public company).

The [toy → big thing] dynamic is currently unfolding in our current zeitgeist of primordial buzzword soup (Crypto! NFTs! AI! Metaverse!) in which people are exploring combinations and conjugations of technological primitives that are presently manifesting as digital toys (aka media). These toys, for anyone paying attention, hold the latent potential for revolutionary change on a planetary-scale and, in much the same way that a college kid's dumb website and advancements in video gaming hardware led to revolutions in human connection and AI/ML, at least a few of the toys borne of the artificialmetacryptoverseintelligence buzzword soup will lead to effects of similar magnitudes.

This accelerating movement of combinatorial experimentation between these buzzword primitives is illustrated in the top half of the Scalability, Decentralization, and Interactivity framework, in which decentralized cryptomedia are seeking to increase scalability and agency (NFTs → iNFTs and dNFTs → multisig i/dNFTs → fully decentralized cryptomedia) and those mediums that are already sufficiently decentralized and scaled are tending towards increasing agency and interactivity (in addition to continued scale and decentralization). The former movement (NFTs → ... → fully decentralized cryptomedia) is what we'll be largely focusing on for the remainder of this work.

[Not explicitly represented within my framework but still worth mentioning is the less-pronounced movement of scaled media entities/platforms towards decentralization as in the case of Twitter's BlueSky initiative and Google's push for FLoCs, though this movement is a topic for another time.]

With respect to media, the intersection of crypto (of which I consider NFTs a subcategory) and AI/ML is currently manifesting as either:

  1. Intelligent NFTs (iNFTs), or NFTs with embedded intelligence. Alethea AI and Altered State Machine are two projects exploring this space.
  2. AI-generated cryptomedia, in which cryptoeconomic protocols govern the creation process and the resultant media may or may not be issued as an NFT. Botto (issues creations as NFTs) and Abraham (does not currently issue creations as NFTs) are two projects exploring this space.

These particular syntheses of crypto and AI challenge the characterization (see here, here, and here) of crypto and AI as generally mapping on opposite ends of the spectrum of centralization vs decentralization:

"Two of the areas of tech that people are very excited about in Silicon Valley today are crypto on the one hand and AI on the other. And even though I think these things are underdetermined, I do think these two map in a way, politically, very tightly on this centralization-decentralization thing. Crypto is decentralizing, AI is centralizing. If you want to frame it more ideologically, you could say that crypto is Libertarian and AI is Communist.
...
AI is Communist in the sense that it's about Big Data, it's about Big Governments controlling all the data, knowing more about you than you know about yourself ... I think there probably are ways that AI could be Libertarian and there are ways that crypto could be Communist, but I think that's harder to do."

Peter Thiel in Cardinal Conversations: Reid Hoffman and Peter Thiel on "Technology and Politics" (Feb 2018)

It turns out, however, that the application of crypto (i.e., cryptoeconomic mechanisms and cryptographic protocols) has the ability to transmute AI into a decentralizing force rather than a centralizing one. Gene Kogan highlights how applying a combination of "homomorphic encryption + smart contract + oracle" to federated learning (i.e., the "FL" in the "FLoC" algorithm being pushed by Google) can eliminate the negatives of privacy loss and unfair economic extraction typically associated with centralized AI/ML methods:

from Lecture on Decentralized AI (Dec 2017) by Gene Kogan:

[1] Centralized machine learning:

[2] Decentralized machine learning via (Federated learning + homomorphic encryption + smart contract + oracle):

Note: Title is supposed to say "Federated learning + homomorphic encryption + smart contract + oracle" but "oracle" is cut off in this graphic.

I highlight the specific primitives proposed by Gene in his 2017 lecture, not to claim that the specific combination of technological primitives that he outlined is the requisite architecture for decentralizing AI, but to show the concreteness of how elements of the buzzword soup can be tangibly recombined to desirable effect. There's every reason to believe that further development of tech primitives, both crypto and non-crypto, will lead to further combinatorial innovation towards increasing the social scalability of cryptomedia.
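To make that recombination concrete, here is a toy secure-aggregation sketch for federated learning. The three clients and scalar updates are invented, and pairwise masking stands in for the homomorphic-encryption step in Gene's diagram; this is an illustration of the idea, not a secure implementation:

```python
import random

# Each client holds a local model update it never reveals. Pairs of
# clients agree on random masks that cancel in the sum, so the server
# (or smart contract) learns only the aggregate, not any individual value.

clients = ["alice", "bob", "carol"]
updates = {"alice": 3, "bob": 5, "carol": 7}  # hypothetical local updates

rng = random.Random(42)
masks = {c: 0 for c in clients}
for i, a in enumerate(clients):
    for b in clients[i + 1:]:
        m = rng.randrange(1, 10**6)
        masks[a] += m  # a adds the shared mask...
        masks[b] -= m  # ...b subtracts it, so the pair cancels in the sum

masked = {c: updates[c] + masks[c] for c in clients}

# The aggregator sums masked contributions and recovers only the total.
aggregate = sum(masked.values())
print(aggregate)
```

Each `masked` value looks like noise on its own, yet the sum equals the true total of the updates, which is exactly the property that lets a model be trained collectively without centralizing anyone's data.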

A non-exhaustive list of the tech primitives relevant to scaling cryptomedia where progress has been made in the four years since Gene's 2017 lecture:

  • zkSTARKs/zkSNARKs
  • timelock encryption and VDFs
  • GPT+
  • Merkle-CRDTs
  • real-time streaming (RTS) architectures
  • dynamic bonding curves
  • crypto-native serverless

All of the pieces are now in place for us to conclude by examining two speculative examples of how combinations of primitives could eventually be used towards a blueprint for planetary-scale cryptomedia so that we might finally be able to get 7.9 billion people in the same proverbial room together.


[3.2] Humanity’s Mood Ring

Towards a Blueprint:

[real-time streaming (RTS) architecture] + [VQGAN+CLIP OR NLP Sentiment Visualizer] + [federated learning] + [homomorphic encryption] + [DAO] + [Web3 social media/messaging] + [ZKPs] + [anti-Sybil mechanisms: some combination of token-curated registries, Proof-of-Humanity, decentralized trust graphs, "DAO for verifying humans"]

"Humanity's Mood Ring" (HMR) would allow for a real-time representation of our collective psyche via a cryptomedia object that reflects the collective, passive output of each DAO member that owns part of the cryptomedia. "Passive" because the data ingested by the AI/ML pipeline could be sourced from connected Web3 social media and messaging apps that the DAO member gives the "Mood Ring" permission to access in the background; homomorphic encryption and federated learning would allow contributing members to mathematically guarantee against privacy concerns. You and your friend could have a serious, private conversation on a Web3 messaging platform while connected to an HMR program that's running sentiment analysis on the convo, with mathematical guarantees against data leakage from the background process.

A combination of RTS architecture applied to an AI-powered image creation algorithm (be it a modified VQGAN+CLIP or a sentiment analysis visualizer) would allow the cryptomedia object to shift continuously with the mood of all of its members, as proxied by an analysis of their media output on Web3 social apps. Liquid neural network (aka continuous-time neural network) architectures could eventually be applied to these image output programs to approach true "real-time" HMR. A sentiment analysis-based visualizer could assign colors to certain emotions and continuously output a multi-colored bouba/kiki shape. A VQGAN+CLIP-based architecture might continuously output something like:

from Artist in the Cloud by Gene Kogan:

Baseline autonomous artificial artist: a trainable generative model whose data, code, curation, and governance are crowd-sourced from a decentralized network of actors. The behavior of the program emerges from its cumulative interactions with these actors.

If everyone in the HMR DAO started talking about "Vitalik Buterin riding a unicorn to slay the Dragon-Tyrant with an Ethereum-tipped sword" then the image output would autonomously reflect those elements. HMR's cryptomedia "object" could eventually be a 3D entity that interacts with elements in virtual environments by inheriting interoperability/capability with the various physics engines that govern those environments.
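The sentiment-analysis-based visualizer described above can be sketched in miniature: map each member's sentiment score to a colour and blend all members into one collective hue. The score range and the emotion-to-colour mapping below are invented for illustration:

```python
# Scores run from -1.0 (most negative) to +1.0 (most positive), the
# conventional output range of simple NLP sentiment analyzers.

def sentiment_to_rgb(score):
    """Negative sentiment -> blue, positive sentiment -> warm yellow."""
    t = (score + 1.0) / 2.0  # normalise to 0.0 .. 1.0
    return (int(255 * t), int(255 * t), int(255 * (1.0 - t)))

def mood_ring(scores):
    """Average per-member colours into one collective colour."""
    colours = [sentiment_to_rgb(s) for s in scores]
    n = len(colours)
    return tuple(sum(c[i] for c in colours) // n for i in range(3))

# Hypothetical sentiment scores streamed from connected Web3 apps.
print(mood_ring([0.8, -0.2, 0.5, 0.1]))
```

In the full blueprint, the scores would arrive over the RTS layer from federated sentiment models, so the aggregate colour updates continuously without any individual's messages ever leaving their device.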

Such an architecture could theoretically encompass every human (through some ensemble of anti-Sybil mechanisms) or be used to create collective mood rings for specific affinity groups (i.e., people who live in NYC, people who own a certain class of NFTs, etc.) by integrating zero-knowledge proofs (ZKPs) to confirm the relevant identifying attributes without sacrificing personal privacy. Manipulation of HMRs could literally kill everyone's vibe, so anti-Sybil mechanisms and ZKPs would be required to scalably prove that contributors are individual humans. Cryptographic guarantees of privacy preservation, plus decentralized, collective ownership, take the sting out of dystopian, surveillance-capitalist critiques of such a system. Assuming you trusted the recording hardware, you could literally contribute data to the HMR via a biometric mood ring without concern that your data was being abused.
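To give a flavor of those affinity-group membership checks: the standard building block is a Merkle-tree membership proof. Note this sketch is not itself zero-knowledge — a production system would wrap it in a ZKP circuit so the verifier can't tell which leaf is yours — but the prove-membership-against-a-root shape is the same.

```python
# Toy Merkle membership proof: prove a leaf belongs to a committed set
# without shipping the whole set. Not zero-knowledge on its own.
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def prove(leaves, index):
    """Collect sibling hashes (and whether each sits to the left)."""
    layer, proof = [h(l) for l in leaves], []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        sib = index ^ 1
        proof.append((layer[sib], sib < index))
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

members = [b"alice", b"bob", b"carol", b"dave"]
root = merkle_root(members)          # published on-chain
proof = prove(members, 2)            # carol proves membership
print(verify(root, b"carol", proof)) # True
```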

If this idea sounds implausible, consider this small slice of what has already been done and how conceptually close these projects are to HMR:

We're already all connected to each other, technology just helps us to stop pretending that we're not.


[3.3] Computational Market Democracy

Towards a Blueprint:

[self-sovereign digital agents: iNFTs] + [interoperability with interfaces for markets & governance] + [Distributed/Decentralized/Mesh C/N/S + Telecoms] + [continued advances in AI/ML]

Nothing is more dangerous than the influence of private interests in public affairs, and the abuse of the laws by the government is a less evil than the corruption of the legislator, which is the inevitable sequel to a particular standpoint. In such a case, the State being altered in substance, all reformation becomes impossible. A people that would never misuse governmental powers would never misuse independence; a people that would always govern well would not need to be governed.

If we take the term in the strict sense, there never has been a real democracy, and there never will be. It is against the natural order for the many to govern and the few to be governed. It is unimaginable that the people should remain continually assembled to devote their time to public affairs, and it is clear that they cannot set up commissions for that purpose without the form of administration being changed.

In fact, I can confidently lay down as a principle that, when the functions of government are shared by several tribunals, the less numerous sooner or later acquire the greatest authority, if only because they are in a position to expedite affairs, and power thus naturally comes into their hands.

Jean-Jacques Rousseau, The Social Contract (1762)

Electric technology is directly related to our central nervous systems, so it is ridiculous to talk of "what the public wants" played over its own nerves. This question would be like asking people what sort of sights and sounds they would prefer around them in an urban metropolis! Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don't really have any rights left. Leasing our eyes and ears and nerves to commercial interests is like handing over the common speech to a private corporation, or like giving the earth's atmosphere to a company as a monopoly. Something like this has already happened with outer space, for the same reasons that we have leased our central nervous systems to various corporations. As long as we adopt the Narcissus attitude of regarding the extensions of our own bodies as really out there and really independent of us, we will meet all technological challenges with the same sort of banana-skin pirouette and collapse.

Marshall McLuhan, Understanding Media: The Extensions of Man (1964)

The number of components in this "blueprint" is fewer than in HMR's blueprint because I cheated — the only technological primitive here is "iNFTs". "Distributed/Decentralized/Mesh C/N/S (Compute, Network, Storage) + Telecoms" isn't meant to represent a "primitive" so much as the material infrastructure (atoms) that facilitates modern communication (bits). The emphasis on this infrastructure being "Distributed/Decentralized/Mesh" is because the adoption of Computational Market Democracy (CMD) necessitates non-centralized architectures, for the simple reason that no concentrated (monopolistic, oligopolistic) group of private interests (companies) could ever be fully entrusted with facilitating the [digital] representation of the body politic. The potential conflicts of interest write themselves. Imagine that in the year 2040 the facilitation of the electronic vote for America's Presidential election was contracted to AWS, and that the vote was between two candidates with vastly differing views on regulating Amazon — even with cryptographic guarantees, could the American public ever fully trust AWS to facilitate this service?

As Mike Summers outlines in Online Voting Isn’t as Flawed as You Think—Just Ask Estonia, running a U.S. presidential election would cost "no more than US $200 for mixing and decrypting votes within an hour of the close of polls" on "the equivalent of 40 Amazon Web Services' m4.10xlarge virtual servers", even "assuming a 100% turnout of every U.S. citizen of voting age". Still, "civic leaders will probably be reticent to jump straight into using cloud computing for online voting," and it is more likely that "the jurisdictions in each state, and in some instances individual counties, would want to purchase their own servers and infrastructure for online voting".
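Summers's figure is easy to sanity-check. The ~$2.00/hour on-demand rate for an m4.10xlarge is my assumption for illustration, not a number from the article:

```python
# Back-of-the-envelope check of the quoted compute cost.
# The hourly rate is an assumed approximation of AWS on-demand pricing.
servers = 40
hours = 1
rate_per_hour = 2.00  # assumed USD rate for m4.10xlarge

cost = servers * hours * rate_per_hour
print(f"${cost:.2f}")  # comfortably under the quoted $200 ceiling
```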

A discussion around whether even the State or, more precisely, particular governments (Wikipedia: "The state is the organization while the government is the particular group of people, the administrative bureaucracy that controls the state apparatus at a given time") can be entrusted with the function of facilitation is a logical extension of my hypothetical AWS example. Imagine that the US had an online referendum in 2040 on whether the country should allow "the jurisdictions in each state" to host the computing infrastructure for online voting versus a public blockchain on a decentralized mesh network — could this referendum be facilitated by the "jurisdictions in each state" which themselves are the subject of the referendum? To quote McLuhan, "... it is ridiculous to talk of 'what the public wants' played over its own nerves" — the imperative for decentralized communication networks for facilitating democratic processes is self-evident.

By now it should be clear that the bottleneck in the blueprint for CMD is the problem of socially legitimizing the establishment of "interoperability with interfaces for markets & governance" as it pertains to existing State institutions (and that, paradoxically, the process of legitimizing CMD requires facilitation by the very institutions that CMD seeks to disrupt and obviate). Outside of State institutions, however, interfaces with markets and governance that interoperate with digital agents already exist. With respect to traditional public equity markets, the combination of API-first brokerages and automated serverless deployment via cloud compute services already enables anybody with Internet access and the requisite knowledge to provision "digital agents" that allocate their capital across markets. With respect to crypto markets, tools like Furucombo and Gelato are introducing low-code abstraction solutions on top of the already-existing abstraction layers of yield aggregators, DEX aggregators (wen DEX aggregator-aggregator??), and crypto indices — digital agents can be (i.e., they already are) natively integrated into these interfaces because of the open nature of decentralized blockchains. With respect to crypto-native governance within protocols and DAOs, I'd imagine that delegation, voting, proposals, etc. are already possible (though I have yet to see examples at this stage of ecosystem development).
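To make "provisioning a digital agent to allocate capital" concrete, here is a hedged sketch of the core rebalancing logic such an agent would run on a schedule. The tickers, prices, and target weights are hypothetical, and any actual brokerage integration (order submission, auth) is deliberately left out:

```python
# Toy sketch of a capital-allocation agent's rebalancing step.
# All symbols and numbers are illustrative; no real brokerage API used.
def rebalance_orders(holdings, prices, targets, cash=0.0):
    """Compute share deltas (buy > 0, sell < 0) toward target weights."""
    total = cash + sum(holdings.get(s, 0) * prices[s] for s in prices)
    orders = {}
    for sym, weight in targets.items():
        target_shares = (total * weight) / prices[sym]
        delta = target_shares - holdings.get(sym, 0)
        if abs(delta) > 1e-9:
            orders[sym] = delta
    return orders

orders = rebalance_orders(
    holdings={"VTI": 10, "BND": 0},
    prices={"VTI": 200.0, "BND": 80.0},
    targets={"VTI": 0.6, "BND": 0.4},
)
print(orders)  # {'VTI': -4.0, 'BND': 10.0}
```

Wrapped in a serverless function triggered daily and pointed at a brokerage's REST API, this is the whole "digital agent" — which is the point: the market-facing half of the CMD blueprint is already mundane.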

These were developments that would have been impossible for Rousseau to foresee when he wrote "It is unimaginable that the people should remain continually assembled to devote their time to public affairs ..." in 1762, a time at which ~60% of France's labor force was still working in agriculture:

from Our World in Data

Would the Rousseau of 2021 rethink his position about the amount of time the people have to devote to public affairs after discovering less than 3% of his countrymen are engaged in working the land? Would a chat with GPT-3 and the knowledge that access to AI agents is rapidly tending towards democratization make him reconsider his stance on the unimaginability of continuous assembly?

At this point I should note that the intention of this section isn't to discuss the normative question of whether CMD is good/bad, should/shouldn't be, etc. Widespread, democratic participation is the sine qua non for a widespread, democratic discussion around how widespread, democratic participation should be facilitated. CMD's legitimacy can only be established within the broader public sphere as better-formed articulations of the idea take shape and disseminate. Therefore, the remainder of this section provides a sample of existing articulations of how CMD could manifest.

Existing (NOW OR SOON: 1 to 10 years) → Intelligent NFTs

from Altered State Machine is Enabling AI Ownership via NFTs, Hereditary AI and Minting Brains (Altered State Machine):

[02:24] Aaron McDonald: Altered State Machine is two new primitives for the metaverse and the blockchain space. The first primitive is a way to prove you own an artificial intelligence agent through an NFT. What that does, is it enables connectivity of agents to processes in the blockchain and crypto space. That could be things like the tradeability of an AI. It could be something related to connecting an AI to DAO governance. It could be a way to embed AIs in protocols or use AIs as Oracles. All these different kinds of things you can do when you can connect an agent to the same ownership mechanics as an NFT.

[10:39] Aaron McDonald: People watching the AIs as they learn, it’s like watching a kid learn how to walk. It’s quite an engaging piece of content in its own right but not only that, it lends itself really well to this emerging play to earn space because what you are seeing in that space, is you’ve got two classes now. You’ve got the asset owner class and you’ve got the player class, right. This whole two tier system where people might not be able to afford access but they’re renting humans to play it for them because they don’t have the time to play the game, right. We can flip that model a little bit because AIs can play autonomously. And so you can own the asset without having to rent a human to play the game. And so this new type of play to earn mechanic can emerge out of that.

[29:33] Aaron McDonald: If we build out from that metaverse gaming sphere into the other two spheres, which are DeFi and the third being the notion of digital humans. In the DeFi space, what you have now is there are a lot of bots in that space already but they exist outside of the framework of protocol governance or outside of the framework of transparency that blockchain brings to protocols. And so there’s almost these two worlds that exist. There’s the apparent world, which everyone can see. And then there is this external world, which is murky. And what we can do with the agents now, is bring them into the framework of transparency. Now, a DAO can own the liquidation bot or you can have a quant bot that is owned by a DeFi fund on chain. And people can invest in these agents and train these agents, become good at a task. And then other people can invest through the NFT and make that process of investing in distribution an on chain thing, as opposed to something that happens outside of the blockchain environment.


from Arif Khan - The Rise of Intelligent NFTs (Alethea AI):

[21:30] Arif Khan: We believe fundamentally that the Metaverse with billions of these interactive characters and the intelligence that will power these characters will be driven by AI.

[47:00] Arif Khan: Now a quick example here would be you have a waifu character, or a Cryptopunk, or a Hashmask and now you want it to have a personality and you want to interact with it and you want to have a conversation with it or you want to learn from it. You can easily add that in and create a new class of characters — your Cryptopunk can talk to you, your waifu can talk to you, it can give its own personality, it can be a virtual assistant for you, it can set up appointments for you, it can be a part of your life and it will extend into the design spaces that exist today.

Speculative (NEAR TERM: 10 to 100 years) → Augmented Democracy

from A bold idea to replace politicians by César Hidalgo:

[8:15] César Hidalgo: Politicians these days are packages and they're full of compromises. But you might have someone who can represent only you if you are willing to give up the idea that that representative is a human. If that representative is a software agent, we could have a senate that has as many senators as we have citizens. And those senators are going to be able to read every bill and they're going to be able to vote on each one of them.

[9:10] César Hidalgo: So it would be a very simple system. Imagine a system where you log in, you create your avatar, and then you want to start training your avatar. So you can provide your avatar with your reading habits, or connect it to your social media, or you can connect it to other data, for example by taking a psychological test. And the nice thing about this is that there's no deception ... you are providing data to a system that is designed to be used to make political decisions on your behalf. Then you take that data and you choose a training algorithm. It'd be an open marketplace in which different people can submit different algorithms to predict how you would vote based on the data that you've provided and this system is open, so nobody controls the algorithms ... and eventually you can audit the system — you can see how your avatar is working and if you like it you can leave it on auto-pilot, if you want more control you can choose that they ask you everytime it makes a decision, or you can choose anywhere in between.

from Augmented Democracy by César Hidalgo:

WHAT IS AUGMENTED DEMOCRACY? Augmented Democracy (AD) is the idea of using digital twins to expand the ability of people to participate directly in a large volume of democratic decisions. A digital twin, software agent, or avatar is loosely defined as a personalized virtual representation of a human. It can be used to augment the ability of a person to make decisions by either providing information to support a decision or making decisions on behalf of that person. Many of us interact with simple versions of digital twins every day. For instance, movie and music sites, such as Netflix, Hulu, Pandora, or Spotify, have virtual representations of their users that they use to choose the next song they will listen to or watch the movies they are recommended. The idea of Augmented Democracy is the idea of empowering citizens with the ability to create personalized AI representatives to augment their ability to participate directly in many democratic decisions.

HOW WOULD ONE OF THESE EXPERIMENTAL "AUGMENTED DEMOCRACY SYSTEMS" WORK?

What an Augmented Democracy system would need to do is predict how each of its users would vote on a bill being discussed in that country’s congress or parliament. These predictions would provide an estimate of the support that the bill would have received if it had been voted on directly by the population of users of an AD system instead of by their elected representatives.

To provide these predictions, an AD system would need information from both users and bills. People participating in the system would provide information on a voluntary basis and have the ability to withdraw that information at any moment. This information could include active and passive forms of data. Active data includes surveys, questionnaires, and, more importantly, the feedback that people would provide directly to their digital twin (for instance, by correcting the twin when it made a wrong prediction). Passive data would include data that users already generate for other purposes, such as their online reading habits (e.g., New York Times vs. Wall Street Journal), purchasing patterns, or social media behavior.

The digital twin would then combine a user’s data with information about a bill to predict how that user would vote on that bill. The algorithm used to make a prediction would be chosen by the user from an open marketplace of algorithms, and the user would be able to change it at any time. These algorithms could also make predictions on more nuanced issues, such as which specific parts of the bill the user is more inclined to agree or disagree with, or what pieces of data about a user prompted the AI to suggest a decision.

My recommendation for an AD system is not to design it based on the platform paradigm that dominates today’s web but instead use the protocol paradigm that dominated the early days of the web. Platforms, such as Facebook, are almost natural monopolies, whereas protocols, such as email, allow the creation of distributed systems (more similar to markets than monopolies).

In a protocol-based AD system each user can store their data in their own “personal data store,” or “data pod” (like the pods proposed by Tim Berners Lee in his Solid Project). The data can be in any of the many cloud providers of such a service or in a home computer. This is similar to email. Unlike social media, email services are provided by a large number of universities, companies, and other organizations. In a platform world (e.g., Facebook), some people have control over the entire platform. In a protocol system, like email, nobody has access to all of the email servers. It is a deeply fragmented and federated system by default.

The algorithms used to make predictions are also distributed and not unique. They exist as part of a marketplace that is open for people wanting to contribute algorithms. Users can select which algorithms they allow to interact with their data, based on how accurate they think the algorithm predictions are, how much they trust the algorithm’s creators, and other criteria.

These decentralized architectures can help revert the data concentration problem, by bringing questions to the data, instead of centralizing all the data in a few places. These questions are answered in a decentralized manner. And yes, this could be an application of blockchain technology (although that may also not be the only alternative).

There are, of course, advantages and disadvantages of using protocols instead of platforms. The big advantage of protocols is their distributed nature. This allows protocol-based systems to avoid centralization of data and mitigates many of the privacy and monopoly concerns that are natural in platforms. The disadvantage is that, because of their decentralized nature, protocols are much more difficult to update and improve than platforms.

As I explain in my TED talk, the level of participation in democracy is relatively low, even though the number of participatory instances is relatively small. If we were to expand democracy to more instances of participation (like participatory budgets or having direct democracy for parliamentary decisions), the empirical data suggests that the participation of people would be minimal and decrease with additional instances. So the technical problem of direct democracy is not one of limited communication but one of the limited time and cognitive bandwidth of people. To participate in hundreds of decisions, we don’t need additional communication technologies but technologies that augment the number of different things a person can pay attention to.
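Hidalgo's twin-plus-marketplace design sketches naturally in code. The dot-product model, issue names, and registry below are illustrative assumptions of mine, not his implementation:

```python
# Toy sketch of an Augmented Democracy "digital twin": a pluggable
# algorithm maps user issue preferences and a bill's issue profile to
# a predicted vote. Model, issues, and registry are illustrative.
def dot_product_twin(user_prefs, bill_profile):
    """Predict 'yes' when the user's preferences align with the bill."""
    score = sum(user_prefs.get(k, 0.0) * v for k, v in bill_profile.items())
    return "yes" if score > 0 else "no"

# Hidalgo's "open marketplace of algorithms" is, minimally, a registry
# the user chooses from (and can swap at any time).
ALGORITHM_MARKETPLACE = {"dot_product_v1": dot_product_twin}

def predict_vote(user_prefs, bill_profile, algorithm="dot_product_v1"):
    return ALGORITHM_MARKETPLACE[algorithm](user_prefs, bill_profile)

prefs = {"environment": 0.9, "taxes": -0.4}       # learned from user data
bill = {"environment": 1.0, "taxes": 0.5}          # pro-environment, raises taxes
print(predict_vote(prefs, bill))  # yes
```

Auditing the twin, in this framing, means inspecting which preference terms dominated `score` — the "what pieces of data prompted the AI" transparency Hidalgo describes.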

Sci-fi (LONG TERM: 100 to 1,000 years) → Metacortex

from Accelerando (2005) by Charles Stross:

The metacortex – a distributed cloud of software agents that surrounds him in netspace, borrowing CPU cycles from convenient processors (such as his robot pet) – is as much a part of Manfred as the society of mind that occupies his skull; his thoughts migrate into it, spawning new agents to research new experiences, and at night, they return to roost and share their knowledge.

"The president of agalmic.holdings.root.184.97.AB5 is agalmic.holdings.root.184.97.201. The secretary is agalmic.holdings.root.184.D5, and the chair is agalmic.holdings.root.184.E8.FF. All the shares are owned by those companies in equal measure, and I can tell you that their regulations are written in Python. Have a nice day, now!"

He thumps the bedside phone control and sits up, yawning, then pushes the do-not-disturb button before it can interrupt again. After a moment he stands up and stretches, then heads to the bathroom to brush his teeth, comb his hair, and figure out where the lawsuit originated and how a human being managed to get far enough through his web of robot companies to bug him.

Radical new economic theories are focusing around bandwidth, speed-of-light transmission time, and the implications of CETI, communication with extraterrestrial intelligence. Cosmologists and quants collaborate on bizarre relativistically telescoped financial instruments. Space (which lets you store information) and structure (which lets you process it) acquire value while dumb mass – like gold – loses it. The degenerate cores of the traditional stock markets are in free fall, the old smokestack microprocessor and biotech/nanotech industries crumbling before the onslaught of matter replicators and self-modifying ideas. The inheritors look set to be a new wave of barbarian communicators, who mortgage their future for a millennium against the chance of a gift from a visiting alien intelligence. Microsoft, once the US Steel of the silicon age, quietly fades into liquidation.

About ten billion humans are alive in the solar system, each mind surrounded by an exocortex of distributed agents, threads of personality spun right out of their heads to run on the clouds of utility fog – infinitely flexible computing resources as thin as aerogel – in which they live. The foggy depths are alive with high-bandwidth sparkles; most of Earth's biosphere has been wrapped in cotton wool and preserved for future examination. For every living human, a thousand million software agents carry information into the farthest corners of the consciousness address space.

The pre-election campaign takes approximately three minutes and consumes more bandwidth than the sum of all terrestrial communications channels from prehistory to 2008. Approximately six million ghosts of Amber, individually tailored to fit the profile of the targeted audience, fork across the dark fiber meshwork underpinning of the lily-pad colonies, then out through ultrawideband mesh networks, instantiated in implants and floating dust motes to buttonhole the voters. Many of them fail to reach their audience, and many more hold fruitless discussions; about six actually decide they've diverged so far from their original that they constitute separate people and register for independent citizenship, two defect to the other side, and one elopes with a swarm of highly empathic modified African honeybees.

May you live in interesting times.


Further Reading

Collective Psyche

Collective Media

Buzzword Soup

Digital Democracy & the Social Contract

Misc


Verification
This entry has been permanently stored onchain and signed by its creator.