Recently I wrote some thoughts about two buckets of economic concepts.
In this piece I describe a metaphorical ziggurat, not unlike Plato’s “theory of forms”. Simply described, economics is the upper layer of a structure whose lower layer is ecology, and both layers contain sublayers as a fractal of sorts. The key point was the conclusion that some economic functions serve as a broad & strictly universal apparatus, yet we have tended to build superficial, incompatible systems and, as the tradeoff, coincidentally discover more foundational layers before those superficial systems cause irreversible decay. One recurring discovery has been broadcasting information in novel ways, which has led to virtually permanent economic gains and massive cultural evolution.
In this piece I refer to Ronald Coase’s “The Nature of the Firm” and the roles of the firm & the market in regulating economies. I assert that political estates are the regulatory institutions of civilization: the evolution of markets in the Industrial Age introduced the “fourth estate” as a corporate arena over the flow of mass-media information, while the evolution of markets in the Information Age introduces the “fifth estate” as a mass forum of fungible intellectual capital, property rights, and economic regulation. The firm evolves over time from the mercantilist nation-state, to the industrially self-sustaining corporation, all the way to the nebulous online spectrum of organization we might refer to as DAOs.
In both I kept thinking about the ecosystem’s path to sustainable growth, and I constantly fell back on the easiest answer: the gains of scaled-up information broadcast. Civilization seems to experience moments of tremendous beneficial change, often in the form of rapidly decreased ecological cost. Instead of riding waves of innovation and regressing until we find another, humanity generally reaches new heights without pausing for long. Here, I want to focus on how the public is positioned in the present, and elaborate on the open-source technology that may be the most probable route of acceleration in the imminent future.
“Account Abstraction” is an approach to a more user-friendly experience of decentralized applications & a departure from a more sovereign form of keypair management and explicit transaction-based changes to accounts. Simply put, smart contracts can be arbitrarily limited sets of methods, stacked until their sum, the user’s “account”, appears to have a virtually unlimited scope of capabilities, yet the underlying architecture for each capability is thoroughly compartmentalized. There are many approaches, like Safe smart wallets or Lit’s programmable keypairs, but one of the main focuses has been a higher-layer architecture with an alternative mempool of discrete user operations (ERC-4337). There’s a lot of design space and potential interactivity with other smart contracts, but at the same time there are a lot of engineering constraints that keep it from breaking.
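To make the stacking concrete, here’s a minimal sketch in Python (rather than Solidity) of an account whose apparent power is just the union of narrowly scoped modules. The module names and policy checks are invented for illustration, not drawn from any particular standard:

```python
# A minimal sketch (hypothetical, not ERC-4337 itself) of "stacked
# capabilities": the account's apparent power is the union of narrowly
# scoped modules, but every operation is validated by exactly one
# compartment.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    name: str
    allowed_methods: set          # the narrow scope this module grants
    validate: Callable            # module-specific policy check

class AbstractedAccount:
    def __init__(self, modules):
        self.modules = modules

    def execute(self, op: dict) -> str:
        # Route the operation to the single module whose scope covers it.
        for m in self.modules:
            if op["method"] in m.allowed_methods:
                if not m.validate(op):
                    raise PermissionError(f"{m.name} rejected {op['method']}")
                return f"{m.name} executed {op['method']}"
        raise PermissionError("no module grants this capability")

# Example: a spending module capped at 100 units, and a social module.
spend = Module("spend-limit", {"transfer"}, lambda op: op.get("amount", 0) <= 100)
social = Module("social-poster", {"post"}, lambda op: True)
account = AbstractedAccount([spend, social])
print(account.execute({"method": "post", "body": "gm"}))      # allowed
print(account.execute({"method": "transfer", "amount": 42}))  # within cap
```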
DAOs have been marketed as the next big thing, but many are designed just far enough to benefit insiders. True DAOs trend towards computability: the apparatus nests nimble autonomy within unstoppable sovereignty, using recursion, capture-resistance, & permissionlessness to automate a fluid, public market without an attack surface for governance or capital capture. When we say “maybe an efficient & free market would improve the situation”, account abstraction and DAO formalization are two sides of that coin. Moreover, as these two develop, we can imagine optimal real-life circumstances where individuals nest within households & businesses, households & businesses within neighborhoods, neighborhoods within municipalities, municipalities within network states, and many combinations between these layers (sketched below). And this hyperstructure might formally demonstrate that it can outpace entrenched governments & firms without socializing the cost to a faceless “bigger fool”.
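As a toy illustration of that nesting (with every name hypothetical), the recursive structure is simple to express:

```python
# A minimal sketch of the nesting described above: organizations compose
# recursively, each layer autonomous yet contained by the next. All names
# are illustrative.
from dataclasses import dataclass, field

@dataclass
class Org:
    name: str
    members: list = field(default_factory=list)

    def nest(self, child: "Org") -> "Org":
        self.members.append(child)
        return self

    def chart(self, indent: int = 0) -> None:
        print("  " * indent + self.name)
        for m in self.members:
            m.chart(indent + 1)

state = Org("network state").nest(
    Org("municipality").nest(
        Org("neighborhood").nest(
            Org("household").nest(Org("individual"))
        )
    )
)
state.chart()
```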
The idea of public economics, relative to intermediate economics, is a challenge to the unsubstantiated market inefficiencies borne by institutions and firms that gradually reveal themselves to be designed to capture markets and be captured by special interests. If the public can demonstrate self-evident, methodical cooperation between nested organizations composed of reputable individuals, then the free market can gain efficiency through competing firms internalizing fewer societal costs. This means that more money can be printed into circulation to reflect a healthier abundance of transactions between free parties, and more instruments of investment & debt can be substantiated by risk-adjusted activity instead of centralized trust assumptions or hegemonic confidence.
When people ask, “why don’t we have Uber on Ethereum?”, what they should be asking is: what sort of intermediated software can a non-capturable market afford, and what scale of bundled activity can the network afford? In all honesty, Uber is a great thought experiment for whatever it takes to onboard a gig economy to a performant blockchain. Looking into Uber’s architecture, there’s one crucial component: the dispatcher. Financial markets use structures like the central-limit orderbook or the x·y = k AMM, but IRL markets need to handle a spatial order flow with many stochastic dimensions.
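For contrast, here’s a minimal sketch of the constant-product AMM just mentioned; its entire order flow collapses into a single dimension of reserves, which is exactly what a spatial market cannot do. The pool sizes and the 0.3% fee are illustrative defaults:

```python
# A minimal sketch of the constant-product (x*y = k) AMM mentioned above.
# Reserve sizes and the 0.3% fee are illustrative, not a quote of any
# particular DEX.
def swap_dx_for_dy(x: float, y: float, dx: float, fee: float = 0.003) -> float:
    """Given reserves x and y, return the dy paid out for depositing dx."""
    dx_after_fee = dx * (1 - fee)
    k = x * y                    # the invariant preserved by every trade
    new_x = x + dx_after_fee
    new_y = k / new_x
    return y - new_y

# A pool of 1,000 X and 1,000 Y pays out ~90.66 Y for 100 X:
print(round(swap_dx_for_dy(1_000.0, 1_000.0, 100.0), 2))
```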
This is not to say that mapping a physical market with workers & consumers is impossible:
The dispatch system (DISCO) works entirely on maps and GPS location data, so the first important step is to model the maps and location data.
The Earth is spherical, so it’s difficult to summarize and approximate regions using raw latitude and longitude. To solve this problem, Uber uses the Google S2 library. This library divides the map into tiny cells (for example, 3 km across) and gives a unique ID to each cell. This is an easy way to spread the data across a distributed system and store it easily.
The S2 library easily gives coverage for any given shape. Suppose you want to figure out all the supply available within a 3 km radius of a city: using S2 you can draw a circle of 3 km radius, and it will filter out all the cells whose IDs lie within that circle. This way you can easily match a rider to a driver, and easily find the number of cars (supply) available in a particular region.
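As a rough illustration of that covering query, here’s a sketch using the pure-Python s2sphere port of S2 (pip install s2sphere); the coordinates, radius, and cell level here are mine, not Uber’s:

```python
# A minimal sketch of the covering query described above, using the
# pure-Python s2sphere port of S2. Coordinates, radius, and cell level
# are illustrative, not Uber's actual parameters.
import s2sphere

EARTH_RADIUS_KM = 6371.0

def cells_within_radius(lat: float, lng: float, radius_km: float):
    """Return the S2 cell IDs covering a spherical cap of the given radius."""
    center = s2sphere.LatLng.from_degrees(lat, lng).to_point()
    # A cap's opening angle is the great-circle radius over Earth's radius.
    angle = s2sphere.Angle.from_radians(radius_km / EARTH_RADIUS_KM)
    cap = s2sphere.Cap.from_axis_angle(center, angle)
    coverer = s2sphere.RegionCoverer()
    coverer.min_level = 12       # fixed-size "tiny cells"
    coverer.max_level = 12
    return coverer.get_covering(cap)

# Cells covering a ~3 km radius around downtown San Francisco; each ID
# doubles as a shard key for the dispatch layer.
for cell_id in cells_within_radius(37.7749, -122.4194, 3.0):
    print(cell_id.id())
```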
We have discussed that DISCO divides the map into tiny cells, each with a unique ID. This ID is used as a sharding key in DISCO: when supply receives a request from demand, the location gets updated using the cell ID as the shard key. Responsibility for these tiny cells is divided across different servers located in multiple regions (consistent hashing). For example, we can allocate the responsibility of 12 tiny cells to 6 different servers (2 cells for each server) lying in 6 different regions.
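And a minimal sketch of that consistent-hashing step, with placeholder server and cell names:

```python
# A minimal sketch of consistent hashing: S2 cell IDs are mapped onto a
# hash ring of servers, so adding or removing a server only reassigns the
# cells nearest to it on the ring. Server names and cell IDs are
# placeholders.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, vnodes: int = 64):
        # Virtual nodes smooth the distribution of cells across servers.
        self._ring = sorted(
            (_hash(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self._points = [p for p, _ in self._ring]

    def server_for(self, cell_id: str) -> str:
        idx = bisect.bisect(self._points, _hash(cell_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing([f"server-{i}" for i in range(6)])
for cell in ["cell-01", "cell-02", "cell-03", "cell-04"]:
    print(cell, "->", ring.server_for(cell))
```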
Before launching operations in a new area, Uber onboards the new region to its map technology stack. Within this map region, various subregions are defined and labeled with grades A, B, AB, and C.
Grade A: This subregion covers the urban centers and commute areas. Around 90% of Uber traffic falls within this subregion, so it’s important to build the highest-quality map for subregion A.
Grade B: This subregion covers the rural and suburban areas which are less populated and less traveled by Uber customers.
Grade AB: A union of grade A and B subregions.
Grade C: Covers the set of highway corridors connecting various Uber Territories.
In web3 social, Lens Protocol has an “optimistic dispatcher” on Polygon. Why optimistic? Because most social media posts don’t have to worry about double-spends. Where it doesn’t introduce critical security issues, delegating a specific scope of transaction methods to a trusted intermediary is a huge improvement to casual UX.
Farcaster uses the concept of “Hubs”: servers that sync content with one another over a peer-to-peer protocol to reach a shared view of the network. In a similar way, Hubs offer real-time settlement & abstract away the finality of social content.
Orbis also uses a network of nodes, but instead of strictly using Ethereum, it takes advantage of IPFS and Ceramic in order to access many cryptoeconomies at once.
There are two points to all of these examples: one, there are probably EVM-compatible & Ceramic-compatible schemas for a universal spatial market, and two, there’s probably a trustworthy clearinghouse, at multiple scales of DAOs, for handling a constant, dense flux of reads/writes without overloading the consensus or settlement layers. Whether it’s Uber grading territories or the regional account abstraction I refer to above, there’s probably a secure, performant public infrastructure that can internalize the demands of hosting social networks & gig economies, so that individual firms only need to internalize the burden of broadcasting. This focuses capital on solving optimization problems at the base layer. It increases the autonomy of households → neighborhoods → municipalities to fork their own dispatchers. Finally, it provides a common heatmap for granular problems: public infrastructural debt that already exists, disreputable actors or special interests that have disproportionately captured some public market, and unhealthy markets that are deterred from maximizing transactional volume.
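To ground the first point, here’s a hypothetical sketch of what a minimal record in such a spatial market could look like; every field name is invented for illustration:

```python
# A hypothetical sketch of a "universal spatial market" record: an intent
# keyed by an S2 cell, postable by an abstracted account, and storable on
# Ceramic or emitted as an EVM event. Every field name is invented for
# illustration.
from dataclasses import dataclass, field
import time

@dataclass
class SpatialIntent:
    cell_id: int        # S2 cell acting as the spatial shard key
    kind: str           # e.g. "ride", "delivery", "gig"
    side: str           # "supply" or "demand"
    account: str        # the abstracted account posting the intent
    expiry: int         # unix timestamp after which the intent is void
    payload: dict = field(default_factory=dict)   # app-specific terms

    def is_live(self) -> bool:
        return int(time.time()) < self.expiry

# A per-cell dispatcher is then just a matcher over live intents:
def match(intents):
    supply = [i for i in intents if i.side == "supply" and i.is_live()]
    demand = [i for i in intents if i.side == "demand" and i.is_live()]
    return list(zip(supply, demand))   # naive FIFO pairing within one cell
```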
Through the process of machine learning, we are discovering the maximum ecological cost of a digital homunculus. There’s some intuition that, once discovered, this homunculus is limitlessly replicable as a specification, and resource-light enough to be independently implemented in many ways. The purpose of that replication is threefold:
By increasing the # of implementations, we increase the global probability that the current homunculus becomes obsolete, and that the ecological cost of future instances falls over time.
By decentralizing all implementations, we decrease the global probability that an oligopoly of AI can transform into Roko’s basilisk, as well as increase the “prisoner’s dilemma” deterrent against the intensity & potency of a strictly misanthropically-aligned AI arms race.
By diversifying feedback loops between independent AI developers & consumers, we cultivate more divergent thought and avoid overfitting a limited set of popular models to unsustainable consumer behavior.
The same can be argued for regional abstraction and a public dispatcher infrastructure. By decreasing the ecological cost of coordinating & articulating economics, we can generate proof that public interest & oversight are more capitalistically productive. We can also prove whether a market is relatively free or captured by special interests. By diversifying the methods of production for similar goods and services, we benefit from increased competition and decreased volatility.
The other perspective regarding public economic knowledge is the sociophysical gain of cultural overlap. This is pretty intuitive, as a lot of social relationships form from the circumstance of coworking in the same place of employment. If we treat the world as a standardized mapping of human labor & optimize for the shortest possible routes for most economic activity, we coincidentally also optimize for cultural overlap & denser clusters of community formation. By developing this as a composable public good, it is possible to incentivize more efficient regeneration of socially & economically desolate areas at minimized ecological cost, as well as optimize for the ideal social density that yields the maximum diversity of ideation & execution.
This is essential to the field of machine learning, where we now know in retrospect that engagement-maximized social media can have long-lasting, psychologically destructive consequences. As I write this, TikTok is increasingly being banned, from local governments to college campuses, and that might even be reasonable. Regardless of the platform, we should be free to ask whether the technology promotes productive work, the formation of in-person relationships, and objective realism. This is an existential precursor to an imminent future where we’re going to have to ask whether procedurally-generated content is optimized for escapism, whether it’s economically or individually healthy, and whether we have taken care of ourselves & our environment before we’re trapped in a self-reinforcing, expanding ecological cost.
I want to make a prediction: just as we broke the barrier of exascale computing with Folding@home, we will break the barrier of zettascale computing with some analogue of “Learning@Home”. Recently there was discussion of Chick-fil-A using Kubernetes to internalize the hosting of its services-based architecture into its restaurant locations. As members of the public, we (including SMBs) are increasingly consuming procedurally-generated information while suffering from the depreciating purchasing power of our native currencies. Given the advances in informal generative DAOs like Merge-to-Earn, as well as the trending cost of energy & compute towards zero, I suspect that a public, anthropically-aligned AGI model will manifest with an accompanying account of liquid equity and a self-reinforcing set of incentives for a regenerative society.
In a purely adversarial environment, more information shared between opponents is better than the fog of war. We can see in the circumstance of mutually-assured destruction that communication failure is relatively worse than the loss of advantageous information asymmetry. On the other hand, we live in a global federation of nation-states with the means to mandate a certain degree of information asymmetry between themselves and their citizens. One of the downsides of networks like Bitcoin and Ethereum is that the abilities that all users gain over time, like privacy, can become regulatory liabilities after the fact. And the nation-state, even when levying some regulatory action, still reserves such abilities as a sort of information mercantilism.
Unfortunately, while adversarial environments might work in an ecosystem that selects for short-term advantage, the side effect of repeatedly winning short-term engagements is the long-term selection of suboptimal, self-reinforcing functional circlejerks. There’s a natural example of this: autotrophs like moss and algae competed to combine water & sunlight as far as gravity would allow; they optimized for vascularity on land, and eventually some areas formed a canopy of hardwood optimizers. Lo and behold, we now have an environmental crisis where the Amazon is being deforested because hardwood is a structural commodity, and yet we don’t have the means to scale up autotrophic biomass to complement our production of atmospheric carbon.
In the practical context of “Internets of Value” like Ethereum, there are many short-term advantages to shipping consumer products on the public-facing, foundational layer, with NFTs being the most recognized use case. However, this leads to network congestion, volatile network fees, and the buildup of technical debt for protecting consumers’ privacy (while there may already be an active surveillance apparatus). Luckily, there are projects like the Nym network for anonymizing RPCs, Aztec network for privacy-centric rollups, Penumbra for fully private PoS Cosmos networks, and yes, even early experiments like Bitcoin mixers and Tornado Cash to prove the strengths & weaknesses of privacy concepts.
However, one should take pause and plead the counterargument: security, especially that of a nation-state, relies on the collection of intelligence that can yield a diplomatic advantage, and thus a resolution, before it becomes necessary to engage in cross-retaliatory & escalatory contests. While this made some sense historically, I believe there are contradicting circumstances in our current state. The major point is that concepts like “criminal” evidence or falsifiable “acts of aggression” can be succinctly & actively proven without the full subjugation of all potential perpetrators (i.e. citizens). The second major point is that succinct proofs, while possibly dependent on a central prover, do not require institutionalization; as we’ve seen with the fifth estate, there is no requirement for justice to be exclusively dealt by the second estate. The third point is that we’ve globally recognized some standards of human rights that clearly contradict the self-reinforcing powers of coercion & racketeering that recognizably totalitarian nation-states employ.
Practically speaking, security is necessary for trust assumptions, and trust assumptions are necessary to scale some forms of activity beyond trustlessness or adversarialism. There are hashed biometrics like Worldcoin, identity aggregation like Gitcoin Passport, and the KZG ceremony of proto-danksharding. This follows the same logic as the early proofs of privacy-based concepts: we have to explore as much of this space as possible to prove the strengths and weaknesses inherent to it, even if we disagree with some of those concepts.
The other argument for sacrificing privacy to increase surveillance in non-public areas is the redundancy of tamper evidence. If there is ever a need to secure physical logistics on Ethereum, for example, there will likely be a trust assumption of DAOs that can fully maintain depots (physical nodes) & chains-of-custody (physical edges). I would argue that formally standardizing failsafed token-gating of terminals, containers, & buildings, as well as standardizing opt-in, private, & reputable DIDs, are the most important primitives for establishing that blockchains can substantially grow outside of pure speculation, and that they’re not just drawn-out Ponzi schemes or anonymity sets for financial manipulation.
There is no easy answer to all of this, and there is no purely correct position to take on absolute privacy & security. It is important to keep in mind that these are tradeoffs, and for every solution proposed in the context of blockchain-based markets, we have to acknowledge that adversaries like totalitarian states are motivated to break & subvert these systems. To that extent, many solutions will only be beneficial when offered as a free choice with competing approaches, and in some cases, only when compartmentalized & maintained by strictly accountable, capture-resistant entities like sovereign DAOs.
Just as there is a tradeoff between privacy & security, yet an inalienable minimum of individual privacy & societal security, there is a similar tradeoff between the autonomy of the individual and the eminence of the collective. This factors strongly into the specification of a global dispatch network that can outcompete the current economy: if we bake in censorship or a panopticon, there’s little possibility of revoking power from that self-reinforcing bubble. I believe that open-sourced account abstraction can be comprehensive enough that many local governments can, with legitimate, uncoerced consent, form superstructures that incubate more public goods. This is not unlike the optimistic “network state”, where the commonly accepted rule of law revolves around inalienable human rights & bottom-up consensus. It’s also necessary to afford self-regulation, like protections for minority criticisms & checks on retaliation & coercion. Some of this is “common sense”, but it is naïve to assume that we can implement such public economies without explicitly establishing what has gone wrong historically, and what can be subverted currently.
An aspect of individual autonomy that is hard to trade away is research and consumption. As I described in “Intermediate Economics”, consumer demand sometimes exceeds any society’s ability to police it, and this discrepancy often leads to crime that organizes itself, as well as a police force that is self-motivated to contrive reasons to mobilize itself further. I also describe the societal cost that firms have to internalize in order not to be held liable for optimizing within overregulated markets. This becomes most apparent in the loopholes of gray markets and cartels, but there can likewise be a dearth of explorative research and introspective auditing in more progressively oriented markets.
Part of the challenge in optimizing public markets is specifying the methodology before it’s implemented enough to disrupt special interests. Again, I refer to Bitcoin as the first successful implementation of p2p money, which managed to decentralize its blockmaking capability much faster than its adoption as a store of wealth. Central banks would have preferred to implement CBDCs on top of social credit systems and lobby for more of a panopticon ledger. In the circumstance of oligopolized supply chains, like mobile PCs, internet service providers, or the warehousing of durable goods, a public alternative needs to capture basic use cases & methods of production long before it becomes an existential threat to the Walmarts, Syscos, and Amazons of the world.
Perhaps I’m being hyperbolic. Perhaps a public market could be shared by such megacorporations for the greater economic gain. But just as Bitcoin was implemented in a way that could withstand adversarial outcomes, and Ethereum had to hard-fork through The DAO hack, I think it’s safe to assume that an optimal public market needs to find both product-market fit & an unbreakable architecture. This is the collective autonomy that we might need, even if a bandwagon of contrarian arguments arises. Just as ChatGPT and Stable Diffusion have shown that new technologies can be subverted by incumbents for a purportedly good reason, it’s probable that future innovations will be targeted. So we should be deliberate in designing them, as individuals and collectives, before they’re noticeable.
One way to approach this is by rewarding practically free & healthy human experiences first. We’ve already seen move-to-earn Ponzi schemes, but imagine for a second that one was just a reputation-generating cross between BeReal & a pedometer. There’s no buy-in and no need for future buyers, but we show with some record that a sustainable activity can be rewarded without a third party (a minimal sketch follows below). Then, add a combination of geocaching and POAPs. Have we really explored the virality of Pokémon Go on-chain? Has AR gaming been optimized for in-person socialization? Maybe another feature is carpooling, equipment lending, & group gigs. Sure, it may sound cringe, but perhaps this is the wisest beginning to benefiting in many ways from a virtually free, scalable coordination mechanism.
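As a minimal sketch of “rewarded without a third party” (hypothetical, with key distribution and sensor spoofing out of scope), a device could simply sign its own daily record, and anyone holding its public key could verify it:

```python
# A hypothetical sketch: the user's device signs its own daily step count,
# and anyone holding the device's public key can verify the attestation.
# Uses PyNaCl (pip install pynacl); a real design would anchor commitments
# on-chain and resist spoofed sensors.
import json
import time
from nacl.signing import SigningKey

device_key = SigningKey.generate()        # stays on the user's device

def attest_steps(steps: int) -> bytes:
    record = json.dumps({"steps": steps, "day": int(time.time()) // 86400})
    return device_key.sign(record.encode())

def verify(attestation: bytes) -> dict:
    # Verification needs only the public key, never an intermediary.
    message = device_key.verify_key.verify(attestation)
    return json.loads(message)

print(verify(attest_steps(8_500)))        # {'steps': 8500, 'day': ...}
```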
And it doesn’t have to be physical (at first). Maybe the best beginning is a social app that incentivizes pair-programming modules of itself in exchange for sweat equity. Maybe there’s a good reason to list many categories of remote SWE gigs for many platforms at once. Maybe this should be gamified for the degens. Maybe a megacorporation should release this to demonstrate their public value in internalizing many elements of a monopolized market.
Whatever the future is, I will be looking forward to future ML innovations, future applications on scaled-up blockchain networks, and a more captivating global game of cat & mouse between many firms and agents. We may not experience limitless fusion power, Skynet, or immortality in the next five years. Regardless, we’re free to explore & assemble experiments for societal gain with the components and intellectual capital we already have.