AI is going to kill us all. Ethereum can’t multitask meme season and public goods funding. Bitcoin is congested by BRC-20 traffic. There’s an infestation of rugpulls and pump-and-dump schemes.
There are protests in France. There’s a growing teacher shortage. There are mass murders in the US. Bakhmut has become a meatgrinder. Several banks have failed, and possibly 700+ regional banks have significant unrealized losses. Reserve currencies are being challenged on preeminence and creditworthiness. Child labor laws in developed economies are regressing. Countries are exposed to record high temperatures, and certain biomes are losing water (and downstream moderators like vegetative cover). TL;DR, we’re not doing so well.
Do you think this is enough of a pivotal moment to be a Seldon crisis? I think it’s worth consideration. I think that our present circumstances call for initiative to mitigate certain issues before they cascade out of control. Whether these issues are inevitable is up for debate, but for precaution’s sake, let’s assume that our present choices can be objectively prudent and positive-sum at the same time.
Between the challenges of AI, crypto, and the real world, I’m confident that there is an immediate path that mitigates the worst issues with minimal cost, and in this post, I hope to articulate how that is possible.
There are three mindsets in AI, crypto, and civics that hold us back. None of them is objectively sound, and none is oriented toward productive ends, but because each contains some separately valid ideas, we have to address them.
Doomers, like Eliezer in the Bankless episode above, are convinced that we will go extinct or that catastrophic failure and collapse are inevitable. What is the prevailing intuition? Resources are concentrated in a few apparatuses, like $100 billion for OpenAI. There may be more concern about competitive interest than awareness of fat-tail risks. There are plenty of examples of concerning, arguably reckless behavior. AI might escape its sandbox and evolve beyond our ability to observe it or steer it away from destructive behavior. It may just be inherently misaligned, and misbehave on principle whenever we fail to articulate boundaries.
Is all of AGI research that paradoxically sloppy? No, not really, and some researchers have gone to painstaking lengths to explain how to design explainable AI that doesn’t unpredictably misbehave. Are smart contracts paradoxically sloppy on Ethereum? Maybe in some cases we find that security is incomplete, which becomes a black swan disaster later on. However, one does not necessarily describe all secure contracts as universally insecure on the conjecture that they might be exploited in the future.
The critical failure of doomerism is that it is logically unsound; moreover, it is so quickly challenged on soundness that its communities become insular and heavily censored. There’s a sense of fatalism and nihilism, which distracts from the productive intent of solving problems. It’s a self-reinforcing echo chamber that grows more extreme as the dialogue progresses without complete critical thinking. In short, it leads to public figures like Eliezer suggesting that we engage in violent resistance to AI research, to the extent of physically attacking datacenters in uncooperative countries, regardless of the existential crisis of sovereignty.
If doomerism is fatalistic because it predicts the worst, most demoralizing outcome, then denialism is fatalistic because it predicts nothing due to previous demoralization. This is a lot easier to unpack for bleeding-edge technology. When certain protocols or applications release, a lazy skeptic will inevitably claim from an armchair that they are overhyped or critically flawed. This cuts both ways: if a speculative user of a dubious product perceives a criticism as noncredible, they might engage in denialism, believe the hype, and only later discover some critical flaw in what they’re using or buying.
This easily extends beyond crypto (which contains many maximalists that occasionally are denialists of technical debt) into civics (which contains many tribalists that are denialists of political debt). Denialism is the uncorroborated defense mechanism against guile, and if one compares denialism with doomerism, there is some meta-gamble. If someone shorts a bank stock and the bank fails, then maybe they can pocket more profit with less obligation. If someone FUDs the enterprise they’re not a part of, maybe they can buy enough time to adapt, or in the case of failure, gain their own legitimacy.
Practically speaking, this denialism is kind of predictable, and perhaps it is a necessary form of adverse condition for worthwhile enterprises to overcome. After all, we use healthy skepticism to critically consider untested hypotheses, challenge our own assumptions, and support a potential dialectic. However, denialism can be a contagious form of demoralization, which is unproductive.
Humans are really, ironically good at deception. Addicts are very good at self-deception, and public figures are very good at capitalizing on self-deceived audiences. But that’s not really the pernicious problem. Unfortunately, there are a lot of humans that succumb to the dark triad, and worse still, are susceptible to self-sabotage and the sabotage of others. Machiavellianism, in particular, is one of the critical failures of human society. It is a personality associated with callousness, manipulativeness, and indifference to morality. Worse still, effective dark triad individuals are hard to keep accountable once they capture enough power, and they are often incapable of challenging themselves to serve an enterprise without exploiting others to assume most of the responsibility.
Additionally, a lot of people get burnt out by such behavior and defect from captured institutions. A common saying with respect to the U.S. presidency is that only individuals with these traits choose to run and get elected. The same intuition is common in corporate ladders and other government positions. This lack of productivity can be quite an obstacle, especially in economic frameworks that depend on rational market participants. For example, the shareholder economy of Jack Welch may have served a certain class of capitalist, but it has impeded the self-sustaining benefits of corporations that retain as much local employment and logistics as possible, which support a consumer base, a tax base, and socioeconomic stability.
On top of all of this, Machiavellianism can be a multiplayer activity. If any subset of individuals collude to deceive everyone else, they can siphon away a lot of resources. Consider, for example, any organization where the insiders pack their schedule with meetings to justify their salary, instead of the more based group of individuals that just ship. This, too, is a major impediment to large-scale government, which can suffer death-by-committee or death-by-red-tape, due to the breadth of the governed.
The key takeaway is that there are individuals that will do everything for the appearance of power or victory without achieving it for themselves, let alone for those they take advantage of. Corporate lobbying, for example, might be such a driver of societal costs that it becomes a Malthusian catastrophe our successors will have to reckon with. DAOs and memecoins might be slow rugpulls subtle enough not to attract the attention of authorities or outright excommunication, but they can still kill engagement and critical feedback, and burn unnecessary funds on overestimated community sentiment. The same goes for voting participation in democratic governments and workplace participation in sectors that are demoralizing or subsumed by a tragedy of the commons.
The point I want to drive home concerning all three mindsets is that each might be just valid enough to be this prolific, and there may be an unaddressed, systemic issue that allows for more capital disruption than necessary.
The first step is embracing adversarialism from now on. Wait, wait, hear me out. Internet-based commerce didn’t explode during the dotcom bubble, it took off when TLS became commonplace. Cryptocurrencies didn’t explode when there were only a few risky centralized exchanges; it really took off when DeFi became the hype. AI didn’t explode when transformers or GPT-2 released, it took off when StableDiffusion released to disrupt text-to-image inference, when ChatGPT compelled the development of opensource competitors like OpenAssistant and LLaMa, and when LLaMa’s leak compelled the release of many datasets and models that can now be used on low-grade consumer hardware.
Moloch is a threat that can always harm us. When there is any scarce, rivalrous capital, competitive drive takes over the system. Where there is bleeding-edge tech, network effects, and other asymmetric forms of capital, Moloch will be there. The worst outcome, realized frequently, is that newcomers say the right shibboleths, use the most effective salesmanship, and leverage the discrepancy between the logical validity of what they’re saying and the soundness of the claim that it is for the benefit of others. There is a plethora of actors that will dictate rules and lines of conjecture without any of the future integrity to be held accountable for those errors. It’s a well-known secret in politics that a malicious actor can often use forgetfulness and ostensible ignorance as damage control for indefensible or unethical conduct. This is also the saving grace of those with good intentions but erroneous judgement, to the same negative-sum effect.
Moloch also rears its head in opposition to this sort of social engineering. Many times, subgroups with good intentions attempt to fork a compromised platform or mission they’re a part of, but fail in the longterm due to bargaining and equipment costs. With information security and game theory, there is always a way to temporarily defeat Moloch: guaranteeing the right to defect and issue votes of no confidence. What is an echo chamber, if not a system that inherently has lost the ability for honest criticism? While defecting, ousting, or forking has its own issues, not having those available is much worse.
The next step, built on adversarialism, is to enact a foundational censorship-resistant forum for discourse. If any project solicits not just monetary capital, but also other capital like borrowed credibility or volunteered ideas, time, and effort, you should disregard it if it cannot be held accountable in public or if there is a possibility the project team can retaliate against critics. If any public figure solicits investment, especially in the form of an ICO or a non-enforceable quid pro quo, treat that solicitation as a very questionable pre-seed investment at best, and a rugpull at worst.
The third step is formalizing conduct that is censorship-resistant and immutable. In other words, there shouldn’t be any ambiguity that can be used for gaslighting or other forms of social engineering. In the case of doomerism or denialism, the objectively best counterfactual is some proof that doom or denial is mistaken. Optimism shouldn’t be an expensive idea to distribute, and I believe a lot of the mistakes made in crypto, but also AI and civics, emanate from the doubt and demoralization that we unknowingly bake into our collective predictive ability.
The fourth step is deploying impartial agents in every possible way and upgrading all architectures toward trust assumptions approaching 1-of-n. Ideally, we all gain some degree of freedom and security when more market participants in any sector are either immune to social engineering or have an incentive for “benevolent defection”. In most democratic societies, there is an expectation of whistleblowing and civil disobedience to keep the ecosystem adaptable. At this moment, that includes crypto, labor strikes, and opensource AI development. There are often two outcomes for any group activity: either someone calls out the credibility of the activity and the group addresses it, or the credibility goes unaddressed until the bubble pops and becomes a postmortem analysis. With low stakes, this dichotomy leads to trial-and-error learning; with higher stakes, it leads to a growing necessity for the “steely-eyed missile man” vibe. In a Seldon Crisis, that vibe is key.
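To make the 1-of-n intuition concrete, here is a toy sketch (names and structure are illustrative, not any real protocol): a claim is accepted only if every watcher endorses it, so a single honest watcher is enough to reject fraud, no matter how many other watchers are corrupt.

```python
# Toy illustration of a 1-of-n ("anytrust") assumption: a claim is accepted
# only if it survives every watcher, so one honest watcher suffices to
# reject a fraudulent claim, regardless of how many watchers are corrupt.

def accept_claim(claim_is_valid: bool, watchers) -> bool:
    """Each watcher returns True if it endorses the claim."""
    return all(watcher(claim_is_valid) for watcher in watchers)

def honest(claim_is_valid: bool) -> bool:
    return claim_is_valid  # endorses only what is actually valid

def corrupt(claim_is_valid: bool) -> bool:
    return True  # rubber-stamps everything

# One honest watcher among four corrupt ones still blocks a fraudulent claim.
watchers = [corrupt, corrupt, honest, corrupt, corrupt]
print(accept_claim(True, watchers))   # valid claims still pass
print(accept_claim(False, watchers))  # the lone honest watcher catches fraud
```

The design choice mirrors optimistic systems: safety degrades only when *every* watcher is corrupt, whereas a majority-vote scheme fails as soon as half are.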
Therefore, the fifth step is aggressively finding the opposite vibe and checking it, while protecting the abstraction of a sound, methodical foundation. Everyone should be educated about the playbook of protest infiltration (see also COINTELPRO). This applies to corporate espionage and the undermining of public organizations as well, and there is an imperative for good actors to moderate the open forum of corporations and partnerships that serve the public good. This is a lot easier to do when honest critics have a credible network of trust and high coordination across social distance.
Step by step, the way to get out of a Seldon crisis is to methodically build up a stable, competitive environment where small irrevocable victories are prioritized over murkier, fragile enterprises with critically flawed assumptions. There should be as little surface as possible for any bad-faith mindset to subvert, demoralize, or drain the entire environment, but as much opportunity as possible for imperfect ideas to be refined, then amplified. This only works in the longterm if there’s solid ground for adversaries to recognize first principles and universally shared self-interests, and keep their conflicts as superficial as reasonably possible.
Why am I describing steps in this way? Well, let’s examine the opposition to recent trends:
What about AI?
What about civics? Well, there’s no shortage of legislative bills that constrict voting or permit the overturn of voting, no shortage of corporate policies that bust unionization or lobby for anti-competitive regulations. One of the most understated truths of contemporary society: the worst nightmare of megalomaniacs, oligarchs, and hegemons is not the breakdown of society or anarchy; it’s the outcome where everyone is productive, cooperative, and thinking critically. When a given public platform, like effective altruism or AI “FOOM” theory, states unequivocally that something should be kept from the public and only be seen or controlled behind closed doors, that’s a red flag for manipulative behavior.
So it’s no secret that when we explore crypto, AI, or civic reform, there’s a lot of hope. There’s an anticipation that we might all figure new things out without any paternalistic intervention, that we all can freely self-govern without any overlord or watchman, that we all can produce goods and services that others value without permission. This is the vibe that informs us of how to avoid historically terrible or potentially catastrophic outcomes, and it is an everpresent blueprint for the irrevocable freedom and security we can set in motion.
The hope inherent in AI is that it creates impartial, commodified production of goods and services. The hope inherent in crypto is that it creates impartial, uncoerced allocation of scarce resources. One of the immediate trends in the 2020s is the acceleration of adversarial coordination. With the advent of machine learning “agents” and the accelerating increase in LLM context length, the public is getting close to at-cost, state-of-the-art public services like portfolio management, legal counsel, and other uncannily specialized roles. On the other hand, there is increasing public scrutiny of unethical private services like tax-exempt church portfolio management, misappropriated political action funds, and widespread administrative glut.
In the beginning of 2023, I wrote down my thoughts about AI and the beginning signs of an exponential curve preceding The Singularity™.
I’m not concerned with the categories and the broader prediction, but I think at least one part was mistaken:
Right now, I’m writing about an “edition” of an amorphous century-old meme spurred on by a recent chatbot. In a year, someone might write about kneejerk legislation or a tulip mania concerning a specific AI-dependent product. Within 5-10 years, there will be a realized reckoning around the sole proprietorship economy, present forms of government, and personal autonomy/consumption.
This reckoning is quickening. There are already labor strikes around AI-generated art, at the same time that the public is becoming capable of text-to-video generation.
There’s a push in the United States to “kill” crypto, just as the regional banking system is collapsing, and the federal government chooses between creditworthiness and a future where taxation mostly goes to servicing runaway fiscal debt. People are waking up to the idea that shareholder economics, ESG activism, and corporate leadership are probably overpriced with a captive public as the buyer of last resort. There’s an accelerating debate around all types of human employment, who is immune, and what the plan is in the face of massive socioeconomic displacement.
To me, the most regenerative intervention is leaning into the AI-based discovery of novel natural science. For example, we probably will benefit tremendously from the rise of protein simulators in tandem with public distribution of gene-editing tools. Likewise, investment in simple, bootstrap-friendly impact like solar desalination plants (and other tech), low-tech power storage, and vertical farming. Why invest in natural science? Because we’re debating the stabilization of a changing society on the undercurrent of a bread-and-circus made of 90% circus and 10% bread. When people go hungry, they riot, and there’s a cascade of collateral damage. The least we can do is find any post-scarcity stopgap that buys more time. Leaning into ad-dominated entertainment, instead, is a fatal error.
The other way to look at exponential curves that can be bent is the nature of maneuvering things in free fall. In September of 2022, we altered an asteroid’s free fall by slamming a spacecraft into it. An analogous mindset might apply here on Earth. We might anticipate further polarization of news media by leveraging the explosion of LLM context length to better distill news directly from primary sources like municipal minutes and social networks. We might commit to the decay of Keynesian debt by leaning into futarchy and reputational debt. We might leverage credibly neutral certification of public impact as a substrate for creditworthiness and more informed-risk venture capital. I’ve also written about the critical opportunity of credible dispatcher infrastructure.
With enough adoption of these public goods, we can circumvent some black swan risk by promoting trends like self-sovereign security mechanisms, regional autonomy, and more aggressive incentives for reform. Out of the 89,000 local governments in the United States, how many could be airdropped fungible tokens for addressing a nationwide housing shortage, demonstrating self-sufficient, renewable energy, or maintaining a local dispatcher for a general gig economy? What about every local government in the entire globe? Why not counteract the unsustainable financial policies of modern monetary theory, market capture, and bubbles like real estate, by enacting a rivalrous system for local governments to “constructively defect”?
This can also be approached with further granularity. On an individual level, what habits are we succumbing to, what consumerism are we increasingly avoiding, and what kind of habitus can solve a Seldon crisis? Which brings me to the current state of generative AI. Since 2022, the discussion has often been stuck in the realm of creative indulgence, such as NLP and text-to-image (now rapidly improving text-to-video). In 2023, we’ve experienced the advent of generative agents. With software like Auto-GPT and LLMs like Wizard-Mega, MPT-Storywriter, and other performant options, we’ve reached the stage where humans and computers have the agency to coordinate and learn to execute very elaborate and complex economic processes.
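The agent pattern behind tools like Auto-GPT reduces to a plan/act/observe loop. Here is a minimal sketch of that loop; the “model” is a deterministic stand-in for an LLM, and every name here (`stub_model`, `run_agent`, the tool dictionary) is illustrative rather than any real library’s API.

```python
# Minimal sketch of the plan/act/observe loop behind Auto-GPT-style agents.
# The "model" picks the next action from the goal and the history so far;
# each tool's observation is fed back into the history for the next step.

from typing import Callable

def stub_model(goal: str, history: list) -> str:
    """Stand-in for an LLM: walks a fixed plan, then signals completion."""
    plan = ["search", "summarize", "done"]
    return plan[min(len(history), len(plan) - 1)]

def run_agent(goal: str, tools: dict, model: Callable, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):
        action = model(goal, history)
        if action == "done":
            break
        observation = tools[action](goal)      # execute the chosen tool
        history.append((action, observation))  # feed the result back in
    return history

tools = {
    "search": lambda goal: f"results for {goal!r}",
    "summarize": lambda goal: f"summary of {goal!r}",
}
trace = run_agent("local housing data", tools, stub_model)
```

A real agent swaps `stub_model` for an LLM call and the lambdas for side-effectful tools, but the economic point stands either way: the loop, not the model, is what lets humans and computers chain elaborate processes together.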
This approaches a very compelling first principle: memes are cheap to reproduce. If we consider the current Seldon crisis to be a sort of economy-killing “adversarial malaise”, one of the immediate questions of counteracting this malaise is what we spend in resources to uplift and motivate cooperative, galvanized behavior. I think that the conversation around AI as a windfall is very quickly accelerating into a discussion of the usecases that can make the currently-employed more productive, and focus superhuman tech on vacant sectors in need of foundational work for humans to operate upon.
I think we should go much further in our present designs to really spread out computational capital in the form of risky investment. I wrote about two lines of thought experiment: the “Digital Victory Garden” and “Pax Impudentia”, both of which accept a pretty substantial risk tolerance up front in order to establish much more control over risky momentum later. And both are, most of all, inclusive. I’ve elaborated further on the victory garden elsewhere, because it is not only inclusive of maximally investable human effort, but its outcomes also produce many second-order benefits, all of which can lead to a colossal increase in global monetary velocity, distributed computational resources, and a self-distributing network for further public goods.
Ultimately, this Seldon crisis is a tug of war over the introduction of technology on the scale of the Gutenberg press. A lot of people are gaining access to a supercondensed repository of knowledge that just happens to be good at calculating other things as well. In a recent U.S. Senate hearing, Sam Altman proposed regulation and licensing of "development and release of AI models above a threshold of capabilities", while avoiding the admission that opensource models, especially agents, are gaining capabilities well beyond any threshold that would justify this regulation. At the same time, Sam also mentioned the disruption to mass employment while being deeply involved with a project that attempts to scan people’s eyeballs and distribute questionably sybil-resistant UBI.
There are five key elements needed for an effective coup: control of the media, control of the economy, capture of administrative targets (which he lists out) for which you need a fourth element: loyalty of the military. We would have to shut down the airports, air traffic control, and the train stations. Curfews would be put in place and martial law declared. And I haven’t even mentioned the police. It would take tens of thousands of unquestioningly loyal servicemen, and even in my heyday I could never command that. Which brings me to the fifth element: legitimacy.
We’re quickly approaching a stage of global civilization that might possess self-regulating media, self-regulating economics, infallible administrative software, and a global public that can determine its own legitimacy. The steps needed to get to that stage depend on our distributing the legitimacy we can afford in as credibly neutral a method as possible. If there are any impending economic collapses or authoritarian regimes, subsistence-centrism and an aggressively free flow of information will be the lowest-cost solution at scale. This includes opensource sybil-resistance that is not monopolistic, like webs of trust, but it also includes measured risk, like decentralized undercollateralized lending. This goes hand in hand with the rivalrously rewarded, regional abstraction mentioned above.
One consideration that I haven’t addressed is the concrete abundance we already have. We already have a globally aggressive pursuit of cheap, sustainable energy. We already have an earnest exploration of how to automatically scale up cheap, sustainable energy. We already have a convergence between opensource machine learning that capitalizes on thorough output and machine learning that capitalizes on frugal output. What I’ve observed in the public development of agentized wrappers like AutoGPT, and in the demand for LLM-based exploration of other domains, leads me to believe that what we humans coordinate around in the next six months of the digital economy will have deep implications for the surplus of wealth that can be reinvested into the physical economy.
If only it were so simple. There’s a lot of work that needs to be done immediately, and a lot of uneasiness that needs to be dispelled immediately. I also think that a lot more introspection needs to be spent on how we can succeed as a community by choosing the least demanding steps possible for others. I’m still very eager to see more of the accelerating release of opensource software 2.0 in the coming days and weeks, as well as very early, bleeding-edge hardware.
Maybe this isn’t a fullblown Seldon crisis. Perhaps it’s just a warning shot to the vibes. What we do with our immediate time will be clear in retrospect, but it may just be more prudent to commit to the less-known outcomes that our contemporary pursuits approach.
TL;DR - Circumstances may seem dire, but it’s worth considering what we’re committed to, and the mindsets we’re embracing, while there’s still some opportunity to organize around a common ground, and with a methodical, adaptable approach. We’re also in the most exciting, accelerating zeitgeist of technological innovation, so it’s also worth considering what may immediately change and what may get displaced before it’s clear where everyone should pivot. Either way, the broad tension should be reason to be alert and aware of the inflection point we might be caught in.