tl;dr - Proving and verifying costs have improved dramatically in the past 5 years. Demand for proofs has not responded. Why?
Human activity can be organized by two mutually reinforcing pursuits. The first is resolving a select number of fundamental questions towards which knowledge has naturally converged – What is the underlying structure of the physical universe? What is life? What is computation? What is consciousness? The second is the application of our discoveries to expanding the scope and intensity of human affairs, otherwise summarized by the catalog of useful inventions.
By its very directive, however, invention requires considering not just form but also function in the context of civilization. Physics has enabled mechanical mastery, exercised to outsource effort. From biology, we have refined breeding strategies and fashioned tools for genetic engineering. Mapping the Turing machine’s limitations via creative destruction has led to digitally integrated life. And still ahead is a future in which the mysteries of consciousness are unlocked.
Recently, one historically stagnant question in this family, born of Euclid’s geometric axioms, has exhibited exciting movement: What is a proof?
A Brief History of Proofs. For most of history, the notion of “proof” was reserved for a claim’s formal, deductive derivation from agreed-upon axioms. As goes the early but essential admonition in one’s mathematical education, “Be careful to write out your steps and explain why they are justified. Only then will you receive full marks.” Eventually, maturity in propositional logic led proofs themselves to be treated as mathematical objects rather than mere justifications for true statements. By the early 20th century, the conceptualization of proofs had transformed again, this time into programs that construct sequences of witnesses for sequences of deductions [1]. This view of proofs as computational objects, made rigorous first through constructive logic and later by type theory, is expressed today through methods of formal verification (i.e., computer-assisted proving) for mathematical statements.
However, proofs had still not escaped the deductive paradigm of propositional logic. To make the nature of this constraint apparent, consider first the opening statement in a court of law: a chain of inferences originating from a set of stated assumptions presented to a jury for review. If the jury accepts the assumptions along with each link in the chain, then it also accepts the resulting claim of incrimination or absolution. Otherwise, the jury rejects. However, a jury may also be convinced of a claim through the alternative and (when implemented correctly) more efficient method of cross-examination. Here, a witness allegedly attesting to a claim is interrogated by a sequence of queries. Any inconsistency between responses can be spotted by a sufficiently astute jury, undermining the integrity of the witness and triggering rejection of the claim she attests to.
This deep insight by Goldwasser, Micali, and Rackoff in 1985 – that while proofs can be deterministic logical derivations they can also be interactive and probabilistically checked – led to the formalization of the proof as an interrogation of a prover claiming a statement is true by a verifier determined to catch any lie. More importantly, however, it initiated a path towards building proof systems that, like a good cross-examination, are much more cost-effective when well-architected.
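The shift from deterministic derivations to probabilistically checked claims can be illustrated with a classic example not drawn from the text above: Freivalds’ algorithm, in which a verifier checks a claimed matrix product by issuing random challenges instead of redoing the work. A minimal sketch:

```python
import random

def freivalds(A, B, C, rounds=30):
    """Probabilistically check the claim A @ B == C.

    Each round costs O(n^2) versus O(n^3) for recomputing the product;
    a false claim survives any single round with probability at most 1/2,
    so 30 rounds drive the false-accept rate below one in a billion."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]  # random challenge
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught an inconsistency: reject
    return True  # accept with high confidence

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]  # the true product
C_bad = [[19, 22], [43, 51]]   # one corrupted entry
```

Like a cross-examination, the verifier never re-derives the claim; it merely probes for inconsistency under random questioning.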
Lay of the Land. Venture firms have acknowledged the historic arrival of practical proof systems by directing billions towards proof-related business. These businesses serve as infrastructure for proofs in the context of blockchain networks, capturing value through some combination of off-chain proof generation, on-chain proof verification, and developer tooling.
Companies offering proving-as-a-service sell raw compute resources for off-chain proof generation. This is often paired with specialized virtual machines for writing verifiable programs in high-level languages and APIs to interact with the proofs generated on company servers. Advantage accrues through research and engineering improvements that lower costs per proof. Alternatively, decentralized proving networks appeal to preferences for liveness and censorship resistance. Though mechanism design remains an open question, the introduction of proof markets that efficiently match orders with provers amplifies downward pressure on costs by decreasing supply-side friction.
Complementing proof generation, various businesses are also concerned with reducing on-chain costs of proof verification. Proof aggregation services, for example, make use of batching techniques to verify many proofs at once, thus dividing verification costs per proof proportionally to batch size. Aggregation can also be performed on proofs of diverse types via recursion. That is, given a set of proofs generated by different systems, one can generate new proofs of a single type attesting to the validity of the initial set. The final batched proof is then computed as usual for on-chain verification.
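The economics of batching can be sketched in a line. The gas figures below are illustrative placeholders, not measured values; the point is only the shape of the curve:

```python
def amortized_cost(batch_size, aggregate_verify_gas=300_000, per_proof_overhead_gas=5_000):
    """Illustrative model: one aggregated on-chain verification splits a
    fixed cost across the batch, plus a small per-proof overhead.
    All gas numbers are assumptions, not benchmarks."""
    return aggregate_verify_gas / batch_size + per_proof_overhead_gas
```

Per-proof cost falls toward the overhead floor as batch size grows, which is exactly why aggregation services want the deepest funnel they can find.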
Verification layers exploit this “heterogeneous aggregation” to enable affordable on-chain verification of diverse proof types [2,3]. By bootstrapping secure decentralized verifier networks using restaked Ether, they increase throughput of proof verification by an order of magnitude. Work is done locally by operators who each sign the same commitment to a batch of proofs [4]. These signatures are then aggregated and sent to Ethereum for verification. As a final verdict, proof aggregation over longer time intervals can be used to either confirm or override operator signatures. Taking a long-term view, verification layers offer easy, unopinionated integration of new proof systems, adapting in real-time to research progress rather than betting on individual techniques as Ethereum does through dedicated EVM precompiles or proving-as-a-service companies do by depending on specific proof systems.
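The commitment step can be sketched in a few lines: operators sign a single Merkle root over the batch rather than each proof individually. This is an illustrative stand-in, not any production scheme (real deployments use domain-separated hashing and BLS signatures, which we simulate away here):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit a batch of serialized proofs to a single 32-byte root.
    Operators then sign this one root instead of each proof."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

batch = [b"proof-0", b"proof-1", b"proof-2"]
root = merkle_root(batch)
```

However many proofs the batch contains, the object each operator signs stays a constant 32 bytes, which is what makes signature aggregation over large batches cheap.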
As necessary lubricant, many companies also build developer tools such as interfaces for verifying on-chain data provenance and domain specific languages for verifiable program executions. Nevertheless, all these infrastructural components – proving services, verifying services, and developer tools alike – can only demonstrate the full power of their innovations in context of sufficiently high proof demand.
Ambivalent Sources. While candidate sources of proof demand do exist, they will be hard pressed to produce the scale required to realize the collective dream of proof ubiquity.
Now a landscape fixture, validity rollups currently serve as the primary source of proof demand and are widely considered the anchor for infrastructure’s future. Here, batches of transactions that take place on a “rollup chain” are executed off-chain by nodes. Validity proofs are then generated to demonstrate correct execution and posted to a separate base chain for verification. Successful verification triggers state transitions on the rollup chain. In this way, security is outsourced to the base chain. Because the proof is small and verified far more quickly than the transactions themselves could be executed, significant computational burden is alleviated by “rolling up” transactions into proofs.
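The flow can be caricatured as follows. Every callable below is a hypothetical stand-in; in particular, the toy “verifier” simply recomputes the result, which is precisely the work a real succinct verifier avoids:

```python
def rollup_step(state, txs, execute, prove, verify):
    """Toy validity-rollup flow: execute a batch off-chain, produce a
    proof of correct execution, and advance state only if the (cheap)
    on-chain verification accepts."""
    new_state = execute(state, txs)        # off-chain execution
    proof = prove(state, txs, new_state)   # off-chain proof generation
    if not verify(proof):
        return state                       # base chain rejects: no transition
    return new_state                       # proof verified: state transition

# toy stand-ins: state is a running total, a "proof" is the transcript,
# and "verification" naively recomputes (a real verifier would not)
execute = lambda state, txs: state + sum(txs)
prove = lambda old, txs, new: (old, txs, new)
verify = lambda p: p[2] == p[0] + sum(p[1])
```

The structure, not the arithmetic, is the point: the base chain sees only the proof, never the transactions.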
However, the immediate point must be made that demand for validity proofs is naturally bounded as a function of block time. Slower block production leads to fewer proofs per epoch. Typically, proofs for multiple consecutive blocks are bundled [5], decreasing demand even further. This places direct downward pressure on verification, either by aggregation services or verification layers. With increased proving speed comes larger rollup bundles and, in turn, constricted flow for aggregation funnels. In other words, if infrastructure continues to rely on infrastructure for proof demand – at least in the case of a fixed number of validity rollups – proof-based businesses will be forced to fight over rights to a shrinking pie. Still, none of this accounts for the steady rise of optimistic techniques that practically obviate proof generation or verification [6].
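Under this model, a single rollup’s proof demand is easy to upper-bound. The parameters below are illustrative, not drawn from any particular chain:

```python
def proofs_per_day(block_time_s, blocks_per_bundle):
    """Upper bound on validity proofs one rollup demands per day:
    one proof per bundle of consecutive blocks (illustrative model)."""
    blocks_per_day = 86_400 / block_time_s
    return blocks_per_day / blocks_per_bundle
```

With 12-second blocks bundled ten at a time, the bound is 720 proofs per day; doubling the bundle size halves it. Faster proving that enables larger bundles shrinks demand rather than growing it.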
Scaling demand must therefore depend on the proliferation of rollups [7]. The analogy of rollups and base chains to cities and nations provides predictive intuition. The survival of a city is contingent upon countless variables, known and unknown. Initial design decisions such as proximity to natural resources, access to trade routes, and governance structure are deeply relevant; so is a city’s ability to continuously cultivate an economically and culturally energetic community. But good decisions often fall to unforeseen events – natural disaster, conflict, market collapse. One shouldn’t be surprised that city size and productivity respect the Pareto principle. Many cities have tried; few have survived; even fewer have thrived. In the long run, the same will be true of rollups. But even if enough rollups spawn to feed a persisting proof infrastructure, their number will asymptotically converge to some equilibrium and, consequently, so will demand for validity proofs.
This picture is complicated by efforts of prominent rollups to raise domestic proving networks, making it harder for independent proving services to capture demand for validity proofs. In-house proving networks typically comprise nodes that meet minimum staking and computation requirements. Work is then distributed to a small number of nodes via some selection mechanism, for example by taking a random subset of high-reputation participants. To capture demand, independent proving services must compete either for partnerships with rollups or for work that individual proving nodes are willing to outsource.
Other prospective sources of demand are equally ambiguous. Validity bridges engage both proof generation and verification in order to perform cross-chain state coordination. But, rather than outsourcing work, bridges collect a per-transaction fee for generating a state proof on the source chain and verifying that same proof on the destination chain. Identity use-cases are slightly more promising, but only for verification. Establishing persistent and universally portable identity is foundational to markets. However, identity presentations must be privacy preserving. As such, proving cannot be outsourced and must be done by the subject.
Reevaluating Induced Proof Demand. Faced with the same underlying problem, proof-based projects are forced to justify their efforts with the same assumption, stated or implied. In standard economic language, this assumption is one of induced demand – increase highway capacity and traffic volume will follow. However, the principle of induced demand can be dangerous when not applied judiciously.
More precisely, induced demand refers to movement (positive or negative) along a long-run price-volume curve due to changes in components of a generalized price. In the case of transport, these components include travel time, fuel costs, and the risk of accident. Similarly, the price of proof generation can be decomposed into CPU cycles, storage costs, and the risk of receiving a bad proof [8]. On the other side, the price of verification consists of on-chain transaction fees and the risk of a verifier “not catching” a bad proof [9].
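In this decomposition, the generalized price is out-of-pocket cost plus expected loss. A minimal sketch, with every input an assumption supplied by the caller:

```python
def generalized_proving_price(cpu_cost, storage_cost, bad_proof_risk, bad_proof_loss):
    """Generalized price of proof generation, mirroring the transport
    analogy: direct resource costs plus the expected loss from receiving
    a bad proof (probability times loss). All inputs are illustrative."""
    return cpu_cost + storage_cost + bad_proof_risk * bad_proof_loss
```

A symmetric expression covers verification, with transaction fees in place of compute and the soundness-failure risk in place of the bad-proof risk. Progress on any component moves the whole generalized price.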
A long-run period for proof-infrastructure businesses is one in which the combination of theoretical and engineering efforts produces a reduction in any of these endogenous price factors. The gains we have seen over the past five years in particular have been substantial on every front. Advances in coding theory, a steady transition from large to small field sizes, creative applications of the time-honored sum-check protocol, hardware optimization for proof system building blocks, better compilers for transforming program representations into polynomial constraint systems, and dedicated auditing have all cooperated to decrease the prices of proof generation and verification by orders of magnitude. Throughout this progression, goalpost-moving rhetoric has clung dearly to the hypothesis that demand will scale if only price falls further, ultimately into a singularity of technological lower bounds that forces an uncontrollable proliferation of proofs.
And yet, observation of this first long-run period indicates that proof demand is inelastic, relatively unresponsive to price changes. The demand induced by falling prices should be measurable as a horizontal shift in consecutive price-volume curves over short-run periods of fixed demand. In most cases, however, short-run volume increases on mainnet seem more correlated with experimentation or airdrop farming than genuinely organic demand.
If demand is indeed induced, then the curve of each short-run period should instead be a function of investment [10] in the previous period. Indefinitely bootstrapping demand by induction is inherently unsustainable. Eventually, organic investment must respond to organic demand.
The example of transport is again instructive. Elastic travel demand has historically been powered by the engine of commerce, which necessitated an expanding physical infrastructure to accommodate business relationships forming across increasingly disparate geographies. What then will be the engine powering organic proof demand? And, perhaps more interestingly, who will build it?
The proof-based companies of today have undoubtedly taken the first and necessary step towards a world in which proofs set a new bar for evidence in daily affairs. However, they have struggled to see proofs as more than computational objects. This myopic view has naturally led to models motivated solely by improving efficiency, driving down costs, and cashing in on economies of scale. Such approaches do not expand the scope or intensity of human affairs. They are trapped in the pursuit of knowledge but convinced they are engaged in its application, furiously learning to build better and better highways with no cars in sight. To tap the boundless creativity of commerce, a new view of proofs must be developed that understands them not just as what they are but as instruments for unlocking new kinds of economic interactions and transforming existing ones. The path to success, as always, is through purpose. Where will proofs find theirs?
By Prashanth Ramakrishna and Nirel Gershwind
Acknowledgments: We would like to thank Rohan Dave for his contributions, particularly in clarifying several relevant concepts. We would also like to thank Chris Tam, Dante Camuto, Will Robinson, Will Wolf, and Wei Dai for their gracious review and insightful comments.
Footnotes:
[1] Consider the statement A → B. Here statement A is referred to as the precondition and proposition B is referred to as the postcondition. A witness should be thought of as information attesting to the veracity of a statement. The “proofs as programs” paradigm can be summarized as follows: a proof that A → B is a program that takes as input a witness for A and constructs a witness for B.
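A minimal, informal rendering of this paradigm: below, “n is even” is witnessed by a k with n == 2k, and the implication “n is even → n + n is even” is proved by a program that transforms the witness. The example is ours, for illustration only:

```python
def proof_even_implies_double_even(n: int, k: int) -> int:
    """Given a witness k for the precondition 'n is even' (n == 2*k),
    construct a witness for the postcondition 'n + n is even',
    i.e. an m with n + n == 2*m. The program IS the proof."""
    assert n == 2 * k, "k must witness that n is even"
    return 2 * k  # since n + n == 2*(2*k), m = 2*k is the new witness
```

Running the program on a valid witness for A mechanically yields a valid witness for B, which is the whole content of the “proofs as programs” slogan.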
[2] Groth16 costs 200,000 - 250,000 gas to verify, STARKs cost >1,000,000 gas to verify, and many other proof systems are simply too expensive to be used on Ethereum.
[3] We refer to a proof’s type as the proof system used to generate it. Aggregating proofs of different types therefore amounts to aggregating proofs generated by a heterogeneous collection of proof systems.
[4] Heterogeneous proofs are accumulated by a batcher into a Merkle root. The root is signed using a BLS signature by operator-verifiers if the proofs contained in the batch are valid. These BLS signatures are then aggregated and verified on Ethereum.
[5] Provers are differentiated from batchers, which perform aggregation on multiple proofs for consecutive blocks.
[6] Optimistic rollups serve as the primary alternative to validity rollups. Here, transactions are “optimistically” assumed to have been executed correctly. If a new block should not have been validated, a “fraud proof” relating to a Merkle authentication path can be submitted within a certain time frame. The recently introduced naysayer proof generalizes this to a setting where observers challenge SNARK proofs optimistically accepted by verifiers by pointing to an invalidating location in the witness.
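The mechanical core of such a challenge, checking a Merkle authentication path against a committed root, can be sketched as follows. This is an illustrative scheme, not any particular rollup’s implementation:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(leaf, path, root):
    """Check a Merkle authentication path from `leaf` up to `root`.
    A fraud proof amounts to exhibiting a committed leaf, via such a
    path, that contradicts the optimistically accepted claim.
    `path` is a list of (sibling_hash, sibling_is_left) pairs."""
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# a two-leaf example tree (hypothetical data)
leaf_a, leaf_b = b"state-claim-a", b"state-claim-b"
root = h(h(leaf_a) + h(leaf_b))
```

The challenger’s cost is logarithmic in the committed state, which is what makes optimistic acceptance with after-the-fact challenges economical.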
[7] Some rollups have moved towards creating tech stacks, functioning instead as settlement layers for other validity rollups to launch on top (the Ethereum playbook!). As a result, rollups are beginning to create value through proving-as-a-service, and the rollups building atop them become the actual source of demand.
[8] A bad proof refers to a verifying proof for an invalid witness or a proof for a valid witness that does not verify.
[9] The theoretical soundness guarantees of a proof system should mean the probability of such an event is negligible. However, implementation may lead to non-negligible soundness error in practice.
[10] As it affects price…