Special thanks to Uma, John, Barnabe, Santiago, Toghrul, and Yuki for their valuable inputs and feedback, which contributed to this piece. This piece was inspired by Figment Capital’s article on Decentralized Proving.
Until about 5 years ago, Zero-Knowledge (ZK) proofs were considered “moon math” - a technology that could not scale because of the complexity involved in generating a proof. Today, you can literally buy proofs from a proof market.
The rise of cryptographic proof-based infrastructure has catalyzed research into decentralized proof generation, with several projects exploring the design space. In this post, I examine some of these design approaches and build up towards what seems like the most likely future of ZK proofs - decentralized proof markets.
First, let’s get the basics out of the way. Why do we need Zero-Knowledge proofs at all?
ZK proofs use cryptography to prove the validity of a statement. Their integration into blockchain infrastructure helps reduce trust within these systems, replacing it with cryptographic truth. In recent years, several important use cases have come to the fore that would benefit from the succinctness and privacy provided by ZK proofs:
Proof of provenance: Differentiating between AI-generated media and real content. Example: embedding digital signatures into photos and then putting the hash on a blockchain.
ZK rollups: Proving validity of off-chain stateful transactions onchain. Example: Scroll
zkBridges: Trust-minimized communication between multiple blockchains
Contractual proofs: Preserving privacy while signing agreements with untrusted parties
zkML: Increasing the transparency of ML model training
zkGaming: Validation of user achievements and account details.
zkStorage: Accessing historical block data through a validity proof. Example: Axiom
This is obviously a non-exhaustive list. ZK-powered infrastructure is being developed at a breakneck pace. While most of these projects rely on centralized proof generation today, several have been exploring ways to decentralize the process.
But wait a minute, if ZK proofs cryptographically prove the correctness of an argument, why do we need to decentralize at all? Can’t we just verify if the proof is correct, even if it is generated by one centralized entity every single time?
Are we just running after decentralization for the sake of it?
Congratulations, you’re not alone in asking that question.
It is true that we can easily verify the correctness of a cryptographic proof. A malicious centralized prover does not threaten safety; rather, its actions affect the liveness, performance, and privacy of the protocol and its users.
The adverse scenario is easy to picture in the case of a ZK rollup. Say you’ve bridged funds from Ethereum to a ZK rollup. You do some transactions on the rollup and then want to exit back to Ethereum. A malicious centralized prover could at least temporarily censor your transactions and halt the chain by simply refusing to generate proofs. You will be stuck on the rollup with your funds until a replacement prover comes in and generates proofs.
Privacy is another crucial concern with protocol-based proving for ZK apps. Decentralized proving protocols that allow for client side proving can help preserve the privacy of user data.
To summarize, decentralized proving provides 5 major benefits:
Liveness: Multiple provers ensure that the protocol operates reliably and doesn’t face downtime if some provers are temporarily unavailable.
Censorship Resistance: Having more provers improves censorship resistance. A small prover set could refuse to prove certain types of transactions.
Proving Optimization: Having multiple provers allows protocols to parallelize the proof generation process. This is especially useful in the case of ZK rollups where, currently, proof times > block times, meaning that sequential proving is not a practical option. Depending on the design of the prover network*, a larger prover set can also strengthen market pressures for operators to create faster and cheaper proofs, leading to better proving infra overall. (*As we will see later, this is not always the case; e.g., in leader-election based networks, provers are not necessarily incentivized to produce faster proofs.)
Privacy: (design-specific) End-users don’t need to share data if proofs can be generated on their local devices.
Legal Diversity: It is difficult for third parties to censor proofs if the provers are based in diverse legal geographies. (Note that legal sanctions would also affect the liveness of the chain, so legal diversity helps in keeping the chain live under adverse regulatory scenarios)
(Note: Throughout this article, I will not be focusing on Legal Diversity, as I assume decentralization = geographical decentralization. I will also not be focusing on Privacy because not all use cases require/provide privacy by default (zkEVMs for example). Privacy can be achieved by client-side proving that teams like Mina, Aleo, and Penumbra are working on.)
But how do we actually decentralize the provers?
So far, there is no one right answer. Projects have been experimenting with several approaches, which can broadly be categorized into 2 segments:
Prover Networks - Siloed sets of multiple provers for each application
Prover Markets - Open market of provers where all applications can request proofs
The following sections will explore these two segments in detail.
Prover networks whitelist a set of provers. I call this approach a ‘Cathedral’ because the chosen set of provers is dedicated to the single mission of generating proofs for a particular app/protocol.
You can think of prover networks as being analogous to rollups replacing a centralized sequencer node with a set of whitelisted sequencer nodes.
There are 3 main considerations when setting up a prover network:
Leader election: How is a prover selected from the network for a specific proving task?
Incentives & Disincentives: What rewards & incentives can be provided to bootstrap a prover network and encourage more participation? How can malicious actors be punished? How can decentralization be sustained?
Hardware optimization: How do we maintain a balance where all provers get an equal opportunity to participate, yet are incentivized to optimize their prover setups?
A robust leader election mechanism provides liveness to the system. Incentives and disincentives, if set correctly, provide persistent decentralization resulting in Censorship Resistance, while hardware optimization makes the network performant.
The prover network approaches can be further classified in two categories based on the leader-election mechanism they employ:
Competitive approach: Provers race each other to generate a proof for the same data input
Collaborative approach: A turn-based approach where a (consensus) mechanism decides the leader for each proving task
In the following sections, we will analyze the various competitive prover network approaches in the industry. Competitive proving discovers the best candidate for each task, and incentivizes optimizing the hardware because everyone is racing against others.
Aleo is a ZK-based L1 where developers can run any application and generate proof of correct execution. Proofs from multiple applications constitute an L1 block.
Aleo initially planned to use Proof-of-Succinct-Work (PoSW) as its consensus mechanism. In PoSW (a subset of Proof of Necessary Work), the work that is proven is the generation of a SNARK proof. Miners compete to provide a valid solution to the PoSW puzzle by repeatedly generating SNARK proofs until they satisfy a given difficulty level provided by the protocol.
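To make this concrete, here is a minimal Python sketch of a PoSW-style mining loop. The `generate_snark` stand-in and the difficulty rule are illustrative assumptions, not Aleo’s actual circuit or parameters:

```python
import hashlib
import os

# Illustrative difficulty target: a proof is "valid work" if, interpreted
# as an integer, it falls below this threshold (lower = harder).
DIFFICULTY_TARGET = 2**248

def generate_snark(block_data: bytes, nonce: int) -> bytes:
    # Stand-in for real SNARK generation. In PoSW, producing this proof
    # is itself the expensive work that miners grind on.
    return hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()

def mine_posw(block_data: bytes) -> tuple[int, bytes]:
    """Repeatedly generate proofs until one satisfies the difficulty level."""
    nonce = 0
    while True:
        proof = generate_snark(block_data, nonce)
        if int.from_bytes(proof, "big") < DIFFICULTY_TARGET:
            return nonce, proof  # a valid PoSW solution
        nonce += 1

nonce, proof = mine_posw(os.urandom(32))
print(f"valid PoSW solution found at nonce {nonce}")
```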
However, using PoSW requires grinding zkSNARKs, which is a rich design space that sophisticated provers can exploit to gain huge advantages. This was evident during Aleo’s 2nd testnet, as per the co-founder Alex Pruden: “One prover was running something special that no one else had access to, and thus they ended up dominating, which is bad for all kinds of reasons including community perception and most importantly, security of the underlying protocol.”
While PoSW is different from Bitcoin in that it accepts multiple valid solutions per block, and therefore distributes rewards to more provers instead of a “winner-take-all” dynamic, the existence of one prover who crushes everyone else disincentivizes prover participation, decreases decentralization, and keeps costs higher.
(Note that in contrast to Bitcoin, Ethereum POW wasn’t a winner-take-all dynamic, and uncle blocks were also given a portion of the rewards)
To solve this, Aleo has revised its consensus mechanism to AleoBFT: a combination of PoSW and a PoS mechanism, wherein:
Provers create proofs for a given block and earn pro rata portions of the coinbase reward (a subset of the total block reward) based on how many above-target proofs they submit (see the toy calculation after this list)
PoS-elected validators bundle these proofs to propose a block in the Aleo L1 chain and receive rewards
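A toy calculation of that pro-rata split (all numbers made up for illustration):

```python
# Hypothetical block: the coinbase reward is split pro rata among provers
# according to how many above-target proofs each one submitted.
coinbase_reward = 100.0  # in Aleo credits (illustrative)
above_target_proofs = {"prover_a": 6, "prover_b": 3, "prover_c": 1}

total = sum(above_target_proofs.values())
payouts = {p: coinbase_reward * n / total for p, n in above_target_proofs.items()}
print(payouts)  # {'prover_a': 60.0, 'prover_b': 30.0, 'prover_c': 10.0}
```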
Provers are paid in Aleo Credits. Validators stake Aleo Credits and earn more credits as a reward for proposing a block. Through the AleoBFT consensus (and competitions like ZPrize), Aleo wants to incentivize miners to develop hardware acceleration for SNARKs to commoditize these types of computations.
I believe the idea here is two-fold:
Commoditizing proof generation would attract more provers to the network and diminish any advantage a single prover could have
The ‘separation’ of Validators and Provers is somewhat similar to PBS in the sense that even if the prover is centralized, decentralized validators can choose what proofs constitute the L1 blocks (and can also generate proofs themselves if required)
Aleo’s recent testnet showed some impressive numbers as it attracted 44,000 provers from around the world. These provers were incentivized with a distribution of 3.1M Aleo credits.
This is an approach previously proposed by Polygon for its zkEVM. It relies on a competitive mechanism between the provers, called Proof-of-Efficiency (PoE).
Below is how the process of updating Polygon zkEVM’s state on L1 using PoE was proposed to work. The network consists of 2 permissionless participants: Sequencers and Aggregators.
Sequencers
A Sequencer proposes valid batches and is incentivized with the fees paid by the users. In turn, the sequencer pays the L1 transaction fees + some amount of MATIC (for the aggregators). The sequencer is profitable if:

tx fees > L1 call fee + MATIC fee
Aggregators
Aggregators are the provers in the network who race against each other to generate proofs of the batches proposed by sequencers. For a given batch, the first aggregator to submit a validity proof earns the MATIC fee paid by the Sequencer. The aggregator is profitable if:

MATIC fee > L1 call fee + server cost
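Both profitability conditions are simple inequalities; here they are as code (the fee values are illustrative placeholders):

```python
def sequencer_profitable(user_tx_fees: float, l1_call_fee: float,
                         matic_fee: float) -> bool:
    # The sequencer earns user fees and pays the L1 calldata cost plus
    # the MATIC fee that compensates the winning aggregator.
    return user_tx_fees > l1_call_fee + matic_fee

def aggregator_profitable(matic_fee: float, l1_call_fee: float,
                          server_cost: float) -> bool:
    # The aggregator earns the MATIC fee and pays the L1 proof-submission
    # cost plus its own proving-server costs.
    return matic_fee > l1_call_fee + server_cost

print(sequencer_profitable(user_tx_fees=10.0, l1_call_fee=4.0, matic_fee=3.0))  # True
print(aggregator_profitable(matic_fee=3.0, l1_call_fee=1.0, server_cost=1.5))   # True
```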
However, Polygon is planning on sunsetting this PoE proposal and will be replacing it with some other (undisclosed) mechanism for leader selection.
Why are they doing so?
Just like PoSW, the PoE approach incentivizes optimizing the proving system. But it has some significant disadvantages:
Competitive approach is centralizing: The “best always wins” model can lead to centralization by disincentivizing participation. If A is better than B, but always wins, this disincentivizes B to participate in future rounds. One solution could be rewarding multiple provers (say the first n provers), but A could simply run n instances of its own prover and generate n proofs.
It’s not the best use of resources: The “fair share” model (winning chance roughly equals relative performance) is more compatible with decentralization (e.g. Proof-of-Work) but introduces redundancy which increases operational costs. That is, even if A is better than B and both have roughly equal winning chances, they both have to expend effort to participate in generating the proof for the same thing.
These issues are solved by the Collaborative approaches to Prover Networks. Collaborative approaches elect one prover per batch (similar to how Gasper chooses a block proposer for each slot). Such an approach is also called a turn-based monopoly.
Several protocols are actively researching (or have already implemented) some variation of a turn-based model to decentralize their prover networks. Let’s dive in:
Scroll’s current mechanism uses an entity called the ‘Coordinator’ that randomly elects provers for different chunks of blocks, as well as for proof aggregation. (Once the prover network is decentralized, the coordinator will be enshrined into the validating bridge, and there will no longer be a distinct entity.)
Currently, L2 blocks in Scroll are generated, committed to base layer Ethereum, and finalized in the following sequence of steps:
The Sequencer generates a sequence of blocks and sends the execution trace to the Coordinator. It also submits the transaction data as calldata to the rollup contract on Ethereum for data availability, and the resulting state roots and commitments to the rollup contract as state.
The Coordinator randomly selects a prover to generate a validity proof for each chunk. To speed up the proof generation process, proofs for different chunks can be generated in parallel on different provers. Then, the coordinator collects the chunk proofs and dispatches a batch proving task to a randomly selected aggregator prover (sketched in code after these steps).
Finally, the Coordinator submits the aggregate proof to the rollup contract to finalize L2 blocks by verifying the aggregate proof against the state roots and transaction data commitments previously submitted to the rollup contract.
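A minimal sketch of the dispatch logic in the steps above, assuming hypothetical `prove_chunk` and `aggregate` functions (the real coordinator/prover interface is more involved):

```python
import random
from concurrent.futures import ThreadPoolExecutor

provers = ["prover_1", "prover_2", "prover_3", "prover_4"]

def prove_chunk(prover: str, chunk: str) -> str:
    # Stand-in for generating a validity proof for one chunk on a prover.
    return f"proof({chunk})@{prover}"

def aggregate(prover: str, chunk_proofs: list[str]) -> str:
    # Stand-in for aggregating chunk proofs into one batch proof.
    return f"agg[{' + '.join(chunk_proofs)}]@{prover}"

chunks = ["chunk_0", "chunk_1", "chunk_2"]

# Randomly assign each chunk to a prover, then prove all chunks in parallel.
assignments = [(random.choice(provers), chunk) for chunk in chunks]
with ThreadPoolExecutor() as pool:
    chunk_proofs = list(pool.map(lambda a: prove_chunk(*a), assignments))

# Dispatch the batch-aggregation task to a randomly selected aggregator prover.
batch_proof = aggregate(random.choice(provers), chunk_proofs)
print(batch_proof)  # this aggregate proof is what gets verified on L1
```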
Anyone can sign up to be a prover with Scroll and have a shot at being chosen by the Coordinator to either create new proofs or aggregate already generated proofs.
How are provers encouraged to join the prover network and create proofs?
The protocol could do a revenue share with the prover, where the block rewards are split between the sequencer and the prover. However, there exists an incentive imbalance in such a split. Given a choice, nodes would always choose to be a sequencer, since sequencers will always have a shot at earning higher revenue via MEV (in addition to block rewards).
In order to tackle the incentive imbalance between sequencers and provers, Scroll is currently looking at two broad approaches:
1. Prover-Sequencer Separation: This would be an auction mechanism similar to PBS on Ethereum. Multiple sequencers are elected per round, and they post bids to the proving network. The proving network chooses the best bid, forcing sequencers to compete amongst themselves and pass on the captured MEV to the provers. The provers are subject to a Sybil resistance mechanism (such as proof of stake), and for each batch, a prover is elected randomly (say via a VRF, or by using RANDAO from Ethereum). A minimal sketch of one such round appears at the end of this subsection.
An auction mechanism solves the incentive alignment for the provers. It also creates minimum overhead since it requires only a limited set of sequencers. However, PSS introduces several drawbacks:
a.) Double auction: Since sequencers are also likely to outsource the construction of the block to external builders (to maximize MEV), PSS leads to an auction within an auction leaking additional value out of the protocol.
b.) Incentives philosophy: There is also a philosophical question: who should get more of the MEV revenue among builders, sequencers, and provers? A PSS setup in a competitive environment would lead to most of the MEV flowing to the provers. In addition, there are all the same questions from PBS on Ethereum: How do we avoid off-chain agreements? Should PSS be enshrined in the protocol? If yes, what’s the optimal enshrinement? And more.
c.) Sequencer Decentralization: PSS makes it difficult to decentralize the sequencer network itself (and for the rollup to join a shared sequencer network)
d.) Buffer problem: What if proving is not profitable for a batch after execution? Provers could refuse to generate a proof if they know it will result in a loss. The prover leader election model would need to implement a fallback option. (note that a fallback option would be required in almost all prover setups)
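To make the PSS flow concrete, here is a minimal sketch of a single round (the bid values, prover set, and selection rules are all illustrative assumptions):

```python
import random

# Sequencers bid for the right to have their batch proven; competition
# pushes the MEV they capture toward the prover side.
bids = {"sequencer_a": 1.2, "sequencer_b": 1.5, "sequencer_c": 0.9}  # in ETH

# The proving network accepts the best bid...
winning_sequencer, payment = max(bids.items(), key=lambda kv: kv[1])

# ...and a prover is elected randomly from the staked set
# (stand-in for a VRF or RANDAO-based election).
staked_provers = ["prover_x", "prover_y", "prover_z"]
elected_prover = random.choice(staked_provers)

print(f"{elected_prover} proves {winning_sequencer}'s batch for {payment} ETH")
```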
2. Sequencer Elimination: This second approach identifies the downsides of PSS and eliminates the role of the Sequencer from the protocol.
The idea is that a sequencer is not “necessary” for a ZK rollup, since the prover can simply play the role of a sequencer as well. The responsibilities of the sequencer are broadly the following:
Creating batches from transactions in the mempool
Providing soft confirmations to the users that their transactions have been included
Posting the batches on the DA layer
Executing the batch of transactions and updating the state root
However, there’s a high possibility that we will see block-building outsourced to dedicated block builders. This will take care of the transaction ordering aspect of sequencers.
These external block builders will bid in an auction for the right to propose a block. The prover for the batch (elected through a turn-based mechanism) would simply choose the highest/best bid. Since the prover has to run a full node anyway, it can execute the batches, give soft confirmations to the users, and post the data on the DA layer. And as the prover, it is in any case responsible for generating proofs for that particular batch.
And this solves the incentive imbalance problem between the Sequencer and the Prover. It also solves the Buffer problem, as proving is a requirement for the ‘elected’ sequencer even if it’s not profitable (they could be slashed if they don’t provide a proof).
However, there are a few downsides:
‘Requiring’ the leader for a batch to create a proof for the batch does not incentivize the provers to optimize their proving systems for speed (provers are still incentivized to optimize for cost to increase their profits). As per their blog, Scroll has developed the fastest GPU and ASIC/FPGA accelerator for the prover. They might use it as a standardized requirement for participation in the network, allaying any concerns around one prover dominating the network.
Eliminating the sequencer role bars the rollup from joining a shared sequencer network, as in those networks, nodes are ‘dumb’ and are only expected to maintain the bare minimum hardware requirements for sequencing.
The combined entity approach is also used by Starknet, although in their case, this entity doesn’t generate proofs for its own blocks:
Starknet has proposed a chained proof protocol where prover liveness is coupled to the chain’s liveness. It is also a form of Prover-Sequencer Separation, but in the sense that the sequencer is responsible for proving a different block than the one it has ‘sequenced’, allowing it to still earn MEV rewards from its own block (a small sketch follows the list below):
Sequencer (proposer) is selected based on the staking on L1
The proposer of block n is also responsible for producing a proof of the validity of the block ‘n-k’
All k strands are then merged together using recursion
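A tiny sketch of this chained schedule (the lag k and all names are illustrative):

```python
K = 3  # lag between sequencing a block and having to prove it (illustrative)

def proposer_duties(n: int) -> dict:
    """Duties of the staker elected as proposer for block n."""
    return {
        "propose": n,                        # sequence block n (and keep its MEV)
        "prove": n - K if n >= K else None,  # prove block n-k (self-generated or bought)
        "strand": n % K,                     # which of the k parallel strands this extends
    }

for n in range(3, 7):
    print(proposer_duties(n))
# The k strands are later merged into a single proof via recursion.
```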
This approach has a couple of benefits:
Continuous proof aggregation: Just verifying one proof from each strand suffices to prove the entire strand
Proving liveness (finality) coupled with the chain liveness: Sequencer of block n is REQUIRED to provide a proof of block n-k along with the block n itself. The lag ‘k’ is determined to give this sequencer enough time to either generate a proof itself OR buy a proof externally (more on this in the next section - buying proofs externally would allow the sequencers themselves to have lower/no state - making it easier to decentralize them.)
Starknet and Scroll both seem to be converging on a similar-realm solution of using the same entity for proofs and sequencing and using a collaborative model to choose the election leaders. (And maybe even Polygon now?)
A turn-based monopoly solves the centralization problem, but it has its own tradeoffs.
A collaborative approach to leader election in prover networks:
Does not necessarily choose the best candidate for the job
Does not incentivize provers to optimize their proving setups for speed
Requires backup in case the leader doesn’t produce a proof
One possible way of solving the backup problem is having a distributed proof generation process. In distributed processes, a batch of transactions is split into multiple parts, and different provers are elected to create a validity proof for each part.
Some good examples of distributed prover network designs can be found on the Aztec forum. I will share one of these proposals here:
This proposal by Jaosef aims to set up a collaborative proving network, with two explicit goals:
Ensuring that the roles of sequencer and prover are 100% divorced to minimize the centralizing forces that can arise from this. (Prover-Sequencer Separation)
Better incentivizing stakers, and ensuring sequencer machines are not idle when not proposing
In this setup, the protocol defines a staking system used for selecting sequencers and provers. For each slot, the staker with the highest VRF over the current RANDAO value is elected to produce a canonical block. In addition, N stakers with the lowest VRF scores are chosen to be a part of the proving committee for that block.
Once the block has been proposed, the sequencer splits the required proof tree into M subtrees (where M<N). The eligible provers from the proving committee then generate proofs for these subtrees, and the sequencer then combines them to create the complete proof tree.
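A toy version of that selection and work-splitting logic might look like the following; the VRF is faked with a hash, and N, M, and the staker names are illustrative:

```python
import hashlib

N = 4  # proving-committee size (illustrative)
M = 2  # number of proof subtrees, M < N (illustrative)

stakers = ["alice", "bob", "carol", "dave", "erin", "frank"]
randao = b"current_randao_value"  # stand-in for the beacon randomness

def vrf_score(staker: str) -> int:
    # Toy stand-in for a VRF evaluated over the current RANDAO value.
    return int.from_bytes(hashlib.sha256(staker.encode() + randao).digest(), "big")

ranked = sorted(stakers, key=vrf_score)
sequencer = ranked[-1]   # highest VRF score proposes the canonical block
committee = ranked[:N]   # N lowest scores form the proving committee

# The sequencer splits the block's proof tree into M subtrees, which
# committee members prove before the sequencer combines the results.
subtrees = [f"subtree_{i}" for i in range(M)]
print(f"sequencer={sequencer}, committee={committee}, work items={subtrees}")
```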
This proposal presents two methods to ensure proof data is available and there are no withholding attacks:
The canonical block ranking algorithm could require signatures from M/N of the elected provers. This forces a sequencer to broadcast data in order for their block to become canonical.
Sequencers are incentivized to quickly disseminate data by increasing their reward based on the percentage of the proving committee that contributes to the block.
Provers selected for the proving committee who repeatedly (e.g., 3 times in a row) do not contribute to blocks can be slashed.
Such a setup improves the liveness guarantees of the system by requiring only M signatures out of the N provers in the committee. It also increases the ROI for stakers who get additional revenue opportunities even when they aren’t selected as the sequencer.
However, this proposal raises the barriers to entry into the staking system, as all stakers need to be able to generate proofs (if selected). Further, the proposal might not choose the best prover for each job and might create redundancies in generated proofs (although since the proofs are distributed, the redundancies would be much lower than in the case of PoW).
Can we do better? Can we get the best of both worlds - collaboration + competition?
Mina protocol might have the answer:
Mina Protocol has developed a marketplace for proof generation called the SNARKetplace. Block producers are chosen through a VRF based on their stake. They are responsible for creating new blocks that include:
recent transactions broadcasted on the network,
a SNARK proof of the validity of the current state of the blockchain
proofs of past transactions.
Block producers can purchase these proofs on the SNARKetplace using their block rewards.
The SNARKetplace revolves around a fixed-size buffer: a queue or “shelf” of work to do. Block producers add work to this shelf in the form of new, unproven transactions, which are then SNARKed by the “SNARK workers”.
Since block producers profit from including transactions in a block (through transaction fees and the coinbase transaction), they are responsible for offsetting the transactions by purchasing an equal number of completed SNARK work, thereby creating demand for SNARK work. For example, if a block producer wants to move five new transactions to the back of the shelf, they must first buy five already SNARKed transactions from the front of the shelf.
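Here is a minimal model of that shelf dynamic as a fixed-capacity FIFO buffer. It is a simplification: the real SNARKetplace tracks fees, proof status, and ordering per work item:

```python
from collections import deque

class Snarketplace:
    """Toy model of the fixed-size work shelf: once the shelf is full,
    adding k new unproven transactions requires buying k SNARKed ones
    off the front first."""

    def __init__(self, capacity: int):
        self.shelf = deque()      # front = oldest work, assumed SNARKed by workers
        self.capacity = capacity

    def produce_block(self, new_txs: list[str]) -> list[str]:
        purchased = []
        if len(self.shelf) + len(new_txs) > self.capacity:
            # Buy one completed SNARK work item per new transaction added.
            for _ in range(len(new_txs)):
                purchased.append(self.shelf.popleft())
        self.shelf.extend(new_txs)  # new, unproven work goes on the back
        return purchased            # work the producer paid SNARK workers for

market = Snarketplace(capacity=5)
market.produce_block(["tx1", "tx2", "tx3", "tx4", "tx5"])  # shelf fills up
print(market.produce_block(["tx6", "tx7"]))  # must buy 2 first: ['tx1', 'tx2']
```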
There is no protocol involvement in pricing SNARKs, nor are there any protocol-level rewards for SNARK workers to produce them. The incentives are purely peer-to-peer and dynamically established in a public marketplace, aka the SNARKetplace.
To prevent one prover from undercutting the market, a high minimum proving time is enforced to level the playing field. To prevent SNARKs from being stolen, when a SNARK Worker observes some new work to do, they create a transaction SNARK with a special unforgeable digital signature, called a “signature of knowledge.” The signature of knowledge contains the fee that this work is being offered for and information about the wallet address to pay out the fee to. Attempting to change the public key, as in trying to steal someone else’s SNARK work, would result in crippling the SNARK itself, making the SNARK no longer valid.
Let’s see what properties Mina’s architecture provides:
Liveness: Coupled with the liveness of the blockchain itself. Block producers are incentivized to include proofs of previous transactions
Decentralized: Anyone can join the SNARKetplace to produce proofs
Performant: To increase profits and beat the competition, provers are incentivized to create cheaper and faster proofs
Outsourcing proving to a dedicated market makes up for a lot of limitations of the collaborative or competitive approaches discussed above. It also keeps the core protocol unopinionated as the protocol doesn’t enshrine any particular approach to reward splitting between block producers and provers.
Aztec’s selected prover-mechanism proposal, called Sidecar, is also designed along a similar approach of keeping the proof sourcing process outside the protocol. However, in their case, sequencers can buy proofs from anywhere, not just from a dedicated proof market. Vertically integrated sequencers can also generate the proofs themselves.
This is also similar to the approach taken by Taiko (note that Taiko had tried a competitive approach in one of the earlier testnets, but it led to two of the provers dominating the market). Taiko now offers a PBS-style approach, where block proposers can engage in out-of-protocol agreements to get proofs for their blocks. The selected provers are required to stake TKO tokens, which get slashed if a proof isn’t submitted on time.
The purpose of a proof market is to commoditize proofs while maintaining performant proof systems for all purposes. Mina’s SNARKetplace does it for the Mina ‘cathedral’ specifically.
However, markets that create “thickness” of volume in one place are known to be better at producing satisfactory outcomes for both sides (buyers & sellers) of a transaction than the fragmented markets that exist in silos.
The strategies adopted by Aztec and Taiko are indicative of a trend towards a more open marketplace for ZK proofs: an ‘open bazaar’ that supports proof requests from a wide array of ZK applications, promoting broader accessibility and utility.
The decentralization approaches discussed above rely on a set of dedicated provers focused on generating proofs for one particular application. That approach has benefits as it allows the protocol to build a credible prover network (when enshrined) and improve the native token utility.
But bootstrapping a prover network from scratch is in no way an easy task. It’s also not the best use of resources, as multiple provers often compete to generate the same proof. Even in turn-based elections, provers have poor ROI, as their infrastructure is not required to be operational at all times. Prover networks require protocols to shell out a fixed budget for proving costs, which can be undesirable if network demand is low. Further, enshrining a particular approach into the protocol could be particularly risky if it’s not the most efficient approach.
Thus, for most ZK applications, it would make sense to outsource the proof generation process to a dedicated Proof Market (or as I like to call it, the ZK Bazaar). Open, decentralized proof markets are an inevitable extension of the modular thesis, democratizing access to proofs for any application and removing the need to set up expensive, in-house proving infrastructure.
Proof Markets are peer-to-peer (or business-to-peer) marketplaces where the commodity being traded is ZK proofs. ZK apps can put in requests for proofs for their apps, and provers can permissionlessly participate on the other side and compete to generate these proofs. The pricing of proofs is then decided by the supply and demand dynamics (and the underlying order-matching design)
While protocols relinquish control over the prover network by outsourcing proofs to an external marketplace, they can still use their native token to seed a submarket in the larger market. For example, a protocol could post proof requests attached with a certain amount of the native token, which gives them an adaptive pricing policy (if they don’t find a taker, they could consider printing more tokens).
In fact, this option would give protocols more budget flexibility. Since most protocols would have ongoing requirements for proofs, sub-markets would allow them to elastically manage the size of the network based on budget and requirements, without having to consistently pay for a fixed-size prover network.
Unlike prover networks, where all provers compete (or take turns) to prove the same task, prover markets are open-ended, and provers can bid in parallel for the many open tasks available in the market. For such a market to be viable, it needs to deliver three properties:
Persistent Decentralization: The incentives should be structured well to encourage more people to participate in the market (Permissionless for provers). Further, the pricing in the market should not price out the smaller provers
Performant: The market should be able to provide proofs within the desired time and price of the requesting application (Proof requests can expire!). It should encourage participants to optimize their proving systems to provide competitively priced proofs.
Flexible: The market should be able to serve proof requests of all kinds (Permissionless for buyers). A zkBridge connected to Ethereum may need a final proof like Groth16 that provides cheap on-chain proof verification. By contrast, a zkML model may prefer a Nova-based proving scheme that optimizes for recursive proving.
These properties together help achieve the end goal we seek from decentralizing provers in the first place: Censorship Resistance, Liveness, and Prover Optimization.
Certain applications may require that provers operate as full nodes, complicating the application’s reliance on external markets for efficient proof generation. If provers in the market, tasked with generating proofs for a protocol, lack access to the protocol's full nodes, the protocol must then provide them with a complete execution trace, resulting in inefficient bandwidth usage.
Again, this could be solved by seeding submarkets within the larger open market. Protocols could use their tokens to incentivize provers to join the submarket, and could also specify hardware benchmark requirements to join.
The design space of proof markets is extremely rich, and there are several tradeoffs to be considered while designing a market that delivers all 3 of the properties mentioned above. This report by Figment Capital gives a good overview of these considerations. The most important of these are:
Incentives and Disincentives: How are the provers awarded for their work? How are they slashed for adverse actions?
Matchmaking: What is the process of electing a prover for a task?
Custom circuits vs zkVMs: What are the constraints around the apps requesting proofs? Do devs need to compile their code before submitting a request?
Operator diversity: Is it permissionless for provers to enter the market? Are there any minimum hardware requirements?
Progressive and persistent decentralization: What can be done to maintain a low entry barrier and a decentralized network? How can the market prevent the bigger players from undercutting the market?
This is in no way an exhaustive list and several other considerations go into designing prover markets. Different choices materialize into different characteristics of the market.
Let’s take a look at some of the existing or proposed Prover Market designs and see what tradeoffs they choose and how they deliver on the desired properties.
The Nil Foundation’s market is an open-ended, decentralized prover marketplace that provides access to a distributed network of proof generators for anyone looking to leverage ZK proofs for their protocol.
The market uses an orderbook style matching system where:
Apps and protocols are the buy side of the market. They can put in requests for ZK proofs for their computations, and provide an expiration time for the request along with the cost they're willing to pay for it.
Proof generators form the sell-side and put in their ask price for these proofs. This allows them to obtain a valuation of their computational power (and reputation) in the market.
The requesters can either provide the circuit for the code themselves, or they can simply share the code written in high-level languages like C++, Rust, JavaScript, etc., and Nil’s circuit compiler, zkLLVM, transforms the code into a provable circuit. The newly generated circuits are reusable by the market, and the original author is rewarded every time the circuit is reused.
Nil’s proof market takes two factors into consideration while pricing a proof:
Proof generation cost.
Proof generation time.
This induces a competition between proof generators to provide a proof with the smallest latency or with the cheapest generation cost.
A proof can be sold and bought at a price more expensive or cheaper than its production cost. This results in a traditional market-like price structure.
Some applications might not need the proof after a certain amount of time has passed. In such cases, apps can share an expiration time along with the proof request. The existence of an expiration time would result in a rapid decline in the proof’s price.
But in cases where there’s a proof required at regular intervals (say for each block of a blockchain), the fastest proof might not be the most expensive, since the requester only requires SOME proof every time period.
Circuit customization also impacts the proof generation process. Each proof is unique thanks to its inputs and its circuit. And since the circuit is what defines a type of a proof, it results in defining a “trading pair”.
An open market for proof generation presents the opportunity to gain new market insights that may influence different parties’ behavior and supply/demand decisions on the platform. Let's look at the various tradeoffs Nil has chosen in building this market:
Incentives and Disincentives: Incentives for the provers are the prices paid by the proof requesters. There’s a slashing penalty if the prover fails to deliver the task
Matchmaking: Orderbook-style (see the code sketch at the end of this section):
A proof requester sends a request with a desired price c_r to the market.
Proof Market locks c_r tokens from the buyer's account.
Proof producers send proposals to the market with a price c_p <= c_r.
Proof Market matches the request with the proposal of the proof producer.
The proof producer generates a proof and sends it to the market.
Proof Market verifies proof and pays c_r - commission tokens to the producer.
The proof requester takes their proof and uses it.
It is a speed-based market where provers compete to make faster and cheaper proofs.
Custom circuits vs zkVMs: Custom circuits can be built for each requirement
Operator diversity: Permissionless to join, but provers need to deposit a stake. Let X be the amount of funds locked by the proof producer. They can then process requests summing to at most X/pf at once, where pf is the penalty factor.
Progressive and persistent decentralization: Distributed marketplace. If a prover grows and increases prices, smaller provers could re-enter and offer cheaper proving prices in the market
The Nil market serves all 3 desired properties of a prover market. It maintains decentralization, is flexible via zkLLVM, and is performant as it makes provers compete against each other.
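To tie the above together, here is a toy sketch of the orderbook matching flow, including the X/pf capacity rule (all names, prices, and the penalty factor are illustrative):

```python
PENALTY_FACTOR = 4  # pf (illustrative value)

producers = [
    # (name, ask price c_p, locked stake X)
    ("fast_prover", 8.0, 100.0),
    ("cheap_prover", 5.0, 40.0),
]

def match(c_r: float):
    """Match a request with desired price c_r against producer proposals.
    Only proposals with c_p <= c_r are eligible, and a producer can only
    take on requests summing to X/pf (checked here for a single request)."""
    eligible = [(name, c_p) for name, c_p, stake in producers
                if c_p <= c_r and c_r <= stake / PENALTY_FACTOR]
    # The buyer's c_r is already locked; the cheapest proposal wins, and the
    # producer is later paid c_r minus the market's commission.
    return min(eligible, key=lambda e: e[1]) if eligible else None

print(match(c_r=6.0))  # ('cheap_prover', 5.0)
```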
Gevulot is also a decentralized prover marketplace like Nil. The major difference is that whereas Nil is more like a dApp (a ‘proof’ DEX, more specifically), Gevulot is an L1 blockchain that allows users to permissionlessly deploy arbitrary provers and verifiers as on-chain programs. Blocks on Gevulot contain financial transactions and proofs, rather than smart contract state transitions.
The two types of programs on Gevulot - provers and verifiers - can either be deployed as a pair (“proof system”) or just as a standalone verifier program for validating proofs produced outside the Gevulot network. The only constraint to deployment is that the prover program must output a proof that can be verified by the verifier program for the system to be useful. Both types of programs can be written in a variety of languages such as Rust, C, C++, etc.
The network itself can be seen as having two distinct node types which together converge on network state: validators and provers
Provers complete proving workloads
Validators process transactions, verify proofs, and order these into blocks
Let’s look at an example of how a zkRollup could use Gevulot to generate proofs for its blocks:
Users on the rollup generate transactions, which the sequencer gathers and orders.
Rather than generating the proof itself, the sequencer sends the data to the proof system deployed on Gevulot (either by running a full node or being connected to an RPC provider). The sequencer also states the number of “cycles” for which it wants the proof system to run. (more on this later)
The proof request is added to the mempool and gets allocated to a small group of provers using a VRF.
The provers in the group run the program and attempt to generate the proof within the number of cycles that the sequencer node has specified.
The fastest prover in the group to generate the proof wins.
Once the proof is in the mempool, the sequencer’s full node or RPC provider can immediately verify the proof and get a deterministic guarantee of finality. If valid, the sequencer can post the proof on their settlement layer (such as Ethereum).
The winning proof is verified by a subset of validators on Gevulot until the verification threshold is reached (two-thirds of the validator network), after which it is included in a block and the provers get rewarded.
The following are the defining parameters of the Gevulot network:
Incentives and Disincentives: The fee structure of Gevulot is quite unique, as they introduce the concept of “cycles”. A cycle in Gevulot is equal to one block and functions as an objective measure of a program’s running time. In running a Gevulot program, the user decides how many cycles they want the prover program to run for and how many provers should do so simultaneously. The maximum fee for the user then is:

Maximum fees paid by user = Prover Amount * Cycle Amount * Fee Per Cycle
If the user does not pay for enough cycles, the program will not complete and the nodes will return a fail. If the user pays for excessive cycles, the nodes will return the output as soon as the program completes and the user will only pay for the cycles it took for the fastest prover to complete the proof, and the remaining amount is refunded.
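In code, the fee ceiling and refund logic described above might look like this (whether the per-cycle rate applies to each requested prover is my assumption, inferred from the maximum-fee formula):

```python
from typing import Optional

def max_fee(prover_amount: int, cycle_amount: int, fee_per_cycle: float) -> float:
    # Maximum fees paid by user = Prover Amount * Cycle Amount * Fee Per Cycle
    return prover_amount * cycle_amount * fee_per_cycle

def settle(prover_amount: int, cycles_paid: int, fee_per_cycle: float,
           fastest_cycles: Optional[int]) -> float:
    """Return what the user actually pays for a completed run."""
    if fastest_cycles is None or fastest_cycles > cycles_paid:
        # Not enough cycles paid for: the program does not complete
        # and the nodes return a fail.
        raise RuntimeError("program did not complete within the paid cycles")
    # The user pays only for the cycles the fastest prover needed;
    # the rest of the maximum fee is refunded.
    return prover_amount * fastest_cycles * fee_per_cycle

budget = max_fee(prover_amount=3, cycle_amount=100, fee_per_cycle=0.01)  # 3.0
paid = settle(prover_amount=3, cycles_paid=100, fee_per_cycle=0.01,
              fastest_cycles=40)                                         # 1.2
print(f"budget={budget}, paid={paid}, refund={budget - paid}")
```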
Matchmaking: The workload is allocated to one or more prover nodes in the active prover set using a verifiable random function (VRF) in a deterministic manner, so that the prover for a given workload can be calculated asynchronously. A prover can choose to decline a workload if they are at capacity or bandwidth constrained.

If a workload is declined by an allocated prover, any prover in the active prover set can contribute a proof, but only the first of these will be rewarded, and the reward schedule does not change. For example, say a workload is allocated to 3 provers and 1 of them declines. The reward for a single proof in that group is then up for grabs, and any prover can contribute a proof and be paid out as if they were allocated that workload. However, after 3 proofs have been generated (two via allocation and one via the open market), any subsequent proofs will not be rewarded.

If a prover does not decline but also does not produce a proof within the specified cycles while all other allocated provers do, the slot is likewise opened to the market, allowing any prover to produce a proof and claim the reward. This ensures the redundancy guarantees hold.
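A toy model of this allocation and fallback logic (the VRF is faked with a hash; the prover set, redundancy level, and reward rule are illustrative):

```python
import hashlib

ACTIVE_SET = ["p1", "p2", "p3", "p4", "p5"]
REDUNDANCY = 3  # provers allocated per workload (illustrative)

def allocate(workload_id: bytes) -> list[str]:
    # Deterministic stand-in for the VRF: anyone can recompute which
    # provers were allocated a given workload, asynchronously.
    ranked = sorted(ACTIVE_SET,
                    key=lambda p: hashlib.sha256(workload_id + p.encode()).digest())
    return ranked[:REDUNDANCY]

def rewarded(allocated: list[str], declined: set[str],
             submissions: list[str]) -> list[str]:
    """Only REDUNDANCY proofs are paid: allocated provers keep their slots,
    and each declined (or missed) slot goes to the first open-market prover
    to fill it; any later proofs earn nothing."""
    paid, open_slots = [], len(declined)
    for prover in submissions:  # in order of proof arrival
        if prover in allocated and prover not in declined:
            paid.append(prover)
        elif open_slots > 0:
            paid.append(prover)  # an open-market prover claims a freed slot
            open_slots -= 1
    return paid[:REDUNDANCY]

alloc = allocate(b"workload-42")
outsider = next(p for p in ACTIVE_SET if p not in alloc)
# The first allocated prover declines; an open-market prover takes the slot.
print(rewarded(alloc, declined={alloc[0]}, submissions=[outsider, alloc[1], alloc[2]]))
```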
Custom circuits vs zkVMs: The requesting applications can permissionlessly deploy any custom proving systems of their choice on the Gevulot network
Operator diversity: To be allocated any tasks, provers need to be part of the active prover set. Joining the active prover set is permissionless, although it requires a stake and the completion of a PoW workload to verify that the prover meets the hardware requirements. Leaving the active prover set always has a cooldown period before the stake is unlocked.

Progressive and persistent decentralization: By design, Gevulot is a decentralized blockchain with very low barriers to entry for both provers and validators. The prover and verifier programs are compiled into unikernel images, which are very lightweight operating systems designed to run only a single process at a time. This reduces the hardware requirements to participate in the network. Since the workload is allocated by a VRF, it is difficult for one prover to dominate the network.
Gevulot also prevents provers from copying completed proofs from the mempool and broadcasting them as their own, while ensuring the end-user receives proofs as soon as they are finished. The proof for each workload is encrypted using a secret key. This key is then further encrypted by the public key of the user that broadcast the workload and a temporary prover key, leading to two encrypted versions of the proof. These encrypted proofs are then broadcast to the mempool. The end-user can then immediately pick up and decrypt the proof for use. Once the cycles have been exhausted or all the proofs have been generated and broadcast, the prover then broadcasts the temporary prover key so the validators can decrypt the proof and verify.
Like the Nil market, Gevulot also serves all 3 desired properties of a prover market. It maintains decentralization, is flexible, and is performant as it makes provers for each workload compete against each other for speed.
Bonsai is a general-purpose zero-knowledge proof network that helps chains and apps scale their computations without adding any additional trust assumptions. Apps can send a request to Bonsai directly from a smart contract, allowing Bonsai to take the execution offchain. The results and the correctness of the computation can then be verified onchain using a single ZK proof. Such a stateless offchain setup is also called a ZK Coprocessor.
Bonsai combines three key ingredients to produce a unique network that will enable new classes of applications to be developed across the blockchain ecosystem:
A general-purpose zkVM capable of running any virtual machine in a zero-knowledge/verifiable context
A proving system that directly integrates into any smart contract or chain
A universal rollup that distributes any computations proven on Bonsai to every chain
Any protocol or app can submit an execution request to Bonsai for its app logic. These requests are managed by a Request pool. The proving for these requests is currently in-house. As per the docs, the long-term plan is for RISC Zero to build out a Prover marketplace that will match proof asks with bids allowing for permissionless participation in the proving network.
Until the training wheels come off, Bonsai is a one-sided marketplace without any explicitly defined incentives for the provers.
RISC Zero uses a custom zkVM for its proving system that can compile arbitrary Rust (and other language) code into the required circuits for proving correct execution.
Proofs originating from Ethereum and other chains are all rolled up into a single Universal Rollup by Bonsai. This proof can be posted and verified on any chain, enabling ‘composable interoperability’ between smart contracts on different chains.
While it’s currently a centralized market, the Bonsai Coprocessor is a major step forward in helping scale blockchain apps in a trust-minimized manner.
Succinct is building core ZK infrastructure that focuses on solving a fundamental issue in the ZK industry. Theoretical research on ZK has achieved major milestones in recent years. However, practical adoption has not kept pace with the speed of research because, in its current state, the ZK developer experience is too cumbersome:
The lack of shared APIs or standards requires ZK app developers to learn custom tooling and deploy inefficient, one-off infrastructure specific to their use-case. This imposes high coordination overhead, creates reliance on centralized provers for liveness, and slows down development time.
The ZK space is a firehose of novel algorithmic breakthroughs and engineering improvements that increase prover efficiency by orders of magnitude year over year. Developers relying on monolithic stacks with high switching costs are at risk of using outdated tech.
Succinct’s ZK infrastructure, including a decentralized proving network and an open-source zkVM called SP1, enables any developer to build ZK applications based on the latest advances in open-source proof systems, zkVMs, and hardware. It coordinates all parties in efficient price discovery of proof generation on a highly available decentralized network.
The decentralized proving network is still under development, but today applications can use the prover network alpha at https://alpha.succinct.xyz/ (a deployment interface for the network) for a seamless developer experience and 1-click hosted proof generation. The proof generation will be migrated to the network once it is live.
The Succinct Network is optimized for fast-finality, short-term censorship resistance, and sovereignty. It aggregates proving demand across applications and provides the following benefits:
For developers: a standardized interface to easily build with any open-source zkVM or proof system.
For applications: outsourced proof generation to a decentralized network of provers with strong liveness and censorship-resistance guarantees, that provides the cheapest pricing for proofs due to economies of scale of the network.
For provers: an open marketplace for proving with cost-effectiveness and high reliability.
Like RISC Zero, currently, all proving requests are fulfilled by Succinct’s in-house proving infrastructure. In the long-term, this infrastructure will evolve into an open, decentralized proof market where provers will earn fees for providing their compute.
The design space for decentralized provers is massive and still under-researched. Yet, it is increasingly evident that developers will likely have streamlined access to Zero-Knowledge proofs via open Proof Markets in the future.
Siloed Prover Networks might be viable for larger protocols with consistent, high demand - justifying the costs of setting up and maintaining a prover network - but for most other applications, the Proofs-as-a-Service model of proof bazaars would be the more practical approach.
It's fascinating to see the pace of development across all the teams trying to tackle it from different angles. I hope this article provided a helpful overview of the state of the research and left you curious enough to dive deeper into the rabbit hole.