In the first part of this series, we explored how packet loss accumulates in block and fountain codes—even in simple network topologies such as daisy chains—and how the recoding capability of RLNC effectively mitigates this issue. In this follow-up article, we examine hierarchical topologies, such as trees, which have been adopted by Web3 protocols such as Monad and Solana. We present a simple example demonstrating that, even in tree-based architectures, RLNC can outperform both Raptor and Reed–Solomon (RS) codes by achieving lower latency.
As discussed in the first part of this series, loss accumulation in simple network topologies, such as multi-hop daisy chains, is one of the striking disadvantages of block and fountain codes versus RLNC. Recall Fig 1[a], illustrating that a (generally small) number of transmitted shards can be lost at each hop along the path from the source to the receiver. This loss accumulates as packets are passed from one hop to the next: the more hops the packets must travel, the more of them are lost before reaching the receiver. The cumulative loss of encoded packets will either cause a decoding failure or require substantial overhead.
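To make the accumulation concrete, here is a quick back-of-envelope calculation (with an illustrative per-hop loss rate of 5%, not a measured value) showing how the surviving fraction of shards shrinks geometrically with the number of hops:

```python
# Expected fraction of shards surviving h hops when each hop
# independently drops a fraction p of the traffic (illustrative model).
def surviving_fraction(p: float, hops: int) -> float:
    return (1 - p) ** hops

for h in (1, 3, 5, 10):
    print(f"{h:2d} hops -> {surviving_fraction(0.05, h):.3f} survive")
```

At 5% per-hop loss, roughly 60% of the shards survive a ten-hop path, which is why deep topologies force block and fountain codes to over-provision redundancy.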
Owing to recoding, RLNC does not suffer from loss accumulation. Missing pieces can be regenerated at each intermediate hop, so the packet loss observed at the receiver reflects only the losses on the previous hop. Each intermediate node can therefore forward either freshly coded or recoded elements as soon as it can, without waiting to decode the original value.
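The following is a minimal sketch of what recoding means, using RLNC over GF(2) so that combining reduces to XOR (production systems typically work over GF(2^8), and all function names here are illustrative, not any protocol's actual API). The key point: `recode` produces a fresh coded element from already-coded elements, without ever decoding.

```python
import random

# A coded element is (coefficient_vector, payload): the payload is the XOR of
# the source shards selected by the coefficient vector (RLNC over GF(2)).
def encode(shards, coeffs):
    vec = [0] * len(shards)
    payload = bytes(len(shards[0]))
    for i, c in enumerate(coeffs):
        if c:
            vec[i] = 1
            payload = bytes(a ^ b for a, b in zip(payload, shards[i]))
    return vec, payload

def recode(received, rng=random):
    """Combine already-coded elements into a new one, without decoding:
    XOR a random nonzero subset of the elements held so far."""
    while True:
        picks = [rng.randrange(2) for _ in received]
        if any(picks):
            break
    vec = [0] * len(received[0][0])
    payload = bytes(len(received[0][1]))
    for take, (v, p) in zip(picks, received):
        if take:
            vec = [a ^ b for a, b in zip(vec, v)]
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return vec, payload
```

An intermediate node holding any coded elements can call `recode` every timeslot and emit new combinations immediately; the coefficient vector travels with the payload, so downstream nodes can still decode once they collect enough independent combinations.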
Networks, however, contain more complex interconnections than daisy-chain topologies. A slightly more complex topology is the tree. Trees are hierarchical topologies stemming from a single node (the root) and branching into intermediate (parent) and end (leaf) nodes. Trees have the important property of containing no cycles: starting from any node, no sequence of connections can lead back to it. Interestingly, a tree topology can be viewed as a set of daisy chains, each starting from a branch node (Fig 1[b])! Each daisy chain is marked with a different color in the figure, while the multi-color connection participates in two daisy chains. Consequently, the tree structure does not compensate for loss accumulation; block and fountain codes suffer from the same issue as in the daisy-chain case.
Hierarchical tree-like topologies are adopted by two well-established Web3 protocols that also use erasure codes for information propagation in a decentralized environment: Solana’s Turbine [Y+17] and Monad’s RaptorCast [Lab23, Tea23].
Solana’s Turbine [Y+17] is a block propagation protocol designed to enhance data transmission across the network by breaking blocks into smaller packets, called shreds, encoded with RS codes, and distributing them rapidly and efficiently. Inspired by BitTorrent [Coh03], Turbine arranges the validators into a tree topology (see Fig 2[a]). When a leader produces a block, it splits the data into a sequence of batches, each comprising 64 shreds of which any 32 suffice to recover the original data (i.e., a code rate of 1/2). A leader schedule is generated prior to each epoch, which occurs approximately every two days. The upcoming epoch is divided into fixed 400-millisecond slots, with a designated leader assigned to each slot. During its designated slot, the leader streams these shreds to a root validator, which in turn propagates them to its children, which then send the packets to their own children. This structure resembles a staged broadcast tree, where each layer of the tree further disseminates the data, reducing redundant transmissions and minimizing bandwidth usage. By breaking blocks into smaller chunks and leveraging RS coding, Turbine ensures that even if some shreds are lost, validators can reconstruct the full block without needing retransmission. This approach helps Solana achieve high throughput while maintaining decentralization.
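For a single hop, the (64, 32) batch parameters are quite robust. Assuming independent shred losses (an idealization), the probability that a batch remains decodable, i.e., that at least 32 of the 64 shreds arrive, is a simple binomial tail:

```python
from math import comb

# Probability that a (64, 32) batch is decodable when each shred is
# independently lost with probability p: at least 32 of 64 must arrive.
def batch_decode_prob(p: float, n: int = 64, k: int = 32) -> float:
    return sum(comb(n, r) * (1 - p) ** r * p ** (n - r)
               for r in range(k, n + 1))

print(f"p=0.20 -> {batch_decode_prob(0.20):.6f}")
```

The catch discussed in this article is that this guarantee applies per hop; once losses accumulate over several hops without decoding or recoding in between, the effective loss rate seen by deep nodes is much higher than the per-link rate.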
RaptorCast [Tea23, Lab] is MonadBFT’s multicast protocol for block proposal dissemination, using erasure coding based on a variant of Raptor codes (RFC 5053). Block proposals are split into fixed-size chunks and encoded into redundant packets, allowing the original block to be reconstructed from any sufficiently large subset. A two-level broadcast tree is used: the leader sends different chunk ranges to first-level validators, who then forward them to the others. Validator responsibilities are proportional to stake weight, ensuring fair and efficient chunk distribution. This design fully utilizes network upload bandwidth while keeping propagation latency within 2× the network’s worst-case one-way latency. RaptorCast runs over UDP, and to compensate for packet loss, senders introduce configurable redundancy. Each block, typically 2 MB in size, may generate thousands of chunks, distributed in contiguous ranges across all validators. This approach avoids retransmission overhead and tolerates faults without requiring bidirectional recovery.
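A stake-proportional split into contiguous chunk ranges can be sketched as follows. This is a hypothetical illustration of the idea, not Monad's actual assignment algorithm; validator names and the rounding policy are assumptions.

```python
# Hypothetical sketch: split [0, total_chunks) into contiguous ranges,
# sized proportionally to each validator's stake. The last validator
# absorbs any rounding remainder so the ranges exactly cover all chunks.
def assign_ranges(total_chunks: int, stakes: dict[str, int]) -> dict[str, range]:
    total_stake = sum(stakes.values())
    items = sorted(stakes.items())
    ranges, start = {}, 0
    for i, (validator, stake) in enumerate(items):
        if i == len(items) - 1:
            end = total_chunks
        else:
            end = start + round(total_chunks * stake / total_stake)
        ranges[validator] = range(start, end)
        start = end
    return ranges
```

For example, two equally staked validators split 100 chunks into ranges [0, 50) and [50, 100); a validator with twice the stake receives a range twice as long, which matches the article's point that forwarding responsibility scales with stake weight.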
To prevent the accumulation of shred losses—which can lead to decoding failures at the receiver—block and fountain codes typically adopt one of two strategies: (i) assume a fixed network topology (i.e., an upper bound on the number of hops between sender and any receiver) and generate a sufficient number of coded elements to compensate for losses; or (ii) decode the message at intermediate nodes, encode it anew, and forward it downstream at each hop.
Protocols like Turbine and RaptorCast follow approach (i): Turbine assumes a tree depth of two to three, depending on the number of active validators, while RaptorCast is based on a fixed two-level (two-hop) hierarchical structure. If these protocols were to add more layers as they scale, the number of hops would increase, and the fraction of shreds delivered (and hence the effective throughput) would decrease exponentially with depth.
However, as we have discussed, approach (i) is wasteful in terms of bandwidth, for the same reason that the daisy chain suffers when neither decoding nor recoding is used.
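The waste can be estimated directly. To deliver k shards through h hops with per-hop loss rate p, approach (i) must inject roughly k/(1-p)^h coded elements at the source (an expected-value estimate; a real protocol would add further safety margin on top):

```python
from math import ceil

# Approach (i): with per-hop loss p and h hops, the source must send
# roughly k / (1 - p)**h elements so that k still arrive at depth h.
# Expected-value estimate only; real deployments add extra margin.
def shards_to_send(k: int, p: float, hops: int) -> int:
    return ceil(k / (1 - p) ** hops)

for h in (2, 4, 8):
    print(f"depth {h}: send {shards_to_send(32, 0.05, h)} shards to deliver 32")
```

The required redundancy grows exponentially with depth, which is exactly why Turbine and RaptorCast cap their trees at two or three levels.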
Therefore, let us consider the following question: is there a benefit to recoding over decoding followed by re-encoding? In particular, in a hierarchical structure where decoding is allowed at intermediate nodes, is there still a benefit to RLNC recoding? We shall see that the answer is “yes”: there is a real benefit in latency, so RLNC is not wasteful in terms of time either.
Recoding provides RLNC with a distinct advantage in data propagation speed, as individual encoded elements can be forwarded downstream without the need for decoding at intermediate nodes. This advantage is illustrated in Fig. 3 for RLNC-, RS-, and Raptor-code-based propagation. Assume the protocol executes in discrete time slots, where each time slot consists of a single broadcast round in which a parent node delivers to each of its connected children one coded element per protocol.
For the sake of conciseness, we depict shreds from the three approaches in the same figure. Faulty links—those that fail to deliver messages during a given timeslot—are shown as lighter-colored edges. A lightning icon appears on a node once it has received enough coded elements to decode the original message, with the icon color-coded to indicate which algorithm produced the elements used in decoding.
The figure compares the times taken for each coding strategy to propagate sufficient coded elements through the network. To prevent loss accumulation and decoding failure, block and fountain code approaches require each intermediate node to decode and re-encode the message before forwarding. In contrast, RLNC allows coded elements to be forwarded immediately without decoding, since loss accumulation and non-decodability are alleviated due to recoding.
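The latency gap can be captured in closed form for an idealized lossless chain where each link carries one coded element per timeslot and k elements are needed to decode (k = 2 matches the figure). This is a simplification of the figure's scenario, not a model of any specific protocol:

```python
# Idealized lossless chain: one coded element per link per timeslot,
# k elements needed to decode the original message.

def decode_and_forward_latency(depth: int, k: int) -> int:
    # A node may only forward after decoding (i.e., after receiving
    # k elements), so every level of depth adds k full timeslots.
    return depth * k

def recode_latency(depth: int, k: int) -> int:
    # A recoding node forwards a fresh combination every slot from the
    # moment it holds anything, so the pipeline adds just 1 slot per hop:
    # the first element reaches depth d at slot d, the k-th at slot d + k - 1.
    return depth + (k - 1)
```

With k = 2, decode-and-forward needs 2d timeslots to reach depth d, while recoding needs only d + 1; the gap grows linearly with depth, matching the intuition behind the figure.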
Fig. 3[a] shows the initial setup of the system and the initial encoding produced at the root for each protocol. All three protocols start with four coded shreds and assume that any two coded elements (or shards) are sufficient to decode the original value. With fixed parameterization, the RS protocol generates exactly four coded elements, out of which any two can decode the message. Fountain and RLNC codes, on the other hand, produce unlimited combinations and can generate any number of coded elements.
At Timeslot 1 (Fig. 3[b]), one coded element from each protocol is delivered to two connected children (block and fountain codes are shown using the same symbol). Since none of the children receives two coded elements for the same protocol, they cannot decode the original data yet.
In Timeslot 2 (Fig. 3[c]), the root loses its connection to the third child node but still manages to send coded elements to the first two children. The first child now holds a pair of coded elements for each of the three protocols and successfully decodes the message. The remaining children are still waiting for enough shards. An interesting point: during this timeslot, the first and third children of the root forward their RLNC shards to the lower level. As a result, one of the second-level nodes receives enough RLNC shards to decode the message.
It is worth highlighting that the first child performs recoding, producing shard number 7. Also, note the RS shred duplication at the leftmost node in the second level, which prevents it from decoding using RS codes. Under the same conditions, block and fountain codes require 6 timeslots to propagate the message to all nodes in the network. Web3’s current hierarchical structures, used with legacy codes, are inherently wasteful of bandwidth and/or time. OptimumP2P (powered by RLNC) eliminates that wastefulness. In future articles, we shall also explain how removing the need for hierarchical structures helps to fully utilize network resources.
[Coh03] Bram Cohen. Incentives build robustness in bittorrent. In Workshop on Economics of Peer-to-Peer Systems, volume 6, pages 68–72. Berkeley, CA, USA, 2003.
[KMN] Battle of the Codes: RLNC vs Reed-Solomon & Fountain Codes. https://mirror.xyz/0xBfAC4db6d990A6cF9842f437345c447B18EbeF73/GD-GGB8jlpv9wxwpLQzSFkJfT_F8fz91MxRoocI4L20. Accessed: 2025-05-08.
[Lab23] Monad Labs. Monad: Parallel EVM execution. https://www.monad.xyz/, 2023. Accessed: 2024-05-01.
[Lab] Monad Labs. Raptorcast. https://docs.monad.xyz/monad-arch/consensus/raptorcast. Accessed: 2024-05-08.
[Tea23] Monad Research Team. Raptorcast: Scalable communication for decentralized networks. Technical report, Monad Labs, 2023.
[Y+17] Anatoly Yakovenko et al. Solana: A new architecture for a high performance blockchain. https://solana.com/solana-whitepaper.pdf, 2017. Accessed: 2024-05-01.