Gnosis Chain: Block Size Impact and EIP-4844
September 14th, 2024

In this exploration, we analyze the correlation between block size and chain performance, focusing on its implications for the Gnosis Chain. We aim to leverage these insights to optimize parameters for the Gnosis Chain, especially considering the introduction of EIP-4844 in the Ethereum Dencun upgrade. EIP-4844 introduces a new transaction format accommodating 'blob-carrying transactions' with a significant data load; it aligns with future sharding goals and effectively increases block size, which motivates this study of its impact on network efficiency.

Big Blocks Experiment

Formal Description

The Big Blocks experiment on Gnosis Chain is inspired by a similar experiment originally conducted by the Ethereum Foundation.

We analyzed block diffusion latency based on data collected from the so-called Sentry nodes. To collect most of the chain metrics, we used the Xatu network monitoring tool. For collecting execution payload sizes (in bytes), we utilized a fork of Goteth.

Sentry nodes are actual Gnosis Chain full nodes with the Nethermind execution client and the Lighthouse consensus client running in different geolocations. During the analysis, we primarily focused on the effect of block size (in bytes) and derived a latency distribution for any given size.
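
As an illustration, the sketch below shows how such a dataset can be assembled. The file names, column names, and the GENESIS_TIME placeholder are assumptions rather than the actual Xatu/Goteth schemas:

import pandas as pd

SECONDS_PER_SLOT = 5
GENESIS_TIME = 0  # placeholder: substitute the actual Gnosis Beacon Chain genesis timestamp

# Assumed inputs: one row per (sentry, slot) block observation and one row per slot payload size.
observations = pd.read_csv("block_observations.csv")   # columns: sentry, slot, seen_at_unix
payload_sizes = pd.read_csv("payload_sizes.csv")       # columns: slot, size_bytes

# Diffusion latency: how long after the slot start each sentry first saw the block.
observations["slot_start"] = GENESIS_TIME + observations["slot"] * SECONDS_PER_SLOT
observations["latency_s"] = observations["seen_at_unix"] - observations["slot_start"]

# Average latency per slot across sentries, joined with block size in megabytes.
per_slot = (
    observations.groupby("slot")["latency_s"]
    .mean()
    .reset_index()
    .merge(payload_sizes, on="slot")
)
per_slot["size_mb"] = per_slot["size_bytes"] / 1024**2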

Experiment Steps

  • Measure "baseline" metrics under normal usage conditions.

  • Run a spammer tool to increase the average block size to a specific level (see the sketch after this list).

  • Contrast key metrics against the baseline.
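
A minimal sketch of such a spammer, assuming web3.py and a funded account; the RPC endpoint, payload size, and gas values are illustrative, not the actual tool used in the experiment:

import os
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.gnosischain.com"))     # assumed public RPC endpoint
account = w3.eth.account.from_key(os.environ["SPAMMER_PRIVATE_KEY"])

CALLDATA_BYTES = 100_000    # illustrative payload size per transaction

def send_big_tx(nonce: int) -> str:
    # Self-send carrying a large calldata payload to inflate the block size in bytes.
    tx = {
        "to": account.address,
        "value": 0,
        "data": b"\x00" * CALLDATA_BYTES,
        "nonce": nonce,
        "gas": 3_000_000,               # must cover the calldata gas cost
        "gasPrice": w3.eth.gas_price,
        "chainId": 100,                 # Gnosis Chain
    }
    signed = account.sign_transaction(tx)
    return w3.eth.send_raw_transaction(signed.rawTransaction).hex()

nonce = w3.eth.get_transaction_count(account.address)
for i in range(10):
    print(send_big_tx(nonce + i))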

Analysis Time Range

The experiment took place on 04/10/2023 (dd/mm/yyyy). Big blocks were spammed over a range of approximately 3,000 slots. The several days leading up to the experiment were used to collect baseline network metrics:

  • Highload range: 11480000 - 11483000

  • Baseline range: 11360000 - 11480000
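
For reference, converting these slot ranges into approximate wall-clock durations (using the 5-second Gnosis slot time discussed later in this post):

SECONDS_PER_SLOT = 5

highload_slots = 11_483_000 - 11_480_000                     # 3,000 slots
baseline_slots = 11_480_000 - 11_360_000                     # 120,000 slots

highload_hours = highload_slots * SECONDS_PER_SLOT / 3600    # ~4.2 hours of spam
baseline_days = baseline_slots * SECONDS_PER_SLOT / 86400    # ~6.9 days of baseline data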

Analysis

Preface

It's important to understand that various size buckets were used to spam the network during the experiment. We assume that average latency correlates roughly linearly with block size up to a certain size threshold, with latency spikes occurring once that threshold is surpassed.
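
One way to sanity-check this assumption is to fit a linear model on blocks below a candidate threshold and measure how far blocks above it deviate from that trend; a minimal sketch (the threshold and input arrays are assumptions):

import numpy as np

def excess_latency_above_threshold(sizes_mb: np.ndarray, latencies_s: np.ndarray,
                                   threshold_mb: float) -> float:
    """Fit latency = a + b * size on blocks below the threshold and return the
    average extra latency of blocks above it relative to that linear trend."""
    below = sizes_mb < threshold_mb
    b, a = np.polyfit(sizes_mb[below], latencies_s[below], deg=1)
    predicted_above = a + b * sizes_mb[~below]
    return float(np.mean(latencies_s[~below] - predicted_above))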

Latencies

During the period of spamming large blocks, we observed that oversized blocks can significantly increase the average chain latency. The plot below illustrates the average latencies during the highload period:

Average Latencies Plot

We can observe spikes where the average block observation latency increases to more than 2 seconds for over 50% of the blocks. There were also instances of super high latency blocks with latency exceeding 5 seconds (depicted by the blue spikes).

Participation

The following dashboards display how the participation rate was affected by the big blocks spam. The lowest chain head participation rate we recorded was approximately 17%.

Participation Rate Dashboard 1

Deeper latency analysis per size bucket

This section aims to analyze the correlation between block size and latency with increased precision. Blocks were organized into groups based on their sizes.
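
A minimal sketch of this grouping, reusing the per_slot dataframe from the earlier sketch; the bucket edges are illustrative:

import pandas as pd

bucket_edges = [0, 0.125, 0.25, 0.5, 0.75, 1.0, 2.0]    # illustrative size buckets in MB

per_slot["size_bucket"] = pd.cut(per_slot["size_mb"], bins=bucket_edges)

per_bucket = per_slot.groupby("size_bucket")["latency_s"].agg(
    mean_latency="mean",
    p95_latency=lambda s: s.quantile(0.95),
    blocks="count",
)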

Additionally, we need to consider the time interval within which a block must be attested.

As per the specification:

A validator should create and broadcast the attestation to the associated attestation subnet when either (a) the validator has received a valid block from the expected block proposer for the assigned slot or (b) 1 / INTERVALS_PER_SLOT of the slot has transpired (SECONDS_PER_SLOT / INTERVALS_PER_SLOT seconds after the start of the slot) -- whichever comes first.

For the Gnosis chain, the attestation interval is calculated as follows:

INTERVALS_PER_SLOT = 3
BLOCK_TIME = 5

interval = BLOCK_TIME / INTERVALS_PER_SLOT    # ~1.66 seconds

This implies that, to achieve a high head participation rate, a block should propagate across the network within the 1.66-second threshold.

The two following plots show the average latency of all observations for specific block size buckets:

It is evident from this observation that latency increases significantly for the size buckets exceeding 0.5MB (purple and brown groups). Furthermore, a considerable number of blocks from these groups no longer fit within the required 1.66-second interval, resulting in a decline in the head participation rate. Hence, we can consider blocks of size >0.5MB as high latency blocks that may have a non-negligible impact on the chain's performance.

Here, a significant spike is evident within the size range of 0.75MB to 1MB. It is also evident that the distribution of latencies is less stable for the groups with larger block sizes.

The mean block size across the analyzed range is roughly 75KB (~0.075MB).

Analysis recap

  • "Safe" block size threshold 0.5MB

  • Average block size 0.075MB

EIP-4844

The "Blobs" in the chain context refer to a unique transaction format. These transactions carry a substantial amount of data that cannot be accessed through EVM execution. They are included in a mempool awaiting inclusion by a block builder. Blobs essentially represent off-chain data, referenced within the chain.

When builders include blob transactions in a block, they contribute to the block's data size, affecting its overall capacity. Blobs are designed to play a crucial role in data sharding, providing dedicated data space for rollups. Furthermore, the network manages blob data as a "sidecar" to beacon blocks.
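
A simplified, illustrative view of that relationship (not the exact consensus-spec container definitions):

from dataclasses import dataclass

@dataclass
class BlobSidecar:
    index: int               # position of the blob within its block
    blob: bytes              # 131072 bytes of data, not accessible to the EVM
    kzg_commitment: bytes    # commitment referenced by the blob transaction on-chain
    kzg_proof: bytes         # proof that the blob matches its commitment

@dataclass
class BlockWithBlobs:
    beacon_block: bytes              # the beacon block itself
    sidecars: list[BlobSidecar]      # blob data gossiped alongside the block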

Defining a proper number of blobs per block

It follows from the specification that the target and maximum number of blobs per block can be specified with the TARGET_BLOB_GAS_PER_BLOCK and MAX_BLOB_GAS_PER_BLOCK parameters.
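
For illustration, these parameters translate into blob counts via the fixed per-blob gas cost defined in EIP-4844; the example values below are the ones specified in EIP-4844 for Ethereum, not Gnosis Chain values:

GAS_PER_BLOB = 2**17                    # 131072, fixed by EIP-4844

TARGET_BLOB_GAS_PER_BLOCK = 393216      # Ethereum value: targets 3 blobs per block
MAX_BLOB_GAS_PER_BLOCK = 786432         # Ethereum value: allows up to 6 blobs per block

target_blobs = TARGET_BLOB_GAS_PER_BLOCK // GAS_PER_BLOB    # 3
max_blobs = MAX_BLOB_GAS_PER_BLOCK // GAS_PER_BLOB          # 6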

The size of a single blob can be calculated as follows:

BYTES_PER_FIELD_ELEMENT = 32
FIELD_ELEMENTS_PER_BLOB = 4096

byte_size = BYTES_PER_FIELD_ELEMENT * FIELD_ELEMENTS_PER_BLOB    # 131072

mb_size = byte_size / 1024**2    # 0.125 (megabytes)

The sizes stated in the analysis recap above lead to the following formula for the number of blobs per block that does not create a DoS vector:

import math

avg_block_size = 0.075
safe_size_threshold = 0.5
blob_size = 0.125

number_of_blobs = math.floor(                            # 3
    (safe_size_threshold - avg_block_size) / blob_size
) 

Safe blob count arguments

  • Therefore, having a maximum of 3 blobs per block is considered safe and will not significantly impact network bandwidth, as demonstrated by the Big Blocks experiment. The key consideration in determining a suitable value here is the possibility of DoS attacks, since a higher blob count enlarges that attack vector.

  • It's essential to consider that the Ethereum network plans to leverage blobs for a form of exchange, potentially allowing for an increased block size in exchange for offloading on-chain activity to Layer 2 (L2) protocols. Conversely, Gnosis Chain currently doesn't have as many Layer 2 solutions in place, making the Ethereum concern less directly relevant for Gnosis at the moment.

  • Additionally, it's important to note that the average block size and on-chain activity are expected to increase as we move towards a bull market period.

Higher blob count arguments

  • More blobs per block can facilitate the development and deployment of decentralized applications (dApps) on Gnosis Chain. Developers can leverage the increased data availability to create complex and feature-rich dApps.

  • Increasing the number of blobs per block may enhance usability and support a wider variety of applications. It also facilitates seamless integration and advancement of Layer 2 (L2) protocols.

Backwards compatibility

The blob count per block is configurable, so modifying it later in response to demand remains cost-effective and requires changes only in the execution layer clients.


Recap

Based on the analysis described above, the 'safe' options are currently considered more relevant. The strongest argument for this lies in the very low cost of increasing the maximum blob count in the future. This allows the team to regulate the balance between security and usability reactively. Not every feature allows such an approach, so we should apply this scheme to EIP-4844 and move from the safest to less safe options (if demand for blob space overtakes supply) to mitigate risks.

1/2

  • Target blobs per block: 1

  • Maximal blobs per block: 2

This is the safest available option, with some reserve in case the average block size on the network increases several times. It also fully covers the current demand for blob space (October 2023).
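
Expressed through the blob-gas parameters discussed earlier, this option would correspond to the following values (a sketch of the arithmetic, not a confirmed Gnosis Chain configuration):

GAS_PER_BLOB = 2**17                              # 131072

TARGET_BLOB_GAS_PER_BLOCK = 1 * GAS_PER_BLOB      # 131072 -> target of 1 blob
MAX_BLOB_GAS_PER_BLOCK = 2 * GAS_PER_BLOB         # 262144 -> maximum of 2 blobs

# Worst case: 2 blobs * 0.125MB = 0.25MB of extra data, which together with the
# 0.075MB average block stays well below the 0.5MB "safe" threshold.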
