The Initiation: A Retrospective

With the first phase of the Initia testnet coming to a close, we'd like to take a moment to reflect on the progress thus far.

The Initiation has been an exceptionally popular testnet.

In this article we’ll explore technical aspects of the testnet: the network stress we observed and our mitigation strategies, node counts across the network, and our involvement and contributions thus far.

Network Node Count

We regularly crawl the chains on which we operate and use that data to create geographical maps.
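Crawling a CometBFT network is conceptually straightforward: each reachable node's RPC server advertises the peers it is currently connected to, and a crawler walks those peers recursively. A minimal sketch of a single hop, assuming a node with a publicly reachable RPC endpoint on the default port 26657 and jq installed:

# List the remote IPs of a node's currently connected peers via the
# CometBFT /net_info RPC endpoint; a crawler repeats this against each
# newly discovered peer to map out the network.
curl -s http://<node-ip>:26657/net_info | jq -r '.result.peers[].remote_ip'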

At the peak of the Initiation, our crawlers registered in excess of 5,000 Initia testnet nodes running globally. Even now, 7 weeks in and at the end of the first phase, we're observing around 1,300 live full nodes.

Granted, a significant portion of these are likely hopeful airdrop farmers automating the deployment of numerous nodes, along with some automated validator transaction generation. Nonetheless, this is by far the highest node count we've seen on any testnet to date, highlighting Initia's popularity. The closest comparison is Babylon’s testnet, where we observed ~1.5k nodes at its peak.

Network Stress & Performance Tuning

During the early days of the testnet, many node operators, validators and otherwise, struggled to keep up with the heavy onchain activity. Several frequently fell behind the chain head and missed a relatively large percentage of blocks compared to what’s generally seen on other CometBFT-based testnets.

Several factors likely contribute to these issues:

  • Large Node Count: With many nodes, transactions and blocks can take longer to fully propagate, especially when most nodes run default settings. For instance, a node that blocks incoming P2P connections with a firewall and keeps the default maximum of 10 outbound peers will have suboptimal P2P connectivity for a chain with short block times.

  • Insufficient Hardware: Nodes on inadequate hardware struggle with the high transaction volume and block frequency. These nodes particularly face issues pruning historical state with the default settings, which only kick in once a significant amount of state has accumulated: 362,880 blocks, or roughly 7.5 days' worth of onchain activity. Processing that much data at every pruning interval is expensive, and the larger database slows reads and writes in the meantime, both of which are exacerbated on lower-powered hardware.

  • Syncing Issues: A significant number of nodes sit in a near-constant syncing state, likely due to insufficient provisioning at the hardware or network level. Well-powered nodes can therefore end up connecting exclusively to lagging nodes, creating a cascading effect that compromises network synchronization through the added strain and inefficient gossiping these nodes introduce (a quick way to check whether a given node is caught up is shown below).
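As an aside, CometBFT exposes each node's sync state through its RPC server, which is one quick way to spot a lagging node. A minimal sketch, assuming the default RPC port 26657 reachable locally and jq installed:

# Query the node's sync state via the CometBFT /status RPC endpoint;
# "catching_up": true means the node is still behind the chain head.
curl -s http://localhost:26657/status | jq '.result.sync_info | {latest_block_height, catching_up}'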

Initially, we had problems staying up-to-date with the chain head on our provisioned hardware in Iceland, which means higher RTT (ping/latency) to the hotspots seen above. A lower-specced SSD was also a major culprit.

What we did to mitigate this and continue running on the provisioned hardware:

Alternative DB Backend

On other CometBFT-based chains with high onchain activity, and thus frequent disk reads/writes, we've seen performance improvements from using PebbleDB, an alternative database backend for Tendermint/CometBFT nodes. In our experience it offers better compression, quicker pruning, and more efficient memory usage (i.e., it makes fuller use of available memory for caching), resulting in less I/O overhead.
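For reference, switching backends generally comes down to two settings. This is only a sketch: it assumes a node binary compiled with PebbleDB support (via cometbft-db) and a data directory freshly synced or migrated to the new backend; see our linked guide for the full procedure.

# config.toml -- database backend for consensus and block storage
db_backend = "pebbledb"

# app.toml -- database backend for application state
app-db-backend = "pebbledb"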

Key stats observed from running with PebbleDB:

  • >75% reduction in pruning operation times

  • >70% reduction in disk IOPS due to more efficient memory utilization, and thus better caching

  • Significantly faster state-syncing times

Read our guide on how to install PebbleDB with your Initia node (as well as other performance tunings) here:

Improving Peering

To increase our chances of peering with well-connected nodes, we raised our peer limits by setting max_num_inbound_peers and max_num_outbound_peers to 50 each, and made sure to keep our P2P port 26656 open on our public interface at the firewall level.
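For reference, the corresponding entries live in the [p2p] section of config.toml; both keys are standard CometBFT settings, and the values shown are the ones we used:

[p2p]
# Raise the peer limits well above the defaults (40 inbound / 10 outbound)
max_num_inbound_peers = 50
max_num_outbound_peers = 50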

Modifying Pruning Parameters

The default pruning parameters translate to: prune everything except the past 362,880 blocks, and do so every 10 blocks.

That is a lot of retained state for a chain with the transaction volume of Initia’s testnet. To lower read/write times, we prefer to keep the retained window much smaller:

# app.toml -- the "custom" strategy is required for these values to apply
pruning = "custom"
pruning-keep-recent = "100"
pruning-interval = "10"

ValiDAO’s Involvement & Contributions

We've actively participated in Initia's testnet, operating a validator. Additionally, we have:

  • Operated public RPCs for the L1 as well as two minitias: minimove-1 and miniwasm-1 -- find them here: https://validao.xyz/#rpcs

  • Managed an IBC relayer (hermes) between the above chains

  • Managed a network crawler and geographical node maps

  • Conducted a deep dive into Initia and produced an easy-to-digest article, which you can read here: An Initiation to Initia.

  • Written a guide on our experience performance tuning an Initia testnet node here
