Obol Network - Bia Testnet

An easy step-by-step guide to running a distributed validator cluster on Obol's Bia Testnet.

(document last updated: 16/4/23)

Obol Network is an ecosystem for trust-minimised staking that enables people to create, test, run & coordinate distributed validators. Charon is distributed validator middleware for running a validator across a cluster of nodes and client implementations to improve resilience.

Bia Testnet:

Bia is the second official testnet for Obol, succeeding the first testnet, Athena. I am running a DVT cluster with Obol Ar líne and will be updating this guide as the testnet evolves.

Bia Testnet - Getting started

Hardware Requirements: a CPU with 4+ cores, 16 GB+ RAM, a fast SSD drive with at least 1 TB of space (storage needs will grow over time), and 25 MBit/s bandwidth.

Install Linux: Get started with Ubuntu on a local device

1. Install Pre-requisite software

Update Ubuntu

sudo apt update && sudo apt upgrade -y

Install curl and git

sudo apt install curl git -y

Install Docker and Docker compose plugin

Install Docker instructions from here.

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Remove the install script and add $USER to the docker group, to allow using Docker without sudo

sudo rm -r get-docker.sh
sudo usermod -aG docker $USER

Open a new shell (log out and back in) so that adding $USER to the docker group takes effect.
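Alternatively, a common way to apply the new group membership without logging out (not from the original guide, but standard on Ubuntu) is to start a subshell with the docker group active:

newgrp docker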

Check the installation with the following command (a correct install will output a version)

docker --version

This script should also install the Docker Compose plugin, which you can check with the following command

docker compose version
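Optionally, to confirm Docker runs without sudo, you can try the standard hello-world test image (this pulls and runs a tiny container that simply prints a confirmation message):

docker run --rm hello-world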

If Docker Compose is not present, follow the Docker Compose install instructions from here.

Firewall Settings

30303 udp/tcp: Execution p2p ports
9000 udp/tcp: Consensus p2p ports
3610 tcp: Charon p2p port

sudo ufw allow 30303
sudo ufw allow 9000
sudo ufw allow 3610
sudo ufw enable 

NOTE: if running on a VPS, be sure to allow SSH (sudo ufw allow ssh) before enabling the firewall, and check the firewall status with sudo ufw status.

Port forwarding: on a local (home) setup you will need to forward these ports to your device's local IP. The exact steps vary by internet provider and router, and can usually be done by logging into your router settings.

2. Cluster group formation

Address collection

Collect the addresses of all operators in the cluster; these addresses are used to sign messages from the launchpad. The cluster leader requires them to continue.

3. Create ENR for Charon

In order to prepare for a distributed key generation ceremony, you need to create an ENR private key for your Charon client. All operators including the leader will need to do this step.

Clone Repository for Charon

git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git

Create ENR private key

cd charon-distributed-validator-node
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.13.0 create enr

Backup ENR key

Your ENR private key is stored under .charon/charon-enr-private-key and should be backed up. The ENR is also displayed in the command output and can be copied to a text file and backed up externally. To view the private key file, use cat or nano:

sudo cat .charon/charon-enr-private-key
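As a minimal sketch, you could also copy the key file into a backup folder of your choosing (here $HOME/charon-backups, the same folder used later in Step 6) and store a copy off the device:

mkdir -p $HOME/charon-backups
sudo cp .charon/charon-enr-private-key $HOME/charon-backups/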

Create the DKG configuration file (LEADER ONLY)

The cluster leader must prepare the configuration file for the distributed key generation ceremony using the launchpad.

4. Cluster Configuration

Cluster operators will be invited to join the configuration via an invite link, which will take you to a specific group created on the DV launchpad. All operators will need to contribute their ENR to configure the cluster-definition.

You will need MetaMask for this, using the same wallet address provided to the cluster leader, connected to the Goerli test network.

It will look similar to this, and you will need to connect your wallet

Enter ENR key for configuration

Have your ENR ready to add to the configuration.

Download cluster-definition (optional)

Download the cluster-definition file manually and move it to the hidden .charon folder. If using the UI on Ubuntu, copy the file to a local folder such as $HOME/obol-files and then move it to the .charon directory like so

sudo mv $HOME/obol-files/cluster-definition.json .charon
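You can confirm the file is in place by listing the hidden folder:

ls -al .charon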

5. Run the DKG Ceremony

All operators need to run the DKG command simultaneously, at an agreed time.

The docker run command can be copied from here; all operators taking part in the cluster must run it at the same time.
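For reference, the command has roughly the following shape for the charon version used above; treat this as a sketch and copy the exact command from the launchpad/docs for your version:

docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.13.0 dkg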

6. Backup Validator Keys

After a successful DKG, a number of artefacts will be created in the .charon folder: deposit-data.json, cluster-lock.json & a validator_keys/ folder containing the keystores for the validators set up during the DKG.

Backup the Validator Keys folder

All operators must back up their unique keystores

cd
mkdir charon-backups
cd charon-distributed-validator-node
sudo cp -r .charon/validator_keys $HOME/charon-backups 

The same process above can be followed for deposit-data.json & cluster-lock.json. Note: these files are identical for every operator.
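For example, from the same working directory:

sudo cp .charon/deposit-data.json .charon/cluster-lock.json $HOME/charon-backups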

To take ownership of this folder so it can be freely moved, change its owner; the -R flag applies this to every file inside.

sudo chown $USER:$USER -R $HOME/charon-backups

7. Start the Validator Node

To run our validator node, we must first run and fully sync an execution layer client and a consensus layer client. By default, Charon is set up to sync an execution layer client (Geth) and a consensus layer client (Lighthouse).

From the working folder charon-distributed-validator-node

docker compose up -d

This will pull and start several docker containers. The -d flag means detached: the containers run in the background rather than printing all their logs to the terminal (which can get messy). To view logs individually, see 'Viewing container logs' in the Useful Commands section.

At this point we are waiting for the Execution and Consensus clients (default GETH and Lighthouse) to finish syncing, before proceeding to the next step.

Execution client not finished syncing

You might see something similar: the execution client is still syncing, so the consensus client is waiting for the sync to complete; however, the client is connected to at least one peer, which is required for the nodes to start syncing.

Lighthouse logs once synced

Check Grafana Dashboard

Open the Grafana dashboard from the terminal (on Ubuntu, use xdg-open if the open command is not available)

open http://localhost:3000/d/singlenode/

Or directly from the browser, type into the address bar: http://0.0.0.0:3000/d/singlenode/single-charon-node-dashboard?orgId=1&refresh=10s

If you want to do this externally, you will need to open port 3000 and use the public IP of your device.
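If you do open the port, the ufw rule follows the same pattern as the earlier firewall settings:

sudo ufw allow 3000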

Things to check: your Charon client can connect to the configured beacon client, and it can connect to all peers.

8. Make Deposit

When all cluster operators are synced and healthy, the cluster is ready to proceed to the deposit step.

ONE operator may proceed to activate this deposit data with the existing staking launchpad.

You will need to upload the deposit-data.json found in .charon

You will be prompted to go through even more checklists; eventually, connect MetaMask and confirm the transaction.

You will be given your validator's public key, which is worth taking note of.

Useful Commands

Viewing container logs

You can view running containers with

docker ps -a

View individual logs for a selected container from the working directory

docker compose logs -f <service name>

The service names are listed in docker-compose.yml; by default these are geth (execution logs), lighthouse (consensus logs) or teku (validator logs).
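For example, to follow the consensus client logs:

docker compose logs -f lighthouse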

Troubleshooting

Selecting Alternative Clients

Charon (at least for now) is set to run Geth as the execution client and Lighthouse for consensus, with Teku as the validator client. Ethereum does particularly well in client diversity, which is important for network health; you can find various client implementations here.

To run other clients, you can edit the docker-compose.yml file in the working directory. It is a good idea to back up the default docker-compose file before doing this.

cd charon-distributed-validator-node
sudo cp docker-compose.yml $HOME/charon-backups
nano docker-compose.yml

Replace the Service

The service needs to be replaced with the service for the desired client, including the relevant docker image and arguments for that client.

An example configuration for using Nethermind as the execution client in place of Geth can be found here.

NOTE: I will be updating this repository with working docker-compose configurations for various non-default clients throughout this testnet and upcoming testnets.

Upgrade Procedure

The steps for updating Charon can also be followed for updating the client images. New releases can be found in the community announcements: releases channel, or on GitHub here.

To check the current version, run sudo docker ps -a and find the version currently running under the charon container.

Stop Node/s

cd charon-distributed-validator-node
docker compose down

Pull latest Charon version

  1. Running the default configuration

    sudo git reset --hard
    sudo git pull

  2. Running a non-default configuration

    If you have made specific changes to the config, then follow these steps.

Change Charon docker image manually

nano docker-compose.yml

Navigate to the charon service and change the image tag to the latest version from the GitHub releases page.
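If you prefer doing this from the command line, a one-line sed sketch like the following also works; the version tags here are placeholders only, so substitute the tag you are currently running and the latest release tag:

sed -i 's|obolnetwork/charon:v0.13.0|obolnetwork/charon:v0.14.0|' docker-compose.yml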

Restart Nodes

docker compose up -d

Check logs

docker compose logs --tail 100 -f

Migration Procedure

Should you need to migrate a node in the cluster to another server/device, the following steps can be followed.

Locate backup files

These are the files that were backed up in Step 6, plus the ENR private key from Step 3. In total, the following files are required. If following this guide, they should be in the folder $HOME/charon-backups and also saved somewhere externally in case of device failure.

deposit-data.json

cluster-lock.json

validator_keys folder containing the keystore .json files and their .txt password files

charon-enr-private-key located in .charon/

Setup Charon on new server

Follow setup ‘Step 1’ on the new device to prepare the server

Clone charon:

git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git

Insert the backup files into the required directories
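A minimal sketch of restoring the backups, assuming all of the above artefacts were copied to $HOME/charon-backups on the new server:

cd charon-distributed-validator-node
mkdir -p .charon
cp $HOME/charon-backups/charon-enr-private-key .charon/
cp $HOME/charon-backups/deposit-data.json .charon/
cp $HOME/charon-backups/cluster-lock.json .charon/
cp -r $HOME/charon-backups/validator_keys .charon/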

Shut down docker containers in Old server

From the old server

cd charon-distributed-validator-node
docker compose down

Then shut down the old server.

Start Charon on New server

cd charon-distributed-validator-node
docker compose up -d

Exit Procedure

Should you wish to exit your validator along with the other cluster members, this process is called a voluntary exit. A quorum of operators needs to run the same exit command for the exit to succeed.

  1. Confirm the Exit Epoch

If you want to exit as soon as possible, use the default epoch of 162304

export EXIT_EPOCH=162304

Otherwise, you can define a future epoch for exiting, using https://beaconscan.com/epochs to find epoch numbers.

2. Run the Exit Command

This command should be broadcast from your validator client.

docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
      --beacon-node-api-endpoint="http://charon:3600/" \
      --confirmation-enabled=false \
      --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
      --epoch=${EXIT_EPOCH}