This is intended to be a user-friendly guide for anyone with little experience. I will update it as the Phases open up throughout the Testnet, and will cover later Obol Testnets here as well.
THIS GUIDE IS NOW DEPRECATED, AS IS THE ATHENA TESTNET. PLEASE SEE THE FOLLOWING FOR THE CURRENT BIA TESTNET
Obol Network is building technology for distributed validators (DVT). This essentially turns an Ethereum validator into a multi-sig: the validator operates across a cluster of nodes, improving resilience (safety, liveness, or both) compared to running a validator on a single node.
This opens the path to institutional staking as a validator can be secured to higher standards required by institutions.
This also democratises ETH staking, allowing users with much less than the 32 $ETH required to run a single validator to participate.
Obol's distributed validators can connect to various consensus layer and execution layer clients, forming a truly credibly neutral layer.
You will need to run the Charon client briefly to generate an ENR private key for use in a scheduled Distributed Key Generation ceremony.
See the article on the Athena Testnet to find the application form if you wish to join this Testnet.
Hardware Requirements: a wired connection, a local device with at least 8GB of memory (I am using 16GB with no issues), a 4-core CPU, and I would recommend starting with at least 500GB of storage. As a node operator you're providing a service, so it's best to start with good hardware that will last and perform up to standard.
This guide assumes you have a fresh install of Ubuntu 20.04 LTS; I am running locally. Navigate to the terminal.
Update the device
sudo apt update && sudo apt upgrade -y
Install pre-requisite software
We need to install curl and git
sudo apt install curl git -y
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Add your user to the docker group and check that Docker installed correctly
sudo usermod -aG docker $USER
docker --version
git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
Generate your Charon ENR private key
Run the following to generate your ENR private key
sudo docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.1 create enr
Copy your ENR private key and back this up
You need this ENR key for your Testnet application. It's a good idea to keep a backup copy locally and elsewhere. You can simply copy and save it to a text file.
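Before submitting, it can be worth sanity-checking the value you copied. A minimal sketch (the sample ENR below is invented, not a real key):

```shell
# Hypothetical ENR for illustration only -- yours will be much longer.
ENR="enr:-JG4QExampleOnlyNotARealKey"

# Charon ENRs start with the "enr:-" prefix; warn if yours does not.
case "$ENR" in
  enr:-*) echo "ENR looks well-formed" ;;
  *)      echo "Warning: this does not look like a Charon ENR" ;;
esac
```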
Backup your Private key
Make a backup of your private key (do not submit this in your application).
The private key is stored in the charon-distributed-validator-node folder. The file we are looking for is called charon-enr-private-key; I've chosen to save a copy in Documents/backups.
Make the backup folder
Note: change <user> to your username on your device.
mkdir -p /home/<user>/Documents/backups
sudo cp /home/<user>/charon-distributed-validator-node/.charon/charon-enr-private-key /home/<user>/Documents/backups
In order to move this freely (if you want to save it to another device), we need to change its ownership.
sudo chown <user>:<user> /home/<user>/Documents/backups/charon-enr-private-key
Assigned cluster captain will do this stage
Cluster Captain- Create an .env file
cp .env.sample .env
Here we have cloned the example .env file, which we can now see in our working directory; we will populate it with our cluster's unique details.
Cluster Captain- Open .env for editing our cluster details
We first need the ENR keys from each member of our cluster (IMPORTANT: these must match the private keys created in Step 1, so make sure each cluster member has double-checked that the correct file is in use).
Now open the .env file for editing in the terminal to fill in our members' unique ENR keys.
Place the ENR keys for each member (Cluster Captain first), separated by commas with no spaces, and include the enr:- prefix, like so
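For illustration, a populated line might look like this (the variable name is my assumption of what the .env.sample uses, and the ENR values are invented placeholders):

```shell
# Assumed variable name from .env.sample; the ENR values are placeholders.
CHARON_OPERATOR_ENRS=enr:-JG4QCaptainExample,enr:-JG4QMemberTwoExample,enr:-JG4QMemberThreeExample
```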
Cluster Captain- runs DKG configuration
Replace the following
$NAME = name of your cluster
$FEE_RECIPIENT_ADDRESS = Eth address of our Deposit address
$WITHDRAWAL_ADDRESS = Eth address of our Deposit address
These can be set as shell variables, but I simply substituted them into the command. It is recommended to create a new ETH wallet to use as our deposit and fee address.
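If you prefer the variable route, a sketch of setting them before running the command below (the values here are illustrative only; use your own cluster name and wallet address):

```shell
# Illustrative values only -- substitute your own cluster name and wallet.
NAME="my-obol-cluster"
FEE_RECIPIENT_ADDRESS="0x0000000000000000000000000000000000000000"
WITHDRAWAL_ADDRESS="0x0000000000000000000000000000000000000000"
export NAME FEE_RECIPIENT_ADDRESS WITHDRAWAL_ADDRESS
```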
sudo docker run --rm -v "$(pwd):/opt/charon" --env-file .env obolnetwork/charon:v0.9.0 create dkg --name=$NAME --fee-recipient-address=$FEE_RECIPIENT_ADDRESS --withdrawal-address=$WITHDRAWAL_ADDRESS
Cluster Captain- Shares Configuration file with Cluster Members
Find the file in the directory
Copy to our backups directory so we can extract and share with the other team members
sudo cp /home/<user>/charon-distributed-validator-node/.charon/cluster-definition.json /home/<user>/Documents/backups
sudo chown <user>:<user> /home/<user>/Documents/backups/cluster-definition.json
You should now be able to extract this file from your device and share with other Cluster members
Other Cluster members- Receive
Other cluster members should now receive the cluster-definition.json created by the Cluster Captain. This should be placed in our 'backups' directory from Step 1.
Place in the working directory
sudo cp /home/<user>/Documents/backups/cluster-definition.json /home/<user>/charon-distributed-validator-node/.charon/
Confirm the location with ls -la to see it there before moving on.
Now all members must run the DKG ceremony at the same time. First, all members prepare by checking the following:
cluster-definition.json is stored in the correct directory
Ensure that our ENR is the same one generated in Step 1 and therefore matches our private key
All members are online and in the correct directory to run the same command
To run the ceremony, all members run the following at the same time.
sudo docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 dkg --p2p-bootnode-relay
This will connect to all cluster members; errors can occur if other peers/cluster members use an incorrect ENR key.
Back up the Files created by each member
A number of artefacts will be created in the .charon folder, which should now be backed up:
validator_keys/ folder - unique to each member
cluster-lock.json and deposit-data.json - the same across all members; the Cluster Captain should back these up, but it is good practice for all members to do so.
sudo cp /home/<user>/charon-distributed-validator-node/.charon/cluster-lock.json /home/<user>/Documents/backups
sudo cp /home/<user>/charon-distributed-validator-node/.charon/deposit-data.json /home/<user>/Documents/backups
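To confirm a backup actually matches the original byte-for-byte, a small helper like this can be used (the function name is my own, not part of the guide's tooling):

```shell
# Hypothetical helper: compare an original file with its backup copy.
check_backup() {
  # $1 = original file, $2 = backup copy
  if cmp -s "$1" "$2"; then
    echo "backup OK: $2"
  else
    echo "MISMATCH or missing: $2" >&2
    return 1
  fi
}

# Example (adjust <user> to your username):
# check_backup /home/<user>/charon-distributed-validator-node/.charon/cluster-lock.json \
#              /home/<user>/Documents/backups/cluster-lock.json
```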
Back up the Validator Keys Folder
Set up Port Forwarding
All Members will need to open Ports 30303 & 3610 on their Router - in Port Forwarding settings.
This will differ depending on your internet provider and router, but generally you can log in to your router settings in a browser by typing your router's address into the search bar.
This should prompt a login screen, and you should find your login details on your router device.
Your local device IP: the local IP of the device you will run Obol on. Find it in your network settings; IPv4 address is what to look for.
Set up Rules - something like this
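Once the rules are saved, you can roughly verify that a TCP listener is reachable using bash's built-in /dev/tcp. This is a sketch only: it confirms a TCP connection, not full Charon connectivity, and 30303 also uses UDP, which this does not test.

```shell
# Hypothetical helper: test whether a TCP port accepts connections.
check_port() {
  # $1 = host, $2 = port
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 open on $1"
  else
    echo "port $2 closed or filtered on $1"
  fi
}

# Example usage once your node is running:
# check_port <your-public-ip> 3610
```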
In home directory Install Docker Compose
sudo apt install docker-compose -y
docker-compose.yml Configuration file
We need to edit the version in our config file to 3.3.
Option 1 - via the UI
Open the docker-compose.yml file in your 'charon-distributed-validator-node' folder, right click and open with a text editor. Change the version to 3.3 and save.
Option 2 - Via terminal
sudo nano docker-compose.yml
Change the version to 3.3, write out (ctrl+o), then exit (ctrl+x).
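Equivalently, a one-line edit from inside the charon-distributed-validator-node directory (sed -i rewrites the file in place; the guard simply skips it if the file is absent):

```shell
# Pin the compose file format to 3.3 without opening an editor.
if [ -f docker-compose.yml ]; then
  sed -i 's/^version: .*/version: "3.3"/' docker-compose.yml
fi
```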
Run the docker containers
Close the terminal and open a new terminal
Change into the working directory
cd charon-distributed-validator-node
sudo docker-compose up -d
View Logs & Confirm running
Check running containers
sudo docker ps -a
To view the logs, run the following in separate terminal windows for easier management.
ETH1 Client - Geth logs
sudo docker logs charon-distributed-validator-node_geth_1 -f
ETH2 Client - Lighthouse logs
sudo docker logs charon-distributed-validator-node_lighthouse_1 -f
Obol Charon Client- Logs
sudo docker logs charon-distributed-validator-node_charon_1 -f
I don't think this one will work until the rest have synced; we will return to it later.
Stop at this stage and wait until all teams' Geth and Lighthouse nodes are synced.
Sync time varies depending on the hardware used, but currently on Goerli it takes less than 12 hours.
Get Goerli ETH
Add the Goerli network to MetaMask by enabling 'test networks' in the settings.
Cluster Captain - MAKE THE DEPOSIT
Cluster Captain makes the deposit
deposit-data.json needs to be edited for the launchpad to accept the file. Open it with a text editor and replace the following.
We can head over to the Ethereum Staking Launchpad UI for the easiest way to make the deposit. You will have to run through the tutorial explaining the risks and commitments of Ethereum staking to get to the end, where you can add your deposit-data.json (edited with the config changes earlier) and finally make the deposit.
Save the transaction IDs for reference. You can now wait for the deposit to be accepted on the Beacon Chain, which takes around 12-24 hours.
Shut down node containers
sudo docker-compose down
At this point it could be a good time to update and restart your OS
Pull the Latest image
sudo git reset --hard
sudo git pull
Check the version in docker-compose.yml; change it from 3.8 back to 3.3 again if needed.
sudo nano docker-compose.yml
Restart the containers
sudo docker-compose up -d
Logs and Checks
sudo docker-compose logs --tail 100 -f
To check if some peers are offline
sudo docker logs charon-distributed-validator-node-charon-1 2>&1 | grep 'absent'
To check a specific client only, replace <charon> with geth/teku/lighthouse/charon
sudo docker logs charon-distributed-validator-node_<charon>_1 -f
Check all containers are running
sudo docker ps -a
you should see something like so
Teku keystore file /path/to/keystore-*.json.lock already in use
This error is due to Teku not shutting down gracefully and not deleting the lock file it created to protect the keystore file. I've encountered this twice with two different cluster team members; here is a way to solve it, expanded on here
Edit the docker-compose.yml to add the following flag, which disables the lock file.
--validators-keystore-locking-enabled=false Like so
Another method is simply to delete the lock file at the location specified in the error logs and start the node again. However, you are likely to encounter this more than once, and with a cluster of multiple members, disabling the lock would reduce downtime.
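If you go the deletion route, something like this run from the node directory (with the containers stopped) removes any stale lock files in one go. A sketch only: double-check what find matches before trusting -delete.

```shell
# Remove stale Teku keystore lock files left under .charon, if any.
find .charon -name '*.lock' -type f -print -delete 2>/dev/null || true
```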