Proof of Archival Storage: “…each farmer stores as many provably unique segments of the chain history as their disk space allows. The more pieces of the history a farmer stores, the more likely they are to be elected to produce a new block.”
More below on how this works:
In an effort to minimize farming gamification, the Subspace team has been quiet about optimized farming strategies, which I applaud. From my travels through their Discord, current participants seem genuinely interested in the mechanism itself.
There are two executables you’ll need to run to farm on Subspace: a node and a farmer. As I understand it, the node syncs with the current state of the chain, while the farmer asserts eligibility for new block rewards.
After trying the Windows and Linux CLI-based options, I duplicated the same farming setup using Docker. This yielded at least 1.7 more bps than I recorded in any CLI-based farming attempt, and offers the added benefit of automatic restarts if either the farmer or node crashes, which you don’t get when running the OS-specific CLI binaries directly.
As more nodes become synced (more trusted peers in existence), sync speeds will increase, but in my experience, 100GB on an SSD with a 300 Mbps connection (+new Ryzen CPU, max of 8GB of RAM) led to a fully synced node in less than 12 hours.
Sync times have varied wildly throughout the initial phases of this testnet, and this is very much a “your mileage may vary” situation.
I understand the team has implied that their goal for hardware-inclusivity is to make use of Raspberry Pis and older HDDs. Such hardware may currently struggle to either plot blockchain history or maintain sync with the latest block.
As of the time of writing (June 7, 2022), the latest node and farmer release is dated June 5th, so for later releases you’ll need to update the image tag accordingly. Otherwise, assuming Docker is installed, create a `subspace` directory containing a `docker-compose.yml` file with the code supplied below, with a few tweaks. Three (or four, if the snapshot date has moved beyond June 5) things need to be edited: replace `INSERT_YOUR_ID` with your desired node name, replace `WALLET_ADDRESS` with your Substrate or SS58 address (starting with 5 or st, respectively), and replace `PLOT_SIZE` with the capacity you intend to volunteer.
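With those edits made, bringing the pair up is standard Docker Compose usage. A minimal sketch (the directory name is just the one suggested above; these are ordinary Compose commands, not Subspace-specific):

```shell
# create the directory and place your edited docker-compose.yml inside it
mkdir subspace && cd subspace

# start the node and farmer in the background
docker-compose up -d

# follow the logs of both services to watch sync progress
docker-compose logs -f
```

`restart: unless-stopped` in the compose file is what gives you the automatic restarts mentioned earlier.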
More from the team themselves:
To run multiple nodes on the same network, which is currently necessary to make use of multiple disks on the same computer/network, there are a few arguments to be aware of.

Port 30333, used by default in your first docker-compose.yml, will have to change to another unused port for each additional node. I usually increment to 30334, 30335… for subsequent nodes with the
`"--port", "30334",`
argument in the node section of the docker-compose file.
Each node communicates with its farmer via a WebSocket server on a specific port. The default is 9944, so for subsequent nodes, to have the farmer connect to the correct node, force another port with this argument in the node section:
`"--ws-port", "9945",`
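Putting both changes together, a second node service might look like the sketch below. The service and volume names (`node2`, `node2-data`) and the specific port choices are my own, not from the team:

```yaml
  node2:
    image: ghcr.io/subspace/node:gemini-1b-2022-june-05
    volumes:
      - node2-data:/var/subspace:rw
    ports:
      # P2P port incremented from the first node's 30333
      - "0.0.0.0:30334:30334"
    restart: unless-stopped
    command: [
      "--chain", "gemini-1",
      "--base-path", "/var/subspace",
      "--execution", "wasm",
      "--pruning", "1024",
      "--keep-blocks", "1024",
      "--port", "30334",    # incremented P2P port
      "--ws-port", "9945",  # incremented WebSocket port for this node's farmer
      "--rpc-cors", "all",
      "--rpc-methods", "safe",
      "--unsafe-ws-external",
      "--validator",
      "--name", "INSERT_YOUR_ID"
    ]
```

Remember to declare `node2-data:` under the top-level `volumes:` section as well.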
Now tell the farmer which ports the node is using. Uncomment the lines reading:
`# ports:`
`#   - "127.0.0.1:9955:9955"`
and increment 9955 to another port such as 9956.
Update `--node-rpc-url` with the incremented WebSocket port you set in the node section:
`"--node-rpc-url", "ws://node:9945",`
Update `--ws-server-listen-addr` with the new port number from the lines you uncommented in Farmer step 1:
`"--ws-server-listen-addr", "0.0.0.0:9956",`
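For the matching second farmer, those edits combine into something like this sketch. The `farmer2`/`node2` service names are mine, and I’ve used 9956 consistently for the farmer’s own RPC port:

```yaml
  farmer2:
    depends_on:
      node2:
        condition: service_healthy
    image: ghcr.io/subspace/farmer:gemini-1b-2022-june-05
    ports:
      # uncommented and incremented from 9955
      - "127.0.0.1:9956:9956"
    volumes:
      - farmer2-data:/var/subspace:rw
    restart: unless-stopped
    command: [
      "--base-path", "/var/subspace",
      "farm",
      "--node-rpc-url", "ws://node2:9945",        # second node's --ws-port
      "--ws-server-listen-addr", "0.0.0.0:9956",  # matches the ports mapping above
      "--reward-address", "WALLET_ADDRESS",
      "--plot-size", "PLOT_SIZE"
    ]
```

As with the node, `farmer2-data:` needs its own entry under the top-level `volumes:` section.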
Node folder location (up to 50GB needed at time of writing). If need be, e.g. there is very little space on the drive containing your OS, comment out:
`- node-data:/var/subspace:rw`
and uncomment:
`# - /path/to/subspace-node:/var/subspace:rw`
then replace `/path/to/subspace-node` with your desired location, such as `C:/a/b/c` or `/media/user/drive`.
Farming plot location (as much space as you decide to volunteer, with a minimum of 60GB recommended at time of writing). To plot where you’d like, comment out:
`- farmer-data:/var/subspace:rw`
and uncomment:
`# - /path/to/subspace-farmer:/var/subspace:rw`
then replace `/path/to/subspace-farmer` with your desired location, such as `C:/a/b/c` or `/media/user/drive` (make sure it differs from the node location).
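As a concrete example, with both bind mounts swapped in (the paths below are placeholders for your own), the two `volumes:` sections would read:

```yaml
  # node service
  volumes:
    # - node-data:/var/subspace:rw
    - /media/user/drive/subspace-node:/var/subspace:rw

  # farmer service
  volumes:
    # - farmer-data:/var/subspace:rw
    - /media/user/drive2/subspace-farmer:/var/subspace:rw
```

Note the host-side path must already exist and be writable, since Docker bind mounts do not create permissions for you the way named volumes do.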
There is optimization to be done based on your individual setup (network speed, etc.). More space volunteered increases reward probability, but also increases sync time, and if your sync speed isn’t considerably faster than the network’s block production, your node will never (or only very slowly) catch up to gain the chance to earn block rewards with the storage you volunteered. For example, at the time of writing, one user quoted a 50GB sync at 23 hours (other parameters not known).
I dedicated 60GB, as this was the minimum quoted by the Subspace team, and again, using Docker with a max download speed of 320 Mbps, the sync took 8 hours despite periods of being stuck and not importing blocks. A second, 110GB node synced in maybe 5 hours. I’ll try 1 TB, then more soon, and report back. This may involve “daisy-chaining” nodes such that my first node syncs from the usual 50 or so web-based peers, then all remaining nodes sync only from that one trusted local node, to speed up sync and decrease bandwidth usage (thanks teslak). Furthermore, as more nodes sync, average sync speed will increase, and the team will likely continue to optimize the farming experience.
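I haven’t tested the daisy-chaining yet, but since the node is Substrate-based, the standard Substrate peering flags should apply. A chained node might add something like the following to its `command` list; the local IP and peer ID are placeholders, and the availability of `--reserved-only`/`--reserved-nodes` here is my assumption:

```yaml
    command: [
      # ...same arguments as before, plus:
      # sync only from my first, already-synced local node
      "--reserved-only",
      "--reserved-nodes", "/ip4/192.168.1.10/tcp/30333/p2p/LOCAL_NODE_PEER_ID"
    ]
```

The peer ID of the first node is printed in its startup logs, which `docker-compose logs` will show.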
Get in touch with me on Twitter @curi0n or @curion.lens on lenster.xyz with any questions or comments!
```yaml
version: "3.7"
services:
  node:
    # For running on Aarch64 add `-aarch64` after `DATE`
    image: ghcr.io/subspace/node:gemini-1b-2022-june-05
    volumes:
      # Instead of specifying volume (which will store data in `/var/lib/docker`), you can
      # alternatively specify path to the directory where files will be stored, just make
      # sure everyone is allowed to write there
      - node-data:/var/subspace:rw
      # - /path/to/subspace-node:/var/subspace:rw
    ports:
      # If port 30333 is already occupied by another Substrate-based node, replace all
      # occurrences of `30333` in this file with another value
      - "0.0.0.0:30333:30333"
    restart: unless-stopped
    command: [
      "--chain", "gemini-1",
      "--base-path", "/var/subspace",
      "--execution", "wasm",
      "--pruning", "1024",
      "--keep-blocks", "1024",
      "--port", "30333",
      "--rpc-cors", "all",
      "--rpc-methods", "safe",
      "--unsafe-ws-external",
      "--validator",
      # Replace `INSERT_YOUR_ID` with your node ID (will be shown in telemetry)
      "--name", "INSERT_YOUR_ID"
    ]
    healthcheck:
      timeout: 5s
      # If node setup takes longer than expected, you want to increase `interval` and `retries` number.
      interval: 30s
      retries: 5

  farmer:
    depends_on:
      node:
        condition: service_healthy
    # For running on Aarch64 add `-aarch64` after `DATE`
    image: ghcr.io/subspace/farmer:gemini-1b-2022-june-05
    # Un-comment following 2 lines to unlock farmer's RPC
    # ports:
    #   - "127.0.0.1:9955:9955"
    # Instead of specifying volume (which will store data in `/var/lib/docker`), you can
    # alternatively specify path to the directory where files will be stored, just make
    # sure everyone is allowed to write there
    volumes:
      - farmer-data:/var/subspace:rw
      # - /path/to/subspace-farmer:/var/subspace:rw
    restart: unless-stopped
    command: [
      "--base-path", "/var/subspace",
      "farm",
      "--node-rpc-url", "ws://node:9944",
      "--ws-server-listen-addr", "0.0.0.0:9955",
      # Replace `WALLET_ADDRESS` with your Polkadot.js wallet address
      "--reward-address", "WALLET_ADDRESS",
      # Replace `PLOT_SIZE` with plot size in gigabytes or terabytes, for instance 100G or 2T (but leave at least 10G of disk space for node)
      "--plot-size", "PLOT_SIZE"
    ]

volumes:
  node-data:
  farmer-data:
```