Hey everyone 👋
Firstly! Thanks a lot for the overwhelming response we received after launching the campaign. We are excited to see all of you trying out innovative things and testing Madara and the CLI itself. However, we have seen some of you face a few hiccups when deploying the chain, so we decided to write this blog to answer some of the common questions we have received.
We have made some patches to the CLI tool to make the deployment process easier and to fix some bugs. To update to the latest version, simply run the `git pull` command. Also, `init` a new app chain to ensure you're using the latest code.
The CLI tool creates a `da-config.json` file for you at `~/.madara/app-chains/<your_app_chain_name>/da-config.json`. The JSON file looks like this:
```json
{
  "ws_provider": "wss://goldberg.avail.tools:443/ws",
  "mode": "sovereign",
  "seed": "<seed>",
  "app_id": 0,
  "address": "<address>"
}
```
Edit the `seed` and `address` fields with your specific wallet details to change the wallet used for submitting data to Avail.
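If you'd rather script this than edit the file by hand, here is a minimal sketch using `sed`. It operates on a local copy of the file for illustration, and the seed and address values are the well-known Substrate dev credentials, used here purely as placeholders:

```shell
# Illustration: recreate the da-config.json the CLI generates (with its placeholders).
CONFIG="da-config.json"
cat > "$CONFIG" <<'EOF'
{
  "ws_provider": "wss://goldberg.avail.tools:443/ws",
  "mode": "sovereign",
  "seed": "<seed>",
  "app_id": 0,
  "address": "<address>"
}
EOF

# Swap in your wallet details (the values below are well-known dev examples, NOT real credentials).
sed -e 's|"seed": "<seed>"|"seed": "bottom drive obey lake curtain smoke basket hold race lonely fit walk"|' \
    -e 's|"address": "<address>"|"address": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY"|' \
    "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"

cat "$CONFIG"
```

In practice you would point `CONFIG` at `~/.madara/app-chains/<your_app_chain_name>/da-config.json` and substitute your own wallet's seed and address.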
As a part of the campaign, you’re required to host your chain on a server and share your endpoints with us here. However, there seems to be some confusion on how to go about this. To clarify this, we are sharing some basic steps we used to launch a node on AWS using the Madara CLI.
WARNING: The CLI tool is currently meant to deploy quick devnets that are easy to test and play around with. Hence, it doesn't make the optimizations that are required for a production build. So while this setup might work for your devnet, we don't recommend running it when you decide to launch your mainnet.
Firstly, we have made some patches to the CLI tool after the launch. So if you already have it installed, please update it using

```shell
git pull
cargo build --release
```
Install the dependencies mentioned here
Initialise your chain with

```shell
./target/release/madara init
```
Currently, the CLI tool doesn't support running in a detached mode. So we will use `screen` to start the Madara chain in a separate session.

```shell
screen -S madara
./target/release/madara run
```
Now that Madara is up and running, you can detach from this screen session using CTRL + A, then D.
Now, in a separate session, start the explorer. When starting the explorer, pass an additional flag to specify the host that will be used to access it. If you have a domain name configured, this is where it should go. It could also be your plain IP address, for example `--host=13.233.147.221` (notice you don't need to specify http/https).
```shell
screen -S explorer
./target/release/madara explorer --host=<HOST_ADDRESS>
```
Again, detach from the session using CTRL + A, then D.
Your Madara chain is up and running now 🚀
Now that your chain is up and running, the next step is to expose the endpoints needed to list the chain. Specifically, you need

- `rpc_url`: A public endpoint for your app chain to make RPC calls (port 9944 by default)
- `explorer_url`: A public endpoint where your app chain explorer is visible (port 4000 by default)
- `metrics_endpoint`: A public endpoint for your Prometheus metrics (port 9615 by default)
If you followed the above part, then these endpoints are already available locally on your system. You just need to expose them for the outside world to use them. On AWS, you can do this by creating a security group with inbound rules that allow TCP traffic on ports 9944, 4000, and 9615.
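If you prefer the AWS CLI over the console, here is a hypothetical dry-run sketch: it prints the `aws ec2 authorize-security-group-ingress` commands for the three ports instead of executing them, and the security group id is a made-up placeholder:

```shell
# Placeholder security group id; substitute the id of your instance's security group.
SG_ID="sg-0123456789abcdef0"

# Dry run: echo the commands for review instead of running them.
for PORT in 9944 4000 9615; do
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr 0.0.0.0/0
done
```

Drop the `echo` to actually apply the rules (this requires AWS credentials configured). Keep in mind that `0.0.0.0/0` opens the ports to the whole internet, which is acceptable for a throwaway devnet but not something you'd want in production.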
If your service is running on 13.233.147.221, for example (this could be a domain name as well if you have it configured), then your endpoints will be
- `rpc_url`: http://13.233.147.221:9944
- `explorer_url`: http://13.233.147.221:4000
- `metrics_endpoint`: http://13.233.147.221:9615/metrics
NOTE 1: You need to add `/metrics` at the end of the metrics endpoint.
NOTE 2: Do NOT add a trailing `/` to the endpoint.
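Putting the notes above together, here is a small shell sketch that derives the three endpoint URLs from a host (the IP below is the example used in this post; replace it with your own IP or domain):

```shell
# Example host from this post; substitute your own IP address or domain name.
HOST="13.233.147.221"

RPC_URL="http://$HOST:9944"                  # no trailing slash
EXPLORER_URL="http://$HOST:4000"             # no trailing slash
METRICS_ENDPOINT="http://$HOST:9615/metrics" # note the /metrics suffix

echo "rpc_url: $RPC_URL"
echo "explorer_url: $EXPLORER_URL"
echo "metrics_endpoint: $METRICS_ENDPOINT"
```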
We have changed the default block time from 6s to 20s. This should significantly reduce the costs of running the app chain. However, if you find yourself in need of more tokens, do reach out to us on the Discord channel mentioned below.
Thanks to the community, we have already received a very high number of PRs. To make the process easier for everyone, we have added a GitHub workflow which automatically runs the necessary checks on your PR to ensure all your endpoints are live and working. If you've already created a PR, please rebase on the latest main commit.
You can ask us to do it 🫡
At Karnot, we excel at providing infrastructure for app chains so that you can focus on your core business logic. As a part of the Avail campaign, we are running these nodes free of cost for a limited number of users. If you're interested in this offering, please fill out the form here. However, we only have limited places, and app chains that are serious about getting their solution to the market will be given priority.
Ideally, 2 GB memory and 0.5 vCPU should be enough to run a devnet node. However, the Madara CLI tool currently builds the Madara image locally. This allows us to use the latest images and saves us the time of building and pushing Docker images for multiple OSes. While we do plan to optimize this eventually, currently this is the fastest way to play around with Madara. For our testing, we used an Ubuntu `t4g.xlarge` instance on AWS (4 vCPU, 16 GB memory, and 50 GB storage).
The best place to ask questions right now would be the `#developer-discussions` channel on the Avail Discord. You can access it here. Do avoid asking questions on the Madara Telegram for now, as we want to reserve that for more dev discussions. However, we will be launching a Madara Discord soon 👀. Follow us here to remain updated!