Running Heurist on Io Net

A significant shift toward decentralization is underway in the computing landscape, driven by innovations such as io.net and by factors that spotlight the limitations of legacy Web2 infrastructure. Chief among these are the inefficiencies and bottlenecks of centralization, which hamper adaptability to the fast-paced demands of modern computing. The scarcity of high-performance GPUs obstructs the compute-intensive work that underpins advances in machine learning and artificial intelligence. Centralized systems also carry significant data-privacy risks, raising concerns about the security of sensitive information. Opaque pricing models make it hard for users to estimate costs accurately, and inflexible, unclear financial policies often hinder users' ability to manage their funds effectively.

At Nirmata Labs, our firsthand experience with Web2 service providers has surfaced several specific challenges. Accessing particular GPU models requires justification and a prolonged approval process, constraining our agility and responsiveness to project demands. The limited availability of top-tier GPUs such as Nvidia's H100, A100, or RTX A6000, essential for cutting-edge ML and AI projects, restricts AI labs' technological capabilities. Withdrawing funds is cumbersome, requiring direct interaction with providers and waiting periods of 5-7 days for bank processing. Customization options are also limited: only a few providers offer essential tools like Docker, Nvidia drivers, or the CUDA toolkit pre-installed, and options to preload machine learning models are scarce, stifling our capacity for innovation and experimentation.

In contrast, io.net stands out for its full customizability, allowing users to tailor their instances precisely to their needs. It offers seamless deployment with Nvidia drivers, the CUDA toolkit, and Anaconda included, providing a smoother and more efficient experience for developers and users alike, along with full decentralization, absolute liquidity, instant withdrawals, and no approvals required to rent any machine. In this article, we will look at how to use io.net to set up a miner for Heurist.

Hardware and Software Requirements for running a Stable Diffusion Miner for Heurist:

  1. Nvidia Cards with at least 12 GB of VRAM

  2. CUDA Toolkit 12.1 or 12.2

  3. Nvidia GPU Drivers

  4. Anaconda3 or Miniconda3
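These prerequisites can be checked from a shell before committing to a machine. A minimal sketch, assuming a POSIX shell on the node; the command names are the standard ones shipped with the driver, toolkit, and conda, while the helper name and the 12288 MiB floor (12 GB of VRAM) are ours:

```shell
#!/bin/sh
# Pre-flight sketch: confirm the required tools are on PATH and that a given
# VRAM figure (in MiB, as nvidia-smi reports it) meets the 12 GB requirement.

meets_vram_floor() {
  # 12 GB = 12288 MiB; succeeds when $1 is at least that.
  [ "$1" -ge 12288 ]
}

for tool in nvidia-smi nvcc conda; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

If any of the three tools reports missing, the corresponding step later in this guide will install or expose it.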

Step 1: Connecting to the Console:

  • Visit io.net and sign up for an account.

  • You can add USDC to your balance on io.net by clicking the ‘Reload with Solana Pay’ button in the top left-hand corner; this will let us pay for our compute needs. You can also choose to pay later during the cluster deployment setup.

  • Navigate to the IO Cloud section and select IO Net Cloud:

Select IO Net Cloud
  • Click the Deploy button next to the “Ray” option, which is the first one listed:
Deploy Ray Cluster
  • Once on the Create New Cluster page, select the “General” option under Cluster Types. If you are deploying an LLM miner, you can use the Inference type instead to handle heavy workloads and produce low-latency inferences, but for this tutorial we’ll use the General option. Next, scroll down and select the supplier; we’ll use io.net. This is a glimpse of what the setup screen encompasses:
Creating a Cluster
  • Scroll down to the Select Your Cluster Processor option. IO Net lists a multitude of GPUs available for leasing as a cluster, ranging from low-end general-purpose GPUs to high-end, cutting-edge AI and ML GPUs like the A100 and H100, which are scarce on the majority of centralized providers:
Select Your Cluster Processor
  • In this example, we have selected an RTX A4000 with 16 GB of VRAM to run the Stable Diffusion miner for Heurist. After scrolling down, we select the location as United States, and for the connectivity tier we select Ultra High Speed:
Select Location
Select Connectivity Tier
  • After all the appropriate options have been selected, this is what the final summary before the deployment will look like:
  • Re-check your selections and click on the Deploy button here:
  • The deployment screen will look like this while it's processing your payment and preparing the deployment:
  • After deployment, your IO Cloud dashboard will show the new cluster:
  • We have now successfully deployed an 8 x Nvidia RTX A4000 Cluster!

Step 2: Initializing the Deployment

  1. After the deployment is successful, click on the Clusters tab and select the instance.

  2. After scrolling down on your cluster’s page, on the bottom right you will see this:

  • We can either use Visual Studio or Jupyter Notebook to access the cluster’s head node and its terminal.

  • For this tutorial we’ll use Visual Studio through the IO Net cluster by clicking on the Visual Studio Button. We recommend using Jupyter Notebook if you wish to use Miniconda instead of Anaconda.

  • The password for your dev environment will be provided by IO Net under the dev environment tab. After the Visual Studio setup is complete, this is what you will see:

  • Press CTRL + SHIFT + ` in order to open up a terminal for our head node.

  • Once the terminal loads up, we can verify that the NVIDIA drivers and the CUDA toolkit are pre-installed by running this command in the shell:

nvidia-smi

  • A successful installation will look like this:
  • Displayed is our configuration featuring an RTX A4000 equipped with 16GB of VRAM and the CUDA Toolkit version 12.2, in addition to the installed NVIDIA drivers. This setup fulfils the prerequisites for operating a Heurist miner. With these specifications in place, we are now positioned to proceed with the installation of Anaconda or Miniconda.
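For scripting the same verification, nvidia-smi can emit just the fields of interest as CSV instead of the full table, and the memory column is easy to pull out with awk. The nvidia-smi flags below are standard; the helper name is ours:

```shell
# Query only name, total memory, and driver version, one CSV line per GPU:
#   nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv,noheader
# Example output line: "NVIDIA RTX A4000, 16376 MiB, 535.104.05"

# vram_mib (our helper) extracts the memory column in MiB from such a line,
# which can then be compared against the 12 GB requirement.
vram_mib() {
  echo "$1" | awk -F', ' '{ sub(/ MiB/, "", $2); print $2 }'
}
```

This form is handy if you later automate node checks across several rented clusters.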

Step 3: Installing Dependencies:

  1. If not already logged in, log into the root user by entering the command:

    su -

  2. Update and upgrade Linux packages and dependencies:

    apt-get update

    apt-get upgrade

  3. Install wget, tmux and Neovim:

    apt-get install wget

    apt-get install neovim

    apt-get install tmux

Important Note: IO Net clusters usually come with Anaconda3 pre-installed. You can check whether it is installed by typing conda list or conda --version. If Anaconda3 is pre-installed, you can skip to Step 4, where we create the environment.

Installing Miniconda (only if Anaconda3 is not already installed):

  1. Create a directory for Miniconda3:

    mkdir -p ~/miniconda3

  2. Download the latest Miniconda Installation Script:

    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh

  3. Run the Install Script:

    bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3

    After the installation is finished, it will look like this:

  • Delete the Install Script:

    rm -rf ~/miniconda3/miniconda.sh

  • Initialize conda in your bash profile:

    ~/miniconda3/bin/conda init bash

  • Exit the shell; we need to reconnect for the conda initialization to take effect:

    exit

  • After exiting, wait 5 seconds and re-open the terminal by pressing:

    CTRL + SHIFT + `

Verifying the Installation and creating the conda environment:

  • After restarting the shell, run:

    conda list

  • If Miniconda has been installed successfully, you will see something like this after running conda list:

Step 4. Creating the Environment:

  1. Create a new tmux session, in which we will create and activate the environment and set up the Heurist miner, by first running:

    tmux new -s heurist

  2. To establish our environment, execute the following command and allow some time for the downloading and extraction of the necessary packages:

    conda create --name gpu-3-11 python=3.11

  3. Activate the environment inside the tmux session by running:

    conda activate gpu-3-11

  4. Since we already have nvidia-smi working and the CUDA toolkit installed, we can go straight ahead and install the conda environment dependencies needed to run the miner (this will take roughly 10 minutes):

    conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

Command taken from: https://pytorch.org/get-started/locally/

When you see:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done

It means PyTorch and the other dependencies were installed correctly. We can now move on to the next step.
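One detail worth noting: the pytorch-cuda=12.1 build does not have to match the node's toolkit version exactly, since CUDA builds are generally compatible within the same major version, so a 12.1 PyTorch build runs fine against the 12.2 toolkit we saw in nvidia-smi. A sketch of that check (helper names are ours):

```shell
# Compare only the major component of two CUDA version strings; within one
# major release (e.g. 12.1 vs 12.2) the PyTorch build and the installed
# toolkit interoperate, so a major-version match is the practical check.
cuda_major() {
  echo "$1" | cut -d. -f1
}

cuda_compatible() {
  [ "$(cuda_major "$1")" = "$(cuda_major "$2")" ]
}
```

For example, cuda_compatible 12.1 12.2 succeeds, while cuda_compatible 11.8 12.2 would flag a mismatch worth investigating before running the miner.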

Step 5. Cloning the Miner Repository:

  • Clone the official repository by running:

    git clone https://github.com/heurist-network/miner-release

  • Enter into the miner-release directory:

    cd miner-release

  • Install the Python dependencies needed to run the miner:

    pip install python-dotenv

    pip install -r requirements.txt

  • After installing all the dependencies, configure your Miner ID to receive rewards by creating a .env file using an editor of your choice:

  • For this tutorial we have chosen to use Neovim as our file editor but you are free to use nano, vi or vim as well.

  • Create and open the .env file while we are still in the miner-release directory:

    nvim .env

  • Configure your Miner ID in order to be eligible for incentives and rewards by entering your 0x EVM address into your .env file like this: MINER_ID_0=0xYourWalletAddressHere

  • Exit Neovim by pressing Escape, then typing :wq (or :exit) and pressing Enter.

  • After exiting Neovim, confirm that the .env file is configured and that you are still inside the tmux session.
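Typos in the wallet address are easy to make in an editor, so a quick format check on the .env value is cheap insurance. This sketch only validates the shape of an EVM address (0x plus 40 hex characters), not its checksum; the helper name and the usage snippet are ours:

```shell
# Succeeds when the argument looks like an EVM address: 0x + 40 hex chars.
looks_like_evm_address() {
  echo "$1" | grep -Eq '^0x[0-9a-fA-F]{40}$'
}

# Example usage against the value stored in .env:
#   looks_like_evm_address "$(grep '^MINER_ID_0=' .env | cut -d= -f2)" \
#     && echo "address format ok"
```

A malformed Miner ID would silently cost you rewards, so it is worth the few seconds to check.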

Step 6. Finally, Running the Miner:

  1. While we are still in tmux, run:

    python3 sd-miner-v1.0.0.py

    or

    python sd-miner-v1.0.0.py

    Note: Make sure you select the correct version by checking the directory. As of mid-March 2024, 1.0.0 is the latest version for Stable Diffusion.

  2. After running the miner, you will be asked whether to install the miner’s packages; enter yes.

  3. Soon the model will be ready and your tmux session will show “No Model Updates Required.” followed by “All model files are up to date, Miner is ready.”

  4. When you see these messages, the miner is ready and running, so you can detach from tmux by pressing CTRL + b, then d right after.

  5. You can now safely disconnect from your machine, as the miner keeps running in the background tmux session.

Nirmaan

The cornerstone of crypto AI networks is computing power: the compute needed to run inference on computationally demanding models, or to execute a model and generate a cryptographic proof verifying its correct execution. High-performance GPUs are essential to the operation of such networks, yet not everyone has access to this high-cost hardware or the technical know-how to run it with high performance and uptime.

We at Nirmaan are democratizing access to compute and are excited to enter a strategic partnership with Heurist, providing our miner-as-a-service middleware product to Heurist users who wish to provide compute to earn rewards.

Nirmaan aggregates the most cost effective compute from web2 & web3 providers such as Io Net, securing cheap and effective compute so that we can provision it to Heurist.

We will initially offer and manage NVIDIA RTX A6000 GPUs to run both LLM & Stable Diffusion models on the Heurist network.
