The concept of permanence has taken center stage, particularly with innovations like Arweave — a blockchain designed for permanent data storage. Here we will explore how Large Language Models (LLMs), stored on Arweave, can be seamlessly retrieved and utilized through the AR.IO network, showcasing a practical application of permanence in technology.
Integration with Projects like Brave:
Consider Brave, a browser known for its focus on privacy and innovation, integrating these permanently stored LLMs through AR.IO. This integration exemplifies how:
Permanence Enhances Reliability: By using models stored on Arweave, Brave can ensure that the AI capabilities it offers, like enhanced browsing experiences or intelligent assistants, are based on models that won't disappear or change unpredictably. This permanence provides a stable foundation for continuous service improvement without the fear of data loss or model drift due to external factors.
Decentralization Empowers Users: With AR.IO, users and developers get more control over the data and models they use. This aligns with Brave's ethos of empowering users by giving them control over their digital experience. The decentralized nature ensures that there's no single point of failure or control, aligning with the principles of Web3.
Efficiency and Accessibility: ArNS makes it straightforward for applications like Brave to retrieve specific versions of LLMs or update them, ensuring that users always have access to the latest or most appropriate AI models without needing to understand the underlying storage technology.
In this guide, I will show you how LLMs stored on Arweave can be retrieved through AR.IO using ArNS and put to use in other projects, such as Brave, on Ubuntu.
My PC configuration was a 2-core CPU, 8 GB of RAM, a 100 GB SSD, and Ubuntu 22.04.
Install Curl if Not Already Installed:
sudo apt install curl
Download the Brave Nightly Keyring:
sudo curl -fsSLo /usr/share/keyrings/brave-browser-nightly-archive-keyring.gpg https://brave-browser-apt-nightly.s3.brave.com/brave-browser-nightly-archive-keyring.gpg
Add Brave Nightly Repository:
echo "deb [signed-by=/usr/share/keyrings/brave-browser-nightly-archive-keyring.gpg] https://brave-browser-apt-nightly.s3.brave.com/ stable main"|sudo tee /etc/apt/sources.list.d/brave-browser-nightly.list
Update Package List:
sudo apt update
Install Brave Nightly:
sudo apt install brave-browser-nightly
Visit the official LM Studio website, download LM Studio for Linux (an AppImage), and make it executable:
chmod +x LM_Studio-0.2.31.AppImage
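You can then launch LM Studio directly from the AppImage (the exact filename may differ depending on the version you downloaded):
./LM_Studio-0.2.31.AppImage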
Download the Model Files via ArNS:
Meta Llama 3 8B Instruct Model: Visit meta-llama-3-8b-instruct-q4.ar.io
CodeQwen Chat Model: Visit codeqwen-chat-q3-k-m.ar.io
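If you prefer the terminal, the same ArNS names can be fetched with curl. This is a minimal sketch that assumes each ArNS domain serves the model file directly at its root; the output filenames match the ‘download’ and ‘download1’ names used in the next step:
curl -L -o download https://meta-llama-3-8b-instruct-q4.ar.io
curl -L -o download1 https://codeqwen-chat-q3-k-m.ar.io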
Rename the Model Files:
The downloaded files will be named ‘download’ and ‘download1’. Rename ‘download’ to Meta-Llama-3-8B-Instruct-Q4_0.gguf and ‘download1’ to CodeQwen1.5-7B-Chat-GGUF.gguf using the following commands:
mv download Meta-Llama-3-8B-Instruct-Q4_0.gguf
mv download1 CodeQwen1.5-7B-Chat-GGUF.gguf
Organize Files into LM Studio Directories:
Create or ensure the following directory structure:
For Meta LLaMA: models/Meta/Llama/
For CodeQwen: models/CodeQwen/Chat/
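If these directories do not exist yet, you can create them first. This assumes LM Studio's default models path under /home/your-user/.cache/lm-studio, the same path used in the mv commands below:
mkdir -p /home/your-user/.cache/lm-studio/models/Meta/Llama
mkdir -p /home/your-user/.cache/lm-studio/models/CodeQwen/Chat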
Place the renamed .gguf files into these directories using the following commands:
mv Meta-Llama-3-8B-Instruct-Q4_0.gguf /home/your-user/.cache/lm-studio/models/Meta/Llama/
mv CodeQwen1.5-7B-Chat-GGUF.gguf /home/your-user/.cache/lm-studio/models/CodeQwen/Chat/
Restart LM Studio so it detects the newly added models.
Select Your Model in LM Studio:
Navigate to the "Local Server
" tab.
Select your newly added model from the list.
Server Integration:
After your model finishes loading, you'll get a server endpoint (e.g., http://localhost:1234/v1/chat/completions).
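Before wiring it into Brave, you can sanity-check the endpoint from a terminal. This is a minimal sketch assuming LM Studio's local server exposes an OpenAI-compatible chat completions API; the model request name here is an example and should match whatever name you configure in Leo:
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "CodeQwen1.5-7B-Chat-GGUF", "messages": [{"role": "user", "content": "Say hello in one sentence."}]}'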
Open Brave Nightly:
Go to Leo on the left-hand side and click Add new model.
On the Add model page, enter a Label, the Model request name, and the Server endpoint (http://localhost:1234/v1/chat/completions).
Click Add model to save your settings.
Navigate to the Brave browser's home page, then select Leo at the top right.
Select your custom model (CodeQwen) from the Leo language model options menu.
AR.IO made it quick and easy to access and download the custom LLMs through readable ArNS domain names, and the setup was a straightforward process given basic familiarity with Linux.
Server Endpoint: Keep your server endpoint handy for integration with other applications or for testing different models. You can also run this setup on a VM in a remote location and connect your local machine's Brave Nightly browser to its server endpoint.
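A simple way to do this without exposing the port publicly is an SSH tunnel. This sketch assumes the remote VM runs the LM Studio server on port 1234 and is reachable as user@remote-vm (a hypothetical host):
ssh -N -L 1234:localhost:1234 user@remote-vm
Brave's Leo can then keep pointing at http://localhost:1234/v1/chat/completions on your local machine.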
Model Performance: Different models might require different system resources. Monitor your system's performance when switching between models.
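For example, you can watch memory usage from a terminal while a model is loaded (free and watch ship with Ubuntu; htop, if you prefer an interactive view, can be installed with sudo apt install htop):
watch -n 2 free -h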
Benefits of AR.IO for AI:
Scalability: AR.IO's gateways can handle large-scale data queries, which is vital for deploying AI models that require significant data throughput for real-time applications.
Cost-Effectiveness: By leveraging Arweave's unique economic model where storage is paid for once and lasts for centuries, projects can reduce ongoing costs associated with data storage, making innovative AI integrations more financially viable.
Innovation in AI Development: The permanence and accessibility provided by this setup encourage developers to experiment with and deploy AI models in environments where data integrity and availability are guaranteed, fostering innovation in how AI can be integrated into everyday technology.
In conclusion, the synergy between Arweave's permanent storage, AR.IO's decentralized gateways, and ArNS's user-friendly naming system not only demonstrates the practical application of permanence in technology but also opens new avenues for integrating AI into user-centric applications like Brave. This approach promises a future where data and models are perpetually available and where technology serves users with greater reliability and autonomy.