Computational Concertos — The Power of Orchestration

How does Blockless enable users to power applications just by using them? The answer lies in our Uber-esque matchmaking of community devices with computational tasks.


A couple of weeks ago, we announced that Blockless would be the first network enabling users to contribute compute resources to applications just by using them.

For decentralized applications (dApps) across every vertical, this bridges the gap between decentralization as a marketable concept and decentralization as a reality, with everyday users powering and receiving rewards from their favorite everyday applications.

We’ve been working on this for over two years, having recognized that recent industry efforts to replace Amazon Web Services and other centralized cloud-computing providers had created networks that are just as inaccessible, confusing and in many cases downright exploitative for your average person.

Once our engineers had worked out how our network could harness compute resources from a user’s browser tab without them needing to download a single thing, the key challenge we faced was simple…

If current networks rely upon huge professional-grade servers, maintained round the clock by dedicated teams, how could our network match, or even outperform, them with nodes running on a 2007 Dell XPS? Or a 2017 Chromebook? Or your dad’s iPhone 6? Or was it a 5s…


Ahhhh I’m orchestrating

The answer is orchestration: deciding which nodes go where. We call it “Dynamic Resource Matching” (DRM), which is a fancy way of explaining how different workloads across different applications will make use of different nodes, depending upon (1) the demands of that particular workload and (2) the capacity of the node device.

This differs from other networks that only permit nodes capable of running every type of workload, with system requirements out of reach for the majority of people. Blockchain networks such as Solana and Sui, for example, require network nodes to have at least 128 GB of RAM, which isn’t just inefficient, but also makes participation impossible for most of the community.

DRM works like this. Imagine an application has 10,000 different nodes across the Blockless network that have opted in to help with its computations. For each type of workload, these 10,000 nodes are evaluated, taking into account the following:

  1. Hardware Specification: The processing power, memory, storage, and network capabilities of each node device.

  2. Geolocation: A node’s distance to the data source and/or the end-user, for reduced latency.

  3. Existing Workloads: The current load on a node device, to avoid overworking any single device.

  4. Computation Type: The specific requirements of the computational task, such as the need for GPU processing or large memory space.

  5. Node Performance History: Reliability and efficiency score in task execution, with specific scores for different types of workloads.

Let’s look at a simple betting system as an example. For webpage hosting (what lets a website respond to you once you’ve clicked something), tab nodes on devices like your home PC would suffice. For processing arbitrary data (e.g. determining the roll of a die, or the random draw of a set of cards) in a verifiable manner using ZK proofs, the computation involved would be heavier, requiring computers with high-end GPUs (e.g. an RTX 4090) or a cluster of GPUs for proof generation.
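
To make that split concrete, here’s a minimal sketch, in TypeScript, of how the evaluation criteria above and these two betting-system workloads might be described as data. All names and numbers are illustrative assumptions, not the actual Blockless schema:

```typescript
// Hypothetical shapes for illustration only; field names and values are
// assumptions, not the real Blockless node or task descriptors.

interface NodeProfile {
  id: string;
  cpuCores: number;                         // hardware specification
  memoryGb: number;
  hasGpu: boolean;
  location: { lat: number; lon: number };   // geolocation
  currentLoad: number;                      // existing workloads: 0 (idle) to 1 (saturated)
  reliabilityScore: number;                 // performance history: 0 to 1
}

interface TaskRequirements {
  kind: "webHosting" | "zkProofGeneration"; // computation type
  minMemoryGb: number;
  needsGpu: boolean;
  userLocation: { lat: number; lon: number };
}

// The two workloads from the betting example: serving the page is light,
// generating a ZK proof for the dice roll is heavy.
const servePage: TaskRequirements = {
  kind: "webHosting",
  minMemoryGb: 1,
  needsGpu: false,
  userLocation: { lat: 51.5, lon: -0.1 },
};

const proveDiceRoll: TaskRequirements = {
  kind: "zkProofGeneration",
  minMemoryGb: 32,
  needsGpu: true,
  userLocation: { lat: 51.5, lon: -0.1 },
};
```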

A Step-by-Step Breakdown of Blockless’s Resource Matching Process

  1. Task Initiation and Categorization.

Someone, somewhere, pressed a button on an application. Oops. That means they’re expecting something to happen, so it’s time to categorize the task based on its specific requirements, such as the necessary processing power, memory, storage capacity, and any special hardware needs (like a GPU for machine-learning tasks).
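
As a rough sketch of what that categorization step could look like (hypothetical names and thresholds, assuming the application declares a workload type up front):

```typescript
// Illustrative only: map a declared workload type to estimated requirements.
type WorkloadKind = "webHosting" | "dataProcessing" | "zkProofGeneration";

interface TaskSpec {
  kind: WorkloadKind;
  minMemoryGb: number;
  needsGpu: boolean;
}

// Assumed lookup table; in practice the requirements would come from the
// application itself, not be hard-coded like this.
const REQUIREMENTS: Record<WorkloadKind, TaskSpec> = {
  webHosting: { kind: "webHosting", minMemoryGb: 1, needsGpu: false },
  dataProcessing: { kind: "dataProcessing", minMemoryGb: 8, needsGpu: false },
  zkProofGeneration: { kind: "zkProofGeneration", minMemoryGb: 32, needsGpu: true },
};

function categorizeTask(kind: WorkloadKind): TaskSpec {
  return REQUIREMENTS[kind];
}
```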

2. Node Capability Assessment.

U up? Our DRM system assesses the capacity and suitability of all available node devices in the network. This includes evaluating their hardware specifications, such as CPU/GPU performance, RAM, storage, and network bandwidth, as well as what those devices are currently busy with.
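
A minimal sketch of that capability check, assuming each node advertises a simple profile (names are hypothetical):

```typescript
// Illustrative capability check; field names are assumptions.
interface NodeProfile {
  memoryGb: number;
  hasGpu: boolean;
  currentLoad: number; // 0 (idle) to 1 (saturated)
}

interface TaskSpec {
  minMemoryGb: number;
  needsGpu: boolean;
}

// A node is a candidate only if it has the hardware and some spare capacity.
function isCapable(node: NodeProfile, task: TaskSpec): boolean {
  return (
    node.memoryGb >= task.minMemoryGb &&
    (!task.needsGpu || node.hasGpu) &&
    node.currentLoad < 0.8 // arbitrary headroom so no device gets overworked
  );
}
```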

3. Performance History Assessment.

Time to check the receipts. The system reviews the historical performance data of nodes, including their reliability, efficiency in executing previous tasks, and any reported failures or issues.
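
One simple way to turn that history into a number, keeping a separate score per workload type as mentioned earlier (illustrative only, not the actual Blockless scoring function):

```typescript
// Illustrative: derive a per-workload reliability score from past executions.
interface ExecutionRecord {
  workload: string; // e.g. "webHosting" or "zkProofGeneration"
  succeeded: boolean;
}

function reliabilityFor(history: ExecutionRecord[], workload: string): number {
  const relevant = history.filter((r) => r.workload === workload);
  if (relevant.length === 0) return 0.5; // no data yet: assume a neutral score
  const successes = relevant.filter((r) => r.succeeded).length;
  return successes / relevant.length;
}
```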

4. Geolocation Assessment.

WYA? Time to dig into the location of each node in relation to the data source and/or end-user. Nodes that are geographically closer to the data source or the end-user are identified, as they’re often able to offer lower latency and faster data transmission.
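
Geographic distance is only a rough proxy for latency (measured round-trip times matter more in practice), but here’s a quick sketch of the idea using the standard haversine formula:

```typescript
// Great-circle (haversine) distance between a node and the end-user,
// used as a rough stand-in for network latency.
interface Coords { lat: number; lon: number }

function distanceKm(a: Coords, b: Coords): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h)); // Earth radius ≈ 6371 km
}

// Example: London to Paris is roughly 340 km.
console.log(distanceKm({ lat: 51.5, lon: -0.1 }, { lat: 48.9, lon: 2.35 }));
```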

5. Resource Matching and Task Allocation.

It’s time for a marriage of convenience. Using the collected data, the DRM mechanism matches the task with a pool of the most suitable nodes in the network. The task is then allocated to some of these nodes at random, whilst ensuring that the task is distributed in a way that optimally uses the network’s resources.
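
A minimal sketch of that matching step, with made-up weights: score each capable node on the criteria above, keep a pool of the best candidates, then draw the required number of nodes from that pool at random:

```typescript
// Illustrative matching and allocation; weights and pool size are arbitrary.
interface Candidate {
  id: string;
  reliability: number;   // 0..1, from performance history
  proximity: number;     // 0..1, higher = closer to the data source / user
  spareCapacity: number; // 0..1, higher = less loaded
}

function score(c: Candidate): number {
  return 0.5 * c.reliability + 0.3 * c.proximity + 0.2 * c.spareCapacity;
}

function allocate(candidates: Candidate[], replicas: number): Candidate[] {
  // Keep a pool of the top-scoring nodes, larger than the number we need.
  const pool = [...candidates]
    .sort((a, b) => score(b) - score(a))
    .slice(0, replicas * 3);
  // Fisher-Yates shuffle, then take the first `replicas` nodes at random,
  // so the same few top nodes don't get every single task.
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, replicas);
}
```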

6. Task Execution Monitoring.

Just checking in… Once the task is allocated, the system continuously monitors its execution, tracks the performance of each node, and makes adjustments if necessary — such as reallocating parts of the task if a node becomes unavailable or is underperforming.
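
A toy version of that monitoring-and-reallocation logic might look like this (hypothetical names, with an arbitrary 5-second deadline for re-issued work):

```typescript
// Illustrative only: if an assigned node misses its deadline, hand its slice
// of the task to a standby node from the original candidate pool.
interface Assignment {
  nodeId: string;
  deadlineMs: number; // when a result is expected
  completed: boolean;
}

function needsReallocation(a: Assignment, nowMs: number): boolean {
  return !a.completed && nowMs > a.deadlineMs;
}

function reassign(assignments: Assignment[], standbys: string[], nowMs: number): Assignment[] {
  return assignments.map((a) =>
    needsReallocation(a, nowMs) && standbys.length > 0
      ? { nodeId: standbys.shift()!, deadlineMs: nowMs + 5_000, completed: false }
      : a
  );
}
```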

7. Result Compilation and Verification.

Ahhhhhh I’m verifying. Upon task completion, the results from various nodes are compiled. The system then verifies the results (using the app’s preferred verification algorithm) for accuracy and integrity, ensuring that the output meets the required standards and specifications of the task.
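
As the simplest possible example of such a verification algorithm, an app could run the same task on several nodes and accept the answer a strict majority agrees on (ZK-proof workloads would instead verify the proof itself):

```typescript
// Simple majority-vote verification over redundant results; illustrative only.
function majorityResult(results: string[]): string | null {
  const counts = new Map<string, number>();
  for (const r of results) counts.set(r, (counts.get(r) ?? 0) + 1);
  for (const [value, count] of counts) {
    if (count > results.length / 2) return value; // strict majority wins
  }
  return null; // no consensus: retry or escalate the task
}

console.log(majorityResult(["42", "42", "41"])); // "42"
```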

8. Feedback and Optimization.

If you were happy with our service today… The node performance data is recorded for future assessments. This feedback is used to continuously optimize the DRM algorithm, improving the efficiency and reliability of future task allocations.
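
One common way to fold each new outcome into a node’s score is an exponential moving average, so recent behaviour counts most. A tiny sketch (the 0.2 weight is an arbitrary choice, not a Blockless parameter):

```typescript
// Illustrative feedback step: blend the latest outcome into the stored score.
function updateReliability(previous: number, succeeded: boolean): number {
  const alpha = 0.2; // how strongly the newest result moves the score
  return alpha * (succeeded ? 1 : 0) + (1 - alpha) * previous;
}

// A node with a 0.9 score that fails one task drops to about 0.72.
console.log(updateReliability(0.9, false));
```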

9. Rewards and Incentives.

We’re sure you’re in it for the tech too, but it’s worth noting that participating nodes can receive rewards from both the application and the Blockless Network based on their contribution to the task, encouraging continued participation and investment in the network.


A perfect match? xoxo

By breaking down applications into tasks, we’re not only able to leverage the devices of those who use these applications, but we’re also able to optimize every single click, tap and swipe. From ensuring low latency (less lag) with our geolocation assessment, to selecting the most efficient consensus and verification algorithms for workloads that need unique or additional forms of security, we’re laser-focused on matching the best operators with each and every task.

And since we’re so keen on matchmaking — we’ll see you here on Valentine’s Day…
