What is the goal of LabDAO?
January 17th, 2022

“At LabDAO, we are coming together to build a community-owned and operated platform to run scientific laboratory services, exchange protocols, and share data.” Currently, this is the one-liner we use at LabDAO to describe what we are doing. As you can probably tell from the sentence, we are still working on improving our messaging.

This piece is meant to help me think more clearly about LabDAO and what its protocol, the LAB protocol, could help us achieve. Hopefully, it is also useful for community members and friends on the internet who are hearing about LabDAO for the first time.

I believe the goals of LabDAO are threefold:

  1. build a web3 LAB protocol that serves as an exchange for requesting and providing services among peers.
  2. enable an exchange not only of the services themselves, but also of open-source instructions on how to provide them.
  3. enable an organically growing knowledge graph of results coming out of laboratory experiments.

I am going to expand on these three points below and explain how succeeding with one goal will be the starting condition to succeed with the next goal.

1. Building a LAB protocol

When Arye and I first started writing about LabDAO, we talked about the “AWS for deep tech”. Unlike existing providers of cloud computing, we thought, a platform for consuming laboratory services through a standardized API would require an open-source network of small labs offering and consuming specialized services among peers. The participating labs within the network could be physical laboratories, but also computational APIs and other providers of systematic work (laboratory → laborare (Latin) = to labor).

Right now I am increasingly interested in computational-biology APIs, mostly because that is something many DAO members use or interact with regularly (“solve your own problem”), and because it is one of the few elements of the protocol’s supply side that we can start building ourselves today.

I hope that in the near future we can have a working version of the protocol and publish it together with a set of example services, so community members can start hosting their own APIs and offering them on the web. Initially, I believe the protocol will mostly be based on computation and the exchange of token-gated, encrypted results via IPFS. Hopefully, we can then start onboarding more and more atom-space/wet-lab services onto the platform, too. Finally, I believe that adding a payment and reputation layer to the protocol will help enable the sustainable growth of services.
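To make the idea of token-gated, encrypted results concrete, here is a minimal sketch in Python. Everything in it is hypothetical: a plain dict stands in for IPFS pinning, a toy hash-based XOR stream cipher stands in for real encryption (a vetted AEAD scheme would be used in practice), and token-gating is modeled as a key registry indexed by the holder's address.

```python
import hashlib
import secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher for illustration only -- do NOT use for real secrets.
    # XOR-ing twice with the same keystream restores the plaintext.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def content_id(blob: bytes) -> str:
    # Stand-in for an IPFS CID: a hash of the stored (encrypted) bytes.
    return hashlib.sha256(blob).hexdigest()

class ResultExchange:
    """Hypothetical token-gated result store; a dict stands in for IPFS."""

    def __init__(self):
        self.store = {}        # cid -> ciphertext ("pinned" blobs)
        self.access_keys = {}  # (cid, holder) -> decryption key

    def publish(self, result: bytes, holder: str) -> str:
        key = secrets.token_bytes(32)
        blob = keystream_xor(result, key)
        cid = content_id(blob)
        self.store[cid] = blob
        # "Token-gating": only the access-token holder receives the key.
        self.access_keys[(cid, holder)] = key
        return cid

    def retrieve(self, cid: str, requester: str) -> bytes:
        key = self.access_keys.get((cid, requester))
        if key is None:
            raise PermissionError("requester does not hold the access token")
        return keystream_xor(self.store[cid], key)

exchange = ResultExchange()
cid = exchange.publish(b"alignment scores: 0.93, 0.88", holder="0xRequester")
assert exchange.retrieve(cid, "0xRequester") == b"alignment scores: 0.93, 0.88"
```

The point of the sketch is the shape of the exchange: results live at a content address anyone can reference, while only the requester who paid for the job holds the key to read them.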

A world of small APIs coming together.

2. Enabling the open-source spread of instructions

I believe that over time we will see multiple independent participants within the network offer the same service. The emergence of multiple providers for the same service has important consequences for the protocol:

  • Competition emerges and market thickness increases. A reputation-weighted Dutch (descending-price) auction could potentially be introduced to enable transparent price discovery of high-quality services.
  • Redundancy improves security for the requester in two ways. First, by increasing the uptime of a service, since more than one provider offers it. Second, by enabling “parallel compute”: running the same job with multiple providers to validate the correctness and reproducibility of results (with added costs, of course).
  • Standardization emerges. In a pursuit to simplify the process of finding a provider for a requested service, marketplace participants will gradually converge on a common set of metadata to describe a laboratory service. These instructions will need to be simple, but not simplistic, to facilitate broad adoption. Potentially, the standardization can even be further incentivized using additional token-based mechanisms.
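To make the auction idea in the first bullet concrete, here is one possible reputation-weighting scheme, sketched in Python. It is entirely hypothetical, not LabDAO's actual design; note that for procurement the price clock runs in reverse (the offered price rises until a provider accepts).

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    reserve: float      # lowest total payment the provider will accept
    reputation: float   # 0..1 score from past jobs (hypothetical metric)

def reverse_dutch_clock(providers, start, step, ceiling):
    """Procurement price clock: the offered unit price rises from `start`
    until one provider's reputation-discounted ask is met.

    One possible weighting scheme (assumed here, not from the source): the
    winner is paid clock_price * (1 + reputation), so a provider accepts as
    soon as clock_price >= reserve / (1 + reputation). Reputable providers
    therefore win earlier, and are paid a premium for their track record.
    """
    price = start
    while price <= ceiling:
        willing = [p for p in providers if price >= p.reserve / (1 + p.reputation)]
        if willing:
            winner = max(willing, key=lambda p: p.reputation)  # tie-break on reputation
            return winner.name, price, price * (1 + winner.reputation)
        price += step
    return None, None, None  # no provider willing below the ceiling

providers = [
    Provider("lab-a", reserve=40.0, reputation=0.9),  # trusted lab
    Provider("lab-b", reserve=38.0, reputation=0.1),  # new, unproven lab
]
winner, clock, payment = reverse_dutch_clock(providers, start=10.0, step=1.0, ceiling=100.0)
# lab-a's discounted ask is 40 / 1.9 ≈ 21.1; lab-b's is 38 / 1.1 ≈ 34.5,
# so lab-a wins when the clock reaches 22.0, despite its higher reserve.
```

The design choice worth noticing: weighting the ask by reputation lets a trusted lab beat a nominally cheaper but unproven one, which is exactly the "price discovery of high-quality services" the bullet describes.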

It is this standardization that I hope will create more transparency in how work is being done, particularly in the life sciences. Surely, instructions could also be posted in an encrypted/private manner. My hope, however, is that we will see a growing corpus of accessible, standardized instructions for biotech emerge on top of the LAB protocol.

3. Enabling an organically growing knowledge graph

Once we see structure emerge in the metadata that users exchange to describe requested services, we will be able to analyze not only the metadata itself, to learn how processes are usually done, but also the linked data files that result from requested laboratory services. The result will be a public knowledge graph of biological experiments, manufacturing processes, and more: the JSON metadata serves as the descriptor, the data asset itself as the leaf node. Perhaps not all datasets will be openly accessible, but they will be indexed on the permaweb (and people could be offered payments for sharing their data).
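A toy illustration of how such JSON descriptors could link data assets into a provenance graph. The field names, service names, and CIDs below are all made up for the example; they are not a proposed LabDAO schema.

```python
import json

# Hypothetical metadata records, one per completed service call.
records_json = """
[
  {"service": "sequence-alignment", "inputs": ["cid-reads"], "output": "cid-aln"},
  {"service": "variant-calling",    "inputs": ["cid-aln"],   "output": "cid-vcf"},
  {"service": "protein-structure",  "inputs": ["cid-vcf"],   "output": "cid-pdb"}
]
"""

def build_graph(records):
    """Each edge points from an input data asset (a leaf or intermediate
    node) to the output it helped produce, labeled with the service run."""
    edges = []
    for r in records:
        for cid in r["inputs"]:
            edges.append((cid, r["service"], r["output"]))
    return edges

def provenance(edges, cid):
    """Walk the edges backwards to collect every upstream dataset of `cid`."""
    upstream = set()
    frontier = [cid]
    while frontier:
        node = frontier.pop()
        for src, _service, dst in edges:
            if dst == node and src not in upstream:
                upstream.add(src)
                frontier.append(src)
    return upstream

edges = build_graph(json.loads(records_json))
# Full lineage of the structure prediction, back to the raw reads:
assert provenance(edges, "cid-pdb") == {"cid-vcf", "cid-aln", "cid-reads"}
```

Because every record only names content addresses, the graph grows organically as transactions accumulate: no one has to curate it, and lineage queries like the one above fall out of the metadata for free.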

What could be done with these organically growing knowledge graphs is something I have only briefly explored in the past. One potential outcome would be a whole category of “layer 2” services that consume open data from other transactions and offer up-to-date machine-learning APIs to predict gene function, protein folding, or cellular decision making.

If you are interested in building the internet of work, focused on the life sciences, get in touch below:

Thank you, Arye, Jocelynn, Boris, Jan, Liz, and Lily for the conversations and messages leading to this post.
