“At LabDAO, we are coming together to build a community-owned and operated platform to run scientific laboratory services, exchange protocols, and share data.” This is currently the one-liner we use at LabDAO to describe what we are doing. As you can probably tell from the sentence, we are still working on improving our messaging.
This piece is meant to help me think more clearly about LabDAO and what its protocol, the LAB protocol, could help us achieve. Hopefully, it is also useful for community members and friends on the internet who are hearing about LabDAO for the first time.
I believe the goals of LabDAO are threefold:
I will expand on these three points below and explain how success with each goal creates the starting conditions for the next.
When Arye and I first started writing about LabDAO, we talked about the “AWS for deep tech”. Unlike existing cloud-computing providers, we thought a platform for consuming laboratory services through a standardized API would require an open-source network of small labs offering and consuming their specialized services among peers. The participating labs within the network could be physical laboratories, but also computational APIs and other providers of systematic work (laboratory → laborare (Latin) = to labor).
Right now I am increasingly interested in computational-biology APIs, mostly because they are something many DAO members use or interact with regularly (“solve your own problem”) and because they are one of the few elements of the protocol's supply side that we can start building ourselves today.
I hope that in the near future we can have a working version of the protocol and publish it together with a set of example services, so community members can start hosting their own APIs and offering them on the web. Initially, I believe the protocol will mostly be based on computation and the exchange of token-gated, encrypted results via IPFS. Hopefully, we can then start supporting more and more atom-space/wet-lab services on the platform, too. Finally, I believe adding a payment and reputation layer to the protocol will help enable the sustainable growth of services.
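To make the "token-gated encrypted results via IPFS" idea concrete, here is a deliberately simplified sketch of the exchange. Everything here is an assumption for illustration: the field names, the `fake_cid` helper, and the toy XOR stream cipher are not part of any published LAB-protocol spec. A real implementation would use an audited encryption scheme (e.g. AES-GCM), real multihash-encoded IPFS CIDs, and an on-chain token check to gate key release.

```python
import hashlib
import json
import secrets

def fake_cid(data: bytes) -> str:
    # Stand-in for an IPFS content identifier; real CIDs are
    # multihash/multibase encoded, not a raw SHA-256 hex digest.
    return "bafy-" + hashlib.sha256(data).hexdigest()[:32]

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric stream cipher for illustration ONLY: the same call
    # both encrypts and decrypts. Never use hand-rolled crypto in practice.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. A provider runs a service and serializes the result.
result = json.dumps({"service": "protein-folding", "plddt": 91.2}).encode()

# 2. The provider encrypts the result and "pins" the ciphertext,
#    obtaining a content address it can share publicly.
key = secrets.token_bytes(32)
ciphertext = xor_stream(result, key)
cid = fake_cid(ciphertext)

# 3. A requester who holds the access token is handed `key`
#    (in reality, via a token-gated key-release mechanism) and decrypts.
assert xor_stream(ciphertext, key) == result
```

The point of the sketch is the separation of concerns: the ciphertext and its content address can be fully public, while access control lives entirely in who is allowed to obtain the decryption key.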
I believe that over time we will see multiple independent participants within the network offer the same service. The emergence of multiple providers for the same service has important consequences for the protocol:
It is this standardization that I hope will create more transparency in how work is done, particularly in the life sciences. Instructions could, of course, also be posted in an encrypted or private manner. My hope, however, is that a growing corpus of accessible, standardized instructions for biotech will emerge on top of the LAB protocol.
Once structure emerges in the metadata that users exchange to describe requested services, we will be able to analyze not only the metadata itself, to learn how processes are usually done, but also the linked data files that resulted from those services. The result will be a public knowledge graph of biological experiments, manufacturing processes, and more: the JSON metadata acts as the descriptor, and the data asset itself as the leaf node. Perhaps not all datasets will be openly accessible, but they will be indexed on the permaweb (and people could be offered payments for sharing their data).
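A minimal sketch of what such a descriptor/leaf-node record might look like, assuming a hypothetical schema — the field names, the `did:example` identifier, and the truncated CID placeholder are all illustrative, not a published LAB-protocol format:

```python
import json

# Hypothetical metadata record for one requested service.
# The JSON itself is the descriptor (a node/edge in the knowledge graph);
# `result_cid` points at the data asset, the leaf node, stored on IPFS.
record = {
    "protocol": "lab-v0",                       # assumed version tag
    "service": "rna-seq-alignment",             # example service name
    "provider": "did:example:lab-123",          # hypothetical provider ID
    "instructions": {"aligner": "STAR", "genome": "GRCh38"},
    "result_cid": "bafy-placeholder",           # content address of the result
    "access": "token-gated",                    # open vs. gated leaf data
}

print(json.dumps(record, indent=2))
```

Because every record shares this structure, the descriptors can be indexed and queried in aggregate (how is RNA-seq alignment usually configured?) even when some of the leaf data assets behind the CIDs remain gated.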
What could be done with these organically growing knowledge graphs is something I have only briefly explored so far. One potential outcome is a whole category of “layer 2” services that consume open data from other transactions and offer up-to-date machine-learning APIs to predict gene function, protein folding, or cellular decision making.
If you are interested in building the internet of work, focused on the life sciences, get in touch below:
Thank you to Arye, Jocelynn, Boris, Jan, Liz, and Lily for the conversations and messages leading to this post.