Introduction to CRL
December 22nd, 2024

Nowadays, the AI training process has many flaws: the datasets that AI models are trained on are unknown, and the majority of contributors to those datasets receive no reward. You cannot determine, for example, which artworks Stable Diffusion was trained on. Likewise, the people who make a model possible, including dataset creators, architecture contributors, and testers, are not compensated. Because a handful of major companies control the AI space, they have little incentive to care about transparency or fairness.

This is where CRL steps in. CRL introduces a decentralized approach to building transparency and fair value distribution in AI training. Instead of hiding data and relying on a few dominant entities, CRL opens access to the datasets behind models and ensures that everyone who contributes—data providers, model developers, validators—receives equitable rewards. CRL tackles two major issues: hidden training data and unfair compensation.

By giving anyone the ability to trace where a model's knowledge comes from, CRL makes the process verifiable, accountable, and trustworthy. At the same time, it breaks the reward monopoly held by large corporations, using decentralized validation and tokenization to direct rewards straight to the actual contributors. CRL uses Stellar's Soroban smart contracts to tokenize model usage and keep a transparent, on-chain record of who provided which data, who trained what, and how reliable the results are. Diffusion models, self-driving systems, financial AI tools, and chatbots can all be built on verifiable, transparent, and fairly compensated foundations. Instead of relying on opaque AI controlled by a few, CRL invites everyone to participate and benefit.
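To make the idea of an on-chain contribution record concrete, here is a minimal sketch of what such a registry could look like as a Soroban contract in Rust. This is an illustration, not CRL's actual contract: the names (ContributionRegistry, record_contribution, Contribution, DataKey) and the decision to store contributions as a simple list are assumptions for the example.

```rust
// Hypothetical sketch of a contribution registry as a Soroban contract.
// Contract, type, and function names are illustrative, not CRL's actual API.
#![no_std]
use soroban_sdk::{contract, contractimpl, contracttype, Address, Env, String, Vec};

// One recorded contribution: who provided it and a content hash of the data.
#[contracttype]
#[derive(Clone)]
pub struct Contribution {
    pub contributor: Address,
    pub data_hash: String, // e.g. hex-encoded hash of the dataset shard
}

// Storage key under which the contribution list is kept.
#[contracttype]
pub enum DataKey {
    Contributions,
}

#[contract]
pub struct ContributionRegistry;

#[contractimpl]
impl ContributionRegistry {
    // Record that `contributor` supplied data identified by `data_hash`.
    pub fn record_contribution(env: Env, contributor: Address, data_hash: String) {
        // The contributor must have signed the transaction.
        contributor.require_auth();

        let mut list: Vec<Contribution> = env
            .storage()
            .persistent()
            .get(&DataKey::Contributions)
            .unwrap_or(Vec::new(&env));

        list.push_back(Contribution { contributor, data_hash });
        env.storage().persistent().set(&DataKey::Contributions, &list);
    }

    // Return the full, publicly readable list of contributions.
    pub fn contributions(env: Env) -> Vec<Contribution> {
        env.storage()
            .persistent()
            .get(&DataKey::Contributions)
            .unwrap_or(Vec::new(&env))
    }
}
```

Because every entry is written to persistent contract storage and signed by the contributor's address, anyone can read the registry back to verify which data went into a model and route rewards to the addresses that supplied it.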
