In the dynamic landscape of music and technology, the surge of Generative AI has ushered in a new era of creativity, enabling tools like ChatGPT and DALL-E to enhance our writing and artistic abilities. As these advancements democratize artistic creation, offering a plethora of platforms for diverse creative endeavors, a critical question emerges: How can artists, and specifically musicians, safeguard their original work in this rapidly evolving digital realm, especially on the blockchain?
This article aims to explore effective strategies for protecting musical copyrights in the age of AI-generated content, focusing on the unique challenges and solutions for Music on the Blockchain.
One of the challenges brought by the rise of Generative AI is protecting copyright over original work, and over AI-generated work derived from original music.
How do we protect singers from having copies of their voice generated with AI and used to create music without their consent?
How do you link the voice in a song to the original artist and not to a fake, AI-generated “artist”?
How do you spot AI-Generated copies of a singer’s voice in songs scattered across blockchains?
Let's discuss two main themes that help answer these questions:
Protecting an Artist's Voice as a Copyrightable Asset
Monitoring AI-Generated Content on the Blockchain
Protecting an artist's voice as a copyrightable asset, especially in the context of AI-generated content, is challenging but not impossible. Here are a few practical solutions artists putting their music on-chain should consider:
Timestamping and Proof of Creation: Singers can timestamp and digitally record the creation of their work on a blockchain, establishing a clear record of their ownership and originality. When AI-generated content emerges that uses their voice, they can point to this timestamp as evidence of their prior creation. It might sound like manual work, but it's the simplest way of saying "I was here first" (there's a minimal sketch of this idea right after this list).
Watermarking and Unique Identifiers: Vocal samples used in original and derivative works can be watermarked with unique identifiers tied to the original artist. These identifiers can be stored on a blockchain, making it clear where the vocal samples originated. If AI-generated content is released under a fake alias, the unique identifier can link it back to the original artist. I expand a bit more on this topic below; scroll down to the Watermarking and Unique Identifiers section.
Legal Protections and Copyright Registration: Artists should ensure their work is legally protected with copyright registration. If AI-generated content infringes upon their voice, they can take legal action to claim their rights and seek compensation. In most jurisdictions creative work is automatically protected by copyright law the moment it's created, but it's still a good idea to register it with your copyright office.
Blockchain-Backed Contracts: The original artist can enter into smart contracts with AI creators. These contracts should outline the terms of use for their voice and establish royalty structures for derivative works. Smart contracts can be enforced automatically, ensuring fair compensation.
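To make the timestamping point above concrete, here's a minimal Python sketch of the "I was here first" idea: derive a hash from the master recording and pair it with a timestamp and the artist's wallet address. The file name, wallet address, and field names are placeholders; how the record actually gets written on-chain (a registry smart contract, or a transaction whose data field carries the hash) depends on the platform you're publishing on.

```python
# Minimal proof-of-creation sketch: hash the recording, attach a timestamp,
# and prepare a small record that can be anchored on-chain or pinned to IPFS.
import hashlib
import json
import time

def recording_hash(path: str) -> str:
    """SHA-256 of the raw audio file: it changes if even a single byte changes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_proof(path: str, artist_address: str, title: str) -> dict:
    """Assemble the record to be published; all field names are illustrative."""
    return {
        "artist": artist_address,
        "work": title,
        "content_hash": recording_hash(path),
        "created_at": int(time.time()),   # Unix timestamp
    }

# Placeholder file name and wallet address:
# proof = build_proof("master_take.wav", "0xYourWalletAddress", "Untitled Demo")
# print(json.dumps(proof, indent=2))
```

Because the hash is derived from the audio itself, anyone can later verify that a given file matches the on-chain record without taking the artist's word for it.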
Due to the decentralized and pseudonymous nature of blockchain transactions, it can be challenging to monitor AI-generated content across blockchains. However, here are some ways to overcome this:
Blockchain Analytics Tools: Specialized blockchain analytics tools can be used to monitor transactions and smart contracts for content matching. These tools can identify patterns or keywords associated with copyrighted material and link them back to the original content (a small sketch of this idea follows below).
Blockchain Forensics: Forensic experts can analyze blockchain data to track the flow of assets tied to AI-Generated content. Suspicious patterns or transactions can be flagged for further investigation.
Community Reporting: In a decentralized community, artists and fans can (and should!) report suspicious or infringing content to the artist, relevant authorities or organizations that oversee blockchain-based content.
Blockchain Auditing Services: Third-party auditing services can be employed to verify the authenticity and compliance of blockchain-based content, especially in the context of music and royalties.
Cross-Blockchain Data Sharing: Blockchain networks can collaborate on a system for sharing data related to copyright violations. For example, Blockchain A, where the original content is published, could share identifiers or metadata with other blockchains, like Blockchains B and C, keeping a shared record of an artist's discography across chains.
For more examples on these topics, scroll down to the Blockchain Analytics Tools and Cross-Blockchain Data Sharing section.
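While purpose-built analytics platforms do this at scale, here's a simplified Python sketch of the content-matching idea from the Blockchain Analytics Tools point above: cross-check the metadata of newly minted music tokens against a registry of identifiers the artist has already published. The metadata field name and the source of the token URIs are assumptions for illustration, and the registry itself is exactly the kind of data that could be shared across chains, as described under Cross-Blockchain Data Sharing.

```python
# Sketch of a monitoring job: download token metadata and flag anything that
# references an identifier from the artist's registry of original recordings.
import json
import urllib.request

# Identifiers the artist has registered on-chain (e.g., SHA-256 hashes of masters).
KNOWN_IDENTIFIERS = {
    "3f5a9c...",   # truncated example hash
}

def fetch_metadata(token_uri: str) -> dict:
    """Download a token's metadata JSON (an IPFS gateway URL works the same way)."""
    with urllib.request.urlopen(token_uri) as resp:
        return json.load(resp)

def flag_if_matching(token_uri: str) -> bool:
    """Return True when the token's metadata references a registered identifier."""
    meta = fetch_metadata(token_uri)
    fingerprint = meta.get("content_hash")    # hypothetical metadata field
    return fingerprint in KNOWN_IDENTIFIERS

# The token URIs would come from an indexer or node that follows new mints:
# suspicious = [uri for uri in token_uris if flag_if_matching(uri)]
```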
It's not an easy task to keep track of AI-generated content on-chain, but the methods discussed above offer a starting point for tracking original work and protecting it from unauthorized derivatives. Some of these techniques are in their infancy, but as more artists release music on-chain, better content-monitoring tools are built, and blockchains improve cross-chain communication, it will become easier to protect original work.
Watermarking and Unique Identifiers in the context of protecting an artist's voice involve embedding a digital mark or code into audio recordings. Here are a few ways this could work:
Creation of Unique Identifier: When the original artist records their voice, they can generate a unique identifier for that specific recording. This identifier could be a cryptographic hash or a combination of characters that is generated from the recording itself.
Embedding the Identifier: The unique identifier is then embedded into the audio recording as a digital watermark. This can be done using techniques that are imperceptible to the human ear, such as inaudible frequency modulation or slight amplitude variations.
Blockchain Storage: The unique identifier and its association with the artist are stored on chain. This creates an immutable record of ownership and the originality of the voice recording.
Detection: Whenever AI-Generated content emerges that uses the artist's voice, an analysis tool can scan the audio for the presence of the unique identifier. If detected, this indicates that the AI-generated content is based on the original artist's work.
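To make these steps concrete, here's a rough Python sketch that derives an identifier from a recording, hides it in the least-significant bits of a 16-bit WAV file, and reads it back out. Least-significant-bit embedding is far too fragile for real use (lossy encoding alone destroys it); production watermarking relies on psychoacoustic techniques, but the create-embed-store-detect flow is the same.

```python
# Illustrative watermarking sketch: derive an identifier from the recording,
# embed it in the least-significant bits of a 16-bit PCM WAV, then extract it.
import hashlib
import wave

def make_identifier(path: str) -> bytes:
    """Derive a unique identifier from the recording itself (a SHA-256 hash)."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()   # 32 bytes

def embed_lsb(in_path: str, out_path: str, payload: bytes) -> None:
    """Write each payload bit into the least-significant bit of successive samples."""
    with wave.open(in_path, "rb") as wav:
        params = wav.getparams()
        frames = bytearray(wav.readframes(wav.getnframes()))
    assert params.sampwidth == 2, "sketch assumes 16-bit PCM"
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) * 2 <= len(frames), "audio too short for this payload"
    for n, bit in enumerate(bits):
        lo = n * 2                       # low byte of the n-th little-endian sample
        frames[lo] = (frames[lo] & 0xFE) | bit
    with wave.open(out_path, "wb") as wav:
        wav.setparams(params)
        wav.writeframes(bytes(frames))

def extract_lsb(path: str, num_bytes: int = 32) -> bytes:
    """Read the payload bits back out of the least-significant bits."""
    with wave.open(path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
    out = bytearray()
    for b in range(num_bytes):
        byte = 0
        for i in range(8):
            byte |= (frames[(b * 8 + i) * 2] & 1) << i
        out.append(byte)
    return bytes(out)

# Example flow with placeholder file names:
# identifier = make_identifier("master_take.wav")
# embed_lsb("master_take.wav", "watermarked.wav", identifier)
# assert extract_lsb("watermarked.wav") == identifier
```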
Now, one of the challenges with Generative AI is that entirely new content can be created using original work as inspiration rather than sampling it directly. Watermarking and unique identifiers can still be effective in this case. Here's how:
Watermark Resilience: The watermarking techniques used should be robust enough to survive various transformations, including AI-based transformations. Some advanced watermarking methods can withstand modifications to the audio, making it challenging for AI to remove or alter them without degrading the quality of the content.
Analysis of Transformed Content: When AI generates content based on the original work, it might try to remove or modify the watermark. However, even AI-altered content can be analyzed to detect remnants of the watermark or unique identifier. Analysis tools can identify similarities between the transformed content and the original watermark, signaling that it's derived from the original work (a small sketch of this comparison follows below).
Blockchain Verification: The blockchain's role remains crucial. Storing the unique identifier on the blockchain ensures that there's a tamper-proof record of the original work and its associated identifier, which can be used as evidence in disputes.
While AI can attempt to mimic or alter watermarked content, the combination of robust watermarking techniques, advanced analysis tools, and blockchain-backed proof can make it challenging for AI to completely eliminate traces of the original identifier.
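One simple way to put a number on "remnants of the watermark" is to compare whatever identifier can still be extracted from a suspect file against the expected one at the bit level. A minimal sketch, assuming an extraction step like the one in the watermarking example above:

```python
# Bit error rate between the expected identifier and whatever was recovered from a
# suspect file. A perfect copy gives 0.0; unrelated random bits hover around 0.5,
# so values well below 0.5 suggest the audio still carries traces of the mark.
def bit_error_rate(expected: bytes, recovered: bytes) -> float:
    assert len(expected) == len(recovered)
    errors = sum(bin(a ^ b).count("1") for a, b in zip(expected, recovered))
    return errors / (len(expected) * 8)

def likely_derived(expected: bytes, recovered: bytes, threshold: float = 0.25) -> bool:
    """Flag a suspect file when enough of the identifier survives (threshold is illustrative)."""
    return bit_error_rate(expected, recovered) < threshold
```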
If you’d like to dig into the nerdier side of things, here are some examples of tools and methods used for watermarking and the analysis of transformed content:
Invisible Watermarking: This technique embeds digital information within audio or video content without altering the perceptible quality. Tools like Adobe Audition and MATLAB can be used for audio watermarking. For video, technologies like Digimarc offer some solutions.
Steganography: Steganography involves hiding information within media files. Tools like OpenStego or Steghide can be used for this purpose. However, it's important to note that this is not exclusive to audio and can be applied to various forms of media.
Blockchain-Based Watermarking: Some blockchain platforms offer watermarking capabilities as part of their content protection solutions. These can be used to embed ownership information into media files.
Spectral Analysis: Audio signal processing tools like Audacity, Sonic Visualiser, and MATLAB can perform spectral analysis to detect alterations or similarities between transformed content and the original.
Machine Learning and Pattern Recognition: Machine learning models can be trained to recognize patterns or features in audio data. Tools like TensorFlow and PyTorch can be used to build such models for identifying similarities or alterations in audio content.
Fingerprinting: Audio fingerprinting technology, like that used by Shazam, can identify audio content based on its unique acoustic fingerprint. This can help recognize audio even when it has undergone alterations (see the sketch after this list).
Forensic Audio Analysis: Forensic audio analysis experts can employ specialized software and techniques to detect tampering or alterations in audio recordings. These experts are often called upon in legal cases involving audio evidence.
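As a taste of the Spectral Analysis and Fingerprinting items above, here's a rough Python sketch using NumPy and SciPy: reduce each file to the locations of its strongest spectrogram peaks and measure how much two peak sets overlap. Real systems such as Shazam pair peaks into hashes and match on relative time offsets; this simplified overlap only behaves sensibly for clips of similar length and alignment.

```python
# Toy spectral-peak fingerprint: find the strongest spectrogram bins and compare
# the peak sets of two recordings with a Jaccard overlap score.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def peak_fingerprint(path: str, top_peaks: int = 50) -> set:
    """Return the (time_bin, freq_bin) locations of the strongest spectrogram peaks."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                                  # mix stereo down to mono
        samples = samples.mean(axis=1)
    _, _, sxx = spectrogram(samples, fs=rate, nperseg=2048)
    flat = np.argsort(sxx, axis=None)[-top_peaks:]        # indices of the strongest bins
    f_idx, t_idx = np.unravel_index(flat, sxx.shape)
    return set(zip(t_idx.tolist(), f_idx.tolist()))

def similarity(fp_a: set, fp_b: set) -> float:
    """Jaccard overlap between two fingerprints: 1.0 means identical peak sets."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Placeholder file names:
# original = peak_fingerprint("original_vocal.wav")
# suspect = peak_fingerprint("suspect_track.wav")
# print(f"peak overlap: {similarity(original, suspect):.2f}")
```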
It's exciting to see all the new possibilities Generative AI is unlocking in the creative industries, and artists should always look for ways to protect their original work and their voice from plagiarism. That way we can take full advantage of Generative AI to enhance our creativity, without artists constantly having to fight for their identity and original work as AI and Music On-Chain become more and more ubiquitous.