
Since the launch of ChatGPT, generative AI has taken the world by storm. Text-to-image generators can create extraordinary images from a single text prompt, while ChatGPT is promoted for its ability to provide detailed responses to whatever questions you ask. AI music generators, too, are becoming increasingly popular, letting users create songs with just a few words. The technology can seem like magic, but the way these systems are built raises questions and concerns, and many creatives fear for their jobs. The music industry, which saw its value cut in half by illegal downloads just two decades ago, is especially anxious not to let that happen again.

Currently, concerns over intellectual property infringement are at the forefront of the debate, as the culture surrounding AI puts the acquisition of data and accelerated advancement before all else. Nations across the globe are racing to become the next Silicon Valley and "reap the economic benefits that would follow," Billboard reported in April. The same article noted that Israel's Ministry of Justice had announced it would exempt AI training from copyright restrictions in order to "spur innovation and maximize the competitiveness of Israeli-based enterprises in both [machine learning] and content creation."

The Human Artistry Campaign, however, argues that these sorts of exemptions do more economic harm than good. Formed in March of this year, the organization aims to "ensure artificial intelligence technologies are developed and used in ways that support human culture and artistry – and not ways that replace or erode it." Its homepage lists its core principles, which argue that "Creating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators' brands, and limit incentives to create and invest in new works."



How Are AI Systems Trained, and Why Is It a Problem?

One of the most popular ways to build AI systems is with machine learning (ML) algorithms, which give computers the ability to learn from experience rather than being explicitly programmed. Enormous amounts of data are gathered for the machine to train on, and programmers let the computer find patterns in that data and make predictions from them.
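To make that loop concrete, here is a minimal, self-contained sketch using synthetic numbers instead of music: the programmer supplies data and an error measure, and the model adjusts its own parameters until it fits the hidden pattern. Real music generators run this same loop at enormously larger scale.

```python
# Minimal machine-learning loop: no rules are programmed in; the model
# "finds the pattern" by repeatedly reducing its own prediction error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3 * x + 1 + rng.normal(scale=0.1, size=200)  # hidden pattern: y = 3x + 1

w, b = 0.0, 0.0   # the model's two learnable parameters
lr = 0.1          # learning rate: how big each adjustment is

for _ in range(500):
    err = (w * x + b) - y            # how wrong the current guesses are
    w -= lr * 2 * np.mean(err * x)   # gradient descent: nudge parameters
    b -= lr * 2 * np.mean(err)       #   in the direction that reduces error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward the hidden 3 and 1
```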

The contents of a training dataset depend on the goal of the AI system; music generators, for example, are trained on datasets of all things music. The problem is that these systems often use copyrighted material without the necessary permissions or licensing agreements, and there is no remuneration system in place to pay artists for the work used to train the machines. In this way, companies are essentially stealing from artists in order to build technology that could one day disrupt their livelihoods.



Inspiration vs. Infringement

If artists don't keep track of every song they've ever heard, or pay every time they're inspired by one, why should companies have to list the copyrighted works they train their AI platforms on, or pay to use them? J Herskowitz, a self-described hobbyist musician who has recently been exploring AI production tools, understands artists not wanting their music to help train AI, but is conflicted about the demand. "The Beatles trained generations worth of artists with their music. We generate music based on what we heard, so to say you can't write a song because you listened to The Beatles…seems like a slippery slope." As for listing sources, he wonders whether the standard should be any different for machines than it is for humans. "For myself, I write things all the time and say, I like the way that sounds, but I don't always know if I like the way it sounds because I made it up or because I've heard it before."

Mike Fiorentino of indie publisher Spirit Music Group, however, argues that although we might not always know our sources, the artists we've heard in our lives are almost always compensated for their work in some way. "Let's say I wanted to write a song à la Led Zeppelin," he told Variety. "My dad bought the LPs and cassettes, I bought the CDs, and I also listen to the radio, where ad dollars are being generated. But if you feed a bot nothing but Led Zeppelin, that bot isn't influenced by Led Zeppelin — you fed it data. Did that data get paid for and what about those copyrights?" Unlike humans, AI can't truly be inspired. It works only through pattern-finding, imitation, and some degree of direct reproduction of the sounds that were purposefully fed into the system. For many creatives, this distinction is of utmost importance.

Some generative AI systems infringe more obviously than others. As first reported by TorrentFreak in October of last year, the Recording Industry Association of America (RIAA) flagged several "Artificial Intelligence Based" music mixers and extractors as emerging copyright threats in its annual overview of "notorious" piracy markets. One of the flagged systems is Songmastr, a platform that promises to "make your songs sound (almost) as good as your favorite artist." On the site, you upload a track that you've made along with a track by an artist you want to sound like. Songmastr explains that its algorithm then "masters" your track to the same RMS (average loudness), frequency response, peak amplitude, and stereo width as the chosen reference song.
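Songmastr's actual implementation isn't published here, but two of the four measurements it names are easy to sketch. The following is an illustrative approximation of reference mastering, not the service's real code:

```python
# Illustrative reference mastering covering RMS and peak amplitude only;
# matching frequency response and stereo width takes far more machinery.
import numpy as np

def match_loudness(track: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale `track` to the reference's RMS, capping peaks at the reference's."""
    gain = np.sqrt(np.mean(reference ** 2)) / np.sqrt(np.mean(track ** 2))
    matched = track * gain                   # match average loudness (RMS)
    peak_ref = np.max(np.abs(reference))
    peak = np.max(np.abs(matched))
    if peak > peak_ref:                      # don't exceed the reference's peak level
        matched *= peak_ref / peak
    return matched
```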

The copyright issue is clear: the tracks that users choose are used by the site to create derivative works without permission from, or acknowledgment of, the artist. Other flagged systems include Acapella-Extractor and Remove-Vocals. As their names suggest, Acapella-Extractor takes any track you give it and isolates the vocals, while its partner site, Remove-Vocals, leaves you with just the instrumental.
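The internals of these services aren't public either; modern extractors typically rely on trained source-separation models. The decades-old "karaoke" trick below is only a crude stand-in, but it shows the basic idea of cancelling or isolating center-panned vocals:

```python
# Crude stand-in for what vocal removers do: identical signals panned
# dead-center (usually the lead vocal) subtract to zero.
import numpy as np

def remove_center(stereo: np.ndarray) -> np.ndarray:
    """stereo: shape (samples, 2). Returns mono audio with center content cancelled."""
    left, right = stereo[:, 0], stereo[:, 1]
    return (left - right) / 2.0  # center-panned content cancels out
```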

The RIAA is unequivocal: "To the extent these services, or their partners, are training their AI models using our members' music, that use is unauthorized and infringes our members' rights by making unauthorized copies of our members' works… In any event, the files these services disseminate are either unauthorized copies or unauthorized derivative works of our members' music."

The repercussions of sites like these become especially apparent when you look at how platforms like YouTube catch copyright infringement. Ezra Sandzer-Bell is the creator of AudioCipher, a plugin that uses musical cryptography to turn words into melodies inside a Digital Audio Workstation (DAW). While AudioCipher itself does not use AI, Sandzer-Bell keeps a close watch on the sites that do.
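For flavor, here is a toy version of musical cryptography. The mapping below (the alphabet wrapped onto the note letters A through G, rendered as MIDI pitch numbers) is an assumption for illustration, not AudioCipher's actual scheme:

```python
# Toy musical cryptography: hypothetical mapping, not AudioCipher's own.
NOTE_PITCHES = {"A": 57, "B": 59, "C": 60, "D": 62, "E": 64, "F": 65, "G": 67}
LETTERS = "ABCDEFG"

def word_to_melody(word: str) -> list:
    melody = []
    for ch in word.upper():
        if ch.isalpha():
            note = LETTERS[(ord(ch) - ord("A")) % 7]  # H wraps to A, I to B, ...
            melody.append(NOTE_PITCHES[note])         # letter -> MIDI note number
    return melody

print(word_to_melody("cabbage"))  # -> [60, 57, 59, 59, 57, 67, 64]
```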

He helped explain the behind-the-scenes of how artists get royalties from YouTube videos that use their songs. "If you want to go on YouTube today and upload someone else's song, no one is going to stop you. You might get a DMCA [Digital Millennium Copyright Act takedown notice] that says 'Hey, this is copyrighted material,' etc., but only the biggest major labels are going after it and saying 'Take that down.' Everyone else, major indie artists even, are in a position where they're going through CD Baby or DistroKid or one of these distributors, and that system is managing their tracks across all of these platforms. From there, there's a button that you can click to elect to receive royalties for any YouTube videos that are using your music. So from an artist's perspective they're like, 'Great, I guess I'm still getting my remunerations.'"

YouTube identifies when a song is played through audio fingerprinting: if a song appears in a video, even just in the background, the artist can get paid. However, Matthew Stepka, former VP of business operations and strategy for special projects at Google, told Variety that "it has to be an exact copy of a commercially published version" for the fingerprinting system to work. That leaves no way to catch the derivatives that platforms such as Acapella-Extractor, Songmastr, and Remove-Vocals create, especially when they manipulate smaller creators' music, i.e. the creators who need those royalties more than anyone.
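For a sense of why exact-match fingerprinting misses derivatives, here is a simplified sketch of the well-documented "Shazam-style" approach from the research literature. This is not YouTube's proprietary Content ID, just the textbook idea: hash pairs of spectrogram peaks, which shift as soon as the audio is manipulated.

```python
# Sketch of landmark fingerprinting: exact copies yield the same hashes;
# re-recordings, remasters, and derivative edits move the peaks, so
# exact-match systems miss them.
import numpy as np
from scipy.signal import spectrogram

def fingerprint(audio: np.ndarray, rate: int = 44100) -> set:
    _, _, spec = spectrogram(audio, fs=rate, nperseg=2048)
    peaks = [int(np.argmax(spec[:, t])) for t in range(spec.shape[1])]
    hashes = set()
    for t, peak in enumerate(peaks):
        for dt in (1, 2, 3):                       # pair each peak with the next few
            if t + dt < len(peaks):
                hashes.add((peak, peaks[t + dt], dt))  # (f1, f2, gap) = one landmark
    return hashes

# Two recordings "match" when they share a large fraction of landmarks.
```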

Finding a solution is not as simple as one would hope. Take Google's new generative text-to-music AI system, MusicLM, for example. Like all machine-learning systems of its kind, MusicLM requires enormous amounts of data. Luckily for Google, it owns YouTube, which gives it access to tens of millions of tracks that it technically has the right to use.

Sandzer-Bell explained that Google used three datasets for training: MusicCaps, AudioSet, and MuLan. There is a lot of complicated computer science behind how the data was gathered and how the sets differ, but here are the essentials. MusicCaps contains about 5,000 ten-second YouTube audio clips. AudioSet is much larger and includes sounds beyond music, such as dripping water, voices, and engine noise, though about half of its 2.1 million files are still music clips. Finally, MuLan, the largest of the three at about 370,000 hours of audio, is made up of roughly 44 million thirty-second clips that are each at least 50% music.

There are a couple of issues with this data. As previously mentioned, there is no system in place for artist remuneration: had someone been listening to these YouTube videos and drawing inspiration from them, the artists would have been paid, but when the same videos are fed to MusicLM as training data, the artists receive no royalties. Furthermore, each music file is labeled only with the YouTube ID of its video. The artist name, the song title, the album: none of that is included. In doing so, Google has made it very hard to build the remuneration system artists are calling for.
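To see the labeling gap concretely: a MusicCaps-style record (field names approximated here) identifies its audio only by a YouTube video ID, so recovering an artist or title means scraping YouTube clip by clip, for example with the open-source yt-dlp library. Even then, the result is the uploader's video title and channel, not necessarily the rights holder:

```python
# Illustrative only: field names mimic the public MusicCaps layout
# (approximate), and the video ID is a placeholder, not a dataset row.
import yt_dlp  # open-source downloader / metadata scraper

record = {
    "ytid": "dQw4w9WgXcQ",  # hypothetical example ID
    "start_s": 30,
    "end_s": 40,
    "caption": "an upbeat pop song with a male vocal",
}

# The dataset gives you a video, not an artist. Reconstructing
# attribution means a per-clip scrape like this:
url = f"https://www.youtube.com/watch?v={record['ytid']}"
with yt_dlp.YoutubeDL({"quiet": True}) as ydl:
    info = ydl.extract_info(url, download=False)

print(info.get("title"))    # the video title, not necessarily the song's
print(info.get("channel"))  # the uploader, not necessarily the rights holder
```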

"What we don't talk about is that when YouTube/Google trains on all their data that is technically theirs because it's on their platform, artists did not necessarily upload those things to begin with," says Sandzer-Bell. As previously mentioned, artists don't necessarily approve of, or upload, every YouTube video that uses their songs. Instead, they sign blanket licenses and opt in to receive royalties automatically from the videos that use their music. By not labeling its data clips with song or artist, Google has made it extremely difficult to find out whose song is being used in any given clip. The YouTube ID only leads you to a YouTube channel, and that channel might not belong to the artist whose song it is. To find out what song is used in a specific video, you would have to watch the clip and work it out from there.

"Let's say Google was like, 'Okay, instead of the YouTube IDs, we're going to scrape them and get you the names of the YouTube channels.' Well, that still might not tell me whose song it is. So they say, 'Okay, we're going to have to scrape channels and find the names of the songs used and…' Like, why wouldn't you do that from the beginning?"

Sandzer-Bell says he can’t claim to know the answer to that, but suspects the reason might be an economic one. “If you were Google, do you want a list that says, we trained on 500 Taylor Swift songs? Like no!”

The Human Artistry Campaign’s mission statement includes compensating artists for the work that has already been used to train these machines. MusicLM’s current configuration, however, exemplifies why this would be a very complicated, arduous process.



Moving Forward

While some wish the world could stop and burn it all to the ground, the only certainty is that AI isn't going anywhere. As the technology advances, users and developers alike need to respect the rights of those whose work helped create it and whose jobs it is likely to disrupt. RIAA Chairman and CEO Mitch Glazier told Variety: "Human artistry is irreplaceable. Recent developments in AI are remarkable, but we have seen the costs before of rushing heedlessly forward without real thought or respect for law and rights. Our principles are designed to chart a healthy path for AI innovation that enhances and rewards human artistry, creativity, and performance."

Similarly, the Harvard Business Review wrote that for AI to advance smoothly, developers must ensure they are complying with the law and consumers must hold corporations accountable. "This should involve licensing and compensating those individuals who own the IP that developers seek to add to their training data, whether by licensing it or sharing in revenue generated by the AI tool. Customers of AI tools should ask providers whether their models were trained with any protected content, review the terms of service and privacy policies, and avoid generative AI tools that cannot confirm that their training data is properly licensed from content creators or subject to open-source licenses with which the AI companies comply."

Transparency is hugely important for all sides going forward. Among its core principles, the Human Artistry Campaign states that "Trustworthiness and transparency are essential to the success of AI and protection of creators." Michael Nash, executive VP and chief digital officer at Universal Music Group, uses nutrition labels as an analogy for what he hopes to see. "The same way that food is labeled for artificial content, it will be important to reach a point where it will be very clear to the consumer what ingredients are in the culture they're consuming," he told Variety in early May.

In terms of policing copyright infringement, many hope that AI can actually be part of the solution. As Matthew Stepka noted earlier, YouTube's fingerprinting system only works on exact copies of a commercially published song. "AI can actually get over that hurdle," says Stepka. "It can actually see things, even if it's an interpolation or someone just performing the music." That ability could lead to more precise evaluation of copyright cases in the legal system, to the great benefit of artists.
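Stepka doesn't name a specific system, but one long-standing research technique for matching performances that aren't exact copies compares chroma features, i.e. which of the twelve pitch classes are sounding over time, since those survive changes in instrumentation that defeat exact-match fingerprints. A simplified sketch using the open-source librosa library:

```python
# Textbook chroma comparison, not Content ID: a piano cover of a guitar
# song still produces a similar pitch-class profile.
import librosa
import numpy as np

def chroma_profile(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=22050, mono=True)
    chroma = librosa.feature.chroma_cens(y=y, sr=sr)  # 12 x frames
    profile = chroma.mean(axis=1)                     # average pitch-class energy
    return profile / np.linalg.norm(profile)

def similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity of chroma profiles; closer to 1.0 = more alike."""
    return float(np.dot(chroma_profile(path_a), chroma_profile(path_b)))
```

Real cover-detection systems go further, aligning the two sequences in time and testing key transpositions, but the principle is the same: compare the music rather than the exact waveform.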

In the meantime, music technology company Spawning has created a website called HaveIBeenTrained. The platform lets creators see whether their work is being used to train these machines and then, free of charge, opt out of that training. However, as we've seen with YouTube, blanket licenses and opt-outs come with their own problems, and some want better standards. "We don't want to opt out, we want to opt in," Helienne Lindvall, president of the European Composer and Songwriter Alliance, told Billboard. "Then we want a clear structure for remuneration."

As that structure is being built, another question looms: who should hold the copyright on content created with AI? To date, authorship has been treated as a uniquely human activity, and only human creation is eligible for copyright protection. Therefore (at least for now), AI systems themselves cannot hold copyrights on the material they generate. So who can?

In short, it's unclear. In February, the U.S. Copyright Office decided that the AI-generated images in Kris Kashtanova's comic book "Zarya of the Dawn" should not be granted copyright protection. The office stated in a letter that Kashtanova is entitled to a copyright for the book's words and arrangement, but not for the images themselves. One answer to the question, then, is that there may be no copyright protection at all for content that AI generates.

If protection is possible, however, it is still unclear whether it would fall to the user entering the text prompts or to the owner of the AI tool itself, and whether the artists whose work was used to train the AI would receive royalties for the content created. Until the issue is resolved in the courts, it is often settled contractually. For example, the musical AI system AIVA assigns copyright in generated material to the user, but only on certain premium plans; otherwise, the copyright is owned by AIVA. Another site, WarpSound, is working to reinvent how we understand musical expression and ownership. Combining music and visuals, its subscribers (or WVRP holders, as it calls them) can mint the AI music they create on the site as NFTs.

On the one hand, the artistic community doesn't want copyright granted to music or art created with AI. At the same time, a huge concern for the music industry is what is being called "functional music" or "royalty-free music": tracks generated by AI systems with little or no human input beyond the initial machine-learning data, which could theoretically supply an unlimited stream of music. If AI-generated music cannot be copyrighted, it may undercut human-made, copyrighted music all the more easily, because no one would have to worry about licensing costs or royalty fees.

Deepfake vocal synthesizers have raised copyright questions of their own. When "Heart On My Sleeve," a track that used AI to simulate the voices and styles of Drake and The Weeknd, went viral this year, the world was understandably shocked. Universal Music Group invoked copyright to have the song removed from most streaming platforms, but it can still be found on YouTube.

While it is currently impossible to copyright a voice or a style of singing, there are some protections against imitating a distinctive voice to endorse products. One case to watch is Yung Gravy's use of a Rick Astley impersonator on his recent track "Betty (Get Money)." While Gravy's use of the melody and lyrics of "Never Gonna Give You Up" was authorized, Astley says he never authorized the use of his "signature voice" and is taking Gravy to court over it. Astley's legal team also hopes to set a precedent against the use of vocal imitation for any commercial purpose, not just fake endorsements. If the courts rule in Astley's favor, it could create an avenue for artists to take action against the use of deepfake voices.

Many questions remain as the world works to understand the future of AI and resolve the copyright uncertainties around it. It is clear, however, that artists' participation and input will be essential if creative rights are to be respected. "Policymakers must consider the interests of human creators when crafting policy around AI," says the Human Artistry Campaign. "Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table in any conversations regarding legislation, regulation, or government priorities regarding AI that would impact their creativity and the way it affects their industry and livelihood."
