Tech Giants Say That Users Of Their Software Should Be Held Responsible For AI Copyright Infringements
AI developers say it’s not their fault when their machine learning programs produce copyrighted material, even though they are the ones who trained those systems on copyrighted works. Instead, they want users to shoulder the legal responsibility for material generated by their systems.
The U.S. Copyright Office is mulling new regulations for generative AI and, in August, published a request for comments on artificial intelligence and copyright. Responses to the request are public and can be found here.
Among the replies, companies including Google, Dall-E developer OpenAI, and Microsoft argued that only the unlicensed production of copyrighted materials violates existing protections. According to them, AI software is just like audio or video recording devices, photocopiers, or cameras, all of which can be used to infringe on copyrights. Manufacturers of those products aren’t held to account when that happens, so why, the thinking goes, should AI companies be?
Microsoft, which has a multi-billion dollar partnership with OpenAI, wrote:
[U]sers must take responsibility for using the tools responsibly and as designed. … To address the concerns of rightsholders, AI developers have taken measures to mitigate the risk of AI tools being misused for copyright infringement. Microsoft incorporates many such measures and safeguards to mitigate potential harmful uses across our AI tools. These measures include meta-prompts and classifiers, controls that add additional instructions to a user prompt to limit harmful or infringing outputs.
It should be noted that the safeguards that Microsoft supposedly has in place have done little to prevent mass trademark and copyright infringement. In fact, The Walt Disney Company recently asked the tech giant to prevent users from infringing on its trademarks.
Google meanwhile argued:
The possibility that a generative AI system can, through “prompt engineering,” be made to replicate content from its training data does raise questions around the proper boundary between direct and secondary infringement. When an AI system is prompted by a user to produce an infringing output, any resulting liability should attach to the user as the party whose volitional conduct proximately caused the infringement. … A rule that would hold AI developers directly (and strictly) liable for any infringing outputs users create would impose crushing liability on AI developers, even if they have undertaken reasonable measures to prevent infringing activity by users. Had that standard applied in the past, we would not have legal access to photocopiers, personal audio and video recording devices, or personal computers — all of which are capable of being used for infringement as well as for substantial beneficial purposes.
And OpenAI wrote:
In evaluating claims of infringement relating to outputs, the analysis starts with the user. After all, there is no output without a prompt from a user, and the nature of the output is directly influenced by what was asked for.
It’s worth pointing out that all of the above companies have used copyrighted and trademarked material without permission to train their software, and OpenAI is currently being sued by over a dozen major authors who accuse the company of infringing on their copyrights.
And to further muddy the waters, despite these companies telling the U.S. government that users should be liable for the output of their systems, many of them, including Google, OpenAI, Microsoft, and Amazon, are offering to cover their clients’ legal costs in copyright infringement suits.
But, ultimately, the companies argue that current copyright law is on their side and that there is no need for the copyright office to change that, at least not right now. They say that if the office cracks down on developers and changes any copyright law, it could hamstring the nascent technology. In its letter, OpenAI said it “urges the Copyright Office to proceed cautiously in calling for new legislative solutions that might prove in hindsight to be premature or misguided as the technology rapidly evolves.”
Perhaps surprisingly, the major film studios are on the side of big tech here, though they are coming at it from a different angle. In its submission to the Copyright Office, the Motion Picture Association (MPA) drew a distinction between generative AI and the use of artificial intelligence in the film industry, in which “AI is a tool that supports, but does not replace, the human creation of the members’ works.” The MPA also argued against updating current legislation:
MPA’s members have a uniquely balanced perspective regarding the interplay between AI and copyright. The members’ copyrighted content is enormously popular and valuable. Strong copyright protection is the backbone of their industry. At the same time, MPA’s members have a strong interest in developing creator-driven tools, including AI technologies, to support the creation of world-class content. AI, like other tools, supports and enhances creativity, and draws audiences into the stories and experiences that are the hallmark of the entertainment industry. MPA’s overarching view, based on the current state, is that while AI technologies raise a host of novel questions, those questions implicate well-established copyright law doctrines and principles. At present, there is no reason to conclude that these existing doctrines and principles will be inadequate to provide courts and the Copyright Office with the tools they need to answer AI-related questions as and when they arise.
Even though the MPA writes that existing copyright laws are sufficient, it strongly objected to the idea that AI companies should be able to freely train their systems on its members’ material. In its letter, the MPA wrote:
MPA currently believes that existing copyright law should be up to the task of handling these questions. A copyright owner who establishes infringement should be able to avail itself of the existing available remedies in §§ 502-505, including monetary damages and injunctive relief. … At this time, there is no reason to believe that copyright owners and companies engaged in training generative AI models and systems cannot enter into voluntary licensing agreements, such that government intervention might be necessary.
Pictured at top: Pixar-inspired images created with Microsoft’s Bing Image Creator.