A raft of lawsuits is raising questions about how AI companies can use copyrighted material.
OpenAI recently announced a promise to pay the legal fees of customers who face copyright claims.
Anthropic, meanwhile, says responsibility ultimately rests with users.
If 2023 was the year of AI, 2024 may be the year lawsuits kneecap it. With Anthropic readying a defense in a copyright lawsuit brought by the world's largest music company and OpenAI being sued by many of the authors on the New York Times bestseller list, the questions of who's responsible when an AI model steals someone else's work, and whether it's really stealing at all, have become pressing ones.
At its first-ever developer conference earlier this month, OpenAI tried to tackle the situation head on by announcing Copyright Shield: a promise that OpenAI will pay the legal fees of its business customers in the event they're sued for something they made using its products. As generative AI services like OpenAI's add more capabilities, users can ask them to do everything from creating works of art and videos to writing screenplays, novels and websites.
"We can defend our customers and pay the costs incurred if you face legal claims around copyright," said OpenAI CEO Sam Altman in his keynote speech.
But OpenAI rival Anthropic, founded by former OpenAI researchers and backed by billions of dollars from investors like Google and Amazon, is taking a different approach. In a letter to the United States Copyright Office last month, Anthropic Deputy General Counsel Janel Thamkul laid out the company's stance on the question of copyright infringement.
Like many other AI companies, Anthropic argues that training large language models on copyrighted material should fall under fair use because the content is only being copied for statistical analysis and not for its expressive content.
"Copying is for functionality, not for copying creativity," Thamkul writes. She compares it to Sega Enterprises v. Accolade, a landmark 1992 case in which video game giant Sega unsuccessfully sued a scrappy Silicon Valley startup for reverse engineering a Genesis console in order to make competing games for it.
Thamkul also argues in the letter that Anthropic's AI has been designed to avoid producing creative works that infringe on another's copyright. "Outputs do not simply 'mash-up' or make a 'collage' of existing text," she writes.
But to the extent that liability does exist, Anthropic says it's the person using the product who bears the blame.
"Responsibility for a particular output will rest with the person who entered the prompt to generate it. That is, it is the user," writes Thamkul. "If we detect repeat infringers or violators, we will take action against them, including by terminating their accounts."
Anthropic's approach represents a traditional "liability shifting model," says Erika Fisher, chief administrative and legal officer at Atlassian, a maker of productivity software and a customer of OpenAI. Fisher, a veteran corporate lawyer, is part of a working group within Atlassian figuring out how to advise customers on the potential legal pitfalls of using generative AI.
She says that OpenAI's Copyright Shield promise feels novel and is compelling to enterprise customers looking to deploy AI-powered tools without inviting new liability.
But, she says, the commitment may ultimately prove moot: if the courts rule that these generative AI companies must pay copyright holders in order to use their material to train their models, rather than gobbling it all up for free, that will likely hurt them deeply.
"The reality is some of the cases that are pending, if they don't get decided favorably for OpenAI, I'm not sure that they can sustain their business model," said Fisher. "So at that point, they've got bigger problems than a list of indemnification fees to pay."
Read the original article on Business Insider