Paul McCartney has backed calls for laws to stop mass copyright theft by companies building generative artificial intelligence, warning AI “could just take over”.
The former Beatle said it would be “a very sad thing indeed” if young composers and writers could not protect their intellectual property from the rise of algorithmic models that have so far learned by digesting mountains of copyrighted material.
He spoke out amid growing concern the rise of AI is threatening income streams for music, news and book publishers. Next week, the UK parliament will debate amendments to the data bill that could allow creators to decide whether or not their copyrighted work can be used to train generative AI models.
The amendments, championed by Beeban Kidron, would require operators of internet bots that copy content to train generative AI models to comply with copyright laws.
Some publishers, such as Rupert Murdoch’s News Corporation and the Financial Times, have already struck licensing deals to allow OpenAI to train its large language models on their journalism. In contrast, the New York Times has sued OpenAI and Microsoft for copyright infringement.
In a statement supporting the News Media Association (NMA) campaign for creatives to get paid by the AI companies using their work, McCartney said: “We[’ve] got to be careful about it because it could just take over and we don’t want that to happen, particularly for the young composers and writers [for] whom it may be the only way they[’re] gonna make a career. If AI wipes that out, that would be a very sad thing indeed.”
McCartney used machine-learning technology to help produce last year’s Beatles song Now and Then by isolating John Lennon’s vocal performance from a recording made in 1970. But that differs from the way AI firms train their large language models on vast bodies of often copyrighted material without paying for it.
Ministers are also set to consult on how the copyright issue should be handled in the UK. The system likely to best suit the tech companies would require artists, writers and publishers to opt out of having their creations mined to train large language models. But lobby groups, such as the NMA, which represents newspaper publishers, want a system that requires them to opt in instead.
On Tuesday, Lisa Nandy, the culture secretary, told the Commons culture, media and sport select committee that the government had not decided which model it would propose in the forthcoming consultation, but highlighted reservations about a system that would require creatives to opt out.
Nandy said: “We have looked at the limitations of similar legislation in the USA and the EU so we have reservations about this idea that you can simply just say I want to opt out and then find that you have been completely erased from the internet.”
That may put her in opposition to the technology secretary, Peter Kyle, whose department has “fully drunk the Kool-Aid on AI”, according to the committee chair, Caroline Dinenage. He is thought likely to want copyrighted material to be available to the tech companies unless creators opt out.
The novelist Kate Mosse has also backed the campaign for amendments that would allow the enforcement of the UK’s existing copyright law, thereby allowing creators to negotiate for fair payment when licensing their content. She said an opt-out would not work.
“As a writer, I want to engage with AI, and I do engage with AI,” she said. “But we are looking for the F word – fairness. Copyright exists. Intellectual property exists. But the law is not being kept and there is a clear obfuscation of the law. If you say you want to be paid, it will seem you are dismissing AI. There is a deliberate blurring from the tech firms … if copyright is watered down, it will severely damage the creative industries and without [it] there will be nothing left.”