Does the UK's new AI copyright proposal bridge the widening gap between US and EU regulatory positions, or will it simply harm the country's creative industries for the benefit of the tech sector?
The UK government’s consultation on regulatory proposals to give creative industries and AI developers clarity over copyright laws closed on 25 February amid protests from musicians and other creative content makers.
Data-hungry AI models require developers to have frictionless access to data. The UK government's proposal seeks to address this requirement by placing the onus on creative content makers to opt out of having their publicly available content scraped.
The proposals aim to drive growth across both sectors "by ensuring protection and payment for rights holders and supporting AI developers to innovate responsibly," according to the UK government.

A spokesperson for the Department for Science, Innovation and Technology (DSIT) said in a statement that the UK's "current regime for copyright and AI is holding back the creative industries, media and AI sector from realising their full potential – and that cannot continue".
The statement went on to say that proposals for a new approach would protect the interests of both AI developers and rights holders, delivering a solution that allows both to thrive.
But critics say the proposals favour AI industry growth over creative industry copyright protections. Expecting individual creators to notify every AI company, of which there may well be thousands, of their intention to opt out is unrealistic, they argue.
A lack of regulatory clarity has left AI companies increasingly vulnerable to litigation. On 11 February, Thomson Reuters won the first significant AI copyright case in the US, following its 2020 filing against legal AI startup Ross Intelligence for copying materials from Westlaw, Thomson Reuters' legal research platform.
Lawsuits filed by creative industry content creators continue to amass against AI companies and are working their way through the US legal system. In the UK, a high-profile case filed by Getty Images against AI image generation company Stability AI is also underway.
Policymakers have yet to fully address the issue and create a robust framework that defines copyright boundaries for both the creative and AI industries. The Trump administration repealed Biden-era AI safety regulation in favour of a light-touch regulatory environment that it says will promote growth and innovation. This approach sits firmly in opposition to the EU's risk-based approach, placing the UK somewhere in between.
The UK’s approach to AI regulation has always differed from that of the EU, as it has always claimed that it won’t introduce any statutory obligations, according to Laura Petrone, GlobalData’s principal analyst specialising in global tech regulation.
Petrone said that, despite this, the current government has discussed the possibility of targeting frontier AI, the most advanced models.
Petrone said the UK government’s AI copyright proposals are more an attempt to seize an opportunity “to act like a bridge between the EU and the US and their increasingly diverging approaches on AI regulation.”
“Unfortunately, if the government doesn’t do anything to reassure creative industries, there is a risk that it appears as if it’s falling into line with the US administration’s stance on AI regulation.
"For now, the 'rights reservation' is just a plan, and so the government can listen to the industry, in particular to the fact that the opt-out model is difficult to implement, take stock, and meet the artists' demands," said Petrone.
Alexandra Ebert, chief AI and data democratisation officer at MOSTLY AI, said the more trust there is in AI, the faster and more broadly it will be adopted. "The argument that AI innovation and responsible AI are conflicting priorities is outdated and must be shot down," said Ebert.
The AI Action Summit in Paris on 10 February saw world leaders trying to find a consensus on a global framework for AI regulation. The UK and the US remained outliers by refusing to sign an international agreement on AI at the global summit.
"The declaration isn't binding, so the refusal to sign is largely symbolic.

"But having signed the declarations from the previous two iterations of the AI Action Summit, which were stronger and clearer in their language on responsible AI, backing out of this one is a sharp policy pivot from the US and UK, and it could lead to further announcements that will damage global progress on responsible AI practice and multinational collaboration."
Ebert said regulation is not to blame for stifling AI innovation, adding: "It's not the law that's the problem, it's how we apply it."