Daily Management Review

Why Are EU Legislators Fighting To Limit ChatGPT And Generative AI?


04/29/2023




Until just a few months ago, generative artificial intelligence (AI) tools such as ChatGPT did not feature prominently in EU legislators' proposals for regulating AI.
 
The bloc's 108-page AI Act proposal, published two years earlier, contained just a single mention of the word "chatbot." References to AI-generated content largely concerned deepfakes: images or audio designed to impersonate human beings.
 
By mid-April, however, MEPs were scrambling to update those rules to keep pace with an explosion of interest in generative AI, which had provoked both awe and anxiety since OpenAI unveiled ChatGPT six months earlier.
 
That scramble produced a revised draft of the legislation on Thursday, one that identified copyright protection as a central piece of the effort to keep AI in check.
 
Interviews with four MEPs and two other people close to the discussions shed light on how this small group of lawmakers shaped potentially landmark legislation over the course of just 11 days, changing the regulatory landscape for OpenAI and its rivals.
 
The draft bill is not final, and lawyers say it will likely take years to come into force.
 
But the speed of their work is also a notable example of consensus-building in Brussels, which is often criticised for the slow pace of its decision-making.
 
ChatGPT, the fastest-growing app in history since its November release, has triggered a flurry of activity from Big Tech rivals and investment in generative AI firms like Anthropic and Midjourney.
 
The applications' explosive popularity has prompted EU industry chief Thierry Breton and others to call for regulation of ChatGPT-like services.
 
A group backed by Elon Musk, the billionaire CEO of Tesla Inc. and Twitter, went a step further, publishing a letter that warned of existential risk from AI and called for stricter controls.
 
On April 17, the dozen MEPs involved in drafting the legislation signed an open letter agreeing with some of Musk's points and urging world leaders to convene a summit to discuss how to control the development of powerful AI.
 
That same day, two of those MEPs, Dragos Tudorache and Brando Benifei, proposed changes that would require companies with generative AI systems to disclose any copyrighted material used to train their models, according to four sources present at the meetings who asked to remain anonymous because of the sensitive nature of the discussions.
 
According to the sources, that tough new requirement won backing across the political spectrum.
 
One proposal from conservative MEP Axel Voss, which would have forced companies to obtain permission from rights holders before using the data, was rejected as too onerous and liable to stifle the nascent industry.
 
After hammering out the details over the following week, the EU put forward proposed legislation that could force an uncomfortable level of transparency on a notoriously secretive industry.
 
"I must admit that I was positively surprised on how we converged rather easily on what should be in the text on these models," Tudorache told Reuters on Friday.
 
"It shows there is a strong consensus, and a shared understanding on how to regulate at this point in time."
 
On May 11, the committee will vote on the agreement. If it passes, it will move on to the trilogue, where EU member states, the European Commission, and the Parliament will discuss its contents.
 
"We are waiting to see if the deal holds until then," one source familiar with the matter said.
 
Until recently, MEPs were still not convinced that generative AI merited any special treatment.
 
In February, Tudorache told Reuters that generative AI was "not going to be covered" in depth. "I don't think we are going to deal with that subject in this text," he said.
 
Citing data security threats rather than warnings about human-like intelligence, he declared: "I am more afraid of Big Brother than I am of the Terminator."
 
However, Tudorache and his colleagues now agree that laws specifically targeting the use of generative AI are necessary.
 
Under new proposals aimed at "foundation models," companies like OpenAI, which is backed by Microsoft Corp., would be required to disclose any copyrighted material, including books, images, videos and more, used to train their systems.
 
In recent months, allegations of copyright infringement have rattled AI companies. Getty Images has sued Stability AI, the company behind Stable Diffusion, accusing it of using copyrighted images to train its systems. OpenAI has also faced criticism for declining to disclose details of the dataset used to train its software.
 
"There have been calls from outside and inside the Parliament for a ban or classifying ChatGPT as high-risk," said MEP Svenja Hahn. "The final compromise is innovation-friendly as it does not classify these models as 'high risk,' but sets requirements for transparency and quality."
 
(Source:www.reuters.com)