Some regulators are applying outdated rules to govern artificial intelligence services like ChatGPT, a technology that could transform society and industry.
The rapid improvements in the generative A.I. technology powering OpenAI’s ChatGPT have raised privacy and safety concerns. The E.U. is leading the way in creating new A.I. guidelines that could set the worldwide standard.
The law will take years to enforce. “In the absence of regulations, governments can only apply existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy B.I.P.
“If it’s about protecting personal data, they apply data protection laws; if it’s a threat to the safety of people, there are regulations that have not been specifically defined for A.I., but they are still applicable.”
After Italian regulator Garante shut down ChatGPT, accusing OpenAI of breaking the E.U.’s GDPR, Europe’s national privacy watchdogs formed a task force on the issue in April. The U.S. company reintroduced ChatGPT after adding age verification and letting European users opt out of A.I. model training.
A Garante insider told Reuters the agency would examine various generative A.I. tools. France and Spain also initiated privacy investigations into OpenAI in April. Generative A.I. models are notorious for “hallucinations” and disinformation.
These mistakes could be costly. For example, a bank or government agency using A.I. to speed decision-making could unfairly deny loans or benefits. Google (GOOGL.O) and Microsoft (MSFT.O) have accordingly discontinued ethically questionable A.I. products, such as financial tools.
According to six U.S. and European authorities and experts, copyright and data protection laws would be applied to model data and content.
Agencies will “interpret and reinterpret their mandates,” said former White House technology advisor Suresh Venkatasubramanian. He cited the F.T.C.’s algorithm discrimination inquiry, conducted under existing regulatory authority.
The E.U.’s A.I. Act will require companies like OpenAI to disclose the copyrighted materials, such as books and photos, used to train their models, exposing them to legal action.
Sergey Lagodinsky, one of several E.U. lawmakers involved in drafting the proposal, said copyright infringement would be difficult to prove.
“It’s like reading hundreds of novels before you write your own,” he remarked. “Copying and publishing is one thing. It doesn’t matter what you learned if you’re not actively plagiarizing.”
Bertrand Pailhes, CNIL’s technology lead, says the regulator is “thinking creatively” about how existing legislation applies to A.I.
In France, discrimination complaints are normally handled by the Défenseur des Droits. He stated CNIL had taken the lead on the A.I. bias issue because of that body’s lack of expertise in the technology.
“We are looking at the full range of effects, although our focus remains on data protection and privacy,” he told Reuters.
The regulator may invoke the GDPR’s protections around automated decision-making.
“At this stage, I can’t say if it’s enough, legally,” Pailhes added. “It will take some time to build an opinion, and there is a risk that regulators will take different views.”
The Financial Conduct Authority is one of several British authorities drafting A.I. guidelines. The Alan Turing Institute in London and other legal and academic organizations are helping it understand the technology, a representative told Reuters.
Industry insiders want regulators to engage more with corporate leaders as technology evolves.
Harry Borovick, general counsel at Luminance, an A.I.-powered legal document processing business, told Reuters that authorities and companies have “limited” contact.
“This doesn’t bode well for the future,” he said. “Regulators seem either slow or unwilling to implement the approaches which would enable the right balance between consumer protection and business growth.”