In the coming months, the U.S. Supreme Court could weaken a powerful legal shield protecting internet companies, a decision that may affect rapidly developing technologies like ChatGPT.
The justices are due to rule by the end of June on whether Alphabet Inc.'s (GOOGL.O) YouTube can be sued over its video recommendations. The case tests whether a U.S. law that shields technology platforms from legal responsibility for content posted by their users also applies when companies use algorithms to recommend content.
The stakes extend beyond social media. According to technology and legal experts, the Court's ruling could influence the debate over whether companies that develop generative AI chatbots, such as ChatGPT from OpenAI, in which Microsoft Corp (MSFT.O) is a major investor, or Bard from Alphabet's Google, should be protected from legal claims like defamation or privacy violations.
That is because ChatGPT and its successor GPT-4, the experts said, use algorithms similar to those that recommend videos to YouTube users.
“The debate is really about whether the organization of information available online through recommendation engines is so significant to shaping the content as to become liable,” said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on AI. He said the same kinds of issues arise with a chatbot.
Representatives for OpenAI and Google declined to comment.
During oral arguments in February, Supreme Court justices weighed whether to weaken Section 230 of the Communications Decency Act of 1996. In the course of those arguments, Justice Neil Gorsuch noted that AI tools that generate “poetry” and “polemics” likely would not enjoy such legal protections.
The case is only one strand of an emerging debate over whether Section 230 protection should extend to AI models trained on troves of internet data yet capable of producing original works.
Section 230 shields platforms from liability for third-party material created by their users, but it does not protect content the companies themselves develop. Courts have yet to rule on whether an AI chatbot's responses fall within that protection.
Democratic Senator Ron Wyden, who helped draft the law while serving in the House of Representatives, said the liability shield should not apply to generative AI tools because they “create content.”
Section 230, he said, is about protecting users and sites that host and organize speech. “It should not protect companies from their own actions and products,” Wyden told Reuters.
Despite bipartisan calls to rein in the law, the technology industry has pushed to preserve Section 230, arguing that tools like ChatGPT operate much like search engines, directing users to existing material in response to a query. AI, by this argument, does not truly create anything. “It's taking existing content and putting it in a different fashion or format,” said Carl Szabo, vice president and general counsel of NetChoice, a tech industry trade group.
Szabo warned that a weakened Section 230 would leave AI developers exposed to a flood of litigation that could stifle innovation.
Some experts predict that courts may take a middle ground, examining the context in which an AI model generated a potentially harmful response.
In cases where the AI model appears to paraphrase existing sources, the shield may still apply. But chatbots like ChatGPT have been known to fabricate responses with no apparent connection to information found elsewhere online, and experts say those outputs would likely not be protected.
Hany Farid, a technologist and professor at the University of California, Berkeley, said it is hard to imagine that AI developers should be immune from lawsuits over models they “programmed, trained and deployed.”
“When companies are held responsible in civil litigation for harms from the products they produce, they produce safer products,” Farid added. “And when they’re not held liable, they produce less safe products.”
The case before the Supreme Court is an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was fatally shot in a 2015 attack by Islamist militants in Paris, against a lower court's dismissal of the family's lawsuit against YouTube.
The lawsuit accused Google of providing “material support” for terrorism, alleging that YouTube's algorithms improperly recommended videos by the Islamic State militant group to certain users.