Snapchat may have failed to adequately assess the privacy risks its artificial intelligence chatbot poses to minors, Britain’s data watchdog said Friday. The regulator will consider the company’s response before deciding whether to take enforcement action.
The U.K.’s Information Commissioner’s Office (ICO) warned it could bar the U.S. company from offering “My A.I.,” which launched in April, if Snap does not resolve its concerns.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My A.I.,’” Information Commissioner John Edwards said.
The agency said the provisional findings do not necessarily mean the teen-focused messaging service has violated British data protection rules or that the ICO will issue an enforcement notice.
Snap said it was reviewing the ICO’s notice and remained committed to protecting users’ privacy.
“My A.I. underwent a thorough legal and privacy review before being released,” the company said. “We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”
The ICO’s investigation covers how “My A.I.” handles the personal data of Snapchat’s 21 million U.K. users, including 13- to 17-year-olds. “My A.I.” is powered by OpenAI’s ChatGPT, the most prominent generative A.I. model, which regulators worldwide are scrutinizing over privacy and safety concerns.
Snapchat has had mixed success keeping children off the platform despite its minimum age of 13. In August, Reuters reported that the ICO was examining how Snapchat blocks underage users.