AI Gone Wrong: How ChatGPT Falsely Accused a Norwegian Man of Murder
In a startling case of AI-generated misinformation, Arve Hjalmar Holmen, a Norwegian man, found himself at the center of a fabricated scandal. ChatGPT, the AI language model developed by OpenAI, falsely described Holmen as a convicted murderer, claiming he had killed two of his children, had attempted to murder a third, and was serving a 21-year prison sentence in Norway. While the accusation was entirely false, the AI mixed real details about Holmen’s life into the fabrication, such as his hometown and the number and gender of his children. The incident, which occurred before ChatGPT was updated in October 2024 to include web search results, has sparked a legal battle and raised serious concerns about the accuracy and accountability of AI systems.
The story came to light in an article published on March 21, 2025, by Dominic Preston, a news editor at *The Verge*. Holmen, understandably distressed by the false accusation, sought help from Noyb, an Austrian advocacy group specializing in digital privacy and data protection. Noyb filed a formal complaint with Datatilsynet, Norway’s data protection authority, accusing OpenAI of violating the General Data Protection Regulation (GDPR).
Joakim Söderberg, a data protection lawyer at Noyb, emphasized the gravity of the situation. “Under GDPR, personal data must be accurate, and individuals have the right to correct or erase false information,” he said. “OpenAI’s disclaimer about potential inaccuracies is not enough to absolve them of responsibility.” The complaint demands that OpenAI be fined, remove the defamatory output, and improve its model to prevent similar errors in the future.
This isn’t the first time Noyb has taken action against OpenAI. In April 2024, the group filed a complaint on behalf of a public figure whose date of birth was inaccurately reported by ChatGPT. At the time, OpenAI claimed it could only block erroneous data for specific queries, not correct it—a response Noyb argued was insufficient under GDPR.
By the time the article was published, the same query about Holmen returned results related to Noyb’s complaint rather than the false accusations. The original response can no longer be reproduced, but the damage has already been done. This case highlights the risks of AI-generated misinformation and the challenges of ensuring accuracy in AI outputs. It also underscores the importance of GDPR compliance for AI companies, particularly when it comes to the accuracy and rectification of personal data.
The incident raises critical questions about OpenAI’s responsibility to prevent harmful misinformation and its ability to correct errors in its models. While AI has the potential to revolutionize industries and improve lives, cases like this serve as a stark reminder of the need for robust safeguards and accountability measures.
For Arve Hjalmar Holmen, the experience has been deeply unsettling. “It’s terrifying to think that an AI could spread such damaging lies about you,” he said. “I hope this case leads to better protections for others in the future.”
As AI continues to evolve, stories like this remind us of the human impact behind the technology. Ensuring accuracy, accountability, and transparency isn’t just a legal obligation—it’s a moral one.