
ChatGPT accused of falsely claiming an innocent man murdered his children

**Excerpt:**
In a shocking case of AI-generated misinformation, ChatGPT falsely accused Norwegian man Arve Hjalmar Holmen of being a convicted murderer, claiming he killed two of his children and attempted to murder a third. The AI mixed real details about Holmen’s life with fabricated claims, sparking a legal battle and raising concerns about AI accuracy and accountability. Advocacy group Noyb filed a GDPR complaint against OpenAI, demanding fines, removal of the defamatory output, and improvements to prevent future errors. This incident highlights the risks of AI misinformation and the urgent need for robust safeguards and accountability in AI systems.

*Source: “AI Gone Wrong: How ChatGPT Falsely Accused a Norwegian Man of Murder” by Dominic Preston, The Verge, March 21, 2025.*


AI Gone Wrong: How ChatGPT Falsely Accused a Norwegian Man of Murder

In a startling case of AI-generated misinformation, Arve Hjalmar Holmen, a Norwegian man, found himself at the center of a fabricated scandal. ChatGPT, an AI language model developed by OpenAI, falsely accused Holmen of being a convicted murderer, claiming he had killed two of his children, attempted to murder a third, and was serving a 21-year prison sentence in Norway. Although the accusation was entirely false, the AI mixed in real details about Holmen’s life, such as his hometown and the number and gender of his children. The incident, which occurred before ChatGPT was updated in October 2024 to include web searches in its results, has sparked a legal battle and raised serious concerns about the accuracy and accountability of AI systems.

The story came to light in an article published on March 21, 2025, by Dominic Preston, a news editor at *The Verge*. Holmen, understandably distressed by the false accusation, sought help from Noyb, an Austrian advocacy group specializing in digital privacy and data protection. Noyb filed a formal complaint with the Norwegian Datatilsynet, the country’s data protection authority, accusing OpenAI of violating the General Data Protection Regulation (GDPR).

Joakim Söderberg, a data protection lawyer at Noyb, emphasized the gravity of the situation. “Under GDPR, personal data must be accurate, and individuals have the right to correct or erase false information,” he said. “OpenAI’s disclaimer about potential inaccuracies is not enough to absolve them of responsibility.” The complaint demands that OpenAI be fined, ordered to remove the defamatory output, and required to improve its model to prevent similar errors in the future.

This isn’t the first time Noyb has taken action against OpenAI. In April 2024, the group filed a complaint on behalf of a public figure whose date of birth was inaccurately reported by ChatGPT. At the time, OpenAI claimed it could only block erroneous data for specific queries, not correct it—a response Noyb argued was insufficient under GDPR.

As of the article’s publication, the same query about Holmen now returns results related to Noyb’s complaint instead of the false accusations. The initial query and response are no longer replicable, but the damage has already been done. This case highlights the risks of AI-generated misinformation and the challenges of ensuring accuracy in AI outputs. It also underscores the importance of GDPR compliance for AI companies, particularly when it comes to the accuracy and rectification of personal data.

The incident raises critical questions about OpenAI’s responsibility to prevent harmful misinformation and its ability to correct errors in its models. While AI has the potential to revolutionize industries and improve lives, cases like this serve as a stark reminder of the need for robust safeguards and accountability measures.

For Arve Hjalmar Holmen, the experience has been deeply unsettling. “It’s terrifying to think that an AI could spread such damaging lies about you,” he said. “I hope this case leads to better protections for others in the future.”

As AI continues to evolve, stories like this remind us of the human impact behind the technology. Ensuring accuracy, accountability, and transparency isn’t just a legal obligation—it’s a moral one.

