
OpenAI slammed with GDPR complaint over creating fake child murderer


Austrian privacy organization noyb has filed a complaint against ChatGPT developer OpenAI for producing false and defamatory ‘hallucinations’.

Experts have warned that AI chatbots like ChatGPT sometimes make up facts. These so-called ‘hallucinations’ are fabricated, fictional outputs. Most of the time they are harmless and can even be amusing.

But what if these hallucinations have catastrophic consequences for people’s lives? According to noyb, there have been numerous reports about made-up sexual harassment scandals, false bribery accusations, and alleged child molestation.

A Norwegian user wanted to know what ChatGPT knew about him, so he entered his name and asked the chatbot what information it had on him. In response, ChatGPT presented a story suggesting he was a convicted criminal who had murdered two of his children and tried to kill his third son.

To make matters worse, the output of the AI chatbot included real elements of his personal life, like the number and gender of his children, and the name of his hometown. In addition, ChatGPT said he was sentenced to 21 years in prison.

Without a doubt, these kinds of hallucinations can have nasty consequences for the Norwegian user. On top of that, noyb argues, this is a clear violation of Article 5(1)(d) of the General Data Protection Regulation (GDPR), which states that personal data must be accurate and kept up to date, and that companies must take ‘every reasonable step’ to erase or rectify inaccurate personal data without delay.

However, ChatGPT only shows a small disclaimer saying that it may produce false results. That is hardly enough when false narratives about a real person are being spread, noyb argues.

“The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” Joakim Söderberg, data protection lawyer at noyb, says in a statement.

Therefore, the Austrian privacy organization has filed a complaint with Datatilsynet, the Norwegian data protection authority (DPA). Noyb is asking the DPA to order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results. To prevent a recurrence, noyb also wants the DPA to impose a hefty fine.

This is the second time noyb has filed a complaint over hallucinations. In April 2024, the advocacy group asked the Austrian DPA to investigate OpenAI’s data processing and the measures it takes to ensure the accuracy of the personal data it processes.

