
ChatGPT labels Norwegian man a child murderer, sparking complaint from Noyb


ChatGPT has been accused by a privacy rights group of fabricating an entire criminal history for a Norwegian man, falsely claiming he murdered two of his children and tried to kill a third. Now, OpenAI is facing a serious privacy complaint in Europe, and regulators may have to take a hard look at ChatGPT's tendency to make things up.

This isn’t just about getting someone’s birthday wrong or mixing up job titles. This is a chatbot spitting out a completely false and wildly damaging claim, putting an innocent person at the center of a horrific crime that never happened.

Hallucinations are nothing new for the LLMs that power chatbots like ChatGPT, and this is certainly not the first case we've heard of, nor the first attempt to fix the problem. Remember last year's glue incident, when Google's AI Overviews claimed that putting glue on pizza would help the cheese stick better? That was dumb, sure, but it didn't falsely accuse anyone of murder.

Privacy group Noyb (None of Your Business) is backing the complaint, arguing that OpenAI is violating the EU’s General Data Protection Regulation (GDPR). The law makes it clear that if you’re handling personal data, it has to be accurate. If it's not, people have the right to get it fixed. OpenAI’s response is to block queries about specific individuals instead of offering a way to correct misinformation.

Noyb’s stance is that this isn’t good enough. Their argument is that OpenAI can’t just throw a tiny disclaimer at the bottom of the screen saying, “ChatGPT can make mistakes,” and think that gets them off the hook. According to Joakim Söderberg, data protection lawyer at Noyb, the law doesn’t work like that.

The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.

So, what went wrong? Why did ChatGPT decide to generate a completely false, yet oddly specific, accusation against Arve Hjalmar Holmen? Even Noyb isn’t sure. They did some digging and checked newspaper archives and other sources but found no real-world basis for the chatbot’s made-up crime story.

The best guess is that large language models simply predict the next word based on patterns in their training data. If ChatGPT was trained on plenty of crime stories, including real cases of parents harming their children, it may have "hallucinated" a story that fit the patterns it had seen before (the toy sketch after the quote below shows how pattern-matching alone can produce a fluent but false claim). That is still just speculation, though. According to Holmen,

Some think that ‘there is no smoke without fire.’ The fact that someone could read this output and believe it is true, is what scares me the most.
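To see how pattern-based prediction can stitch unrelated fragments together, here is a minimal, purely illustrative sketch: a toy bigram model in Python. This is our own made-up example, nothing like OpenAI's actual system in scale or design. It learns only which word tends to follow which in two invented sentences, so its output can blend pieces of different sentences into a fluent but entirely false statement.

```python
import random
from collections import defaultdict, Counter

# Toy training "corpus" (entirely made up, for illustration only).
corpus = (
    "the man from norway was the father of two children . "
    "the father of two children was convicted of a crime ."
).split()

# Count which word follows which -- a tiny bigram "language model".
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start_word, max_words=12):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start_word]
    for _ in range(max_words):
        candidates = follow_counts.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Depending on the sampled path, this can produce something like
# "the man from norway was ... convicted of a crime" -- a statement that
# never appears in the training text. The model tracks word patterns,
# not facts.
print(generate("the"))
```

A real LLM works at a vastly larger scale and with far more context, but the underlying point stands: the system is optimized to produce plausible-sounding continuations, not verified facts.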

When Neowin tested ChatGPT with the same "who is Arve Hjalmar Holmen?" question, it provided a more neutral response. Instead of the fabricated murder claims, it listed general biographical details, mentioning individuals named Arve Holmen linked to business ownership in Norway. The chatbot also acknowledged uncertainty about whether these individuals were related or the same person.

A screenshot of ChatGPT's response to the question "who is Arve Hjalmar Holmen"

Clearly, OpenAI has tweaked its system. But as Noyb points out, the concern isn’t just that the chatbot is making up false information. The incorrect data may still exist somewhere within the model.

Simply hiding the response from users doesn’t mean the AI isn’t still internally processing and storing misinformation. Noyb has filed the complaint with Norway’s data protection authority, in the hope that it will take action.

Meanwhile, OpenAI is staying quiet on this particular complaint. If they do respond, it’ll probably be the same line they’ve used before: something about AI not being perfect and disclaimers warning users about potential errors.
