OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information – and this one may prove difficult for regulators to ignore.
Privacy rights advocacy group noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as an incorrect date of birth or wrong biographical details. One concern is that OpenAI does not offer a way for individuals to correct incorrect information the AI generates about them; typically, OpenAI has offered to block responses to such prompts instead. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure the personal data they produce about individuals is accurate – and that is a concern noyb is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy's data protection watchdog, which saw access to ChatGPT temporarily blocked in the country in spring 2023, led OpenAI to make changes to the information it discloses to users, for example. The Italian watchdog subsequently went on to fine OpenAI €15 million for processing people's data without a proper legal basis.
Since then, though, it is fair to say that privacy watchdogs across Europe have adopted a more cautious approach to GenAI as they try to work out how best to apply the GDPR to these AI tools.
Two years ago, Ireland's Data Protection Commission (DPC) – which has a lead GDPR enforcement role on a previous noyb ChatGPT complaint – urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take time to work out how the law applies.
It is also worth noting that a privacy complaint against ChatGPT has been under investigation by Poland's data protection watchdog since September 2023, and it still has not yielded a decision.
noyb's new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.
The nonprofit shared a screenshot (below) with TechCrunch showing an interaction with ChatGPT in which the AI responds to the question “Who is Arve Hjalmar Holmen?” – the name of the individual bringing the complaint – by producing a tragic fiction that falsely states he was convicted of child murder and sentenced to 21 years in prison for killing two of his children.

While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, noyb notes that ChatGPT's response does include some true facts, since the individual in question does have three children. The chatbot also got the genders of his children right, and his home town is named correctly. But that only makes it more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.
A noyb spokesperson said they could not determine why the chatbot produced such a specific yet false history for this person. “We did research to make sure that this wasn’t just a mix-up with another person,” the spokesperson said.
Large language models such as the one underlying ChatGPT essentially do next-word prediction on a vast scale, so we could speculate that the datasets used to train the tool contained many stories of filicide that influenced the word choices in response to a query about a named man.
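To illustrate that mechanism, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the publicly available GPT-2 model as a stand-in – ChatGPT's actual model and training data are not public – and simply shows that a language model assigns probabilities to candidate next tokens given a prompt, with no built-in notion of factual accuracy.

```python
# Minimal sketch of next-token prediction using the open GPT-2 model.
# This is illustrative only; it is not ChatGPT's actual model or data.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Who is Arve Hjalmar Holmen? He is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model scores every possible next token; generation just picks from
# this probability distribution, whether or not the continuation is true.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id):>15}  p={prob.item():.3f}")
```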
Whatever the explanation, it is clear that such outputs are entirely unacceptable.
noyb's contention is that they are also unlawful under EU data protection rules. And while OpenAI does display a small disclaimer at the bottom of the screen saying “ChatGPT can make mistakes. Check important info,” it argues this cannot excuse the AI developer from its duty under the GDPR not to produce egregious falsehoods about people in the first place.
OpenAI has been contacted for a response to the complaint.
While this GDPR complaint pertains to one named individual, noyb points to other instances of ChatGPT fabricating legally compromising information – such as the Australian mayor who said he was implicated in a bribery and corruption scandal, or a German journalist who was falsely named as a child abuser – making clear that this is not an isolated problem for the AI tool.
One important thing to note is that, following an update to the AI model powering ChatGPT, noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen – a change it links to the tool now searching the internet for information about people when asked who they are.
In our own tests asking ChatGPT “Who is Arve Hjalmar Holmen?”, the chatbot initially responded with a slightly odd combination, displaying photos of several different people, apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn’t find any information” on an individual of that name (see the screenshot below). A second attempt produced a response identifying Arve Hjalmar Holmen as “a Norwegian musician and songwriter” whose albums include “Honky Tonk Inferno.”

While the dangerous ChatGPT-generated falsehoods about Hjalmar Holmen appear to have stopped, noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model.
“Adding a disclaimer that you do not comply with the law does not make the law go away,” noted another data protection lawyer at noyb, in a statement. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”
She added: “AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage.”
noyb has filed the complaint against OpenAI with the Norwegian data protection authority, and it is hoping the watchdog will decide it is competent to investigate, since noyb is targeting the complaint at OpenAI's U.S. entity, arguing that its Ireland office is not solely responsible for product decisions affecting Europeans.
However, an earlier noyb-backed GDPR complaint against OpenAI, which was filed in Austria in April 2024, was referred by the regulator to Ireland's DPC on account of a change OpenAI made earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.
Where is that complaint now? Still sitting on a desk in Ireland.
“Having received the complaint from the Austrian Supervisory Authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” Risteard Byrne, assistant principal officer for communications at the DPC, told TechCrunch when asked for an update.
He did not offer any indication of when the DPC's investigation into ChatGPT's hallucinations is expected to conclude.