OpenAI's ChatGPT blocked in Italy, says privacy watchdog
OpenAI has 20 days to say how it will address the concerns, under penalty of a $21.7 million fine or up to four percent of annual revenues
Italy’s privacy watchdog said Friday it had blocked the controversial chatbot ChatGPT, saying the artificial intelligence (AI) software did not respect user data and could not verify users’ ages.
The decision “with immediate effect” will result in “the temporary limitation of the processing of Italian user data vis-à-vis OpenAI,” the Italian Data Protection Authority announced, adding that it had launched an investigation.
ChatGPT – created by U.S. startup OpenAI and backed by Microsoft – can answer difficult questions clearly, write code, sonnets, or essays, and even pass exams that challenge students. But the app, which launched in November, has faced controversy, with teachers fearing students will use it to cheat and policymakers concerned about the spread of misinformation.
The Data Protection Authority said that on March 20 the app experienced a data breach involving user conversations and payment information. It said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform.”
It further charged that, because there was no way to verify the age of users, the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness.”
OpenAI has 20 days to set out how it will address the watchdog’s concerns, under penalty of a $21.7 million fine or up to four percent of annual revenues.
The blocking of ChatGPT in Italy came a day after the European police agency Europol warned that criminals were set to use the software to commit fraud and other cybercrimes. It also came shortly after a group of AI experts and industry executives, including Elon Musk, called for a six-month pause in developing such systems, warning they could become so powerful that they pose risks to society and humanity.