On 31 March 2023, Italy’s data protection authority blocked ChatGPT over privacy concerns and launched an investigation into OpenAI’s compliance with Europe’s General Data Protection Regulation (GDPR). Widely regarded as the world’s strongest privacy and security law, the GDPR, enacted in 2016, restricts how the personal data of living persons may be used, processed, and stored.
Although the ban is temporary, criticism of the AI language model continues to grow.
To train ChatGPT, OpenAI uses vast troves of personal data without an adequate legal basis, according to Italian authorities. Adding fuel to the fire, three days ago OpenAI confirmed a data breach: portions of some active users’ conversations were exposed, along with the payment information of 1.2% of ChatGPT Plus subscribers. Furthermore, the language model fails to verify users’ ages, exposing “minors to absolutely unsuitable answers compared to their degree of development and self-awareness”, as the regulators put it.
OpenAI’s representatives in Europe now have 20 days to outline how they will bring ChatGPT into compliance with the GDPR, or face a penalty of up to four percent of the company’s annual global turnover.
As we dive head-first into the murky waters of AI, an investigative approach to the industry may help governing bodies see clearly and act in the interest of the public.