After generating significant discussion and raising numerous concerns, from potential job losses to the risk of cheating in the education sector, ChatGPT is now under criticism for the risks it allegedly poses to personal data protection.
Publicly available since late November 2022, ChatGPT is a tool developed by the American company OpenAI that provides textual answers to user-submitted questions. What makes it unique is its algorithm, which is advanced enough to deliver tailored responses, adjusting the tone, style, or form of the answer as needed. ChatGPT quickly gained popularity, surpassing one hundred million users by January 2023.
However, the tool has also faced significant criticism. Some question the way its algorithm was developed and trained, relying on numerous online sources, including the entirety of Wikipedia, without citing the origin or author of the information in its responses. There are also concerns about job displacement and its use for plagiarism, whether by students or professionals.
More recently, a new concern has emerged: what ChatGPT does with the information users share with it. The tool does not guarantee user data protection. Specifically, OpenAI’s privacy policy [1] states that interactions with the algorithm are stored on OpenAI’s servers, which are located in the United States, a country that, in the eyes of the European Union, does not provide sufficient guarantees for personal data protection. Samsung, for instance, learned this the hard way in March and April 2023, when employees used ChatGPT to assist with internal projects. As a result, confidential work documents and source code are now stored on OpenAI’s servers. The Korean company reported three such data leaks within three weeks before ultimately banning the internal use of ChatGPT. Unfortunately, Samsung has no legal recourse to compel the deletion of its confidential documents from OpenAI’s servers.
Moreover, the algorithm’s capabilities themselves are subject to criticism. There are many documented instances of false or fabricated responses. It is therefore important not to overestimate the tool’s abilities or blindly trust its outputs.
ChatGPT therefore fails to respect several key principles of the General Data Protection Regulation (GDPR), including:
- The lack of guarantees for equivalent data protection when transferring data to the United States;
- Non-compliance with the right to erasure of data;
- Failure to uphold the principle of data accuracy.
All these concerns led the Italian data protection authority to ban the use of ChatGPT on March 31, 2023. Other countries may follow suit, as Irish, German, and French regulatory authorities have engaged with their Italian counterparts. In France, multiple complaints have been filed with the CNIL against ChatGPT, the most recent one by Côtes-d’Armor MP Eric Bothorel (Renaissance), after the tool provided incorrect information about him. However, the MP is not calling for a ban on these algorithms but rather for their regulation.
For better or worse, ChatGPT continues to be a topic of discussion. After the debates over its algorithmic capabilities and its societal impact, data protection is now emerging as a serious subject in its own right, and an essential aspect of our relationship with these tools.
Sources
- Schmid, Alexandre, “Beware of ChatGPT… Samsung fell victim to it!”, Clubic, April 7, 2023
- “Incorrect information: a French MP files a complaint with CNIL against ChatGPT”, Franceinfo, April 12, 2023
- Vincent, Benjamin, “ChatGPT: A revolution at the expense of our personal data?”, Radio France, April 9, 2023
- “Artificial intelligence: Italy blocks the ChatGPT chatbot”, Franceinfo, March 31, 2023
- Tauzins, Alexandra, “ChatGPT, Midjourney, DALL-E… these artificial intelligences raising unprecedented copyright questions”, Sud Ouest, January 21, 2023