The risk of trusting ChatGPT with personal secrets

Tech companies use users’ personal data to improve generative artificial intelligence systems, but there are ways to get this information removed

Experts warn users to be careful about the information they share with ChatGPT.
Natalia Ponjoan

If you ask ChatGPT what it does with the personal data that someone brings into the conversation, it answers: “As a language model developed by OpenAI, I do not have the ability to process, store or use users’ personal information, unless it is provided to me in the course of an individual conversation.” However, OpenAI — the company that owns ChatGPT — can use that information in certain cases, according to the company’s privacy policy.

This applies to certain types of data in certain cases: OpenAI account data, such as the user’s name or payment card information; personal information that the user exchanges with ChatGPT or the company; the user’s information when interacting with OpenAI accounts on social networks, such as Instagram, Facebook, Medium, X, YouTube and LinkedIn; and data that the user provides to the company in its surveys or events. With this information, OpenAI can improve its products and services, create new developments, conduct research, establish direct communication with users, comply with its legal obligations, and prevent fraud, misuse of the service and criminal activity.

This delicate issue does not only affect the new generative AI. Sending an email via Gmail to a friend, or sharing photos or documents in cloud spaces such as OneDrive, are everyday acts that authorize the providers of these services to share information with third parties. Companies such as OpenAI, Microsoft and Google may disclose information to service providers to meet their business needs, as indicated in their privacy policies.

However, with some exceptions, companies cannot use personal data for other purposes. Ricard Martínez, professor of constitutional law at the University of Valencia in Spain, points out that, in the European Union, this is strictly prohibited by the General Data Protection Regulation (GDPR): “They would be exposing themselves to a high level of regulatory risk. The company could be sanctioned with a fine equivalent to 4% of global annual turnover.” According to Martínez, data can only be used for public interest purposes admitted by the regulations, such as archiving or historical, statistical or scientific research, or if a compatibility test is passed.

Generative artificial intelligence, such as ChatGPT, draws on a large volume of data, some of it personal, and from that information, it generates original content. It analyzes the information collected, responds to user queries and improves its service, despite the fact that the tool “does not understand the documents it is fed,” warns Borja Adsuara, a lawyer specializing in digital law.

Recommendation: Be very discreet with chatbots

The Spanish Data Protection Agency (AEPD) recommends that users refuse to provide data if the chatbot asks for registration information that is not necessary; requests consent without defining what the data will be processed for and without allowing it to be withdrawn at any time; or transfers data to countries that do not offer sufficient guarantees. It also recommends that users limit how much personal data they provide, or withhold it altogether if there is a chance that it may be transferred internationally. “There is no guarantee that the information provided by the chatbot is correct,” adds the AEPD, warning that this may lead to “emotional damage, misinformation or being misled.”

Experts agree with the AEPD’s advice: do not share personal information with the artificial intelligence tool. Even ChatGPT itself warns: “Please note that if you share personal, sensitive or confidential information during the conversation, you should exercise caution. It is recommended that you do not provide sensitive information through online platforms, even in conversations with language models like me.”

Delete personal data

If, despite these recommendations, a user has already shared their personal data with an artificial intelligence system, they can try to have it deleted. There is a form on the OpenAI website to request that personal data be removed. The bad news is that the company warns that “submitting a request does not guarantee that information about you will be removed from ChatGPT outputs.” The form must be completed with “complete, accurate and relevant answers,” and the user has to agree to a series of sworn statements. Additionally, the information provided in the document can be cross-checked with other sources to verify its veracity. Microsoft also offers a privacy panel to access and delete personal data.

Martínez explains that, through legal action, the user can exercise their right to have their personal data deleted “if they believe that it has been processed unlawfully, is incorrect and inadequate.” He explains: “You can unsubscribe, withdraw your consent, which is free and not subject to conditions, and the company is obliged to delete all information.” The specialist also underscores the right to data portability: “More and more applications allow the user to download their entire history and take it with them in a compatible format. The regulation also recommends the anonymization of personal data.”

Anonymization, according to the AEPD, consists of converting personal data into data that cannot be used to identify a specific person. In its guidelines on how to manage artificial intelligence, the agency explains that anonymization is one of the techniques to minimize the use of data, ensuring that only the data necessary for the given purpose is used.
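To make the idea concrete, here is a minimal Python sketch of one common first step toward the kind of data minimization the AEPD describes: replacing direct identifiers in a record with salted hashes. The record fields, salt value and function name are illustrative assumptions, not part of any agency guidance, and hashing identifiers alone is pseudonymization rather than full anonymization, since combining the remaining fields could still re-identify someone.

```python
import hashlib

def pseudonymize(record, direct_identifiers=("name", "email"), salt="example-salt"):
    """Replace direct identifiers in a record with truncated salted hashes.

    Illustration only: full anonymization also requires removing or
    generalizing quasi-identifiers (age, zip code, etc.) so that no
    combination of the remaining fields can identify a specific person.
    """
    out = {}
    for key, value in record.items():
        if key in direct_identifiers:
            # Salted SHA-256 hash, truncated for readability.
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

user = {"name": "Jane Doe", "email": "jane@example.com", "query": "travel tips"}
print(pseudonymize(user))
```

In this sketch the non-identifying field (`query`) passes through untouched, while the identifiers become opaque tokens, so the data can still serve its purpose without naming the person.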

New artificial intelligence law

After the new EU law on artificial intelligence comes into force, companies that manage personal data will have to take three key issues into account. According to the consulting firm Entelgy, they must disclose how the algorithm works and the content it generates in a European registry; establish human supervision mechanisms (although this is recommended, not mandatory); and finally, ensure large language models (LLMs) have security systems and that developers are transparent about the copyrighted material they use.

However, the new law is not incompatible with the General Data Protection Regulation. As Martínez explains: “AI that processes personal data, or that generates personal data in the future, will never be able to reach the market if it does not guarantee compliance with the GDPR. This is especially evident in high-risk systems, which must implement a data governance model, as well as operating and usage records that guarantee traceability.”

The next step for artificial intelligence, says Adsuara, is for the personal information collected to be used in a kind of personal pool: “A place where everyone has their repository of documents with personal data, but the information does not leave. In other words, it is not used to feed universal generative artificial intelligence,” he explains.
