Warning: If AI social media tools make a mistake, you’re responsible
Platforms now include references to their generative artificial intelligence tools in their terms of service. They acknowledge that these tools may make errors, but place responsibility for the content the tools generate on the user.
Instagram and Facebook’s terms of service will be updated on January 1, 2025. LinkedIn’s terms were updated on November 20, 2024; X attempted to update its terms without prior notice; and other social networks are likely to follow suit. One common motivation for these changes is to incorporate frameworks for each platform’s own generative artificial intelligence (AI) tools.
This is not about using ChatGPT or Google Gemini to generate content and post it on social media. In this case, it is Instagram, Facebook, and LinkedIn themselves offering their own artificial intelligence systems, integrated into the platforms and easily accessible to users. However, all three social networks shift responsibility to the user if they share content generated by the platform’s AI that is inaccurate or even offensive.
They do so even though they admit that the answers offered by their generative AI programs may be wrong or misleading, an issue inherent to this type of technology. Meta’s terms of service for Meta AI, present on Facebook and Instagram, state: “The accuracy of any content, including outputs, cannot be guaranteed and outputs may be disturbing or upsetting.”
In LinkedIn’s updated terms of use, the platform notes that content generated by its AI features “might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes.” It encourages users to review and edit the generated content before sharing, adding that “you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.”
For Sara Degli-Esposti, a researcher from Spain’s National Research Council (CSIC) and author of the book The Ethics of Artificial Intelligence, there is no doubt about the platforms’ position. “This policy is along the lines of: ‘we don’t know what can go wrong, and anything that goes wrong is the user’s problem.’ It’s like telling them that they are going to be given a tool that they know may be defective.”
LinkedIn’s AI is used to generate text that can then be posted on the platform; at the moment, it is only available in English and to paying users. Meta’s AI on Instagram and Facebook can be used to write messages, ask questions (even in group chats), modify photos, and generate images from scratch. However, it is not yet available in the European Union.
“The fundamental issue is that they’re providing functionalities with tools that have not been fully tested, and in fact, the testing will be done by users themselves,” says Degli-Esposti. “In a way, it’s as if they subtly admit that they are making a tool available to you, but clarify that it may still have problems, which is like saying it is still in the development phase. They would have to inform you that you are assuming an additional risk.”
Meta AI’s terms of service acknowledge that generative artificial intelligence is still in its infancy, albeit in positive terms. “AIs are a new technology that are still improving in accuracy,” the terms state, before warning: “We make no guarantees that they will be safe, secure or error-free, or will function without disruptions, delays or imperfections.” In another section, the terms address the user directly: “You also recognize and agree that you – not Meta – are responsible for your use of, and/or any actions you take in relation to, content generated by the AIs based on your prompts.”
These concepts may be clear to an advanced user of generative AI systems, but not to everyone. “The key lies in the current lack of culture and education on generative AI, how we obtain information from it, how it should be verified, and how we should approach it,” says Javier Borràs, a CIDOB researcher specializing in the intersection of technology and democracy. “These systems, by their very nature, do not offer true or false answers. They offer you a result based on a statistical prediction extracted from all the data they have. They do not distinguish between what is true and false; they offer you a probability. This knowledge is not widespread among users.”
In search of an educated and informed user
The ethical dilemma lies in whether generative AI tools should be made easily accessible to the masses on social media. Is this a good idea? Borràs points out that users would likely turn to third-party systems anyway. “Perhaps what they [social media] should do is make it clear that the results may be inaccurate and should be verified. Users should be constantly reminded of this possibility, with a reminder appearing whenever they receive a result,” the CIDOB researcher suggests.
In the English version of Meta AI, a small disclaimer appears under the question bar: “Messages are generated by AI and may be inaccurate or inappropriate,” with a link users can click for further details. The terms of use also remind users: “If you plan to use [Meta AI] outputs for any reason, it is your sole responsibility to verify outputs.”
One of the concerns about introducing generative AI tools on social media is the potential spread of misinformation, a problem these platforms have long been criticized for. However, it is unclear whether AI has had a significant impact on this issue during the critical electoral year of 2024, when half the globe went to the polls. Borràs does not believe that social media tools will have a greater impact than third-party systems.
This issue brings individual responsibility to the forefront. From an ethical perspective, Degli-Esposti notes, there is another view centered on that responsibility: “The author is the one who provides the prompt to the system. This means the user maintains some autonomy: they guide the AI in its generation and decide whether to keep the final product.”
The counterargument is that when users use generative AI, social networks benefit competitively, financially, and technologically (by being able to train their algorithms). The more content that is generated and shared, the more advertising can be placed on the platform, which is the primary revenue source for social media companies.
“A process of educating users is necessary so they understand how generative AI works and the risks it entails. And the companies profiting from it should take responsibility for being part of that process,” says Borràs. He adds that this training should go beyond the social media platforms and reach the educational system and the business sector, a formula for enabling everyone to use generative AI systems with confidence.