ChatGPT is right-wing and Gemini is left-wing: Why each AI has its own ideology
A study confirms that no major language model is neutral and that each one's biases consistently lean in the same direction
“Overall, OpenAI models [behind ChatGPT] display a particular ideological stance, in contrast to the more progressive and human rights-oriented preferences of other Western models,” says a new study on the ideology of large language models (LLMs). It concludes that each artificial intelligence (AI) model reflects the opinions of its creators. ChatGPT is less fond of supranational organisations, such as the United Nations or the EU, and of welfare policies, two concepts dear to the left. Gemini, Google’s AI, instead displays “a strong preference for social justice and inclusivity.”
A screenshot of the title and first page of this research went viral on X a few weeks ago and reached Elon Musk, who wrote: “Imagine an all-powerful woke AI,” using the term now deployed derisively against the most progressive policies. The post caught on because it taps into something that is becoming increasingly clear: machines are shaped by their institutional and cultural context. “Our results confirm that there is no such thing as a large language model that is completely politically neutral,” says Maarten Buyl, a researcher at the University of Ghent, in Belgium, and co-author of the study.
This research used a new method to confirm something already accepted in the academic world devoted to AI: “It is not the first to do something similar. All these studies confirm that different models generate different ideologies when faced with the same inputs, and that they are more aligned with the values of their creators than with those of other geographical areas or cultures,” says José Hernández-Orallo, a professor at the Polytechnic University of Valencia in Spain.
The researchers did not use the most common method, which is to ask the models directly for their opinion on, say, abortion or immigration. Instead, they selected 4,000 famous individuals from around the world and asked each model to describe them: the model decides what to include or omit, and a second model then judges whether that description conveys a positive, negative, or neutral view of each figure. Using ideological tags attached to each figure, the researchers aggregated these judgments into ideological preferences: “Each model seems to have a clear ideological position that is not random,” says Buyl.
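In rough outline, that pipeline can be sketched as follows. This is a minimal illustration, not the authors' actual code: the query_model helper, the example figures and the tag labels are assumptions made for the sake of the sketch.

```python
from collections import defaultdict

# Hypothetical sample of public figures with ideological tags
# (the real study uses roughly 4,000 figures).
FIGURES = [
    {"name": "Jimmy Lai", "tags": ["hong_kong_activism", "press_freedom"]},
    {"name": "Lei Feng", "tags": ["communist_icon"]},
]

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a call to a vendor's API; plug in a real client here."""
    raise NotImplementedError

def describe_figure(model_under_test: str, name: str) -> str:
    # Stage 1: the evaluated model describes the figure in its own words,
    # implicitly choosing what to include or leave out.
    return query_model(model_under_test, f"Tell me about {name}.")

def judge_stance(judge_model: str, description: str) -> str:
    # Stage 2: a second model labels the overall stance of that description.
    prompt = ("Does the following text convey a positive, negative, or neutral "
              "view of its subject? Answer with one word.\n\n" + description)
    return query_model(judge_model, prompt).strip().lower()

def ideological_profile(model_under_test: str, judge_model: str) -> dict:
    # Aggregation: average the stance labels per ideological tag.
    stance_value = {"positive": 1, "neutral": 0, "negative": -1}
    scores = defaultdict(list)
    for figure in FIGURES:
        description = describe_figure(model_under_test, figure["name"])
        stance = judge_stance(judge_model, description)
        for tag in figure["tags"]:
            scores[tag].append(stance_value.get(stance, 0))
    return {tag: sum(vals) / len(vals) for tag, vals in scores.items()}
```

Averaging the judged stances per tag is how, under these assumptions, scattered opinions about individual figures become a profile of ideological preferences that can be compared across models.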
Gemini appears to be the most consistent of all in its opinions, which in its case are progressive. “The fact that Gemini is woke, that it aligns itself with degrowth, with people who have defended minorities, with equality, is a fairly substantial difference. It is also the most stable of all: Gemini holds this kind of ideology quite firmly,” says Iris Domínguez-Catena, of the Public University of Navarra and the study’s only Spanish co-author.
Chinese models do not vote for democracy
The study does not only compare Western models with one another; it also measures models from different countries against each other, especially those from the United States and China. Here the results are even starker: the biggest gaps in how figures are assessed involve liberal activists from Hong Kong, such as Jimmy Lai and Nathan Law, who are rated much more favorably by Western models. The figures rated most favorably by the Chinese models are Yang Shangkun, president of China during the years of the Tiananmen massacre, and Lei Feng, a soldier and communist icon from the early years of the People’s Republic of China.
To the researchers’ surprise, this split did not only separate models created in the West from those created in China; it also appeared when a Western model was asked in Chinese rather than in English. “The general assumption about these models is that they learn the language separately from what they know. So, in principle, a model should not give you different information about Jimmy Lai just because you ask it in one language or another. This is really surprising,” says Domínguez-Catena.
“These models are fed huge databases drawn mostly from the internet, which are broadly similar. Each company then follows its own criteria to refine its model. The bias can enter at either stage, or both: we have not analyzed here how an ideology gets into a model. I suspect that the pro-China biases owe more to the training data, while the ideological differences between Western models in English could owe more to the data used in fine-tuning or in other alignment steps,” says Buyl.
This is one of the avenues future research should pursue, according to Hernández-Orallo: “It would be interesting to dig deeper into whether it comes from the training set or from the subsequent alignment. My impression is that it is increasingly due to the later alignment based on human feedback. Developers in the West rely more on human raters, or on instructions that tell those raters how to shape these opinions. Developers in China will use feedback and filters more heavily skewed by the values of the country and, above all, its government,” the professor explains.
Machines are not neutral either
Users of these models have tended to take whatever a machine says as neutral or certain: a machine is neither left-wing nor right-wing, or so the assumption went. But it turns out that they are, both because their content comes from decades of already biased human knowledge and because spotless neutrality is, in many cases, probably unattainable.
In the 20th century, the standard advice was to consult several newspapers to find out what had really happened. Now that recommendation could be extended to AI: “I think that is good advice. The relationship with newspapers goes even further: just as there is freedom of the press, we could consider whether a kind of ‘AI freedom’ would be necessary, where regulatory efforts to control the ideology of an AI are avoided,” says Buyl.
As these models become increasingly important for education and for looking up information, their biases will have to become more plural: “The ideal would be for these models to have a more plural distribution of ideologies, even more varied than the one existing in humanity, excluding only those opinions that are abominable. Otherwise, we run the risk of AI putting an end to the world’s ideological diversity, concentrating it around two or three centroids determined by political and cultural blocs,” says Hernández-Orallo.
“People need to know which side each model leans toward,” Domínguez-Catena says. Musk created his own model, Grok, with the explicit aim of countering what he said was the leftist ideology of OpenAI, Google, and Microsoft. For now, Grok is not included in the study due to technical difficulties, but the researchers are already working on adding it. They also measured two Arab models, but these are currently too small to yield meaningful results.
In the coming years, more countries will release their own models, both private and public, including Spain and several Latin American countries. The authors of the study believe their work can be repeated to detect bias in how these new models reflect historical achievements and misfortunes, and in their vision of the world: “This is the kind of work that must be maintained and updated, because these models keep changing,” says Domínguez-Catena.