How can a researcher publish a scientific study every 37 hours? What’s really behind the “most-cited” designation? And how can it be that universities and governments pay huge sums of money to get a researcher to change their affiliation?
These are just some of the many questions I’ve been asked since EL PAÍS reported on cases involving a lack of scientific integrity, in which Saudi Arabian universities paid large stipends to European academics to get them to swap their affiliations.
Some of the answers to these questions are very straightforward: No, it’s impossible to produce a genuine study every 37 hours. Administrations poach academics because they fall prey to global university rankings. And citations are (supposedly) made by fellow researchers.
The issues of integrity and commercialization in the field of science – which we’re seeing today at an accelerated pace – are reflective of an outdated, ineffective and underfunded scientific system.
Science – in addition to being key to creating a better world, dealing with climate change or fighting a global pandemic – is also a human enterprise. The scientific career is a profession, just like any other. To enter and remain in this career, researchers have to publish scientific articles and, of course, obtain funding to carry out research that (in the best case scenario) culminates in a publication in a journal that has a high “impact factor.” The cycle never stops.
The principle of “publish or perish” has given rise to unethical conduct and many bad practices. A lot of research ends up being wasted, without having any impact on society. In many cases, this production for the sake of production even fuels the replication crisis, undermining the reliability of scientific research.
Researchers have – in addition to curiosity, interest and other laudable characteristics that bolster a career in the sciences – the obligation to publish, as an end in and of itself. This is because publishing allows us to obtain financing, or a permanent position in a university. A huge part of this system is dominated by the large scientific publishers – who are paid astronomical amounts – and the companies and institutions that determine the impact factor of said publications, based on the number of academic citations they receive.
One of these companies is Clarivate. It compiles a list of the 40 most-highly cited researchers, along with a ranking of 6,938 researchers, divided up by discipline. However, it’s important to note that the number of citations is not an indisputable indicator of quality, but rather, a marker of popularity. The list of “popular” scientists can be accessed openly… but for institutions or countries to make it into the rankings compiled by prestigious outlets, they must also pay significant seven-figure sums.
On the other hand, researchers belong to universities or research organizations that also have motivations to attract students, increase their prestige, or attain other forms of recognition. The main element used to define the quality (and popularity) of a university is, once again, the institution’s placement on certain lists or rankings. Some of these indexes have arbitrary standards, such as whether the institution has a Nobel Laureate on the faculty or as an alumnus, or whether it employs a highly-cited academic… as if a single person could legitimize an entire institution!
The cases revealed by EL PAÍS are only the tip of the iceberg. The field of scientific research is deteriorating because of the way the system is set up. Researchers do the research – financed with public funds – and then the public institutions that they work for pay the big scientific publishers several times over, for reviewing and publishing submissions. Simultaneously, the researchers also review scientific papers for free, while companies like Clarivate or the Shanghai Ranking draft their lists, telling everyone who the good guys are (and leaving out the people who, apparently, aren’t worth consideration).
In the last 30 years – since we’ve been living with the internet – we’ve altered the ways in which we communicate, buy, teach, learn and even flirt. And yet, we continue to finance and evaluate science in the same way as in the last century. Young researchers – underpaid and pressured by the system – are forced to spend time trying to get into a “Top 40” list, rather than working in their laboratories and making positive changes in the world.
As the Argentines say: “The problem isn’t with the pig, but with the person who feeds it.” Consciously or unconsciously, we all feed this anachronistic and ineffective system, which is suffocated by the deadly embrace between scientific journals and university rankings. Our governments and institutions fill the coffers of publishers and other companies, who then turn around and sell us their products and inform us (for a price) about what counts as quality.
It’s pay-for-play when it comes to entering an elite community. But according to new laws that have been passed in countries such as Spain, science is supposed to be a “common good” – it must be returned to the researchers who do it and the society that pays for it.
To prevent the current system from sinking even further into decline, researchers, institutions and other parties have to break the deadly cliques and commercialization within science, while changing the ways that we communicate scientific knowledge and, above all, how we evaluate the merits of researchers beyond their papers. Despite the issues, there’s certainly reason to be optimistic: although we scientists are victims (and accomplices) of the current system, we’re also aware of its weaknesses. We want to change this reality.
After a long debate – facilitated by the Open Science unit of the European Commission – the Coalition for Advancing Research Assessment (COARA) has been created. In the last four months, more than 500 institutions have joined COARA, which – along with other commitments – will avoid the use of rankings in the evaluation of research. COARA is a step forward to analyze – in a coherent, collective, global and urgent manner – the reform of research evaluation. This will help us move away from an exclusively quantitative evaluation system based on journals, towards a system that includes other research products and indicators, as well as qualitative narratives that define the specific contributions of researchers across all disciplines.
As I tell my friends: science is like a parachute. If it doesn’t open up, it won’t help us. In our rapidly-changing times, another kind of scientific system has never been more possible – or more necessary.
Dr. Eva Méndez is a tenured professor in the Department of Library Sciences at the Carlos III University of Madrid and Chair of the EU Open Science Policy Platform.