The hidden labor force behind ChatGPT: The drama of the ‘ghost workers’
We need new labor standards in the artificial intelligence supply chain, mirroring changes in the textile industry
It can seem like magic, but it’s not. The evolution of artificial intelligence (AI), constantly celebrated for its exponential innovation and revolutionary abilities, hides a less glamorous but critical component: its remote labor force. This part of its value chain, crucial but frequently ignored, reminds me of the textile industry’s supply chain of a few decades past. Has anyone been able to forget the Rana Plaza tragedy in Bangladesh, where more than 1,100 people died? This is a parallel story. While in the production of cheap clothing the risks faced by workers shut away in factories in peripheral countries are physical, in this other industry, the digital one, psychological risk reigns: the damage suffered by workers through constant exposure to brutal, disturbing, traumatic and indescribable content. Their invisibility and the absence of a formal employment relationship leave them alone with the aftermath, compounded by the classic fallout of offshoring practices not subject to the rule of law: precarious wages, unpaid labor, exploitation and abuse.

Every video containing murder, suicide, sexual assault or child abuse that does not make it onto the platforms has been reviewed and tagged by a content moderator, or by an automated system trained on data provided by one. Some of these workers are already forming the first unions to defend their rights, but unions alone will not be enough. It is a new global challenge, one that requires the standardization of working conditions in this fast-growing industry.
Anthropologist Mary Gray calls them “ghost workers”: people we do not see, working in remote locations, training the large models that make the world’s most famous chatbot produce quality content. They are not just adults. There are children, too, just like the ones who sew soccer balls together. Their labor conditions are very different from those enjoyed by Silicon Valley employees, where you can become a millionaire before the age of 30. In this other reality, that of the artificial intelligence supply chain, children like Hassan earn less than two dollars an hour. He’s 18 now, but he began at Toloka, a platform dedicated to data annotation, when he was 15. He is from a region of Pakistan. His friends also worked on these platforms after school until well into the night, according to reporting by Wired. They are able to get around age verification processes and wind up doing jobs that are psychologically draining and age-inappropriate. This celebrated industry’s child labor problem is not something that gets discussed.
The global market for data collection and tagging is expected to exceed $17.1 billion by 2030, with a year-on-year growth rate of nearly 30 percent. It’s a space that is constantly adding new competitors. Amazon Mechanical Turk, Appen, Clickworker, Comeup, Elharefa, Microworkers, PeoplePerHour, Prolific, SoyFreelancer, Scale AI (including its subsidiary Remotasks), Terawork and Workana are just a few of the companies that the team at the Oxford Internet Institute has analyzed to reach its conclusion that many of this industry’s labor practices are unfair, to say the least. What can be done about it? The Global Partnership on AI (GPAI), through its AI Fairwork project, and the company Sama voluntarily worked together over the course of a year to audit and improve Sama’s operations in Africa, to the benefit of more than 4,000 workers. These changes demonstrate the power of raising awareness of, and committing to, responsible practices. Mark Graham, professor at the Oxford Internet Institute, says that as technology transforms societies and labor markets, we must remember that there are hundreds of thousands of low-wage workers behind the scenes shaping, annotating and moderating the data sets on which new products and services are built. It is therefore imperative that we establish minimum fair labor standards for all workers in artificial intelligence production networks.
Just as in the textile sector, where consumer demand drove the development of ethical norms and certifications to improve labor practices and transparency, the artificial intelligence industry is called upon to take similar steps. A global regulatory framework is needed to ensure equitable labor practices and prevent the exploitation of vulnerable workers. Global inequality is as evident in artificial intelligence as in apparel, with workers in developing countries receiving minimal pay compared with their counterparts in rich countries. This economic gap perpetuates a form of exploitation that disproportionately benefits the companies and consumers of the global north.
Faced with this crossroads, it becomes crucial to choose a path that avoids exploitation and steers toward a more sustainable and just future. It’s a change that demands a commitment to transparency and the adoption of decent labor practices, alongside the development of common standards. The artificial intelligence sector, amid its boom, should absorb the lessons of the errors made by previous industries, like textiles, and set a course that prioritizes not only innovation but also the dignity and well-being of its workforce.