One second, 150 dismissals: Inside the algorithms that decide who should lose their job
A software company in Russia fired workers based solely on the guidance of artificial intelligence, a situation that could become increasingly common across the globe
You will probably be fired by an algorithm. It sounds like ominous sci-fi dystopia, but it is what awaits the majority of employees in this eventful first third of the 21st century: to be hired and fired by machines, without human intermediation. It is possible that many of these workers will go through this cycle of creative destruction several times during their careers, which are forecast to be turbulent. It is the end of the idea of a job for life that was commonplace until the end of the 20th century.
Last August, Xsolla, the Russian subsidiary of a software and interactive services company headquartered in Los Angeles, carried out an avant-garde workplace restructuring that caught global media attention. Without prior warning, Xsolla decided to fire 150 of its 450 employees at its offices in Perm, based solely on the verdict of a performance-related algorithm that considered the workers “unengaged and unproductive.”
It was the work of neither the coronavirus pandemic nor the oft-invoked “structural review.” On this occasion, the reason given to justify a mass redundancy was the cold calculation of an artificial intelligence program fed by “big data.”
Xsolla is one of many examples of a modern and disruptive company employing artificial intelligence in its decision-making processes. What is genuinely novel is that the functions the machine has taken on in this instance are nothing less than those of the CEO and the human resources and talent management departments.
That machines would one day replace human workers is something that the 19th-century English Luddites were convinced of and that Charlie Chaplin laid out so eloquently in his 1936 movie Modern Times. What nobody expected was that machines would become our bosses.
There is one striking precedent: Amazon, the mother of all disruptive companies, caught the attention of Bloomberg due to its proclivity for firing people based on IT criteria. One of the fired workers, Stephen Normandin, was interviewed earlier this year and the headlines turned him into an emblem of this apparently cold and dehumanized process.
Normandin, a 63-year-old US Army veteran from Phoenix, Arizona, had been working as a delivery driver for Jeff Bezos’ company for four years when he received an email informing him, with no sugar-coating, that his contract had been terminated. The algorithm tracking his daily activity had considered him unfit for the job. He had been fired by a machine.
In the Bloomberg interview, Normandin describes himself as “an old-school kind of guy,” who gives every job “110%.” He took the loss of his job as a personal affront and said it was not justified. Nobody spoke to him to explain the criteria that led the artificial intelligence to question his commitment and competence. “I’ve worked 12-hour shifts in a community diner for Vietnamese refugees in Arkansas,” he said. “I’ve proven on multiple occasions that I am a disciplined and responsible person. I do not deserve to be fired without someone listening to me, taking into account my circumstances or providing an explanation.” In his view, he was fired due to his age, without his enthusiasm for work or his excellent physical and mental health being taken into account, but his attempts to prove this to Amazon through arbitration proved fruitless.
Spencer Soper, who wrote the Bloomberg article, says that Normandin’s fight against the machine is a “losing battle” and the result of a “sinister misunderstanding.”
“Men like him continue to believe in the culture of hard work and dignity of employment, while companies like Amazon base their model on the increasing automation of production processes and labor routines that almost completely exclude the human factor,” Soper says.
In an interview with CNBC, Bezos stated his belief that the only business decisions that it is essential to leave in the hands of human beings are “strategic ones.” Everything else, the “day-to-day” decisions, however important they may be, he prefers to leave to artificial intelligence algorithms because they act “taking into account all of the relevant information without emotional interference.” For the Amazon founder, “artificial intelligence optimizes processes and, in the medium and long term, will create many more jobs than it destroys.” Unfortunate cases from a human perspective, like that of Normandin, are no more than collateral damage in a revolution that is advancing inexorably.
For Fabián Nevado, a labor law expert at the Catalonia Syndicate of Journalists, “it is disgraceful from a moral point of view that you can be fired by an algorithm applying general criteria that do not take into account personal circumstances and, above all, with no human being bothering to communicate in person with a minimum of respect and empathy.”
Nevertheless, Nevado does not believe that these kinds of events are only likely to take place in poorly regulated labor markets, such as those in Russia or the United States. “On the contrary, in Spain, despite what people believe, dismissal is free. The difference is that the reasons for the dismissal have to be put forward, and if there is no agreement, a judge will end up deciding whether the reasons given are convincing.” It is also perfectly legal for companies to use artificial intelligence to monitor the performance of their employees, as long as this is done within the framework of the Personal Data Law. “In any case,” says Nevado, “the person doing the firing will always be an employer, a human being or a group of them. But the machine could be the tool used to justify the dismissal. In fact, this is already happening in many cases.”
If all else fails, a judge will rule on the matter, a little like the recommendations of the VAR system in professional soccer, that controversial tool that was supposed to revolutionize sporting justice. What is not acceptable under any circumstances, says Nevado, “is that neither the bosses nor the human resources departments take on responsibility for the dismissal, hiding behind algorithms and other technological innovations to pass the buck and further dehumanize labor relations.” If the trend continues, Nevado predicts a “very dark future” for human resources departments.
These could disappear entirely if the idea of leaving the management of talent (contracts, hiring and firing, pay rises, disciplinary processes, bonuses, etc) fully in the hands of machines crystallizes. “And not just human resources departments,” he adds. “Many management positions will also be in jeopardy, above all those whose salaries depend on the ability to monitor the workers under their charge.” In a world of innovative entrepreneurs, cutting-edge management technology and an interchangeable workforce, the foreman will be unnecessary.
Frank Pasquale, a professor at New York’s Brooklyn Law School, addresses these issues in his book New Laws of Robotics. In the view of this intellectual, who defines himself as a “humanist with technological competence,” artificial intelligence should never replace the experience and capacity for human reasoning in “areas that have clear ethical implications.” That is to say, a machine should never be permitted to decide who gets shot, who gets run over or who gets fired, because it would do so based purely on criteria of efficiency. Decisions like these cannot be automated and cannot be disassociated from a process of “responsible reflection,” a uniquely human tool. For Pasquale, the “digital boss” will always be a tyrant because it dehumanizes people by treating them like they are not people, “by converting them into mere tools and denying them their status as free and rational creatures.”
The safety net against using algorithms to fire employees, says Spain’s labor union UGT in a working document titled Algorithmic relations in labor relations, has to take the form of clear regulation that demands transparency over what criteria the artificial intelligence uses. “The principle of precaution has to be applied,” says UGT head of digitalization José Varela. Because algorithms, like any product of human intelligence, can make mistakes. Furthermore, they do not concern themselves with whether their decisions could have a negative impact on “the security of people or their basic rights.” In other words, if we are going to get fired by an algorithm, we must demand first of all that it knows what it is doing.