Five years ago, following the death of a friend and bandmate, David Barberá decided to pay for a Google Drive cloud account. He wanted to store music files so that his friend’s children would one day hear how their father played. “So I signed up for the Google Drive service,” he says. “It was the safest thing that occurred to me so that Javi’s music would not be lost, as the children were very young then.”
Barberá, a 46-year-old high school teacher from Valencia, in eastern Spain, had not foreseen a key detail: Google’s terms of service conceal a system that disables accounts when it detects prohibited content, including sexual material involving children or terrorism. “The only thing I can think of is that maybe I uploaded something I shouldn’t have uploaded, like movies I downloaded back in the days of [peer-to-peer file exchange program] eMule. Could there be child pornography or terrorism in there? There could,” explains Barberá in a long telephone conversation.
At first Barberá had no clue why he had been locked out of his account. He only began to connect the dots after reading messages in online forums and news articles. He describes a desperate experience of helplessness as he fought to speak to a human being at Google and find out how exactly he had violated the company’s abuse policies.
In July of this year, Barberá needed some music files that he had on old hard drives. In order to better organize the material, he started uploading everything to his Google Drive account, for which he still pays every month in order to have two terabytes of cloud storage space. Within minutes of starting the process, Google disabled his account with a message saying that “harmful content” had been found.
He filed several claims, answered emails from apparent Google employees who asked for new details (and who called themselves Nahuel, Rocío, Laura), and called every company phone number he could find without ever managing to speak to a human. At that point he asked a relative who works in journalism for help, and eventually got to chat with an alleged Google employee who asked him for “patience.”
Out of this entire process, Barberá only got one concrete answer, and it was a message addressed to his wife’s email (which he had added as a secondary account). The message said: “We believe that your account contained sexual content that may violate Google’s terms of service and may also be prohibited by law.” But then it added: “We have removed this content” and “if you continue to violate our policies, we may terminate your Google account.” This message was received on August 26 and, although it sounds like a warning, the account is still suspended.
“I have everything there from the last 14 years, and for the last five years, I only have it there,” says Barberá, alluding to the fact that he does not keep files on external drives. The loss of the Google account does not only mean that his photos and videos are gone. Barberá has also lost class material, a blog that he kept, and his YouTube account, not to mention other services that he had contracted with his email, from Amazon to Netflix to a German music app.
In August, The New York Times published a story with two similar cases in the US. Google told the reporter that the problematic images were photos of children’s genitalia that two parents had taken to send to the pediatrician for a skin problem. When EL PAÍS asked about Barberá's case, Google replied that they could not provide that information because of privacy laws, since the user involved is European. The company said it would only share that information with the concerned party. But Barberá has yet to receive any details.
Google did offer this newspaper access to employees on condition that their identities would not be revealed and that they would not be quoted verbatim. According to the company, which insisted it was not talking about this specific case, a “sexual content” email is only sent in cases of child abuse, not adult porn. Why then, that “don’t do it again”? Google didn’t elaborate, other than to say that it all depends on what was in that account. A Google employee asked if this newspaper was going to name the affected user, but did not explain why he wanted to know.
EL PAÍS has found three other cases similar to Barberá’s: two more with Google accounts and one with Microsoft. All are from 2022, and only in one case has the account been restored. That case involved not alleged sexual images of children but a problem with the password. The decision to restore it was never explained, either.
Another victim, who asked to remain anonymous because his company may have Google among its clients, turned to “a close friend” who works within the company in Spain. This friend does not work in a department linked to content moderation, but he did some internal research and the response was less than optimistic: these cases are handled overseas and he had no idea if anyone actually reads the claims.
This user had seen his account disabled after uploading 40 gigabytes of photos, videos and WhatsApp conversations that he had on his hard drive. The upload was so large that his company’s cybersecurity managers called to ask him what was happening. Google does not clarify when or how it analyzes the accounts of its users. But in both Spanish cases, as well as those documented in The New York Times, the suspension occurred when file movements were detected.
The third victim is suing Microsoft, desperate because he has lost data from his private life but also from work: “His master’s degree, tax forms, photos of the birth of his children and work databases. He is suffering,” says his lawyer, Marta Pascual. “The judge might say that his right to privacy has been violated, although I have found no case studies.”
Pascual’s client believes that the suspicious files come from WhatsApp groups, whose content was backed up automatically. The three victims have children and, although they do not remember photos for the pediatrician, they did have the typical images of children in the bathtub, in bed or in the swimming pool.
Microsoft gives out even less information than Google. It only sends a few statements about how it fights child pornography in its systems: “First, we fund research to better understand how criminals abuse technology. Second, we develop technology like PhotoDNA to detect cases of child sexual exploitation. Third, our staff quickly investigates reports of abusive content and removes it. And fourth, we work with other technology companies and law enforcement to refer crimes.”
In conversations with this newspaper, both Google and Microsoft showed remarkable trust in their detection systems. Yet Google’s software is suspending more and more accounts, and with them come more potential false positives: between July and December 2021 it suspended 140,868 accounts, almost double the figure for the first half of 2020.
Google analyzes accounts for child-related sexual material with two technologies. The first targets known images: each known pornographic image has a numerical code, or fingerprint, that identifies it. If the systems find images that match those codes, the account is disabled. This is the PhotoDNA system cited by Microsoft.
The problem is the new photos. For those, Google has created a second system that interprets the images and assigns them a probability that they are child pornography. Then, in theory, they go to human reviewers who decide if a photo crosses the sexual threshold.
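The two-tier pipeline described above can be sketched in a few lines. This is purely illustrative: PhotoDNA’s actual fingerprinting algorithm is proprietary, so a toy “average hash” stands in for the real perceptual fingerprint, and the blocklist, classifier score and threshold are all hypothetical.

```python
# Illustrative sketch only. A toy "average hash" stands in for a real
# perceptual fingerprint (PhotoDNA is proprietary); the blocklist and
# the 0.8 review threshold are invented for the example.

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the mean brightness."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Hypothetical blocklist of fingerprints of already-known illegal images.
KNOWN_HASHES = {
    average_hash([200, 10, 180, 30, 220, 5, 190, 25, 210]),
}

def triage(pixels, classifier_score, review_threshold=0.8):
    """Route an uploaded image through the two tiers:
    fingerprint match -> automatic action; high model score -> human review."""
    if average_hash(pixels) in KNOWN_HASHES:
        return "disable_account"   # matches a known image's fingerprint
    if classifier_score >= review_threshold:
        return "human_review"      # new image the model rates as likely abusive
    return "allow"

# Known image: blocked regardless of the model's opinion.
print(triage([200, 10, 180, 30, 220, 5, 190, 25, 210], 0.1))
# New image with a high score: queued for a human reviewer.
print(triage([50, 60, 55, 52, 58, 61, 54, 59, 57], 0.95))
# New image with a low score: allowed.
print(triage([50, 60, 55, 52, 58, 61, 54, 59, 57], 0.2))
```

The sketch also shows where false positives enter: an innocent photo never matches the blocklist, but a mistaken classifier score above the threshold is enough to put it in front of a reviewer.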
Google has also consulted pediatricians so that the system can distinguish images taken for medical purposes from other content. But, despite the laudable purpose, the system can also ensnare many innocent people, in some cases even triggering a police investigation.
“I have a friend who is a member of the National Police and I called him to tell him about the case and he told me that he would ask colleagues specializing in computer crimes,” says Barberá. “They told him they didn’t know of any case like mine.” In the US, companies such as Google and Microsoft must report any suspicious findings to the National Center for Missing and Exploited Children (NCMEC), which in turn notifies the police. The NCMEC sent 33,136 reports to Spain in 2021. These are usually cases that are not investigated, and in any case, the police do not report back to Google or Microsoft that a given person is not a suspect. As a result, the companies make their own decisions and it is up to the victim to justify why the material was legitimate.