
Kashmir Hill: ‘They shouldn’t be collecting photos from social media without people’s consent, but they keep doing it and nobody’s stopping them’

In her latest book, the ‘New York Times’ reporter explores the challenges posed by a technology that even Google and Facebook decided not to use

Journalist Kashmir Hill wrote the book 'Your Face Belongs to Us.' Earl Wilson
Manuel G. Pascual

In November 2019, journalist Kashmir Hill received a tip that a startup called Clearview AI claimed to be able to identify anyone from a picture. Her source said that the company had collected billions of photos from social networks like Facebook, Instagram and LinkedIn without telling either the websites or the people involved, and that if you uploaded someone’s photo to the app, it would show you every website where that person appeared, along with their full name and personal information.

Until then, no one had dared to develop anything like this; an application capable of identifying strangers was a step too far. It could be used, for example, to photograph someone in a bar and find out in seconds where they live and who their friends are. Hill, a reporter for The New York Times, published the story of this small company, which in a few months went from total unknown to receiving the backing of Peter Thiel, one of the godfathers of Silicon Valley, and becoming a service coveted by police forces in the U.S. and abroad. She tracked down Hoan Ton-That, the inscrutable engineer who co-founded Clearview AI with Richard Schwartz, a politician with a long career behind the scenes in the Republican Party. Hill’s reporting became the basis of her book Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy.

“I just thought Clearview AI was striking because of what a small ragtag group it was. Unusual and fascinating characters. And I just thought that really captured something about the tech industry, a certain kind of naivete. And just this desire to create these things, these really transgressive new technologies without a serious reckoning with the implications and how it would change society,” she explains by videoconference from New York. Named after one of Led Zeppelin’s most legendary songs, Kashmir Hill, 43, worked at publications such as Gizmodo, Forbes and The New Yorker before joining The New York Times in 2019. The native Floridian was also struck by the fact that such a young company could master a technology as complex as facial recognition in such a short time.

Question. What is so special about automatic facial recognition systems? Why are you interested in this technology?

Answer. Facial recognition technology is the key to tying people in the real world to everything that’s knowable about them online. Uncontrolled use of facial recognition would eradicate our ability to be anonymous. Governments would know where we are and what we do all the time. That’s how we’re seeing it used in China, where it tracks people constantly, and Russia is using it to identify protesters against the invasion of Ukraine. The face is essentially the last bastion of privacy. China has now developed a “red list” for people in power who don’t want to be seen, who don’t want to be tracked all the time. They can put their faces on a special list that says, ‘I want to be invisible; the cameras will not remember that they’ve seen me.’ I just think it’s so telling that the privilege there now is not to be seen by the cameras, and that powerful people are aware of the risk of being tracked all the time.

Q. Do you think the Clearview AI story is representative of the facial recognition story more broadly?

A. Yes. I think facial recognition technology is a double-edged sword. It can be used in so many different ways, and some of them are very positive: solving crimes, finding murderers, finding rapists. And then there are very chilling uses, from authoritarian states tracking dissidents and political radicals to the creepy guy using it to dig up information about women who appear in pornography. Clearview AI, specifically, originally had really troubling ideas for what to do with the technology. I mean, they were trying to sell it to Hungary so it could be used to keep out democratic activists and human rights workers. And so in that way, I think it’s almost a reassuring story: this is a troubling technology, and they wanted to use it in some of the most troubling ways, but ultimately they just wound up working with police and helping to solve crimes.

Q. Clearview AI has been fined and banned in several countries. What is their current situation?

A. They’re still operating in the United States, still working with lots of law enforcement, including the Department of Homeland Security, the FBI and many local police departments around the country. They are fighting a lot of legal battles, but they have had some success: a court in the U.K. said that the U.K.’s privacy regulator couldn’t fine them. And it does seem that their decision to work only with law enforcement and police has allowed them to avoid a lot of negative outcomes. We’ll see what happens in other European states.

Q. Do you think that the boom in generative artificial intelligence (AI) has served as a smokescreen for the expansion of companies like Clearview AI?

A. I think that people are very focused on generative AI right now and that it has, maybe, moved attention away from the way that technology threatens our privacy. But in some ways, the concerns are the same, right? The New York Times, where I work, has sued OpenAI for scraping all of our articles and using them to train their software. And that’s very similar to the concerns with companies like Clearview AI that scraped all of our faces from the internet without anyone’s consent. And honestly, I worry very much about facial recognition paired with generative AI, like the idea of creating reputational landmines for somebody. For example, you can generate their face on a pornographic image and just put it out on the internet, knowing that someday somebody will do a face recognition search of them and find it. I do see these technologies having some of the same troubling practices, and then I see the way that they’re going to intermingle to create more concerns for us. Part of why I wrote the book is I wanted people to understand just how powerful facial recognition technology has gotten. It really is trivial now to identify somebody and to find all of the photos of them on the internet. And I just think that has such troubling implications for the future.

Q. What is the social perception of facial recognition in the U.S.?

A. There’s real resistance to the use of live facial recognition technology in the United States. Lawmakers and the public have pushed back against the idea of searching for people in real time on the streets, which is something that is happening more in Europe, or at least there’s an openness to it in the U.K., for example, where they send out vans with facial recognition cameras on their roofs, searching for people. At the same time, I think people, for the most part, like the idea of using facial recognition technology after a crime has been committed to try to identify the perpetrator.

Q. How is it possible that a company whose business rests on a product built on the non-consensual downloading of billions of photographs of human faces can operate like just another business?

A. When I first wrote about them, they had 3 billion photos. When I finished the book, I think they had 20 billion. They now have 40 billion photos; they just keep growing their database. And there are countries that have said that what they’re doing is illegal. They shouldn’t be collecting people’s photos, particularly from social media, without people’s consent, but they keep doing it and nobody’s stopping them. Within the U.S., we have had some legal precedent saying that scraping is legal, and they’re based in the U.S., so that has really protected them. But again, that’s the question, for Clearview AI and for generative AI alike: should these companies just be allowed to collect whatever they want and use it however they want in these very lucrative ways? I think that’s one of the biggest questions of modern times.

Back in 2011, Google’s CEO Eric Schmidt said it was the one technology that Google developed and did not release

Q. You point out in the book that Google and Facebook had already produced their own facial recognition technology before Clearview AI but decided not to launch it.

A. I really did find that surprising, because Google and Facebook are not exactly known for being conservative when it comes to new uses of data. I think part of it is that they were big companies that had come under a lot of scrutiny for privacy abuses in the past, so they were a little more careful and worried about the legal and regulatory risks of releasing such a radical technology. I also think they were worried about the technology itself. Google’s CEO at the time, Eric Schmidt, said back in 2011 that it was the one technology Google had developed and did not release. It was interesting to me that, in the recent news cycle, Google made the same decision with generative AI. They had developed something like ChatGPT, and they thought, ‘I don’t think the world’s ready for this yet.’

Q. Do you think we are heading towards a hyper-vigilant society?

A. We do live in this world of cameras everywhere, but we do not have facial recognition running on those cameras, and I think that’s a big step to take, or to avert. You’re seeing the debate right now in Europe: should we have real-time facial recognition? What if a child gets kidnapped, or there’s a fugitive on the loose? Should we be able to find that person in real time? Once you set up that infrastructure, it can be used in all kinds of other ways. We could choose not to take that step and preserve some privacy, preserve the ability to move through the world with autonomy, without being tracked all the time. I do think we can still decide not to live in a world in which we’re tracked by face every time we leave our houses. And it’s a decision to make right now.

Q. How do we draw the line between appropriate and inappropriate use of facial recognition?

A. I think one big question right now is retroactive versus proactive: do you use it to solve a crime that’s already been committed, or to try to prevent crimes and find people in real time? That’s a big divide we’re navigating right now. The line we’ve drawn in the United States is that it’s okay to use it for security purposes. If you’re a police [officer], you can use it to solve crimes; if you’re a business, you can use it to try to identify shoplifters and kick them out. It’s anything outside of security that tends to make people feel more uncomfortable and worried.

Q. Do you think the public will get used to and tolerate this technology?

A. I think part of what will happen is that we’ll have use cases, and we’ll see how comfortable people are with them. It was quite shocking here when Madison Square Garden started stopping lawyers at the door and turning them away because they worked for firms that had sued the company. All of a sudden, people realized that facial recognition technology could be used in this very alarming way, to punish people for who they work for or what they do. There was also a period when people were being recorded all the time on phone calls. At the White House, there are all these tapes of Richard Nixon making his plans for Watergate, recorded because he was secretly taping all of his conversations. People were really alarmed by that. They said, ‘I don’t want to fear all the time that everything I say is being recorded,’ and so we passed laws that made eavesdropping and wiretapping illegal, except when police have special court orders. That’s why all these cameras we have around the country, around the world, record only video and not sound: we decided we didn’t want to live in a world in which everything you say is recorded. And I think we’ll go the same way with our faces.
