“The difference with this app is that it uses AI (artificial intelligence) to monitor the images [the teenager] consumes, the texts they read and what they write,” psychologist Alicia González, an influencer with a half million followers, says on Instagram in a video paid for by Bosco, a parental control app. “But you don’t have access to all their communications and internet history, you’re only going to receive an alarm if they receive offensive messages or see images with inappropriate content,” she adds.
Bosco promises to send a nightly report with a “summary” of kids’ online activity, but without parents “seeing the content”. What the app sees and does with that data is another matter. EL PAÍS asked González if she took this factor into account when she agreed to the collaboration with Bosco, but did not receive a response before the publication of this article. A standard fee for a promotional video like the one she made, given her number of followers, can be around $5,400, although this number can vary.
Parental control apps allow remote monitoring of what is happening on a teenager’s cell phone. They come in all kinds, more or less intrusive both in their regard for minors’ privacy and in how they handle minors’ data. Millions of parents around the world use these applications. Experts believe that their usefulness decreases as kids get older, and that there is no one-size-fits-all solution. But right now, with authorities taking action against minors’ use of the internet, they are a resource that is growing in popularity. “We are seeing more offerings when it comes to these apps because there is a greater demand in the market, driven by the fact that cell phones are being used earlier, and in more diverse ways,” says Jorge Flores, founder of the organization Pantallas Amigas (Friendly Screens), which promotes a healthy use of technology.
The variety in these apps is huge. “There are certainly a lot of parental control apps being developed to help keep kids safe online. The concern is over how they are designed and sold,” says Karla Badillo-Urquiola, a professor at the University of Notre Dame.
The most popular program is Google’s Family Link, which, for example, allows parents to set schedules of permitted use and control which apps kids can download. “There are details that make it less of an invasion,” Flores says. “The kid receives information about their own screen time: ‘Today I spent three hours on Instagram, let’s see if I can lower that.’ Some scheduling can help to manage, even prioritize, screen time. An automatic system that reduces screen time, even if you know your parents are behind it, helps to reduce daily clashes, which can wear down communication in a relationship,” he says.
Spying on minors
But a range of control options means that there are tools that come close to spying on minors without their consent. In response, teenagers have developed a range of methods to evade such surveillance, from creating parallel accounts and using uncontrolled browsers to speaking in code. Young people’s savvy when it comes to circumventing digital monitoring can be impressive. One parent describes in an App Store review how their son overcame Family Link surveillance: “I was able to verify that by using the Duolingo app, my son was able to open a Chrome browser, without any control, logging into his Facebook account.”
“These methods don’t really contribute to the development of resilience and skills among kids and families”— Jun Zhao, University of Oxford
Today, the primary objective is to limit mobile phone usage among adolescents. But experts agree that focusing on control is not a good long-term solution. “The market trend of focusing on solutions that are based on control and monitoring has proven to be of little use when it comes to guaranteeing kids’ online security, and doesn’t help them to learn about risks,” warns Jun Zhao, senior researcher in the department of computer science at the University of Oxford. “These methods don’t really contribute to the development of resilience and skills among kids and families,” she adds.
Families sometimes try to achieve via parental control something that parents themselves are unable to practice: reasonable cell phone usage. “Minors see that their parents’ preferred method of control is restrictive and based on orders, and that the parents themselves are not setting a good example,” says Beatriz Feijóo, professor at the International University of La Rioja (UNIR). “The first people who should be reflecting on their use of cell phones and social media are the adults, including the example we are setting for kids. The most appropriate mediation technique is active mediation, which can be much more complex. Installing apps is a short-term fix, while active mediation has a more long-term perspective, because it encourages work at a critical and ethical level, and requires a lot of connection with kids.”
No miracles, but lots of mess
Without such agreement, problems can multiply, and not only between family members. These are sensitive issues, with highly complex ethical implications. “Spying without consent is not the way to go,” says Flores. “It doesn’t build trust. I came across the case of a mother who, by spying on her daughter, had discovered a critical situation for one of her daughter’s friends. The friend was getting into big problems, and the mother was faced with the dilemma of keeping quiet and taking on that guilt, or intervening and telling on her. I told her that it was now her dilemma to resolve. This is not about technology; it’s a different kind of dilemma.”
“Using apps, coupled with regular familial communication, works better than simply decreeing screen time limits”— Tiffany Ge Wang, University of Oxford
There are parents who think that AI apps can work miracles: “Does [BoscoApp] know how to tell when teens are speaking in code to try to trick it?” a mother asks González on Instagram. “Hmmmm, it can probably tell sometimes,” González responds, optimistically.
Artificial intelligence can do more and more things, but in the realm of parental control apps, it can be a problem. “AI is seen as a potential solution to detecting risks on the internet,” says Badillo-Urquiola. “Many of these apps use AI to detect inappropriate language or images, but the inaccuracy and bias of these algorithms can be harmful. The concern is that AI needs reams of data to train itself well, with its accuracy dependent on collecting intimate data from teens. At that point, the concern becomes who has access to that data and what they do with it.”
Invasion of privacy
The invasion of their privacy can mean that young people become the victims of data-collecting platforms. “It is important to keep in mind that the vast majority of apps seek to generate profit by collecting user data in order to show them personalized ads,” says Alvaro Feal, a researcher at Boston’s Northeastern University and co-author of a study on the privacy policies of 46 parental control apps, which have collectively been downloaded more than 20 million times. “Therefore, the use of these apps, which by definition need to have access to a large amount of personal data, carries risks. In our study, we found that the majority (72%) of the apps we analyzed shared data with third-party companies. Fewer apps (11%) sent unencrypted data. In some cases, this data can be as sensitive as the child’s location,” he says.
It is a curious contradiction that parents looking to protect their children from the evils of the internet end up making them more vulnerable. Some parental control apps are merely a channel to obtaining information from minors and future consumers: “When children are connected, whether it’s through their mobile phones, tablets or voice assistants, their data is constantly being collected, analyzed and processed by many companies. This allows these companies to send them personalized game promotions or ads. People don’t realize how data is handled across all platforms, allowing these digital companies to have a much more complete picture of our children than we could even imagine. This knowledge is often abused in order to prolong the time children spend online, and to expose them to less appropriate content,” says Zhao.
Children are becoming increasingly aware of the use of their data by these companies, according to scientists at the University of Oxford. “Our research has shown that children in the UK, from the age of 10, are already starting to take control of their data, and are even showing signs of something like data activism, demanding more transparency and access. This demand for autonomy over their data is even stronger in the older kids we have worked with,” says researcher Tiffany Ge Wang.