AI fuels rise of content farms and fake news outlets
Websites use artificial intelligence tools like ChatGPT to publish thousands of articles every day solely to generate advertising revenue
A website that publishes for the express purpose of generating revenue from advertising is commonly known as a content farm. They’ve been around for years, ever since someone saw the potential in producing content at a low cost. Quantity is key, as is monetizing the content through automated platforms like Google AdSense. The rise of generative artificial intelligence (AI) tools like ChatGPT and Google Bard has now propelled content farms to a new level, publishing material at an industrial scale.
A study by the NewsGuard misinformation monitor found that an increasing number of content farms now use generative AI. These websites publish articles created by chatbots with little or no human editorial oversight. The numbers are astronomical. One of the websites analyzed in the study — World Today News — published around 8,600 articles during the week of June 9, an average of roughly 1,200 articles per day. The other two websites analyzed by NewsGuard published 6,108 and 5,867 posts, respectively, that same week.
“They are clearly using AI to generate low-quality clickbait content,” said McKenzie Sadeghi, a senior analyst at NewsGuard. “These websites use technology to produce articles faster and cheaper.” Sadeghi says human intervention in the process has practically disappeared. “In the past, these web pages used to rely on a team of human contributors — freelancers who got paid to write content. But now, it doesn’t really feel like there’s much human supervision at all anymore.”
Some of these content farms have even published chatbot error messages as headlines, exposing unreviewed AI-generated content. NewsGuard analysts have seen headlines like, “Sorry, as an AI language model, I am not able to access external links or web pages on my own,” and more disturbing text like, “Sorry, I cannot comply with this instruction as it goes against my ethical and moral principles.”
Content farms are driven by one thing — the more they publish, the more traffic they generate to their websites, leading to more ad clicks. According to NewsGuard, over 90% of these ads are displayed via Google Ads, which automatically places ads on affiliated pages. Between May and June 2023, NewsGuard detected 393 ads from 141 large brands on 55 AI-driven websites.
“Google doesn’t have a policy that specifically bans AI-generated content. However, they do have a policy against spammy and low-quality content, which is basically what these sites offer,” said Sadeghi. Google raked in a staggering $224.47 billion in advertising revenue in 2022, according to Statista. However, automatic ads contribute only a small portion to this figure, since the majority of Google’s revenue comes from search advertising.
The use of generative artificial intelligence in content farms is growing rapidly. “We’re finding 25 to 50 sites like these every week. In early May, we found 49 websites. Now, we have 277 websites on the list. Some are new, while others that have been around for years are starting to utilize artificial intelligence,” said Sadeghi.
Most of the websites monitored by NewsGuard feature relevant ads and don’t intentionally spread fake news. However, they sometimes venture into the realm of misinformation with headlines like “Can lemon cure skin allergies?” and “Five natural remedies for attention deficit disorder.” But overall, their main shortcoming is low-quality content that is often plagiarized.
The real problem comes from using generative artificial intelligence to intentionally create and spread misinformation. David Arroyo, who works on fake news detection for the Spanish National Research Council (CSIC), says AI provides the means to produce more fake news. “The phenomenon of disinformation is poised to escalate due to the availability of these tools,” he said categorically.
Ammunition for disinformation campaigns
As early as 2017, an article in the journal Nature warned about the link between fake news and automated advertising, arguing that most of the fake news created during the 2016 U.S. elections was driven not by political motives but by economic incentives. “There was already discussion about the ecosystem of advertisers associated with the generation and dissemination of fake content domains. The use of AI tools further amplifies this phenomenon, as it substantially enhances the capacity to produce convincing content that looks authentic,” said Arroyo. The CSIC has detected an increase in disinformation in recent months, although Arroyo does not attribute it all to AI. “Identifying a single cause can be challenging. It’s important to consider the ongoing electoral processes in Spain, as well as the distortions caused by pro-Russia movements related to the war in Ukraine.”
A few months ago, NewsGuard conducted a study of ChatGPT (versions 3.5 and 4) to evaluate its potential as a creator of fake news. “The chatbots could actually generate misinformation on various topics like politics, health, climate and international issues,” said Sadeghi. “The ease with which these models can be manipulated becomes evident when they produce misinformation in response to human guidance.” On top of that, these models boast an astonishing capacity for generating content at an industrial scale.