Long considered a line that few were willing to cross, California this week became one of the few US states to stand up to Big Tech companies and strengthen regulation.
The Democratic-led state adopted legislation that requires social media platforms such as Twitter and Meta (the owner of Facebook) to disclose their content moderation policies. The bill, AB 587, signed by Governor Gavin Newsom, is meant to fight the disinformation, hate speech and extremism that have permeated public discourse in the United States, with digital channels serving as the main means of propagation.
The bill took shape in California as a direct reaction to the January 2021 assault on the US Capitol by supporters of former president Donald Trump. Beginning in 2024, the legislation will require companies to deliver detailed reports to the California Attorney General’s Office explaining how they moderate online debate. This includes disclosing each company’s system for classifying offending content, which comments are deleted, and whether moderation is left to artificial intelligence. Companies that do not provide this information will be fined.
“California will not stand by as social media is weaponized to spread hate and misinformation that threatens our communities and our core values as a country,” Newsom said in a statement Tuesday.
The initiative has been criticized by the industry. “Forcing social media to share their content moderation policies and strategies in detail is like giving thieves a plan of your house,” said Adam Kovacevich, who heads Chamber of Progress, a center-left coalition backed by several technology companies, including Meta and Google. The group does not rule out challenging the state law in court, arguing that this kind of online regulation threatens freedom of expression, which is protected by the First Amendment.
California Age-Appropriate Design Code Act
AB 2273 establishes the California Age-Appropriate Design Code Act, which requires online platforms to consider the best interests of child users and to default to privacy and safety settings that protect children’s mental and physical health and wellbeing, Newsom explained in a statement.
“We’re taking aggressive action in California to protect the health and wellbeing of our kids,” said Newsom, adding that as a father of four, he was “familiar with the real issues our children are experiencing online.”
Under the federal Children’s Online Privacy Protection Act (COPPA), a child is defined as an individual under the age of 13. The California law raises this threshold to 18, meaning online products and services “likely to be accessed” by children under the age of 18 must meet new legal obligations. For example, all default privacy settings must be configured to the highest level of privacy, so that strangers cannot send messages to minors. Companies must also present information transparently, in language that children can understand.