US tech royalty attends Congress to discuss AI regulation

Elon Musk, Mark Zuckerberg, Bill Gates and Sundar Pichai talked with lawmakers about the need for rules to control the emerging technology and the risks it presents

Bill Gates arrives to attend the Senate bipartisan Artificial Intelligence Insight Forum. JIM LO SCALZO (EFE)
Macarena Vidal Liy

The who’s who of U.S. high technology was on Capitol Hill in Washington on Wednesday. For the first time, the great heavyweights of the sector — from Tesla CEO Elon Musk and Meta (Facebook) CEO Mark Zuckerberg to Alphabet (Google) chief Sundar Pichai and Microsoft founder Bill Gates — appeared as a group in the Senate. They were there to participate in a closed-door session on one of the hottest issues of the moment: the regulation of artificial intelligence. It is a goal that everyone agrees on, but there is disagreement over how to regulate the emerging technology, and to what extent. One thing, however, does seem clear: the European model has few supporters.

Elon Musk upon his arrival at the Senate. JULIA NIKHINSON (REUTERS)

The fact that 20 tech executives — whose companies’ combined revenues exceed the GDP of some countries — responded to Senate Majority Leader Charles Schumer’s invitation points to the importance of the matter. Several of the tech titans have previously spoken out in favor of measures to control AI, a sector that is receiving a flood of investment and that has sparked great public interest since the launch of the ChatGPT chatbot less than a year ago.

After leaving the forum, Musk said it was “important for us to have a regulator which you can think of as a referee.” He called artificial intelligence “a double-edged sword,” and said that a regulator was needed “to ensure that companies take actions that are safe and that are in the interests of the general public.”

Zuckerberg, for his part, told the forum: “I believe it’s better that the standard is set by American companies that can work with our government to shape these models on important issues.”

Schumer is hoping to pass an AI law next year, before the presidential elections in November, with the aim of preventing the technology from disrupting the 2024 vote. The law would, on the one hand, encourage the rapid development of artificial intelligence and its benefits and, on the other, curb the dangers posed by the sector before it becomes fully incorporated into daily life. Lawmakers want to contain risks such as election interference, the spread of false news, and attacks on critical infrastructure.

The idea is to avoid a repeat of what happened with other tech sectors, such as social media, which was allowed to expand without any regulation. Now that these platforms have become commonplace tools among the population, they bring with them a whole series of problems — from the spread of fake news and harmful content to alleged mental health problems among teenagers and children. But placing restrictions on social media companies has proved difficult in the United States. There have been numerous attempts to pass bills limiting social media, but so far all have come to nothing — partly due to pressure from powerful tech companies, and partly due to disagreements among lawmakers themselves.

On this occasion, it remains to be seen if the lawmakers will succeed. New Jersey Senator Cory Booker said that all participants agreed that “the government has a regulatory role,” but said that crafting legislation would be a challenge.

On the eve of the forum, Schumer told the AP news agency that AI regulation is “one of the most difficult issues Congress can ever deal with.” He listed some of the reasons why: it’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.

“Today, we begin an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass,” Schumer said as he opened the meeting. “Congress must play a role, because without Congress we will neither maximize AI’s benefits, nor minimize its risks.”

Later, Schumer made it clear that he was not interested in following the example of the European Parliament, which in June approved sweeping legislation to regulate AI. It is now awaiting approval in the Council and consultations with the 27 member countries. “If you go too fast, you can ruin things. The EU went too fast,” he told reporters after the forum.

The EU’s regulation on artificial intelligence — the first of its kind in the world — affects any product or service that uses AI tools. Each system is classified into one of four levels of risk, which range from minimal to unacceptable. Content needs to make clear when it contains material generated by artificial intelligence, and the rules also include safeguards against illicit content. But in an open letter, more than 160 business leaders argued that the bill endangers the EU’s competitiveness and technological sovereignty.

Sam Altman, the CEO and co-founder of OpenAI, which created ChatGPT, was also present at the forum on Wednesday. It was the launch of the chatbot that sparked interest in the capabilities of AI, which until recently sounded like science fiction. These content generation systems can create images and sound, computer programs and text that are indistinguishable from human-made products. While these tools open up enormous possibilities, they have also raised fears about how they may be misused and whether they will impact jobs.

In March, Musk, business leaders, and AI experts had called for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4, citing possible risks to society. In May, Altman warned Congress: “My worst fear is that we, the technology industry, cause significant harm to the world.” “I think if this technology goes wrong, it can go quite wrong,” he added.

That same month, 350 business leaders and experts in the sector warned that AI posed a “risk of extinction” for humanity. Geoffrey Hinton, one of the fathers of the technology, left Google because he believed these programs could lead to the end of civilization in a matter of years. A report by the market analysis company Forrester estimates that artificial intelligence could replace 2.4 million jobs in the United States by 2030.

Senators will not necessarily accept all the suggestions made by the tech leaders at Wednesday’s forum. But the participants hoped the meeting would lead to a better understanding in Congress of the realities of the sector, its risks and benefits, and what can be done to address them.

Some concrete proposals have already been presented. One key bill would require election advertising to carry warnings when it includes AI-generated audio or images that could mislead voters. Another initiative envisions a regulatory body that would review certain artificial intelligence systems before granting them an operating license.

In July, the White House proposed a series of voluntary commitments to artificial intelligence companies in a bid to ensure that the technology is not used for harmful purposes. One of the proposals is to include a seal or watermark in content generated by artificial intelligence, given the difficulty — or impossibility — of distinguishing between real and AI-made images and text. The White House also addressed artificial intelligence in an executive order.

On Tuesday, eight companies in the sector, including Adobe, IBM and Nvidia, announced they would adhere to the voluntary commitments requested by the White House.

