The White House unveils measures to mitigate the risks of artificial intelligence
Vice President Kamala Harris met with the heads of Google, Microsoft and two other companies developing the technology and announced an investment of $140 million to establish seven new AI research institutes
Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people’s rights and safety at risk. The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.
In addition, the White House Office of Management and Budget is expected to issue guidance in the coming months on how federal agencies can use AI tools. Separately, top AI developers have committed to participate in a public evaluation of their systems in August at DEF CON, the Las Vegas hacker convention.
The Thursday meeting was designed for Harris and administration officials to discuss the risks they see in current AI development with the CEOs of Google, Microsoft and two influential startups: Microsoft-backed OpenAI and Google-backed Anthropic. The administration’s message to the companies was that they have a role to play in reducing those risks and can work with the government to do so.
Authorities in the United Kingdom also said Thursday they are looking at the risks associated with AI. Britain’s competition watchdog said it’s opening a review of the AI market, focusing on the technology underpinning chatbots like ChatGPT, which was developed by OpenAI.
President Joe Biden noted last month that AI can help address disease and climate change but could also harm national security and disrupt the economy in destabilizing ways. Biden also stopped by the Thursday event. He has been “extensively briefed” on ChatGPT, has seen how it works and has even experimented with the tool, according to a White House official.
The release of ChatGPT late last year has intensified debate about AI and what role the government should play in overseeing the technology. The ability of new “generative AI” tools to produce human-like writing and convincing fake images has heightened ethical and societal concerns about automated systems.
Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That’s made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it’s stealing from copyrighted works.
Companies worried about being liable for something in their training data might also not have incentives to rigorously track it, said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.
“I think it might not be possible for OpenAI to actually detail all of its training data at a level of detail that would be really useful in terms of some of the concerns around consent and privacy and licensing,” Mitchell said in an interview Tuesday. “From what I know of tech culture, that just isn’t done.”
Theoretically, some kind of disclosure law could force AI providers to open up their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won’t be easy for companies to provide greater transparency after the fact.
“I think it’s really going to be up to the governments to decide whether this means that you have to trash all the work you’ve done or not,” Mitchell said. “Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it’s already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over.”
While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.
The companies also face potentially tighter rules in the European Union, where negotiators are putting the finishing touches on AI regulations first proposed two years ago. The rules could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.
When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people’s safety or rights, such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.
But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.
Foundation models are a sub-category of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of data.
A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.
Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and the European Data Protection Board set up an AI task force, in a possible initial step to draw up common AI privacy rules.
In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though the one-time event might not be as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.
Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.
“This would be a way for very skilled and creative people to do it in one kind of big burst,” Frase said.