
28 countries, including the U.S. and China, commit to greater cooperation to address the dangers of artificial intelligence

The AI Safety Summit discusses the ‘existential threat’ posed by this technology, as well as risks that already exist, such as job losses and large-scale disinformation

Ukraine’s Deputy Minister for Digital Transformation Georgii Dubynskyi speaks with Elon Musk on day one of the AI Safety Summit at Bletchley Park in Bletchley, Britain, on November 1, 2023. POOL (via REUTERS)

It is difficult to regulate the unknown. It is much easier to agree on shared fears, such as concerns about the risks of a technology with the revolutionary capacity of artificial intelligence (AI). Twenty-eight countries signed the Bletchley Declaration on Wednesday, committing to strengthen international cooperation across the scientific research currently analyzing the security risks posed by AI. The aim is “to ensure that the best available science contributes to the design of public policies and to the common good,” the text states.

The declaration’s tone may understandably prompt skepticism: it is full of fine words and good intentions but makes few concrete commitments. Still, the fact that powers such as the United States, China and the European Union have signed it suggests a willingness to seek a joint response to a historic challenge. The White House sent Vice President Kamala Harris, who has personally pushed the U.S. agenda on AI, to the AI Safety Summit organized by the British government. China, a leader in the number of AI patents and projects, was specifically invited by Downing Street and was represented by Vice Minister of Science and Technology Wu Zhaohui.

Bletchley Park, a special venue

The British government chose an emblematic location for the summit: Bletchley Park, where a team of code breakers and encryption experts deciphered the German army’s messages during World War II. Located 50 miles (80 kilometers) from London, it was there that Alan Turing, the father of computing, cracked the code of the Enigma machine. Turing also designed the test that bears his name, also known as the “Imitation Game.” The name comes from the Turing Test’s original question: “Can machines think?” In 1950, the mathematician himself reformulated it: “Are there any conceivable digital computers that perform well in the imitation game?” That is, could any machine make people believe that they were communicating with another human being? Anyone who has conversed with a chatbot or asked ChatGPT complicated questions can assume that Turing’s question already has an answer. The speed at which AI is progressing is forcing governments and technology companies to design ethical regulations similar to those once enacted in response to the discovery of DNA and gene therapies.

“A large number of leading experts in the field are seriously concerned that uncontrolled AI advances could lead to catastrophic consequences,” warned entrepreneur Ian Hogarth at the start of the summit. Hogarth became a multimillionaire with his concert-tracking app Songkick, and since 2018 he has written an annual State of AI report that the industry follows closely. He leads the Frontier AI Taskforce, which British Prime Minister Rishi Sunak’s government is funding to the tune of over €120 million ($126 million). “I am concerned that an unchecked race will result in future systems that undermine democracy, reinforce discriminatory biases and destabilize societies,” Hogarth said.

The ‘existential threat’ and current risks

Hogarth’s report acknowledges that the scientific community has not reached a consensus on such catastrophic scenarios: researchers are divided between the highly optimistic and the highly pessimistic. For that reason, the task force emphasizes concrete, real threats. U.S. Vice President Harris advocates this approach, which was reflected in the executive order on AI safety issued by the White House this week. The more than 100 participants at the Bletchley summit also want to follow this approach by addressing the discriminatory bias that certain algorithms can drive, the proliferation of fake news and disinformation “at an unprecedented scale and level of sophistication,” and the technology’s potential to facilitate cyberattacks or the development of biological weapons. “Frontier AI will most certainly continue to lower the barriers to entry, and allow access to unsophisticated threat actors,” the text asserts.

AI also poses a worrisome threat to social stability, including the risk of job losses. “By 2030, the most extreme impacts [of AI] will remain confined to very specific sectors, but may be capable of provoking a violent response from citizens, starting with those whose jobs are disrupted. All of this may fuel a fierce public debate around the future of education and employment,” the report warns.

“The most relevant thing about this meeting, in my opinion, is the idea that we should focus not only on future risks, but on today’s risks, which are not just about national security or the terrorist threat, but are real risks and threats to our society in terms of mental health and discrimination,” argued Carme Artigas, Spain’s Secretary of State for Digitalization and co-chair of the UN High-Level Advisory Body on Artificial Intelligence. “There is a sense of urgency about the need to point out these risks that affect the fundamental rights of citizens and society.”

The Bletchley Declaration states: “We recognize that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally.”

In addition to Harris, the British government secured the attendance of figures including the President of the European Commission, Ursula von der Leyen, and the Secretary-General of the United Nations, António Guterres. Key leaders such as the President of France, Emmanuel Macron, and the German Chancellor, Olaf Scholz, are missing from the gathering, but the urgency of the matter has drawn the most important academic institutions in AI, along with the main technology giants, such as Amazon, Alibaba, IBM, Anthropic, Google DeepMind, Microsoft and Meta. Elon Musk — the owner of Tesla and SpaceX, a co-founder of OpenAI, and the controversial owner of the social network X (formerly Twitter) — and Sam Altman, the CEO of OpenAI, the company behind the revolutionary ChatGPT, are also in the U.K. for the event. British Prime Minister Rishi Sunak has pledged to hold a dialogue with the unpredictable Musk on Thursday, at the end of the summit.

“For the first time in history, we are up against something that is much smarter than human beings. We’ve never been stronger or faster than other living things, but we were smarter. And now, for the first time, we’re up against something that’s smarter than us,” Musk said on his arrival at Bletchley Park.

To complete the quintessentially British touch of holding a summit that celebrates the best of the United Kingdom’s recent history, King Charles III delivered a message via pre-recorded video. The monarch, who has the perfect voice and tone to add drama to any speech, compared AI to “the discovery of electricity, the splitting of the atom, the creation of the internet or even the discovery and control of fire,” and called for a “sense of urgency, unity and collective strength” to confront the risks that the new technology brings with it.
