
Very human questions about artificial intelligence

AI experts tell us we live in unpredictable times. They have no answers, and since ordinary people like us don’t even know the right questions to ask, EL PAÍS went to the Institute for Futures Studies in Stockholm for help

Illustration: Pablo Delcan
Patricia Gosálvez

A touchscreen hanging in the middle of the exhibition displays the questions for everyone to see. Would you have a chip implanted in your brain to make you smarter? Would you leave your elderly mother or baby in a robot’s care? Should that robot have rights? Would you allow supposedly impartial artificial intelligence (AI) software to judge your legal case? Would you transfer your consciousness to the cloud in order to live forever? A visitor stands in front of the screen, touching the “Yes” or “No” buttons that appear after each question. It’s a sort of futuristic Ouija board, and the person answering the questions is the ghost of the past: an obsolete, frustrated life form growing more and more irritated.

Although it makes you feel really old, the Hyper Human exhibition at Sweden’s National Museum of Science and Technology in Stockholm isn’t even that new. It opened a couple of years ago, long before ChatGPT made the front pages of the mainstream media and leaped into the global consciousness. And long after Claude Shannon (1916-2001), known as the father of information theory, uttered the sentence displayed at the end of the exhibition: “I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.”

Most experts agree we’re nowhere near Shannon’s vision yet, but we seem to teeter on the edge of dystopia. After all, a Google vice president just quit his AI job to tell the world about “the existential problem posed by artificial intelligence.” When EL PAÍS asked that executive, Geoffrey Hinton, how a capitalist system can slow down AI research and development, he simply said, “I don’t know.” We live in a time when more than 1,000 researchers, entrepreneurs and intellectuals (Elon Musk, Yuval Noah Harari, Steve Wozniak…) signed an open letter calling for a pause on AI research and for regulations on “unpredictable models” beyond GPT-4. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” asks the letter. Max Tegmark, a professor of artificial intelligence at the Massachusetts Institute of Technology (MIT) and the president of the Future of Life Institute, which sponsored the open letter, has publicly admitted, “We don’t know how to stop the meteorite we have created.”

When the world’s top experts don’t have answers, ordinary people like us — blissfully ignorant or simply terrified — don’t even know the right questions to ask. In our quest for the questions we should be asking, we went to another part of Stockholm, far from the sci-fi flash of the Hyper Human exhibition.

Discreetly nestled above Centralbadet — a delightful art nouveau spa opened in 1904 — is the Institute for Futures Studies, a brain trust started in the 1960s to envision the most likely long-term scenarios for humanity. We met philosophy professor Gustaf Arrhenius in the leafy, humid greenhouse that houses the Centralbadet’s cafeteria. Arrhenius began by explaining that the institute he heads is “absolutely not” a think tank. “We don’t have an ideology, nor do we sell anything… We are very interdisciplinary and like to challenge ourselves with extraordinary questions.” The biggest question they ponder is: “What can we do to achieve a more desirable future and avoid the worst possible future?”

The institute has about 100 researchers from 15 countries and various disciplines — sociologists, philosophers, political scientists, economists, mathematicians and more — working on projects funded by an €8 million ($8.6 million) budget, 25% of which comes from the Swedish government. It produces papers and studies that are published in scientific journals and presented to politicians and decision-makers in Sweden and other European Union countries. The institute is an island of humanism, a refuge for thinking about weighty issues like future generations, social justice and the impacts of technology.

The first question is obvious: Is it too late to challenge technology? “Citizens, societies and especially regulators always lag behind technological advances,” said Arrhenius. He offered two non-exclusive timeframes to explain the institute’s focus on AI. We have dilemmas that “are already here,” he said, like a heavily monitored society plagued by frequent privacy violations, deepfakes that are impossible to detect, and government use of AI to allocate pensions, issue permits and parole prisoners. We also have problems that are “still a long way off,” like the potential extinction of humanity at the hands of super-intelligent machines. “I worry less about what the machines are going to do than about what people might do with them,” said Arrhenius.

Karim Jebari, another philosopher and researcher at the Institute for Futures Studies, explained some technological nuances of AI. There is weak or specialized AI, which recommends movies and TV shows based on viewing history, autocompletes Google searches, recognizes faces on smartphones, helps social service agencies decide whether to deny parental custody and converses with us about God on ChatGPT. Then there is strong or general AI (artificial general intelligence, or AGI), which is more humanlike and has digital advantages like the ability to replicate itself and learn at mind-blowing speeds. AGI has enormous potential, perhaps even holding our destiny in its hands, but it’s not here yet. “The problems of weak and strong AI are different, but both are worth exploring,” said Jebari, who thinks it’s unfortunate that they are sometimes confused.


AI already poses an immense conundrum — never mind a future world with robots poised to annihilate humanity — and government regulators are struggling to catch up. In early May, European Union (EU) lawmakers approved a draft of the Artificial Intelligence Act, which will regulate certain AI practices when it takes effect, expected in 2025. Jebari says that the lag between technology and ethical and legislative questioning is not necessarily bad: “Significant issues usually emerge after there is a specific application, so it makes sense to have these discussions once we can see how the tools are used.”

“Regulators are naturally slow and technology is moving faster and faster,” said economist Pontus Strimling. “General technologies like the combustion engine, computers or AI may cause a lot of short-term problems, but they create a better society in the long term. When the intervals between these technological leaps get shorter and shorter, though, we risk the problems of one period overlapping with those of the next.”

Perhaps this is why the open letter asked: should we pause giant AI experiments like the large language models? “I think it’s a great idea, not because I’m worried about the extinction of the species, but because it would help us regain a sense of control,” said Strimling, who is an expert on cultural change and norms. “The public, even the political class, feels that technology is something that happens to us, not something we make happen.” We have forgotten that it’s a human creation and that we’re the ones in charge. “In democracies, we can keep the AI that works for us and discard what we don’t want.”

But isn’t technological progress unstoppable? “The dominant discourse over the last year, especially from the tech community, has been marked by technological determinism — it’s coming no matter what. But this is simply not true,” said Jebari. He described several times in history when people changed or stopped technological development — human cloning, genetically engineered food and nuclear power. “When enough people think something is dangerous, politicians act,” said Jebari. So, should we stop the development of AI? “Of course, if we think we should. We live in a democracy — if the people ask for restrictions, it will happen. Even undemocratic states do this. China often steps on the brakes when it feels that something is getting out of hand.”

The call for a moratorium on AI is partially based on the belief that private sector competition fuels risky research and development. Is that the underlying problem — that AI development is in private-sector hands? “Having corporations lead the race is problematic,” said Arrhenius. “Incentives to stay within ethical guardrails could be outweighed by profit-seeking and the fear of falling behind the competition.” Strimling delved deeper into this question with an anecdote. “Years ago, a developer at DeepMind [an AI company bought by Google in 2014] told me he was worried about the future of these models. This was long before anyone in the social sciences reacted. I have spoken to several developers since then who feel they are working on the Manhattan Project [the nuclear weapon development project].” Because many AI experts are troubled by their own work, Strimling doesn’t believe the industry would resist regulation. In fact, he feels that some technologists have an exaggerated, dystopian vision of the future. “Maybe it’s because they can see all the possibilities and the resulting problems. But their perspective is bounded by their own bubble,” he said. “I’ve seen engineers worrying about the chaos of everyone having autonomous cars… That’s a misunderstanding of how often normal people buy cars.”

‘The Impossible Statue’ was designed by generative AI and is currently on display in Sweden’s National Museum of Science and Technology in Stockholm. Photo: Anna Gerdén (National Museum of Science and Technology, Stockholm)

“Everything surrounding AI — the divide between optimists and dystopians — is increasingly polarized,” said sociologist Moa Bursell. Her recent research focuses on one of the most significant questions about AI: can it help avoid human error and bias, be more objective and align better with certain values? Bursell doesn’t specialize in computing, but in inclusion and diversity in the labor market. As a social scientist, she says she remains neutral about how artificial intelligence is used in the job recruitment processes she studies. In theory, she thinks it can be very useful: algorithms can handle paperwork for human resources departments, freeing them to focus on candidate selection. Or it can go horribly wrong. Machines are “much more consistent” than humans, she says, and when an algorithm fails, it fails in a big way.

Bursell studied a company that purchased an AI system to help recruit workers, comparing the candidates it hired before and after adoption. One result was surprising. Using AI reinforced the company’s existing hiring patterns and decreased workforce diversity — they were hiring more of the same… “But it wasn’t the algorithm’s fault!” she said. The AI system performed a balanced pre-selection process, but the hiring managers imposed their own biases when making the final decisions on candidates. Before acquiring the AI system, when people controlled the entire process, the company was more inclusive in its hiring. The algorithm didn’t distort the process, but its use made the humans skew the result. “The problem was the machine-human interaction,” said Bursell. Perhaps the hiring managers felt less responsibility, or didn’t understand the tool, or felt threatened… “Creating unbiased AI is only the first step,” she said. “The system’s implementation must be well understood and monitored. If we just buy these tools to save effort, things will go wrong.” But this doesn’t mean AI tools shouldn’t be used. “We never had a perfect starting point that machines then screwed up. People aren’t great at this — job discrimination is an enormous problem. The question is whether AI can improve the process.”

Back at the Institute for Futures Studies, political scientist Ludvig Beckman studies the effects of AI on democracy. It’s another field where the technology is raising complex questions about the super-intelligent machines of the future. Will robots ever be able to vote? Beckman shakes his head. “I don’t think so, but the question forces you to think about the boundaries of inclusion and why we think certain people shouldn’t have the right to vote,” like children, people with severe mental impairment, animals… Could robots then have rights? Again, Beckman is doubtful. “AI technology has goals, not interests, and I struggle to see the moral harm implicit in disrespecting a machine.” Should we at least have regulations about cruelty to human-like machines? “It’s an interesting logic, because that’s how the animal rights movement started and led to bans on animal cruelty. But this was not done with the animal in mind; it was because our morality dictated that mistreating them brutalized us as human beings.”

When talking about the future, Beckman preferred to focus on the transparency and democratic legitimacy of public decision-making influenced by AI. “This is a more mundane and immediate issue,” he said, mentioning a recent Swedish law that paves the way for using more AI in government functions. “The problem is not that machines make poor decisions, but that these systems are learning through mechanisms that are not transparent, even to the people programming them.” So even if machines can make efficient public decisions, should we allow the practice? “Democratic [government] requires decisions to be publicly justified. People have a right to know why they have been denied a permit or granted a subsidy,” said Beckman. “Laws are established by people in authority who should explain the reasoning behind them.” Beckman offered the analogy of a calculator — you trust the result, but the machine has no authority over you. Then he posed another dilemma. “The most vulnerable democracies could benefit the most from AI in government, countries where corruption or inefficiency bogs down so many public decisions. However, those same countries may also have dubious motives for using it.”

“I don’t see a machine doing my job,” said choreographer Robin Jonsson, whose GetReal project fuses dance and technology. Photo: Instagram @alex_the_robot_dancer (Robin Jonsson)

Social researchers agree that AI is revolutionary, but for now it remains a tool, one that has yet to radically change our lives. Pontus Strimling is studying how to predict which AI applications will succeed and which will fall by the wayside. “Pre-ethics,” he calls this line of research. “If we can predict which apps will become most popular, we can identify the most urgent ethical questions.” He finds that an app’s usability or functionality doesn’t determine its success as much as the way it “spreads.” The most effective method of propagation is “infusion,” when an innovation sneaks into a tool that everyone already uses. “Netflix, Google and YouTube one day introduced AI and deep learning into their recommendation features and it instantly spread to computers all over the planet without users noticing much,” he said. Strimling offered a more recent example. “When ChatGPT came out, the technophiles started using it first. Then the broader public began fiddling with the app. But the big leap came when Microsoft invested in the technology and built it into its search engine.” Strimling asks, “Where is the user’s freedom of choice? How is cultural diversity represented when a tiny group of people in a very particular environment — mostly Silicon Valley developers — make decisions without consulting us about things everyone will use every day in the future?”

Jebari believes “it’s not just a technological question, but also a political one,” especially when considering the troubling question of how AI will affect the way we work. Will it free us from tedious tasks and improve our quality of life? Or will it exploit us even more? Again, it depends, says Jebari. “Many companies are using it to increase productivity, improve accuracy and help shoulder the burden. The opposite is true of other companies. Their challenge is not technological; it’s the regulatory, labor union and political problems around it.” Jebari described a study he and a colleague conducted on Amazon’s most robotized warehouses, which found that they were indeed more productive, but also had more frequent accidents, more stress and greater job dissatisfaction because workers had to adapt to the frenetic pace of the machines.

In an airy, open room at the Institute for Futures Studies, choreographer Robin Jonsson connects a human dancer to virtual reality technology that creates a bridge between two worlds. Jonsson’s GetReal dance and technology projects create dreamlike digital dance floors shared by dancers and audiences. He also works with a robot dancer that often frustrates him because it’s not as teachable and responsive as human dancers; it’s much more difficult to give the robot a complex series of prompts than to simply explain something to a person. Any professional illustrator who has dabbled with AI image generators knows what he’s talking about. Jonsson enjoys exploring the limits of his art form and creating environments that “expand an experience intimately linked to presence, physicality and reactions to others.” The role of technology in dance is still in its infancy, unlike in the visual arts or music, where it’s been part of the creative process for years.

Which brings us to one last question: can AI replace creators? “I don’t know. I’m willing to use it to my advantage, but I don’t think so. The performing arts can be automated to a certain extent, but the body is so important. Even if we someday have AI in realistic androids, I think human curators will still be needed. What will change is how artists work.” As a choreographer, Jonsson believes that the fusion of AI and humans offers a certain richness, with each bringing its own qualities to the table. “My main talent is facilitating socialization, bringing out the best in dancers, musicians, technicians… and I don’t see a machine doing that,” he said. Within the arts — the expression that most distinguishes us from other species — Jonsson says dance may be the last human bastion.
