AGI: What is Artificial General Intelligence, the next (and possibly final) step in AI

The development of AGI could represent a threat to human existence. However, it might take decades to create this type of AI

An artificial intelligence robot analyzing a human brain, in a technology and science concept illustration. Vithun Khamsong (Getty Images)

Before Sam Altman was ousted as OpenAI CEO for a brief span of four days, several staff researchers wrote a letter to the board of directors warning of a “powerful artificial intelligence” that, they claimed, could threaten humanity, according to a report by Reuters, whose sources cited the letter as one factor that led to Altman’s temporary firing.

After the report was made public, OpenAI acknowledged to staffers a project called Q* (pronounced Q-Star). Subsequent reports claimed that some employees of the company believe Q* could be a breakthrough in the search for what is known as Artificial General Intelligence (AGI), which the company defines as autonomous systems that surpass humans in most economically valuable tasks.

The term has been credited to Mark Gubrud, a physicist and current adjunct professor in the Peace, War and Defense curriculum at the University of North Carolina, who used it as early as 1997 in a discussion of the implications of fully automated military production and operations. The term was later reintroduced and popularized around 2002 by Shane Legg and Ben Goertzel, two entrepreneurs involved in AI research.

Goertzel is the founder and CEO of SingularityNET, a project that seeks to democratize access to artificial intelligence, and has worked with several organizations linked to AI research. He is also the chairman of the OpenCog Foundation, which seeks to build an open-source artificial intelligence framework, and led OpenCog Prime, a system architecture that sought to achieve AGI at “the human level and ultimately beyond.”

But what is AGI exactly?

AGI is a hypothetical type of artificial intelligence that would be capable of processing information at a human level or even beyond human capabilities. It would be a machine or network capable of performing the same range of tasks humans can, and it would be able to “learn to do anything a human can do”: for example, engaging in “nuanced interactions, understanding contexts and emotions, transferring learning between tasks and adapting to new tasks and environments without programming.” No such system exists today, and complete forms of AGI remain speculative. Several researchers are working toward AGI; to that end, many are interested in open-ended learning, which would allow AI systems to learn continuously, the way humans do.
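To make the idea of open-ended learning a little more concrete, here is a minimal sketch in Python of a model that has no fixed training phase and simply keeps updating as new examples stream in. It is a toy online perceptron on a made-up task; the task, parameters and variable names are illustrative assumptions, not anything drawn from OpenAI or the researchers mentioned above, and it is of course nothing remotely like AGI.

```python
# Toy sketch of continual (open-ended) learning: the model is never "done"
# training; it keeps updating as new examples arrive. Purely illustrative.

import random

def make_example():
    # Hypothetical task: learn to predict the sign of the sum of the inputs.
    x = [random.uniform(-1, 1) for _ in range(3)]
    return x, 1 if sum(x) > 0 else -1

weights = [0.0, 0.0, 0.0]
learning_rate = 0.1

for step in range(10_000):  # in principle, an endless stream of examples
    x, label = make_example()
    prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else -1
    if prediction != label:  # classic perceptron rule: update only on mistakes
        weights = [w + learning_rate * label * xi for w, xi in zip(weights, x)]

print("learned weights:", weights)
```

Real research on open-ended learning aims far beyond this: systems that acquire new skills and adapt to new environments indefinitely, rather than fitting one fixed task.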

In 2023, after OpenAI released GPT-4, Microsoft researchers said the system could be viewed as an early and incomplete version of an AGI system. However, no system has yet been demonstrated to meet the criteria for AGI, and there are open questions about its feasibility. While some experts believe an AGI system could be achieved within the next few months or years, others think it will take decades, and that it could be the biggest technological advance of the century.

Is Q* an AGI?

Q* is an OpenAI project that allegedly contributed to Sam Altman’s firing as CEO, after some employees raised concerns suggesting that the system might be an AGI. So far there have only been reports of Q* performing mathematical reasoning, and there is no evidence that the system is an advance toward AGI. Several other researchers were dismissive of the claims.

How can AGI be achieved?

Some AGI research projects focus on whole brain emulation, in which a computational model simulates a biological brain in detail. The goal is to make the simulation faithful enough to the biological original that it mimics the brain’s behavior. Achieving this would require progress in neuroscience and computer science, including animal brain mapping and simulation and the development of faster machines, among other areas.
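To give a sense of what simulating a brain “in detail” involves at the smallest scale, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models used in computational neuroscience. All parameter values here are illustrative assumptions; a whole-brain emulation would require wiring together billions of far richer units.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, a very simple
# spiking-neuron model. Parameter values are illustrative, not biological.

membrane_potential = 0.0  # relative to the resting potential
threshold = 1.0           # the neuron spikes when the potential crosses this
leak = 0.9                # fraction of the potential retained each time step
input_current = 0.15      # constant external drive (arbitrary units)

spike_times = []
for t in range(100):  # simulate 100 discrete time steps
    membrane_potential = membrane_potential * leak + input_current
    if membrane_potential >= threshold:  # the neuron fires...
        spike_times.append(t)
        membrane_potential = 0.0         # ...and resets

print("spikes at steps:", spike_times)
```

A single such neuron captures only the crudest behavior of its biological counterpart; the scale and fidelity gap between this toy model and a full brain simulation illustrates why many researchers expect the whole-brain route to take decades.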

Is AGI a threat to humanity?

Several public figures, from Bill Gates to Stephen Hawking, have raised concerns about the potential risks of AI for humans, concerns echoed by AI researchers such as Stuart J. Russell, who is known for his contributions to the field. A 2021 review of the risks associated with AGI identified the following: “AGI removing itself from the control of human owners/managers, being given or developing unsafe goals, development of unsafe AGI, AGIs with poor ethics, morals and values; inadequate management of AGI, and existential risks.” In 2023, the CEOs of several AI research labs, along with other industry leaders and researchers, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Another risk posed by AI systems is mass unemployment. Since ChatGPT became popular, several companies have cut staff and begun relying on AI tools. The arrival of AGI could leave millions of people without jobs, with office workers among the most exposed.
