
Two years of ChatGPT: From utter amazement to the ‘trough of disillusionment’

The tool that ignited the race for generative artificial intelligence has evolved, but less than anticipated. And experts aren’t expecting any immediate breakthrough advances in the field

OpenAI has still not released the GPT-5 model. Inés Arcones
Manuel G. Pascual

“It’s a tremendous innovation, I was also shocked.” “It sounds much more natural than most similar programs.” “It has intuitively learned to hold conversations on almost any subject.” These are some of the early reactions from experts in artificial intelligence (AI) regarding ChatGPT, as published in EL PAÍS. Within just a few days, the tool captivated both professionals and general users, who began sharing excerpts of their conversations with the bot on social media. Suddenly, anyone with an internet connection could engage in a dialogue with a machine that provided coherent, well-written responses — though not always accurate ones. For many, it felt as if they were conversing with a person rather than a machine. This Saturday marks two years since the launch of ChatGPT, which introduced generative AI to the public — a technology capable of producing seemingly original content based on human prompts.

What is the current state of this technology? The initial excitement has given way to a corporate battle for dominance in deploying such tools. Microsoft quickly entered into a collaboration agreement with OpenAI, the developer behind ChatGPT and DALL·E, while Google was not far behind, announcing its own models within two months.

Today, we find ourselves in what the consultancy firm Gartner refers to as the “trough of disillusionment”: the initial euphoria led to inflated expectations, and the inability to meet them immediately caused interest to wane. This is a natural phase in the lifecycle of technological trends, and, according to Gartner, the slope of expectations will rise again within a few years, though more moderately than the first surge.

“Two years later, artificial brains remain stochastic know-it-alls: they speak with great authority, they seem to know everything, but what they say is not the result of real knowledge, but of their intuitively developed ability to appear wise,” sums up Julio Gonzalo, professor of Computer Languages and Systems at Spain’s National University of Distance Education (UNED) and deputy vice-rector for research.

Andrej Karpathy, one of the creators of the GPT model (who left OpenAI in February), recently acknowledged signs of exhaustion in generative AI. Since the early versions of ChatGPT were already trained on nearly all the text available on the internet, newer versions will not be able to draw on much more data than their predecessors, meaning the models will not be able to improve significantly.

“For a big leap to take place, innovation in algorithmic architecture will be needed, such as the development of transformers [a type of neural network that is key in the development of large language models] in 2017,” says Álvaro Barbero, head of data analysis at the Knowledge Engineering Institute.

There are also concerns on the business front. Investors have yet to see how generative AI can be monetized effectively. OpenAI raised $10 billion in October “to operate flexibly,” in addition to the $13 billion Microsoft pledged in 2023. Yet this funding may not be enough. The much-anticipated GPT-5 model, initially slated for release in late 2023, has yet to arrive, and analysts are beginning to doubt that it will be as groundbreaking as CEO Sam Altman has suggested.

According to OpenAI’s own projections, the company won’t turn a profit until 2029, and in the meantime it’s burning through about $500 million a month. The tech magazine The Information estimates that the cost of training its models will reach $7 billion in 2024, and that OpenAI could run out of funds by next summer.

“Within 12 months, the AI bubble will have burst,” said AI expert Gary Marcus last July. “The economics don’t work, the current approach has reached a plateau, there is no killer app, hallucinations [when the system makes things up] remain, boneheaded errors remain, nobody has a moat, and people are starting to realize all of the above.”

The AI revolution

Financial concerns aside, there’s no doubt that the tool launched on November 30, 2022, was groundbreaking. “From my perspective, the emergence of ChatGPT was absolutely revolutionary,” says Carlos Gómez Rodríguez, professor of Computer Science and Artificial Intelligence at the University of La Coruña and an expert in natural language processing, the AI field dedicated to understanding and generating text. “For the first time, a single system could perform a wide range of tasks without specific training. Before, you could create a Spanish-English translator, but only by designing it specifically for that task. It turned out that these larger models were able to do many things. That has changed everything in my field of research.”

“Generative AI has led to interesting applications, such as summarizing texts, composing letters in other languages, and extracting information from documents, but also problematic uses, such as relying on the system to extract factual information when it is really making predictions, not searching, or to draw conclusions when it isn’t reasoning,” explains Ricardo Baeza-Yates, research director of the Experiential AI Institute at Northeastern University in Boston and professor at Pompeu Fabra University in Barcelona. Alongside image and video generators, generative AI is blurring the lines between reality and deception through deepfakes, while also enabling more sophisticated and cost-effective forms of cyberattacks.

Images created using an AI tool show a fictitious skirmish between Donald Trump and New York City police officers. J. David Ake (AP)

Just three months after the launch of ChatGPT, OpenAI introduced the GPT-4 model, marking a significant leap from the first version of the tool. However, in the nearly two years since then, there have been no major breakthroughs. “It seems that with GPT-4 we have reached the limits of what AI is capable of just by emulating our intuition. It has also been proven that the capacity for rational thought has not appeared by magic just by making bigger brains,” Gonzalo explains.

The road ahead

The latest development in generative AI is multimodal systems, capable of combining different types of media, such as text, images, and audio. For example, the latest versions of ChatGPT or Gemini can analyze a photo of your fridge and suggest what to prepare for dinner. But they generate these results based on intuition, not reasoning. “The next step will be to investigate whether large language models can evolve into autonomous agents — meaning they can operate independently and interact with each other on our behalf. They could book plane tickets or hotel reservations based on our instructions,” says Gómez Rodríguez.

“I think generative AI models are reaching their limits and will need to add other elements, such as true knowledge [Perplexity and others already cite the sources they use], deductive logic [classic AI] and, in the long term, common sense, the rarest of the senses. Only then can we start talking about true reasoning,” says Baeza-Yates.

This is what Altman has promised for next year. He refers to it as artificial general intelligence (AGI), which equals or exceeds human capabilities. It’s clear that such a development will take time to materialize, and, as Baeza-Yates suggests, more than just generative AI will be needed to achieve this goal.

“Large multimodal models of generative AI are going to be a critical part of the overall solution to developing AGI, but I don’t think they’re enough on their own. I think we’re going to need a handful of other big breakthroughs before we get to what we call AGI,” said Demis Hassabis, head of AI research at Google and Nobel Prize winner in Chemistry, last week at a meeting with journalists in which EL PAÍS participated.

“Generative AI not only does not bring us closer to the big scientific questions of AI, such as whether intelligence can exist in non-organic forms, but it actually diverts us from them. These systems are incapable of reasoning, [to achieve that] we would need to turn to symbolic AI [based on mathematical logic],” reflects Ramón López de Mántaras, founder of the CSIC Artificial Intelligence Research Institute and one of Spain’s pioneers in the field.

AlphaFold, the tool developed by Hassabis’ team to predict the structure of 200 million proteins — leading to his Nobel Prize — integrates 32 different AI techniques, generative AI being just one of them. “I believe the future will lie in these kinds of hybrid systems,” says López de Mántaras.


