Steve Jobs said that the computer was a bicycle for the mind, because it multiplied the mind’s capacity. Will artificial intelligence (AI) be the airplane of the brain? Or just a distraction? What will be its impact on economic growth, inflation, or inequality? Will it be similar to computers and the internet? Or will this time be different?
The impressive progress shown by AI in recent years suggests that, yes, it may be different this time. AlphaGo beat the world champion of Go, a game so complex that its number of possible positions is roughly a 1 followed by 170 zeros, more than the number of atoms in the observable universe. ChatGPT has been able to pass the U.S. medical licensing exam and the U.S. bar exam. DALL-E is capable of generating extremely realistic, high-resolution images from a simple textual description.
Until now, technology was used to automate repetitive activities. Consider, for example, that the word computer comes from “human computers,” the people, mostly women, who performed the necessary calculations armed with paper and pencil or simple calculators. The virtue of the computer was its ability to automate an increasing volume of calculations, taking over increasingly complex tasks. But computers had a limitation: they required complementary infrastructure to be installed, and humans who knew how to write programs to teach them each specific task.
The rapidly increasing processing power of microprocessors allowed for significant progress: the first versions of AI based on deep learning and neural networks — computer programs that roughly simulate the way the human brain learns — created programs that learned by themselves, but still required human supervision and specific, structured, and labeled databases. For example, if a neural network was trained with X-rays of cancer patients, the program “learned,” based on repetition and trial and error, to identify them among a variety of other X-rays. These initial versions of AI were limited by the availability of data and the need to structure it.
The most recent versions of AI represent a qualitative leap: they use foundation models based on generative AI, among which the large language models (LLMs) stand out. The technique is similar, deep learning with neural networks, but the result is qualitatively different. These models learn by themselves by processing human language, which, by definition, is a database that does not need to be labeled, and they can be applied to a large number of tasks, increasing their power and versatility almost without limit. They are neural network models with billions of parameters, with a level of complexity increasingly close to that of the human brain, trained on databases that capture a good part of universal knowledge.
The evolution of these models has been very fast. The data used to train them grows exponentially, and artificial data is now being created where data limits are reached. The models’ processing capacity has doubled every six months, four times faster than the famous prediction by Gordon Moore, the co-founder of Intel, that the number of integrated circuit components in microprocessors would double every two years. This allows for an intensity of learning that is hard to imagine: one day of training one of these algorithms is equivalent to 150 years of human video game training.
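To get a sense of the gap between those two doubling rates, here is a back-of-the-envelope calculation (the 10-year horizon is an illustrative assumption, not a figure from the text):

```python
# Compare growth under a 6-month doubling period (AI training compute)
# versus a 2-year doubling period (Moore's law) over an assumed 10 years.
years = 10
ai_doublings = years * 12 / 6        # one doubling every 6 months -> 20 doublings
moore_doublings = years / 2          # one doubling every 2 years  -> 5 doublings

ai_growth = 2 ** ai_doublings        # about a millionfold increase
moore_growth = 2 ** moore_doublings  # a 32-fold increase

print(f"6-month doubling: {ai_growth:,.0f}x; 2-year doubling: {moore_growth:,.0f}x")
```

The same four-to-one ratio in doubling speed compounds into a gap of four orders of magnitude over a decade, which is why the pace of these models feels qualitatively new.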
Beyond raw capability, AI differs qualitatively from computers in two ways. First, these systems are programs that can be quickly downloaded and installed, not machines that require expensive and complex complementary infrastructure. This has led to extraordinarily fast diffusion: when it launched, ChatGPT was the fastest application ever to reach 100 million users.
The second advantage is that these applications do not require sophisticated programming. LLMs are queried in human language and respond in human language or images. For example, ChatGPT can be asked: “Write a 1,200-word article in English on the macroeconomic impact of artificial intelligence in the style of Angel Ubide in El País.”
For now, I am not going to reveal if what you are reading has been written by ChatGPT, or by me.
The way these applications work is “simple”: they are probabilistic models that predict which letter, word, pixel, or image comes after the previous one. At every step, models such as ChatGPT, or autonomous driving applications, decide on the “most reasonable continuation” of the last text or image they have generated, based on the model of the world they have built from the data they were trained on. As such, any activity that can be cast as a prediction problem falls within the realm of AI.
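A minimal sketch of that “most reasonable continuation” idea, using a toy character-level model (the tiny corpus and the always-pick-the-most-frequent rule are illustrative simplifications; real LLMs learn billions of parameters over word-like tokens):

```python
from collections import Counter, defaultdict

# Toy next-character predictor: count which character tends to follow each
# character in a small corpus, then always pick the most frequent continuation.
corpus = "the cat sat on the mat. the cat ate."

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally "nxt follows prev" occurrences

def predict_next(ch: str) -> str:
    """Return the character most often observed after `ch` in the corpus."""
    return counts[ch].most_common(1)[0][0]

# Generate a short continuation, one predicted character at a time.
text = "t"
for _ in range(10):
    text += predict_next(text[-1])
print(text)
```

An LLM does essentially this at a vastly larger scale: instead of a frequency table over one tiny text, it uses a learned probability distribution over its entire training corpus, conditioned on everything generated so far rather than on the last character alone.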
These new generations of AI are therefore revolutionary: they automate not repetitive tasks, but creative ones. It is the democratization of intelligence. Suddenly anyone can write with the style and vocabulary of Nobel laureates, write computer programs like top engineers, design images like artistic geniuses. Among the most repeated verbs in AI patents are recognize, predict, detect, identify, and generate, all creative tasks. With AI, the level of execution of these tasks can be raised to best practices, benefiting all workers and reducing disparities with experts. Consider, for example, that with AI-powered navigation systems, the barriers to entry to becoming a taxi driver have collapsed. Now extend this example to all the activities you can think of. Of course, these models are not perfect, and sometimes give incorrect or invented answers (the models “hallucinate,” in AI jargon), partly because they have been trained on what to do, not on what not to do; fixing this is one of the technology’s pending challenges.
The macroeconomic impact can be significant. The empirical evidence on these new generations of AI shows that they reduce the time needed to execute tasks and increase their quality, boosting productivity per hour worked. In addition, they can accelerate society’s innovative capacity, discovering relationships and patterns in data never imagined before, and, combined with advances in robotics, the improvement in productivity and potential growth can be exponential, raising neutral interest rates while keeping inflation low. On the other hand, it will be necessary to rethink models of work organization, learning and education, and income distribution policies. And, like any technological transformation, AI carries risks. Disputes over the use and ownership of data, the potentially malicious use of algorithms, the ecological impact of its high computational intensity, and the inherent bias in datasets will all require regulation that puts this process on the right path.
It’s entirely possible that this was written by ChatGPT while I was at the beach. But no, that is not the case. Hopefully next time.