
Fear is not an argument for rejecting artificial intelligence

21st-century medicine is informed by 19th-century genetic and embryological discoveries, while AI traces its origins to secret weapons research during World War II

Artificial intelligence
A user customizes an avatar for the Replika AI personal chatbot; Warsaw, Poland; July 22, 2023. Jaap Arriens (NurPhoto/Getty Images)
Javier Sampedro

Scientific knowledge can progress rapidly, yet its social, economic, and political impacts often unfold at a painstakingly slow pace. The medicine of the 21st century draws upon genetic and embryological breakthroughs of the 19th century. Our current technology is firmly grounded in quantum physics, which was formulated a century ago. And the topic of the day, artificial intelligence (AI), traces its origins to secret weapons research during World War II.

‌In 1935, the brilliant British mathematician Alan Turing envisioned a conceptual computer. His genius would later lead him to crack the Enigma code used by German submarines for secret communications during the war. Turing’s contributions extended beyond cryptography, as he introduced fundamental concepts of AI, including the training of artificial neural networks. Benedict Cumberbatch portrayed Turing in the 2014 film The Imitation Game, which won the Oscar for Best Adapted Screenplay. All this historical context brings us to the heart of the current AI revolution.

‌AI relies on artificial neural networks, which are composed of multiple layers of artificial neurons. Each neuron receives numerous inputs from the layer below and produces a single output to the layer above, much like the dendrites and axon of a natural neuron. As information progresses through the layers, it gradually becomes more abstract, resembling the processing that occurs in the visual cortex of our brains.
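That layer-by-layer flow can be sketched in a few lines of Python. Everything here is illustrative: the network sizes, weights and sigmoid activation are arbitrary choices for the sake of the example, not the design of any particular system.

```python
import math

def forward(layer_weights, inputs):
    """Propagate inputs through successive layers of artificial neurons.

    Each neuron sums all the outputs of the layer below, weighted,
    and squashes the result through a sigmoid into a single output.
    """
    activations = inputs
    for layer in layer_weights:  # one list of neurons per layer
        activations = [
            1 / (1 + math.exp(-sum(w * a for w, a in zip(neuron, activations))))
            for neuron in layer
        ]
    return activations

# A made-up toy network: 3 inputs -> 2 hidden neurons -> 1 output neuron
net = [
    [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],  # hidden layer: 2 neurons, 3 weights each
    [[1.0, -1.0]],                          # output layer: 1 neuron, 2 weights
]
result = forward(net, [1.0, 0.5, -0.5])  # a single value between 0 and 1
```

Real networks work the same way in principle, just with many more layers and millions (or billions) of learned weights instead of hand-picked ones.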

‌Neural networks have a long history, but only recently have we attained the computing power required to add numerous layers. There’s some brute (computing) force involved in producing the truly remarkable outcomes we have seen of late: image recognition, interpretation of spoken language, and, of course, ChatGPT — the globally renowned conversational AI.

‌ChatGPT belongs to a category of systems known as large language models (LLMs), or “generative” models. These models begin by consuming vast amounts of text, such as the entirety of Wikipedia. They then employ simple statistical techniques to analyze patterns such as word associations. What sets these systems apart is not the sophistication of their algorithms, but the sheer computational power behind them.
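To give a flavor of what “word associations” means, here is a deliberately crude sketch: counting which word tends to follow which, then predicting the most frequent follower. Real LLMs are vastly more elaborate than this bigram toy, and all the names and the sample sentence below are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for every word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat": it follows "the" most often in this text
```

Scale that idea up from one sentence to a large slice of the internet, swap raw counts for a trained neural network, and you have the statistical heart of a generative model.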

‌It’s truly astonishing that, despite using such seemingly ordinary raw materials, large language models have passed the so-called Turing test (cue the return of that pioneering genius). This is a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human: if an evaluator cannot tell the machine from the human, the machine has passed. ChatGPT and similar models have aced it, so thoroughly that researchers are now actively exploring ways to differentiate machine-generated text from human writing. This concern understandably weighs on educators, journalists, writers and others of that ilk.

The New York Times has boldly taken legal action against OpenAI (the creator of ChatGPT) and Microsoft (its principal shareholder). Their powerful legal argument asserts that companies like OpenAI used millions of copyrighted Times articles to train systems that are now emerging as competitors to the newspaper. Others have filed similar lawsuits. The New York Times and other online content creators have neither given permission for, nor received compensation for, the use of their articles to train AI systems. This issue affects journalists, screenwriters, actors, novelists, essayists and everyone who publishes written work in the digital realm.

‌We’ll continue to talk about this issue throughout 2024 and beyond because it’s important. We’ll also delve into the upcoming wave of AI applications set to revolutionize the business world. Last year, Alphabet, Amazon, Apple, Meta, Microsoft and Nvidia saw an 80% increase in stock prices. This growth can largely be attributed to their sale of large language models (LLMs) and supporting infrastructure to all sorts of companies beyond the tech sector.

Businesses are expected to expand their use of AI in 2024. Contract drafts and market strategies already rely extensively on these models, and their use is growing for summarizing meetings, documents, and other tasks that previously fell to Homo sapiens.

According to The Economist, venture capital firms invested $36 billion in generative AI last year, double the previous year’s figure, and the trend is expected to continue upward. Economic indicators suggest high demand for AI-trained young professionals. But no one knows whether these gains will offset the inevitable job losses in other sectors.

The rapid evolution of technology presents a compelling case for actively supporting ongoing worker training. If we genuinely aspire to inclusivity and progress, AI provides an exceptional opportunity to demonstrate our commitment to that goal, wouldn’t you agree?
