A more humane education in the era of artificial intelligence

The opportunities offered by AI are evident, but before we act we must think and debate, both about the aspects we do not understand well and about the matters we don't yet know we don't know

People prepare a seminar on the use of virtual reality headsets for future teachers at the Center for Teacher Training and School Research at Leipzig University in Germany. (dpa/picture alliance via Getty Images)

Should we just say goodbye to the exclusivity of human thought? Generative artificial intelligence calls into question the idea that creativity is the exclusive domain of Homo sapiens. Does this represent a new era of human-machine creation, or is it a threat to human originality? And what role should education play?

Although scientific evidence on the impact of artificial intelligence on education is still insufficient, there are clear examples of how this technology can facilitate administrative tasks and offer complementary resources to expand or enrich learning.

This technology is evolving rapidly: it has already moved past its infancy of listening, seeing, speaking, and drawing to reading and writing, programming, analyzing complex spreadsheets, compiling reports, speaking countless languages, and performing many other functions that emerge from the technology sector at remarkable speed. Its adoption has taken place at unprecedented scale and pace. In recent studies, young people enthusiastically describe artificial intelligence as an "external brain."

But every disruption requires readjustments. Governments are adopting new frameworks of regulation, guidance, and protection at different speeds. Educational institutions are publishing directives and guides to advise teachers and students. This task is as important as it is complex: offering guidance on a technology that we do not fully understand, and that is constantly changing, is no easy feat.

Although the opportunities that these technologies offer are evident, it will be important to think and debate (before acting), both about the aspects we do not understand well and about the matters we don't even know yet that we don't know. For instance: What are the implications of ubiquitously adopting machines that think for us? What are the side effects of automating cognition, and how will this impact the development of new generations? Can we do without teaching knowledge and skills that are easily automatable? What happens to data protection and privacy when these machines are programmed to learn and never forget (or unlearn)?

Taking into account the mineral extraction and the carbon footprint that these technologies generate, can we afford to embrace generative AI if we still know little about its impact on the environment? And what lessons can we take from previous technological disruptions to avoid widening the enormous gaps between those of us who have access to digital tools and training and those who do not?

To answer these and other questions, we could interrogate these misnamed "intelligent" agents (they lack emotional understanding, self-awareness, and intuition). However, I suggest that, this time, we take the opportunity to think for ourselves about how to act with wisdom and foresight, and to reflect on four critical vectors.

First, the need for an infrastructure of technology, connectivity, and data that is better distributed across different parts of the planet (a look at the places that have no internet access would be a good reference point).

Second, governance that is up to the task. It is not just about publishing a framework document, important as that is. The necessary guidance, protection, support, coordination, and safeguards must also be provided. The institutions that exist today will probably have to be examined (or reinvented), as they were most likely created to operate in a very different context from the current one.

Third, protection from the risks that come with this technology. More research is essential, as is the ability to monitor and watch for known risks (as well as those we are yet to discover). We must treat the automation of misinformation, manipulation, bias, plagiarism, and privacy violations not as a new informational pandemic, but as an educational agenda to address. This agenda must be advanced both through regulation and through the creation of new jobs and profiles that can face these challenges.

And fourth, the generation of capabilities. Technologies evolve quickly, but they soon go out of style. People, on the other hand, have a surprising capacity for adaptation. Technology that 12 months ago seemed like magic is today a simple tool, and we will probably soon stop seeing it as disruptive. This, however, calls for the development of new skills, both in education and in citizenship: working out what it means to be literate in this context, what adjustments must be made to curricula, and how to adapt the ways knowledge is taught and applied. How do we put this technology at the service of teachers, and not the other way around?

Embracing the disruption posed by generative AI without hesitation or constraints can be as harmful as ignoring or even prohibiting its use. If we have learned anything in these past months of artificial intelligence's expansion, it is that openness and caution must go hand in hand. Even if we travel in autonomous vehicles, we cannot navigate into the future without looking in the rearview mirror.

Cristóbal Cobo is an education and technology specialist at the World Bank.
