Josep Munuera, radiologist: ‘Artificial intelligence tools don’t replace doctors; they empower them’

The head of Diagnostic Imaging at the Sant Pau Hospital in Barcelona, an expert in digital technologies applied to health, explains that AI will help medical professionals to be more accurate in their diagnoses

Josep Munuera, head of Diagnostic Imaging at the Sant Pau Hospital in Barcelona, in an MRI room. Kike Rincón
Jessica Mouzo

Artificial intelligence (AI) and big data are revolutionizing modern medicine at all levels of clinical practice. From prevention, with increasingly refined prediction models that assess the risk of developing a disease, to diagnosis and treatment: for instance, many health centers already use AI tools that help them detect lesions in the images of medical tests. Josep Munuera, head of the Diagnostic Imaging service at the Sant Pau Hospital in Barcelona and an expert in digital technologies applied to health, says that AI (“it is not a single tool; there are different types,” he points out) will help professionals be more accurate.

And no – he stresses, categorically – the explosion of AI in health will not replace the people in white coats, nor will it dehumanize care. Quite the contrary, predicts the radiologist: it will allow doctors to devote more time to their patients and help improve communication between them.

Question. How mature are AI tools in medicine?

Answer. In some areas they are already mature and being used, and in others they are still being developed. Take the example of medical imaging: where can we use artificial intelligence in a radiology department? In image acquisition it is already well advanced. Today there are tools with which, when an MRI is taken, AI algorithms can be loaded into the machine to speed up the acquisition of the image: while it used to take 20 minutes to do an examination, now you can cut that time to 10 minutes, because part of the image is generated by, or thanks to, artificial intelligence algorithms. This generative imaging allows much quicker scans.

Some centers already use image detection tools, which are trained to look for specific types of pathologies; when they find one, the algorithms identify it and mark it, so that when the specialist is going through the images, some marks appear on the screen indicating that the algorithm has found a pathology, and where. It even helps us to prioritize; sometimes, you get a message telling you to see a certain patient first or that you don’t have to rush so much with another, because they probably have nothing. Prioritization and identification tools are already becoming widespread.

Q. Is saving time the ultimate goal?

A. Not just that. Yes, it is true that an important part is improving the process and, as a consequence, the times will surely end up improving. But there is also the matter of diagnostic accuracy. What is known is that with the vast majority of computer vision algorithms, combining a human reader with the algorithm increases accuracy. As a result, we reduce diagnostic errors, both false negatives and false positives. These tools help us prioritize and be more precise.

Q. You mention a combination of the view of the doctor and that of the machine. Could there be biases? Could the machine’s evaluation, for example, modulate or condition yours?

A. Yes, there are discrepancies between the readings, but that is precisely one of the areas of work: how artificial intelligence influences the fact that a person can modify their reading. What is important to know is that the machines, by themselves, have a degree of accuracy in their diagnoses. And what also usually happens is that humans too have a degree of accuracy. And it may happen that if your level of accuracy is lower than that of the machine, you may have to learn from the machine. Or the other way around: your accuracy could be better, so you would train the machine so it improves its learning.

Q. There have always been mistakes, whether you are a doctor or a machine. But who is more prone to error?

A. Right now the machines have advantages: they don’t get tired, they make the same diagnosis at 10:00am or at 3:00am, they are impartial... but they also have more limitations: the algorithms have to be retrained over time. In the end, we have to be aware that it is the combination of the human and the software that has to maintain that accuracy; we have to determine the accuracy with which we want to work in healthcare and, based on that, decide on the best strategy.

Q. Doctors were used to having the last word and not being questioned much. Are professionals prepared to accept all the revolution that is going on around them?

A. Certainly, because it is not a matter of feeling questioned, but of being as precise as possible. The medicine of the 21st century is precision medicine. If you want to be accurate, you need to make the right decision at the right time. And sometimes one doesn’t have enough knowledge. In the end, these are tools that help us all.

Josep Munuera, head of Diagnostic Imaging at the Sant Pau Hospital in Barcelona, pictured in one of the corridors of the health center. Kike Rincón

Q. How far can AI go in medicine? What does the future look like?

A. In the end, we will see a lot of changes, but we will not see the AI itself. What we’ll have will be software tools that will help, in the administrative field, to find the appointment window that fits the patient best; we’ll find the right doctor, the one who will be able to come up with a patient’s therapeutic plan in the least number of clicks; or, in our case, we’ll be able to carry out tests that take one tenth of the time they took 20 years ago and which, as soon as the patient is done, can point out the areas that need attention.

Q. If there are already tools that read MRIs and can detect an anomaly, and this AI is going to improve, are the days of the radiologist – for example – numbered?

A. No. Quite the opposite. A person will always be necessary. The role of the radiologist goes beyond the simple reading and interpretation of a medical image, just as the role of the surgeon goes beyond the simple surgical act of cutting, suturing or removing a tumor.

Q. But can it somehow modulate medical practice? Can it result in a reduced need for professionals?

A. I think this is a big misconception. In fact, it will require at least as many people as there are now, because in the end, these AI tools don’t replace you; they empower you. So I could be looking at the same number of images, making the same number of decisions or doing biopsies. What I need is a tool that helps me be more precise in this decision-making, but in no case will there be fewer people.

Q. Geoffrey Hinton resigned recently from Google, where he was vice president of engineering. In an interview with EL PAÍS, he warned that we need to learn to control AI before it is too late, as it can become more intelligent than the human brain. What do you think?

A. I agree with the considerations regarding knowledge, ethics and distribution. This is still a technology that needs some rules. There probably hasn’t been time to define the ethical rules of how to use artificial intelligence. But once they are properly defined, the next step will be to start using them, because we cannot stop technological evolution either.

Q. Is the AI revolution going faster than your capacity to process it?

A. Yes, it is true that technology is advancing fast and its implementation is accelerating. Computer vision algorithms, the ones used for viewing an X-ray and finding a nodule, for example, began to be used as pilot tests in 2017 and 2018: five years have passed. Therefore, we will need, at most, five more years before they are used daily in as many centers as possible. Testing the technology takes a few years, but in the end it does get implemented.

Q. What are the biggest risks of AI?

A. The risks are those of any technology: misuse. That is, being used for something that is not its purpose or is not appropriate. Then there is the harmful use: if someone wants to use it or manipulate it so that it is used in their favor. In the end, it is still just another tool that will have to be controlled within the field of cybersecurity.

Q. Could AI end up depersonalizing or dehumanizing medicine?

A. These tools don’t separate us; they have to help us to have more time with the patient, to be able to talk... shortening the time of image acquisition allows you to spend more time with the patient, to accompany them, ask them how they are, see if they have any questions... all these aspects are humanization. Having this technological aid lets you have more time for the patient.

Q. Will we ever see a future in which, instead of a doctor, we go to a consultation and find a chatbot, an AI, answering the patient’s questions?

A. If society is already demanding that telephone assistance not be exclusively dominated by computers, why would we think that healthcare could be? What can happen in the future is that three entities, a patient, a doctor and a tool, will be interacting. We will use these tools to better communicate with each other.
