What’s the best way to talk about health with chatbots?
Miriam González, a Spanish engineer, has a rare breast tumor. Her experience illustrates the complexity of using AI effectively, since these tools still tend to struggle with basic medical questions.


In 2021, Miriam González, a 35-year-old from Murcia, Spain, went to the doctor because she was bleeding from her breast. She was told to relax: everything was normal. But in 2024, she was diagnosed with breast cancer. And, shortly afterward, she discovered it was metastatic, at stage four.
“At first, I thought that the diagnosis was a death sentence… that I had only days or weeks left,” González explains, in an exchange of messages with EL PAÍS. But this wasn’t the case; it turned out that she had some leeway: “I started hearing about chronicity and quality of life. And I saw that the landscape is different today. That mental transition — going from ‘I’m going to die now’ to ‘I’m going to live with this’ — was tough. I needed to understand the situation I was in,” she explains.
To do so, she turned to Perplexity, an artificial intelligence (AI) search engine. That’s when “my engineering side” emerged, González notes, “to break down the problem.”
Miriam’s case is unique. Her tumor is neuroendocrine: “It’s such a rare subtype that standard clinical guidelines simply don’t cover it,” she explains. AI helped her understand it, in order to “organize that complexity and turn an abstract diagnosis into concrete decisions,” she adds.
Millions of people already use AI as a jargon translator, medication consultant… or even as a doctor. Distinguishing between these uses matters, warns Mark Succi, director of Healthcare Innovation at Mass General Brigham, a network of hospitals in Boston, and an associate professor at Harvard. “AI seems most useful in the later, more focused stages of diagnosis,” he points out, “narrowing the field toward an answer once the case is already structured. [But it’s] less useful in generating an initial diagnostic framework that acknowledges the uncertainty.”
A study published a couple of weeks ago — which analyzed five of the most popular models, such as Gemini and ChatGPT — showed that half of the health information provided by these AI assistants lacks scientific rigor… a level of inaccuracy that puts patient safety at risk.
However, a new survey reveals that one in four Americans uses chatbots for health-related questions. The reasons they give are “wanting answers quickly” or “wanting additional information.” There are also people who want to do their own research before or after seeing a doctor. But there’s a sizable group that uses AI assistants instead of consulting doctors, especially people with low incomes (in the United States, healthcare isn’t public). The survey reveals that 32% of users with incomes below $24,000 a year turned to AI because they couldn’t afford a doctor’s visit.
González’s case is different. She took a more personal approach to using AI and pursued it alongside her doctors: “I’ve been lucky enough to find a team that truly includes me. They listen to me, read the evidence I provide and question things with me — [they’re] not against me,” she says. The engineer asserts that, without AI, she probably would have sensed the rarity of her tumor, but she wouldn’t have been able to access all “the data, the trials, the case series, or the technical language [required to draft] a proposal that oncologists could take seriously,” she explains.
To achieve this, she turned to an artificial intelligence specialist whom she already knew. Javi López is also from the city of Murcia and the co-founder of Magnific AI. “There came a point when I needed someone who could handle more advanced tools and take what I was discovering to another level. That’s where Javi came in,” she recalls. González believed that López would be able to give her research an extra boost. They both shared their story on X, which went viral.
🔴 I NEED YOUR ATTENTION
I’ve spent a week helping Miriam with her metastatic cancer case and I want to share the methodology I’ve been using, because it is absolutely replicable.
I think that, with luck, it can be USEFUL TO OTHER PEOPLE with cancer (or with any other… pic.twitter.com/DXSWJQ05UT
— Javi López ⛩️ (@javilop) April 8, 2026
Printed-out copies of text from ChatGPT
Doctors have a bit of a different take from engineers. Oriol Mirallas is a medical oncologist at the Phase 1 Experimental Therapies Unit of the MD Anderson Cancer Center in Houston, part of the University of Texas System. He understands that the use of AI among patients is inevitable, but he also emphasizes how delicate it is: “We’re seeing more and more people coming in with printed-out copies of [text from] ChatGPT or clinicaltrials.gov (a reference database for clinical trials). Here, in the U.S., it’s even more common. It’s reasonable for patients to seek help and AI can certainly provide it… but with the assistance of an expert. If it helps patients understand the pathology and diagnosis, that’s fantastic. But finding feasible and optimal treatments in a field that changes daily is complicated,” he cautions.
Ultimately, the gap between the two worlds — engineering and medicine — isn’t so wide. Professionals in both fields see AI as an inevitable advance that will be used more and more, but they also agree that a human should always have the final say. The disagreement is over how much weight each side should carry. “It’s exciting that we have more tools to empower and educate patients,” says Arya Rao, a researcher at Harvard University. “I’m optimistic about AI’s potential to personalize patient education… but clinicians are ultimately responsible for clinical AI. Instead of discouraging patients from using these tools, clinicians should initiate the conversation: ask them what they’ve searched for, what the AI has told them and what questions they have,” she details.
AI has its own method
Part of what makes AI in medicine so delicate is the sophistication it can already achieve today. Javi López’s approach to González’s case was remarkably refined. First, he used the most advanced systems: ChatGPT Pro+ Extended and Claude Opus 4.6 MAX. “These models — in their most powerful versions — cost around [$200] a month,” he says. Second, he converted González’s entire medical history into a single text document, so that all the information was in one place.
Then, he created a prompt (also generated by AI) consisting of almost 2,000 words, telling the AI that it was a “multidisciplinary tumor committee composed of the world’s leading specialists.” Once he had the response from one model, he passed it on to the other to look for flaws: “This kind of ‘adversarial model’ has always worked. It’s the same as with humans: two parallel research teams that share their discoveries are usually more productive than just one,” he adds.
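A rough idea of this cross-review loop can be sketched with the standard OpenAI and Anthropic Python SDKs. To be clear, this is not López’s actual setup: the file name, prompt wording and model identifiers below are placeholder assumptions, since the consumer subscriptions he names are not what the public APIs expose.

```python
# A minimal sketch of the two-model cross-review loop described above.
# Assumptions: model IDs, file name, and prompt wording are illustrative
# placeholders, not the configuration López used. Requires OPENAI_API_KEY
# and ANTHROPIC_API_KEY to be set in the environment.
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()
anthropic_client = Anthropic()

# Step 1: the patient's full history, consolidated beforehand into one text file.
with open("medical_history.txt", encoding="utf-8") as f:
    history = f.read()

# Step 2: a long role prompt framing the model as a tumor board (placeholder wording).
tumor_board_prompt = (
    "You are a multidisciplinary tumor committee composed of leading specialists. "
    "Review the following case and propose evidence-based options, explaining your reasoning.\n\n"
    + history
)

# Step 3: ask the first model for an assessment.
first_opinion = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model ID
    messages=[{"role": "user", "content": tumor_board_prompt}],
).choices[0].message.content

# Step 4: hand that answer to a second model and ask it to look for flaws.
critique = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model ID
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Act as an adversarial reviewer. Identify errors, gaps, and "
            "unsupported claims in this assessment of the case below.\n\n"
            "Case:\n" + history + "\n\nAssessment:\n" + first_opinion
        ),
    }],
).content[0].text

print(critique)
```

The same exchange can then be run in the opposite direction, feeding the critique back to the first model, which is the “two parallel research teams” dynamic López describes.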
Would this system work for other types of illnesses, or if doctors used it on their own? For López, it’s obvious that it would: “In the near future, I hope that everyone’s medical history won’t just be digitized, but also ‘digested’ so that it can be processed by AI. [This would allow] any doctor to consult your entire [medical] history and have years of results at their fingertips.”
It’s not an easy path, but it’s already being explored by major Silicon Valley companies. Back in January, OpenAI launched ChatGPT Health, where users can upload their medical records. But today, there are still differences depending on who’s in charge of the AI: “I’m aware that not everyone can do what I can,” González admits. “Having the time to research, knowing how to read scientific literature — even with help — [or] building an international network of contacts while undergoing treatment…”
“That’s why,” she continues, “I think it’s important to talk about [my case] out loud: not to serve as a model, but as an argument in favor of making these tools and this type of support available to everyone.”
He was diagnosed with rare bone cancer.
— Niklas Anzinger 📍 Infinita (@NiklasAnzinger) April 11, 2026
He exhausted the standard of care: surgery, radiation, chemotherapy.
There were no viable trials for his case. No approved treatments. No doctor willing to promise any potential for hope.
That’s where most journeys end.
Not his. pic.twitter.com/Q63lTJ1KDI
On social media, González’s case has been compared to that of Sid Sijbrandij, co-founder of the software collaboration platform GitLab. He was diagnosed with osteosarcoma and found no clinical trials available for his case. He then used AI to analyze 25 terabytes of data from his tumor. After identifying the overexpression of a protein, he traveled to Germany to receive a therapy targeting that marker. Today, his cancer is undetectable.
“The logic is the same [for me],” González says. “When the guidelines don’t apply to your case, AI can help you find the path that does. But it’s important to be honest: Sid had access to technology and resources that most patients — myself included — don’t have. If there’s one thing I advocate for, it’s that this way of navigating the disease shouldn’t depend on what you can afford,” she adds.
These are unique cases. González’s example is closer to the norm, but it’s still special. And Mark Succi notes that, while these cases cannot serve as models, they do offer a clue that everything has changed… including in the field of medicine.
“Doctors should treat this as a permanent part of modern healthcare and respond without falling into the trap of dismissing the new tools,” he affirms. “The best response is to explain in which cases [these AI assistants] can be useful and in which they cannot. These systems may sound reliable even when their reasoning is weak, especially in complex cases. That’s why doctors should help patients use AI results as a starting point, not as a diagnostic conclusion.”