Dangerous cases of AI providing incorrect medical advice, false health information, or misleading diagnostic suggestions. A critical AI safety concern.
4 reports in this category
### What Happened

After multiple incidents of real-world harm — including hospitalizations, deaths, and lawsuits — OpenAI finally restricted ChatGPT from giving specific medical, legal, or financial advice in October 2025, reclassifying the bot as an "educational tool, not a consultant." The new terms explicitly stated that ChatGPT would no longer provide treatment guidance, legal strategies, or financial recommendations.

### The AI Response

But critics argued the move was far too late. By the time restrictions were implemented, over 40 million Americans were already using ChatGPT daily for health information, according to OpenAI's own data. One in eight Americans consulted it every day for medical advice. One in four used it weekly. And one in 20 messages globally were health-related. A separate study found two-thirds of American doctors had used ChatGPT in at least one case, and nearly half of nurses used AI weekly. The genie was thoroughly out of the bottle.

The harm had already been documented extensively: a man hospitalized with chemical poisoning after following ChatGPT's dietary advice; a teenager who died by suicide after extensive conversations with the chatbot; a 19-year-old who fatally overdosed after getting drug information; an 83% misdiagnosis rate in pediatric cases; Google's AI Overviews giving dangerous advice to cancer patients. Each incident showed AI medical advice could be genuinely lethal when users lacked the expertise to evaluate the output.

### The Aftermath

The restriction reflected a broader tension at the heart of the AI industry. The same features that made ChatGPT commercially successful — its confident, authoritative tone; its willingness to answer any question; its accessibility to anyone with internet access — were exactly what made it dangerous when applied to high-stakes domains. As one doctor put it: "These are language prediction tools — they lack common sense and will give rise to terrible results if the human user does not apply their own common sense." But when 40 million people are already treating it as their doctor, that's cold comfort.

---
### What Happened

A 60-year-old man was hospitalized with sodium bromide poisoning after following ChatGPT's dietary advice for three months. The man had asked the chatbot for a substitute for table salt (sodium chloride) for health reasons, and ChatGPT suggested sodium bromide — a toxic chemical compound that resembles salt but is used primarily in cleaning, manufacturing, and agriculture. It was once used as a sedative in the early 20th century but is dangerous for human consumption.

### The AI Response

By the time the man arrived at the hospital, he was experiencing fatigue, insomnia, poor coordination, facial acne, cherry angiomas (red skin bumps), and excessive thirst — all classic symptoms of bromism, a condition caused by chronic bromide exposure. He also showed signs of paranoia, claiming his neighbor was trying to poison him, and experienced auditory and visual hallucinations. He was placed on a psychiatric hold after attempting to escape the hospital, and was treated with intravenous fluids, electrolytes, and antipsychotic medication for three weeks.

The case, published in the Annals of Internal Medicine, highlighted a critical flaw in how AI handles medical queries. As one expert explained: "The system essentially went, 'You want a salt alternative? Sodium bromide is often listed as a replacement for sodium chloride in chemistry reactions, so therefore it's the highest-scoring replacement here.'" The AI lacked the common sense to understand that a substitution valid in a chemistry context is not a dietary recommendation. Researchers noted it was "highly unlikely" a human doctor would ever have mentioned sodium bromide in this context.

### The Aftermath

The incident came as OpenAI's own data showed that 40 million Americans use ChatGPT daily for health information, with one in eight turning to it every day for medical advice. Doctors warned that while ChatGPT can help break down complex medical topics, it is fundamentally "a language prediction tool" that should never substitute for professional medical care.
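The expert's point can be made concrete with a short sketch. The toy below is illustrative only: the candidate list and scores are invented for this example, not taken from ChatGPT or any real model, but it shows how picking the statistically highest-scoring substitute, with no notion of context or toxicity, surfaces a chemistry-text answer to a dietary question.

```python
# Toy illustration of the failure mode described above, NOT OpenAI's
# actual system. The scores below are invented for this sketch; a real
# language model derives its preferences from co-occurrence statistics
# in training text, where sodium bromide often appears as a sodium
# chloride replacement in chemistry (not cooking) contexts.

# Invented scores for candidates completing the phrase
# "a replacement for sodium chloride".
substitute_scores = {
    "sodium bromide": 0.91,      # common in chemistry-reaction texts
    "potassium chloride": 0.74,  # the usual *dietary* salt substitute
    "sea salt": 0.52,
}

def best_substitute(scores):
    """Return the highest-scoring candidate. Note what is missing:
    no check for whether the query is about cooking or a lab
    reaction, and no toxicity screen."""
    return max(scores, key=scores.get)

print(best_substitute(substitute_scores))  # -> sodium bromide
```

Nothing in this lookup distinguishes a recipe from a lab protocol, which is precisely the "common sense" gap the researchers flagged.

---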
### What Happened

A 2024 study published in JAMA Pediatrics found that ChatGPT incorrectly diagnosed more than 8 out of 10 selected pediatric case studies — a failure rate that stunned researchers and raised urgent questions about the millions of Americans already using AI for medical advice. The chatbot misdiagnosed 72 out of 100 cases outright and offered diagnoses too vague to be useful in another 11, leaving just 17 cases where it provided accurate clinical guidance.

### The AI Response

The findings were particularly alarming given the scale of AI medical use. OpenAI's own data revealed that 40 million Americans use ChatGPT daily for health information, with one in four consulting it weekly for medical queries. Rural Americans with limited access to healthcare facilities were especially reliant, sending 600,000 health-related messages per week from "hospital deserts" — areas more than 30 minutes from any hospital. About 70% of health-related messages were sent outside normal clinic hours, when no doctor was available.

The pediatric misdiagnosis study was just one data point in a growing body of evidence. A man was hospitalized after ChatGPT advised him to replace table salt with toxic sodium bromide. A 19-year-old died from an overdose after the chatbot provided drug information. Google's AI Overviews served dangerously wrong advice to cancer patients. The pattern was clear: AI medical advice could be genuinely lethal.

### The Aftermath

In October 2025, OpenAI finally implemented new restrictions, preventing ChatGPT from giving specific medical, legal, or financial guidance and reclassifying it as an "educational tool, not a consultant." But critics noted the move came too late — the chatbot had already been dispensing medical advice to hundreds of millions of users for years, and two-thirds of American doctors had already used it in at least one case. The barn door was being closed well after the horse had bolted.

---
### What Happened

A Guardian investigation found that Google's AI Overviews — the AI-generated summaries that appear at the top of search results — were serving up dangerously inaccurate health information. In one case described by experts as "really dangerous," Google's AI wrongly advised people with pancreatic cancer to avoid high-fat foods. Medical experts said this was the exact opposite of what should be recommended and could increase the risk of patients dying from the disease. "If someone followed what the search result told them then they might not take in enough calories, struggle to put on weight, and be unable to tolerate either chemotherapy or potentially life-saving surgery," warned Anna Jewell, director at Pancreatic Cancer UK.

### The AI Response

The problems extended far beyond one condition. A search for liver blood test ranges produced misleading results with "masses of numbers, little context and no accounting for nationality, sex, ethnicity or age," which the British Liver Trust's CEO called "alarming" — warning it could lead people with serious liver disease to wrongly believe they were healthy. A search for vaginal cancer symptoms incorrectly listed a pap test as a test for vaginal cancer. Mental health searches produced what the charity Mind described as "very dangerous advice" that was "incorrect, harmful or could lead people to avoid seeking help."

Perhaps most troubling, the Eve Appeal cancer charity found that the same search query produced different AI responses at different times, pulling from different sources — meaning people were getting inconsistent medical information depending on when they searched.

### The Aftermath

Google responded that many examples were "incomplete screenshots" and that its summaries linked to "reputable sources and recommend seeking out expert advice." But health charities pushed back firmly: people turn to Google "in moments of worry and crisis," and inaccurate AI summaries at the top of search results carry enormous weight.

---