
How ChatGPT Cracked a Medical Mystery That Doctors Missed for Four Years
A 23-year-old woman spent years being misdiagnosed with anxiety and epilepsy. Then she typed her symptoms into an AI chatbot — and everything changed.
A Four-Year Struggle for Answers
For most people, a trip to the doctor brings clarity. For Phoebe Tesoriere, it brought confusion, frustration, and a growing sense that no one was truly listening. Over the course of four years, the 23-year-old from Cardiff cycled through a revolving door of diagnoses — anxiety, depression, epilepsy — none of which fully explained what she was experiencing.
At one point, medical staff reportedly warned her that if she continued presenting at the emergency department, she would be treated as a mental health patient. For a young woman with no prior history of anxiety, the experience was as isolating as it was frightening.
"I had to fight to be listened to," Phoebe said. "I was a really happy, bubbly person — I had no history of anxiety whatsoever."
A History of Health Complications
Phoebe's medical journey began long before her formal misdiagnoses. She was born without a hip socket and underwent surgery as an infant, leading her to initially attribute many of her physical difficulties to that early condition. As a child, she also struggled with balance and was assessed for dyspraxia, a developmental condition affecting physical coordination.
At 19, she collapsed and had a seizure at work. Doctors attributed the episode to anxiety, a label that was formally entered into her medical records despite no supporting history.
By 2022, her diagnosis had shifted to epilepsy, and she was prescribed medication accordingly. However, in late 2024, her condition deteriorated further. She was unable to keep her epilepsy medication down, triggering additional seizures. Her mobility declined significantly, and she was then misdiagnosed with Todd's paralysis — a temporary neurological condition sometimes experienced by epilepsy patients following a seizure.
A Coma, a Contradiction, and a Turning Point
In January 2025, Phoebe fell down a flight of stairs, resulting in a three-month hospital stay and a series of inconclusive tests. Then, in July 2025, a severe seizure left her in a coma for three days.
Upon regaining consciousness, she received yet another contradictory verdict: a doctor told her she did not have epilepsy after all — the symptoms, she was told, were rooted in anxiety.
It was at this breaking point that Phoebe decided to take matters into her own hands. She entered her full list of symptoms into ChatGPT, the AI-powered chatbot developed by OpenAI.
What the AI Found
The chatbot returned a list of potential conditions that could explain her symptoms. Among them was hereditary spastic paraplegia (HSP) — a rare, inherited neurological disorder that progressively affects movement and muscle control.
Phoebe was hesitant at first. "I went back and forth with my partner, questioning whether I should even go to the doctors with this," she recalled. "Surely it can't be that simple."
But she did bring the suggestion to her GP, who acknowledged it as a plausible explanation. Genetic testing was ordered — and it confirmed the AI's suggestion. After four years of misdiagnosis, Phoebe finally had a real answer.
Living With Hereditary Spastic Paraplegia
According to the NHS, the true prevalence of hereditary spastic paraplegia is difficult to determine, largely because it is so frequently misdiagnosed. The condition has no cure, but its symptoms can be managed through physiotherapy and supportive care.
The impact on Phoebe's life has been profound. She can no longer work as a special educational needs teacher — a role she loved — and now uses a wheelchair. However, she has not allowed the diagnosis to define her future. She is currently pursuing a master's degree in psychology, determined to build a new career that still allows her to support and uplift others.
A Word of Caution on AI in Healthcare
While Phoebe's story highlights a remarkable outcome, medical professionals urge caution when using AI tools for health-related research. GP Dr. Rebeccah Tomlinson emphasised that any information gathered through AI chatbots should be followed up with a qualified medical professional rather than acted upon independently.
A recent study from the University of Oxford reinforced this concern, finding that AI-generated healthcare advice varied significantly in quality, mixing accurate and inaccurate responses in ways that could make it difficult for patients to distinguish trustworthy guidance from misleading information.
Cardiff and Vale University Health Board responded to Phoebe's case with a brief statement: "We are sorry to hear about Phoebe's experience while in our care."
The Bigger Picture
Phoebe's case raises important questions about patient advocacy, diagnostic gaps within healthcare systems, and the evolving role of technology in medicine. While AI should never replace professional medical judgment, her story demonstrates that for patients who feel unheard, these tools can sometimes offer a vital starting point.
"It was a really lonely experience," Phoebe reflected. "I just needed someone — or something — to take me seriously."