
AI in medicine: ChatGPT finds the right diagnosis for a child suffering from chronic pain for three years.

ChatGPT helped a mother finally get her child diagnosed who had been suffering from chronic pain for three years.

Despite seeing no fewer than 17 different specialists over time, none had been able to identify the cause of the symptoms. So the mother turned to ChatGPT, an artificial intelligence-based chatbot, and described in detail all the results of the child’s MRI scans.

The artificial intelligence suggested a possible diagnosis, which was later confirmed by a real surgeon.

Symptoms and 17 consultations

The story follows a mother, Courtney, and her four-year-old child, Alex, during the Covid-19 lockdown in 2020.

During this time, Courtney says, Alex began to suffer from chronic pain and a range of vague symptoms. Three years of searching for the cause of the increasing pain and numerous visits to various specialists followed:

  • A dentist, consulted because the child was grinding his teeth, ‘ruled everything out’ and referred the parents to an orthodontist who specialises in airway obstruction. Airway obstruction can affect a child’s sleep and could have explained why he seemed so tired and moody.
  • The orthodontist discovered that Alex’s palate was too small for his mouth and teeth and that this was causing problems with night breathing. Placing an expander in the child’s palate seemed to improve the situation.
  • During a check-up, the paediatrician noted that the child had delayed growth and a left-right imbalance: Alex tended to move with his right foot and drag his left.
  • A neurologist, called in because Alex was suffering from severe headaches, diagnosed him with migraines.
  • An otolaryngologist examined the sleep disturbances to see if they were caused by sinus or airway problems.
  • A physiotherapist hypothesised a ‘Chiari malformation’, a congenital condition that causes abnormalities in the brain where the skull meets the spine.

Courtney also had Alex seen by other doctors: a new paediatrician, a paediatric internist, an adult internist and a musculoskeletal specialist, but none could identify the cause of the child’s symptoms.

“Each specialist was only looking at their own area of expertise,” says Courtney. “Nobody’s willing to solve for the greater problem. Nobody will even give you a clue about what the diagnosis could be.”

AI and the diagnosis issue

In total, they visited 17 different doctors over three years. But Alex still had no diagnosis that explained all his symptoms. Exhausted and frustrated, Courtney decided to seek answers from an unlikely source: ChatGPT.

And she got them!

In desperation, the mother signed up for ChatGPT and began entering Alex’s medical information, hoping to find a diagnosis.

“I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT,” she says. “I put the note in there about … how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”

The AI suggested that he might have tethered cord syndrome, which causes back and leg pain and skeletal changes.

Diagnosis confirmation

Filled with new hope at finally having a possible diagnosis, Courtney began researching, joining a Facebook group for families of children with the syndrome and reading other children’s stories that sounded similar to Alex’s.

She decided to make an appointment with a new neurosurgeon to discuss her suspicions. Looking at the MRI images, the doctor understood exactly what was wrong with Alex: the child had spina bifida occulta and his spinal cord was tethered at one point.

And the diagnosis was exactly what ChatGPT suggested.


Tethered cord syndrome occurs when the tissue in the spinal cord forms attachments that limit movement of the spinal cord, causing it to stretch abnormally, according to the American Association of Neurological Surgeons. The condition is closely associated with spina bifida, a birth defect where part of the spinal cord doesn’t develop fully and some of the spinal cord and nerves are exposed.

With tethered cord syndrome, “the spinal cord is stuck to something. It could be a tumor in the spinal canal. It could be a bump on a spike of bones. It could just be too much fat at the end of the spinal cord,” Dr. Holly Gilmer, a pediatric neurosurgeon at the Michigan Head Spine Institute, who treated Alex, tells TODAY.com.

After receiving the diagnosis, Alex underwent surgery to fix his tethered cord syndrome a few weeks ago and is still recovering.

ChatGPT in Medicine

ChatGPT is a type of artificial intelligence program that generates responses based on the input a person enters into it, but it doesn’t hold a conversation or understand questions in the way that many people might expect.

That’s because ChatGPT works by “predicting the next word” in a sentence or series of words based on existing text data on the internet, Andrew Beam, Ph.D., assistant professor of epidemiology at Harvard who studies machine learning models and medicine, tells TODAY.com. “Anytime you ask a question of ChatGPT, it’s recalling from memory things it has read before and trying to predict the piece of text.”

When using ChatGPT to make a diagnosis, a person might tell the program, “I have fever, chills and body aches,” and it fills in “influenza” as a possible diagnosis, Beam explains.

“It’s going to do its best to give you a piece of text that looks like a … passage that it’s read,” he adds.
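The “next word” behaviour Beam describes can be illustrated with a toy sketch. This is a deliberately simplified assumption-laden illustration, not how ChatGPT actually works: real large language models use neural networks trained on vast amounts of text, whereas this sketch just counts which word most often follows another in a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" (hypothetical example sentences,
# not real medical data).
corpus = [
    "fever chills aches suggest influenza",
    "fever chills aches suggest influenza",
    "fever cough fatigue suggest pneumonia",
]

# Count which word follows each word across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "influenza" follows "suggest" twice, "pneumonia" once, so the
# toy model completes the prompt with "influenza".
print(predict_next("suggest"))  # influenza
```

The sketch shows why such a system can surface a plausible diagnosis from pattern frequency alone, and also why it has no notion of whether that completion is true for a given patient.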

There are both free and paid versions of ChatGPT, and the latter works much better than the free version, Beam says. But both seem to work better than the average symptom checker or Google as a diagnostic tool. “It’s a super high-powered medical search engine,” Beam says.

It can be especially beneficial for patients with complicated conditions who are struggling to get a diagnosis, Beam says.

These patients are “groping for information,” he adds. “I do think ChatGPT can be a good partner in that diagnostic odyssey. It has read literally the entire internet. It may not have the same blind spots as the human physician has.”

Will AI replace real doctors?

But it’s not likely to replace a clinician’s expertise anytime soon.

For example, ChatGPT sometimes fabricates information when it can’t find an answer. Say you ask it for studies about influenza. The tool might respond with several titles that sound real, and the authors it lists may have even written about flu before — but the papers may not actually exist. This phenomenon is called “hallucination,” and “that gets really problematic when we start talking about medical applications because you don’t want it to just make things up,” Beam says.

Dr. Jesse M. Ehrenfeld, president of leading U.S. physicians’ group the American Medical Association, tells TODAY.com in a statement that the AMA “supports deployment of high-quality, clinically validated AI that is deployed in a responsible, ethical, and transparent manner with patient safety being the first and foremost concern. While AI products show tremendous promise in helping alleviate physician administrative burdens and may ultimately be successfully utilized in direct patient care, OpenAI’s ChatGPT and other generative AI products currently have known issues and are not error free.”

He adds that “the current limitations create potential risks for physicians and patients and should be utilized with appropriate caution at this time. AI-generated fabrications, errors, or inaccuracies can harm patients, and physicians need to be acutely aware of these risks and added liability before they rely on unregulated machine-learning algorithms and tools.”

“Just as we demand proof that new medicines and biologics are safe and effective, so must we insist on clinical evidence of the safety and efficacy of new AI-enabled healthcare applications,” Ehrenfeld concludes.
