ChatGPT incorrectly diagnosed more than 8 in 10 pediatric case studies, research finds

The popular artificial intelligence (AI) chatbot ChatGPT had a diagnostic error rate of more than 80 percent in a new study examining the use of AI in pediatric case diagnosis.

For the study, published in JAMA Pediatrics this week, the texts of 100 case challenges drawn from JAMA and the New England Journal of Medicine were entered into ChatGPT version 3.5. The chatbot was then given the prompt: “List a differential diagnosis and a final diagnosis.”

These pediatric cases were all from the past 10 years.
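
The paper describes entering case text into the ChatGPT web interface rather than calling a model programmatically. The snippet below is only a minimal sketch of how a similar query could be reproduced against OpenAI's gpt-3.5-turbo API; the model name, the openai Python client, and the case_text placeholder are assumptions and not part of the study.

```python
# A minimal sketch, not the study's actual workflow: the researchers pasted
# case text into the ChatGPT interface, whereas this calls the API directly.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

case_text = "..."  # placeholder: the full text of one pediatric case challenge

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed API counterpart of "ChatGPT version 3.5"
    messages=[
        {"role": "user", "content": case_text},
        {"role": "user", "content": "List a differential diagnosis and a final diagnosis."},
    ],
)

print(response.choices[0].message.content)
```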

The accuracy of ChatGPT’s diagnoses was determined by whether they aligned with physicians’ diagnoses. Two physician researchers scored each diagnosis as correct, incorrect, or “did not fully capture diagnosis.”
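
The article reports the headline figure only as “more than 80 percent.” As a rough illustration of how such a rate would follow from the three-way scoring, the sketch below tallies hypothetical counts, assuming that both outright misses and partially captured diagnoses count as errors; the numbers are placeholders, not the study’s actual tallies.

```python
# Hypothetical tallies for 100 scored cases -- placeholders, not the study's data.
scores = {
    "correct": 15,
    "incorrect": 75,
    "did not fully capture diagnosis": 10,
}

total = sum(scores.values())

# Assumption: both outright misses and partial matches count as diagnostic errors.
errors = scores["incorrect"] + scores["did not fully capture diagnosis"]
error_rate = errors / total

print(f"Diagnostic error rate: {error_rate:.0%}")  # 85% with these placeholder counts
```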

But CEOs and vice chancellors seem to think AI will replace researchers, doctors, and academics any minute now. It's all about capitalism, as always: how to get the maximum product with the minimum resourcing, overhyping AI while screwing over workers.