VOLUME 28, ISSUE 4 • DECEMBER 2024
Medicine is “slow” science. It takes time to observe, verify, and incorporate new findings, often to the benefit but sometimes to the detriment of patients. We must therefore collaborate, learn, and absorb validated science from other fields to take the best care of our patients.
There are broadly two aspects to the use of AI: diagnostic accuracy and the consequences of error. The latter perhaps matters more in medicine than in other fields. There is a lot of excitement around AI. A quick search for “artificial intelligence” reveals that projects mentioning AI received over $1 billion in NIH funding in 2023, alongside a recent explosion of published articles on the topic. However, with potential and promise comes noise. Meta launched a large language model called Galactica, trained on 48 million examples ranging from encyclopedias to scientific articles and textbooks, and meant to write scientific code and annotate proteins, amongst other things. It lasted only three days online before being taken down when it was found to be “hallucinating” data.1 A similar gap between performance on training data and performance in real-world use was demonstrated in a pragmatic clinical trial at Stanford, in which AI-aided polyp detection did not improve colonoscopists’ performance.2 There have been other concerns with AI, including issues with replication, bias, safety, and the incorporation of fraudulent or outdated data.3, 4 A recent MDS review showed that only 20% of studies using AI on neuroimaging in Parkinson’s disease passed minimal quality criteria, and only 8% used external test sets.5 The WHO prescribes caution with the use of AI in health, and the FDA has called for “nimble” regulation of AI models.6, 7
The practice of Movement Disorders involves careful observation and history taking to put the presentation in context and arrive at a diagnosis. It is both an art and a science, and a diagnosis may change over time. Technology around wearables is constantly improving but remains an inadequate independent source of data. These qualities add uncertainty to the independent use of AI for diagnosis. Our patients share this discomfort: in a recent poll of 1,400 US adults, nearly 70% reported discomfort with the use of AI in their care.8 Complementary results from another study showed that while comprehensibility was rated similarly for a human physician and AI, empathy, reliability, and willingness to follow advice were substantially better for the human physician.9 In essence, we need more meticulous, caring physicians, not just better analytics of collected data.
As we wonder at the promise of ever-evolving AI, we must remember that it is a tool. We must investigate and validate it with absolute objectivity before deciding if, when, where, and how to deploy it. I agree that “boring AI” holds exciting promise for improving healthcare.10 It can lower administrative burden and make healthcare systems more sustainable.11 AI must serve our patients and push the boundaries of care for both patients and physicians. This is not gatekeeping. It is just good patient care.
References
- Heaven WD. Why Meta’s latest large language model survived only three days online. MIT Technology Review. 2022. https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/
- Ladabaum U, Shepard J, Weng Y, Desai M, Singer SJ, Mannalithara A. Computer-aided Detection of Polyps Does Not Improve Colonoscopist Performance in a Pragmatic Implementation Trial. Gastroenterology. 2023 Mar;164(3):481-3.e6.
- Evans H, Snead D. Why do errors arise in artificial intelligence diagnostic tools in histopathology and how can we minimize them? Histopathology. 2024 Jan;84(2):279-87.
- Haibe-Kains B, Adam GA, Hosny A, et al. Transparency and reproducibility in artificial intelligence. Nature. 2020 Oct;586(7829):E14-E16.
- Dzialas V, Doering E, Eich H, et al. Houston, We Have AI Problem! Quality Issues with Neuroimaging-Based Artificial Intelligence in Parkinson's Disease: A Systematic Review. Mov Disord. 2024 Sep 5.
- WHO. WHO calls for safe and ethical AI for health. 2023.
- Healthcare Dive. FDA calls for ‘nimble’ regulation of ChatGPT-like models to avoid being ‘swept up quickly’ by tech. 2023. https://www.healthcaredive.com/news/fda-regulation-chatgpt-like-models/649931/
- Salesforce. Bot Docs? Not Likely: 69% of US Adults Uncomfortable Being Diagnosed by AI. 2024. https://www.salesforce.com/uk/news/stories/ai-in-healthcare-research/
- Fanous A, Steffner K, Daneshjou R. Patient attitudes toward the AI doctor. Nat Med. 2024 Sep 23.
- Abbasi K. In praise of boring AI. BMJ. 2024;386:q1579.
- How to support the transition to AI-powered healthcare. Nat Med. 2024 Mar;30(3):609-10.