The AI software achieved passing scores on an exam that usually requires years of medical training.
OpenAI’s ChatGPT can score at or near the approximately 60 percent passing threshold for the United States Medical Licensing Exam (USMLE), with responses that are coherent, internally consistent, and frequently insightful. This is according to a study by Tiffany Kung, Victor Tseng, and colleagues at AnsibleHealth, published on February 9, 2023, in the open-access journal PLOS Digital Health.
ChatGPT is a new artificial intelligence (AI) system, known as a large language model (LLM), designed to generate human-like writing by predicting upcoming word sequences. Unlike most chatbots, ChatGPT cannot search the internet. Instead, it generates text using word relationships predicted by its internal processes.
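To make "predicting upcoming word sequences" concrete, here is a deliberately tiny, self-contained Python sketch of next-word generation from a hand-made probability table. The words and probabilities below are invented purely for illustration and have nothing to do with ChatGPT's actual model, which learns its probabilities over an enormous vocabulary from vast amounts of text.

import random

# Toy conditional probabilities P(next word | current word).
# All values are invented for illustration only.
next_word_probs = {
    "the": {"patient": 0.5, "doctor": 0.3, "exam": 0.2},
    "patient": {"presented": 0.6, "reported": 0.4},
    "presented": {"with": 1.0},
    "with": {"fever": 0.5, "chest pain": 0.5},
}

def generate(start, max_words=5):
    # Repeatedly sample the next word from the distribution conditioned
    # on the most recent word, stopping when no continuation is known.
    words = [start]
    for _ in range(max_words):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break
        nxt = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the patient presented with fever"

A real large language model does the same thing at a far larger scale, conditioning each prediction on the entire preceding text rather than on a single previous word.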
Kung and colleagues tested ChatGPT’s performance on the USMLE, a highly standardized and regulated series of three exams (Steps 1, 2CK, and 3) required for medical licensure in the United States. Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, ranging from biochemistry to diagnostic reasoning to bioethics.
After screening to remove image-based questions, the authors tested the software on 350 of the 376 public questions available from the June 2022 USMLE release.
After indeterminate responses were removed, ChatGPT scored between 52.4% and 75.0% across the three USMLE exams. The passing threshold each year is approximately 60%. ChatGPT also demonstrated 94.6% concordance across all its responses and produced at least one significant insight (something that was new, non-obvious, and clinically valid) for 88.9% of its responses. Notably, ChatGPT exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8% on an older dataset of USMLE-style questions.
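For readers curious how excluding indeterminate answers changes a score, the arithmetic is straightforward: correct answers are divided only by the questions that received a determinate answer. A brief illustration follows; the counts below are invented for demonstration and are not the study's data.

# Hypothetical counts, for illustration only -- not the study's data.
correct, incorrect, indeterminate = 40, 25, 10

accuracy_all = correct / (correct + incorrect + indeterminate)
accuracy_determinate = correct / (correct + incorrect)

print(f"Accuracy over all questions:      {accuracy_all:.1%}")          # 53.3%
print(f"Accuracy excluding indeterminate: {accuracy_determinate:.1%}")  # 61.5%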
While the relatively small input size restricted the depth and range of analyses, the authors note their findings provide a glimpse of ChatGPT’s potential to enhance medical education, and eventually, clinical practice. For example, they add, clinicians at AnsibleHealth already use ChatGPT to rewrite jargon-heavy reports for easier patient comprehension.
“Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation,” say the authors.
Author Dr. Tiffany Kung added that ChatGPT’s role in this research went beyond being the study subject: “ChatGPT contributed substantially to the writing of [our] manuscript… We interacted with ChatGPT much like a colleague, asking it to synthesize, simplify, and offer counterpoints to drafts in progress… All of the co-authors valued ChatGPT’s input.”
Reference: “Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models” by Tiffany H. Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo and Victor Tseng, 9 February 2023, PLOS Digital Health. DOI: 10.1371/journal.pdig.0000198
As a test, we asked ChatGPT to summarize this article. Here are the unedited results:
A study by Tiffany Kung, Victor Tseng, and colleagues at AnsibleHealth showed that OpenAI’s ChatGPT can perform at the passing level of 60% on the United States Medical Licensing Exam (USMLE). ChatGPT is an artificial intelligence language model that generates text by predicting word sequences, but it does not have the ability to search the internet. The researchers tested ChatGPT’s performance on 350 of 376 public questions from the June 2022 USMLE release and found that it scored between 52.4% and 75.0% across the three exams. The study showed that ChatGPT produced at least one significant insight for 88.9% of its responses and outperformed PubMedGPT, a model trained solely on biomedical literature. The authors believe that ChatGPT has the potential to improve medical education and eventually clinical practice.
Source: SciTechDaily