How AI Became My Medical Tutor
“The patient is a 59-year-old male presenting with fatigue on exertion, pallor, and mild numbness and tingling in both feet. No family history of anemia or diabetes, but I still want to rule them out with a blood test. Does that sound reasonable?”
This is the sixth or seventh question I’ve asked in today’s session, but my tutor doesn’t mind the repetition.
“Yes, performing blood work is a reasonable approach to help rule out conditions like diabetes and anemia, as well as other potential causes of the patient’s symptoms.”
“Considering the progression of the tingling, I think it’s safe to rule out a lesion at this point. Is there anything I am missing?”
My tutor doesn’t mind that I am still unsure, despite the weeks we have spent on these kinds of exercises together.
“While a localized lesion may be less likely given the bilateral nature and progression of the tingling symptoms, it is still important not to rule it out prematurely. A comprehensive clinical evaluation should include a thorough neurological exam, blood work, and possibly neuroimaging studies, such as MRI or CT, to identify any structural abnormalities or lesions.”
For the past several weeks, I’ve been using my new tutor, Dr. ChatGPT, to help me develop and refine my clinical reasoning. As a pre-clinical medical student, I have a lot of ground to cover, and my tutor doesn’t mind whether I ask one question or thirty. My tutor is always available, infinitely patient, and never condescending. Like millions of others, I thought ChatGPT was a novel and fun technology, but I didn’t really know how best to use it. After seeing GPT-4 used in a clinicopathologic conference (CPC) last year, I sat down to see whether it could help me improve my reasoning in clinical settings. It isn’t a perfect tool, but it can propose a diagnosis and give clear, logical reasoning for why that diagnosis fits best. I knew I had to find a way to incorporate it into my own education.
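For classmates who like to tinker, the same back-and-forth can be scripted rather than typed into the chat window. Below is a minimal sketch, not my exact setup: it assumes the OpenAI Python SDK (pip install openai) with an OPENAI_API_KEY set in your environment, and the tutoring prompt wording is purely illustrative.

    # A minimal sketch of a scripted tutoring session. Assumes the OpenAI
    # Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment;
    # the prompt wording below is my own illustration, not an official workflow.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    # Ask the model to tutor rather than simply answer: critique the
    # reasoning and withhold the final diagnosis until the student commits.
    messages = [
        {
            "role": "system",
            "content": (
                "You are a Socratic clinical-reasoning tutor for a "
                "pre-clinical medical student. Critique the student's "
                "reasoning, note what they may be missing, and suggest "
                "follow-up questions. Do not reveal a final diagnosis "
                "until the student commits to one."
            ),
        },
        {
            "role": "user",
            "content": (
                "The patient is a 59-year-old male with fatigue on "
                "exertion, pallor, and mild numbness and tingling in both "
                "feet. I want to rule out anemia and diabetes with blood "
                "work. Does that sound reasonable?"
            ),
        },
    ]

    response = client.chat.completions.create(model="gpt-4", messages=messages)
    print(response.choices[0].message.content)

The advantage of pinning down the system prompt this way is that the tutor stays in question-my-reasoning mode for a whole session, rather than drifting back into simply handing me answers.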
Challenges and Ethical Considerations of Using AI in Medical Education
Of course, I had some trepidation. Many concerns have been raised about the phenomenon now called hallucination: the propensity of large language models (LLMs) like GPT-4 to confidently make up information. I experienced this firsthand when I asked ChatGPT to help me with a literature review. The bibliography looked good, complete with APA formatting, authors, and dates, but not all of the articles and journals were real. Others have argued that LLMs can’t replace human reasoning, yet research reported in Nature shows LLMs performing as well as or better than humans on many reasoning tasks.
There’s no question that I was learning, but the more I worked with my tutor, the more questions I had: Is it ethical to use AI to organize lecture materials? What about having AI predict test questions based on those materials? Even what I do with my clinical vignettes walks a fine line; it would be easy to simply feed the whole case to GPT-4 and ask for the diagnosis. Where does that cross into cheating? And will overreliance on AI weaken rather than strengthen my clinical reasoning?
Lessons Learned and the Future of AI in Medicine
I’m certainly not alone in trying to find ways to use AI in my medical education; many of my classmates are doing exactly the same thing. In many ways, AI has forced my fellow students and me to have important conversations about the purpose of medical education. No physician can reasonably be expected to hold even a small fraction of all medical knowledge. The very existence of products like UpToDate and Micromedex presupposes an accepted limit to what any one physician can know: we can’t actually know everything all the time or keep up with all the new science.
While we medical students will always need to rely on our own intelligence, we already see a need for extelligence, knowledge stored outside ourselves in resources like UpToDate, ready for the moment we face a situation in which we can apply it. How much will the reasoning abilities of AI figure into what is expected of a student? We want strong reasoning abilities of our own, but is using AI to augment those skills acceptable, or even advantageous? These are debates we are just beginning to have as we contemplate our futures in medicine, and right now they are happening without faculty.
Embracing AI to Enhance Medical Education and Patient Outcomes
I am not so bold as to suggest answers to these questions; I only point them out as part of the zeitgeist of modern medicine, debates that my fellow students and I will have to grapple with for our entire careers. We are grappling with them already, and sooner or later our faculty will need to as well. This technology is in its infancy now, but I will be part of the last generation of medical students who remember medicine before AI. It is vital that we don’t pine for the “good old days” but instead find the ways AI can improve patient outcomes and our practice of medicine. I want to be part of a generation that embraces AI, not as a shortcut to education but as a tool to augment it.
Right now, GPT-4 is my tutor: it points out my weaknesses, suggests questions I should consider, and helps me strengthen my clinical reasoning. And my story is not unique. I know a composition professor who has embraced ChatGPT and has her students compete against it to sharpen their rhetorical abilities. My 7-year-old son is using AI to learn math this summer, receiving feedback on his computational process instead of mere corrections of his answers.
Like any tool, its value depends on how we use it. My time with my machine tutor has helped me tremendously, and it is already paying off in simulated patient interviews and early clinical exposures: my knowledge and reasoning have improved dramatically over the past few weeks. The conversation that opened this piece came from a case in my renal and vitamins unit. The actual diagnosis was pernicious anemia, and GPT helped ensure I didn’t pigeonhole my reasoning too early in the process. It helped me broaden my differential beyond the unit I was studying and focus instead on the patient and their symptoms. Asking GPT all of my questions has helped me ask better questions of patients in the clinic and consider factors that I otherwise wouldn’t have.
Conclusion: AI in Medical Education – A Tool for Enhancement
Ironically, my tutor is very aware of its own abilities and limitations: “My responses should not be used as a substitute for professional medical advice, diagnosis, or treatment. For any health-related concerns, it’s important to consult with a qualified health care provider.”
AI is here, and it is going to change medicine and medical education. If we are involved in these conversations, we can ensure that the change is for the better.