OP-MED: Impact of AI on Medical Education

June 2025 | Vol. 69, No. 2
Written by Katherine Galluzzi, DO

In April, an interdepartmental meeting at PCOM was scheduled via Zoom. When the administrator couldn’t attend, the group opted to have ChatGPT generate the transcript and minutes. The result was a 36-page transcript that accurately attributed comments based on screen names (except for comments from the group room, which all defaulted to me). As my colleagues joked, I tend to do most of the talking anyway.

But it was the minutes that amazed us: clear, concise, and well-organized. Tasks were assigned, next steps outlined, and the documentation was better than anything we’d seen before. ChatGPT made us appear more efficient than we felt.

AI already permeates our lives, from Siri curating playlists to frustrating chatbot interactions. These ChatGPT-generated minutes weren’t my first experience with generative AI. Last year, the American Osteopathic Board of Family Physicians (AOBFP) explored using ChatGPT to write board questions. Security concerns prevented us from letting ChatGPT “learn” from our content, but we prompted it to create clinical questions. The output was impressively fast and seemingly accurate, but the citations were problematic: some were nonexistent or incorrect. In AI terms, the model was “hallucinating.”
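
For readers curious what that experiment looked like in practice, the sketch below shows roughly how a board-style question can be requested through OpenAI’s Python client. The model name, prompt wording, and question format here are my own illustrative assumptions, not AOBFP’s actual workflow, and any citations the model returns still require independent verification.

```python
# Illustrative sketch only: prompting an LLM for a board-style item.
# Model choice and prompt wording are assumptions, not AOBFP's workflow.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write one single-best-answer multiple-choice question on the "
    "outpatient management of type 2 diabetes, with five options, "
    "the correct answer marked, and a brief rationale that cites "
    "its sources."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)
# A human editor must verify every citation in the draft; as our
# experiment showed, some may be incorrect or simply not exist.
```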

Writing in The New Yorker, Jia Tolentino noted that although ChatGPT has 400 million weekly users, many people distrust it. She warns of a world in which people increasingly rely on AI at the cost of critical thinking, a concern reflected in troubling literacy rates and in headlines about even elite students struggling to read. With smartphones omnipresent, many learners now consume educational content online rather than from textbooks.

NYT columnist David Brooks links declining analytical skills to reduced reading. Books expose readers to diverse perspectives and enhance reasoning—something AI cannot replace.

So, is AI helping or hurting?

To answer that, we must understand the landscape. Clinical Informatics (CI), often likened to healthcare’s central nervous system, has expanded rapidly. It promises relief from administrative tasks such as note transcription, scheduling, and patient education. ChatGPT defines AI as the ability of machines to perform human-like tasks: learning, problem-solving, and decision-making. Machine learning adapts from data; deep learning runs that learning through layered artificial neural networks loosely modeled on the brain; Large Language Models (LLMs) learn the statistical patterns of language. The catch is that if an LLM is trained on flawed data, its output becomes unreliable: garbage in, garbage out.
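
That principle is easy to demonstrate on a toy problem. The sketch below is a deliberately simple illustration, not a clinical model: the dataset is synthetic and the 30 percent corruption rate is an arbitrary assumption. It trains the same classifier twice, once on clean labels and once on partially corrupted ones, and scores both on the same held-out test set.

```python
# "Garbage in, garbage out" on a synthetic dataset: the same model,
# trained on clean vs. partially corrupted labels, then evaluated
# on identical held-out data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Flip 30% of the training labels to simulate flawed training data.
rng = np.random.default_rng(0)
flipped = rng.random(len(y_train)) < 0.30
y_bad = np.where(flipped, 1 - y_train, y_train)

for name, labels in [("clean labels", y_train), ("30% corrupted", y_bad)]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```

On most random seeds the corrupted run scores worse; the model itself is unchanged, only its training data.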

Authenticity in AI output is essential, especially in patient care and medical education. Clinical Decision Support is one of AI’s most promising applications, but it demands accuracy. The “hallucinations” we observed during AOBFP’s test-writing experiment highlight ongoing concerns. AI must be monitored and refined over time to ensure it truly augments clinical decision-making rather than supplanting it.
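
One concrete form that monitoring could take is automated citation screening. The sketch below is my own assumption about how such a guardrail might work, not a tool described in this article: it queries NCBI’s public PubMed E-utilities for an AI-supplied article title. Zero hits does not prove fabrication, since real titles are often paraphrased, but it flags the citation for human review.

```python
# Rough guardrail sketch: check whether an AI-supplied citation's
# title matches any PubMed record before trusting it.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hits(title: str) -> int:
    """Count PubMed records whose title matches the quoted phrase."""
    params = {
        "db": "pubmed",
        "term": f'"{title}"[Title]',  # quoted phrase, title field only
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=10)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical title pulled from an AI-generated draft.
cited_title = "Effect of Example Therapy on Hypothetical Outcomes"
if pubmed_hits(cited_title) == 0:
    print("No PubMed match; flag this citation for human review.")
```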

In October 2023, the National Academy of Medicine (NAM) convened experts to evaluate GenAI in healthcare. A recent NEJM update assessed progress since then, highlighting how these technologies evolve over their “life cycle.” While some implementations have been successful, the evolving nature of GenAI and CI raises the question of whether these tools already possess a “life” of their own.

According to NAM, near-term GenAI applications include patient education and data synthesis—like messaging, documentation, and chart summarization. Mid-range goals involve precision medicine and genome analysis. Longer-term applications include virtual assistants, disease surveillance, and, finally, medical education.

It makes sense that medical education lags. The AAMC is now reassessing how students are trained and evaluated in the AI era. As GenAI becomes part of the care team, clinicians may shift from knowledge retrieval to information management.

The “House of Medicine” is constantly undergoing renovation and remodeling—new knowledge, systems, and now, AI. From big screens to smartphones to generative AI, technology keeps reshaping practice. It remains our task to guide its integration into clinical practice thoughtfully and ethically, and to prepare future clinicians to use it wisely and responsibly.

References

  • Maddow TM, et al. Generative AI in Medicine – Evaluating Progress and Challenges. NEJM. April 10, 2025.

  • Mattias E, Varon J. AI in Critical Care Medicine in 2023: A Global Perspective. Crit Care Shock. 2025;28(1):31-34.

  • Sampath S. AI in Healthcare: An Internist’s Perspective. Higher Ed for the Future. 2025;12(1):65-75.

  • Tolentino J. My Brain Finally Broke. The New Yorker. May 3, 2025.