Med schools are racing to AI-proof their curricula, but many are stumbling
When ChatGPT achieved passing scores on U.S. medical licensing exam questions in 2023, educators realized that a profound shift in healthcare was underway. For medical school curricula, it marked the beginning of a fundamental rewrite.
“Maybe once every few decades a true revolution occurs in the way we teach medical students and what we expect them to be able to do when they become doctors. This is one of those times,” reflected Bernard Chang, dean for medical education at Harvard Medical School (HMS). He likened this moment to the early days of the internet, just before it became a ubiquitous, indispensable tool.
Past technological revolutions took years to appear in classrooms, but genAI is already embedded in everyday clinical tools, raising urgent questions for medical schools: How should universities incorporate AI into their curricula? What opportunities and challenges does it present? And where can prospective students go if they want to specialize in AI-driven healthcare?
AI: medical students’ newest study buddy
A growing body of research argues that medical education must prepare physicians for an AI-enabled healthcare system. In a recent Frontiers in Education piece, researchers note that AI tools can enhance teaching methodologies and personalize learning. By analyzing coursework and student progress, AI systems could provide tailored feedback, freeing up educators to focus on higher-order teaching. The same review points out that advanced simulation tools can give students a safer, hands-on experience making clinical decisions.
Harvard is already experimenting with these ideas. First-year students on its Health Sciences and Technology track now take a one-month course on AI in healthcare that challenges them to critically evaluate AI’s limitations in diagnosis and decision-making.
Curriculum conundrum: Proceed with caution
The Frontiers article also warns of thorny institutional challenges. Universities must build ethical frameworks to govern AI in exams, research, and clinical training. Because these tools often rely on sensitive patient data, policies must address areas such as data privacy and academic integrity. Infrastructure poses another hurdle: digital simulations and AI-enhanced learning platforms require major investments that could strain budgets.
Curriculum designers face a balancing act around genAI. Integrating AI tools shouldn’t crowd out core competencies like bedside manner and critical reasoning, and program managers need to restructure courses so that technology complements human interaction instead of replacing it.
Educators will need training to use AI responsibly. The Frontiers review argues that teaching staff must develop new methodologies and learn how to balance AI with traditional instruction. Tools like virtual tutors and real-time feedback systems could personalize lessons, but their benefits depend on teachers who can integrate them thoughtfully. Improper use could encourage students to over-rely on AI, undermining critical thinking.
Students have the most to gain, and the most to lose, from AI adoption. On the plus side, personalized feedback and adaptive learning can improve comprehension. AI tools can also speed up tedious tasks like literature reviews, giving students more time for higher-level analysis. Yet over-dependence on AI may erode problem-solving skills and clinical judgment. Academic integrity is another concern: educators need clear policies for AI-assisted assignments and exams.
Graduating (and succeeding) in the age of medical AI
Interest in AI-focused healthcare programs has exploded, and Harvard’s new Artificial Intelligence in Medicine (AIM) PhD track is no exception. The program trains computationally oriented students to harness large biomedical data sets and develop AI tools that can improve patient care. “We didn’t know how much demand there would be, but we ended up with more than 400 applications for the seven spots we’re offering,” said Professor Isaac Kohane, chair of the Department of Biomedical Informatics in the Blavatnik Institute at HMS.
The AIM PhD track requires interdisciplinary coursework spanning statistics, computer science, bioinformatics, and clinical medicine, and students shadow clinicians to ground their research in practical healthcare challenges.
Harvard isn’t alone. The Association of American Medical Colleges (AAMC) notes that medical schools are shifting from worrying about AI to teaching it. Programs like Stanford’s Biodesign Innovation Fellowship and Mount Sinai’s integration of ChatGPT Edu across its medical school aim to cultivate physicians who can both practice medicine and build AI tools. Several universities offer master’s degrees in health informatics or digital medicine that include coursework on machine learning and data science. These programs reflect a broader recognition that tomorrow’s clinicians will need both medical and computational fluency.
Curricula that see genAI for what it is
Back at Harvard, the school’s commitment to AI goes beyond teaching students how to use LLM-powered tools. Students have been able to apply for Dean’s Innovation Awards, which provide up to $100,000 for projects that integrate AI into education or clinical practice. One project trains LLMs to act as standardized patients and instructors, allowing students to practice clinical interactions and receive immediate feedback. Another builds an AI-based grading tool that summarizes students’ strengths and weaknesses, enabling instructors to personalize teaching plans. These experiments show how AI can both expand access to practice opportunities and help educators refine their courses.
Despite all this promise around AI, Harvard’s administrators are aware of the technology’s current limitations. Professor Richard Schwartzstein, chair of the Learning Environment Steering Committee, emphasizes that students still need to learn how to think like doctors. While AI can help with data retrieval and pattern recognition, he notes that “AI isn’t good at problem-solving, which is one of the toughest parts of medicine.” The school teaches students to double-check AI results and to view AI-powered tools as companions rather than authorities.
“GenAI is often viewed as taking the humanity out of communication, but I actually see it as being a mechanism to reincorporate a human dimension to clinical practice by taking the burden of many administrative tasks off of doctors,” said Taralyn Tan, Harvard’s assistant dean for educational scholarship and innovation within the Office for Graduate Education. “It’s hard to predict how far this will go, but tomorrow’s most successful physicians and researchers will be the ones who can harness genAI for innovation and strategic planning. The people who come up with solutions will be the ones who are using these tools.”