Main Session
Sep 29
PQA 03 - Central Nervous System, Professional Development/Medical Education

2653 - User Experience with Artificial Intelligence-Generated Research Education Podcast Series

08:00am - 09:00am PT
Hall F
Screen: 30
POSTER

Presenter(s)

John Peterson, MD - H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL

J. Peterson1, L. Zhang2, B. Cao2, A. Gan2, and S. Hoffe1; 1H. Lee Moffitt Cancer Center and Research Institute, Department of Radiation Oncology, Tampa, FL, 2University of South Florida Morsani College of Medicine, Tampa, FL

Purpose/Objective(s): With the growing role of artificial intelligence (AI), novel generative AI (genAI) tools offer ways to enhance education. However, their efficacy remains largely untested. In this study, we created the "Research Education Podcast Series" (REPS), a ten-episode AI-generated podcast series designed to teach research skills, and gathered feedback from medical students on its efficacy.

Materials/Methods: This was a cross-sectional quality improvement study deemed IRB-exempt. Ten podcast episodes were created in Google's NotebookLM on the following topics: time management, authorship rules, conducting a literature search, citing sources, IRB proposals, chart review, survey writing, biostatistics, submitting to a conference, and submitting to a journal. Episodes were generated by importing publicly available sources (e.g., articles, guides, and educational videos) into NotebookLM and averaged 10:14 (min:sec) in length (standard deviation, 2:48). Each episode was reviewed by the authors to ensure quality and then uploaded to a private account on an online audio streaming platform; a link to the account was shared with medical students. An anonymous feedback survey created in Google Forms was distributed to the medical student reviewers. Survey results were analyzed in R (version 4.4.1).
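As a rough illustration only, the descriptive analysis described above might resemble the R sketch below; the file name and column names (episode, length_rating, overall_rating) are hypothetical, since the actual survey fields and analysis code are not reported in this abstract.

```r
# Minimal sketch of a descriptive summary of the survey responses,
# assuming a hypothetical Google Forms export with columns
# episode, length_rating, and overall_rating.
library(dplyr)

reviews <- read.csv("reps_survey_responses.csv")  # hypothetical export

# Reviews per episode: median and interquartile range
reviews_per_episode <- count(reviews, episode, name = "n_reviews")
median(reviews_per_episode$n_reviews)
quantile(reviews_per_episode$n_reviews, probs = c(0.25, 0.75))

# Counts and percentages for the episode-length rating
reviews %>%
  count(length_rating) %>%
  mutate(pct = round(100 * n / sum(n), 1))

# Overall experience rating (1-5): median and IQR
median(reviews$overall_rating)
quantile(reviews$overall_rating, probs = c(0.25, 0.75))
```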

Results: A total of 70 medical student reviews of the REPS podcast episodes were submitted; 65 (93%) came from students with prior research experience, 39 (56%) from female students, and 68 (97.1%) from first- and second-year students. Each episode received at least 2 reviews (median 3, IQR [2, 6.25]). The three most reviewed episodes were "Intro to Chart Review" with 31 (41%), "Intro to Biostats" with 14 (20%), and "Time Management" with 7 (10%). Episode length was rated "About Right" by 53 (76%), "Too Long" by 16 (23%), and "Too Short" by 1 (1.4%). Of the reviews, 66 (94.3%) found the content easy to access, 57 (81.4%) found it relevant, and 68 (97.1%) felt it was presented clearly. In 38 (54.3%) reviews, respondents felt comfortable using AI to teach the episode content, while 22 (31%) were undecided and 10 (14.3%) were uncomfortable. The median rating for the overall experience with the REPS episodes was 4 (IQR [3, 4]) out of 5; 30 (42.9%) reviews were "Likely" or "Very Likely" to recommend REPS to others, 27 (39%) were unsure, and 13 (18.4%) were "Unlikely" or "Very Unlikely" to recommend it.

Conclusion: REPS was well received, with the majority of medical student reviews finding the content easy to access, relevant, and clearly presented. Additional study may help identify barriers to adoption and guide improvements in the quality of AI-generated educational material.