
Advancing medical education in cervical cancer control with large language models for multiple-choice question generation

Record Details

Resource type: PubMed record
Affiliations: [1]School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China. [2]Tencent Sustainable Social Value Inclusive Health Lab, Tencent, Beijing, China. [3]Department of Gynecologic Oncology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, Liaoning Province, China. [4]Department of Diagnosis and Treatment for Cervical Diseases, Chengdu Women's and Children's Central Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan Province, China. [5]Department of Gynecology, Shenzhen Maternity and Child Healthcare Hospital, Southern Medical University, Shenzhen, Guangdong Province, China. [6]Department of Gynecology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, China. [7]Wuxi Maternity and Child Health Care Hospital, Wuxi School of Medicine, Jiangnan University, Wuxi, Jiangsu Province, China.

Keywords: large language models; multiple-choice question generation; cervical cancer; medical education

Abstract:
Objective: To explore the feasibility of using large language models (LLMs) to generate multiple-choice questions (MCQs) for cervical cancer control education, and to compare them with MCQs created by clinicians.

Methods: GPT-4o and Baichuan4 each generated 40 MCQs with iteratively refined prompts, and clinicians wrote 40 MCQs for comparison. The resulting 120 MCQs were evaluated by 12 experts across five dimensions (correctness, clarity and specificity, cognitive level, clinical relevance, explainability) on a 5-point Likert scale. Difficulty and discriminatory power were then tested with practitioners, who were also asked to identify the source of each MCQ.

Results: LLM-generated MCQs were similar to clinician-generated ones in most dimensions; however, clinician-generated MCQs scored higher on cognitive level (4.00±1.08) than those from GPT-4o (3.68±1.07) and Baichuan4 (3.70±1.13). Testing with 312 practitioners revealed no significant differences in difficulty or discriminatory power among clinicians (59.51±24.50, 0.38±0.14), GPT-4o (61.89±25.36, 0.30±0.19), and Baichuan4 (59.79±26.25, 0.33±0.15). Recognition rates for LLM-generated MCQs ranged from 32% to 50%, with experts outperforming general practitioners in identifying the question setters.

Conclusions: With engineered prompts, LLMs can generate MCQs comparable to clinician-generated ones, although clinicians performed better on cognitive level. LLM-assisted MCQ generation could improve efficiency, but it requires rigorous validation to ensure educational quality.
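The difficulty and discrimination figures above (e.g. 59.51±24.50 and 0.38±0.14 for clinician-written items) are consistent with classical item-analysis statistics, where difficulty is the percentage of examinees answering correctly and discrimination compares high- and low-scoring groups. The abstract does not state the exact formulas used, so the following is only a minimal sketch assuming binary-scored responses and the conventional upper/lower 27% grouping; the function names and the synthetic data are illustrative, not from the study.

import numpy as np

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    """Per-item difficulty (P-value): proportion of examinees answering
    correctly. `responses` is a binary (n_examinees, n_items) matrix,
    1 = correct. Multiply by 100 to match the percentages reported above."""
    return responses.mean(axis=0)

def discrimination_index(responses: np.ndarray, frac: float = 0.27) -> np.ndarray:
    """Classical upper-lower discrimination index: the difference in
    correct-answer rates between the top and bottom `frac` of examinees,
    ranked by their total score across all items."""
    order = np.argsort(responses.sum(axis=1))
    k = max(1, int(round(len(order) * frac)))
    lower = responses[order[:k]]    # lowest-scoring group
    upper = responses[order[-k:]]   # highest-scoring group
    return upper.mean(axis=0) - lower.mean(axis=0)

# Synthetic placeholder data sized like the study (312 practitioners,
# 120 MCQs) -- random answers, NOT the actual study responses.
rng = np.random.default_rng(0)
answers = (rng.random((312, 120)) < 0.6).astype(int)

print(100 * item_difficulty(answers)[:3])   # difficulty, percent correct
print(discrimination_index(answers)[:3])    # discrimination, range -1..1

Under these conventions, an item answered correctly by everyone has difficulty 100 and discrimination near 0, while values around 0.3 or higher (as reported for all three question sources) are generally taken to indicate acceptable discrimination.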

CAS (Chinese Academy of Sciences) journal ranking:
Publication-year [2025] edition:
Major category | Tier 3: Education
Subcategories | Tier 2: Health Care Sciences & Services; Tier 3: Education, Scientific Disciplines
Latest [2025] edition:
Major category | Tier 3: Education
Subcategories | Tier 2: Health Care Sciences & Services; Tier 3: Education, Scientific Disciplines
First author affiliation: [1]School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China.
