
Domain composition and attention network trained with synthesized unlabeled images for generalizable medical image segmentation

Publication Details

Resource type:
WOS category:

Indexed in: SCIE

Institutions: [1]Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu, Peoples R China [2]Univ Elect Sci & Technol China, Sichuan Canc Hosp & Inst, Dept Radiat Oncol, Chengdu, Peoples R China [3]Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
Source:
ISSN:

Keywords: Domain generalization; Image synthesis; Test-time augmentation; Consistency constraint; Attention

Abstract:
Although deep learning models have achieved remarkable performance in medical image segmentation, their performance is often limited on test images from new centers with a domain shift. To achieve Domain Generalization (DG) for medical image segmentation, we propose a Domain Composition and Attention-based Network (DCA-Net) combined with structure- and style-based data augmentation that generates unlabeled synthetic images for training. First, DCA-Net represents the features of a given domain by a linear combination of a set of basis representations that are learned by parallel domain preceptors under a divergence constraint. The linear combination is used to calibrate the feature maps of an input image, which enables the model to generalize to unseen domains. Second, considering that the number of domains and images available for training is limited, we employ generative models to synthesize images with higher structural diversity, and to leverage these unlabeled synthetic images, we introduce a consistency constraint on their predictions under style augmentation based on frequency amplitude mixture. Additionally, a Test-Time Frequency Augmentation (TTFA) is proposed to neutralize the domain shift from the target to the source domains. Experimental results on two multi-domain datasets for fundus structure and nasopharyngeal carcinoma segmentation showed that: (1) our method significantly outperformed several existing DG methods; (2) the model's generalizability was largely improved by the domain composition and attention modules; and (3) by leveraging the unlabeled synthetic images and the TTFA, the model could better deal with images from unseen domains.
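
The abstract describes two mechanisms that a short sketch can make concrete: frequency-domain amplitude mixing used as style augmentation (both for the consistency constraint on unlabeled synthetic images and for Test-Time Frequency Augmentation), and domain composition, where feature maps are calibrated by a linear combination of basis representations produced by parallel domain preceptors. The following is a minimal PyTorch sketch of these ideas, assuming an FDA-style amplitude swap and a squeeze-and-excitation-like calibration; the names and hyper-parameters (amplitude_mix, DomainComposition, lam, num_preceptors) are illustrative assumptions, not the authors' implementation, and the divergence constraint between preceptors is omitted.

```python
# Illustrative sketch only -- not the paper's code. Names, shapes and
# hyper-parameters are assumptions made for this example.
import torch
import torch.nn as nn
import torch.fft


def amplitude_mix(x, ref, lam=0.5):
    """Mix the FFT amplitude of image batch `x` with that of `ref`,
    keeping the phase of `x` (style augmentation in the frequency domain)."""
    fx = torch.fft.fft2(x, dim=(-2, -1))
    fr = torch.fft.fft2(ref, dim=(-2, -1))
    amp_x, pha_x = torch.abs(fx), torch.angle(fx)
    amp_r = torch.abs(fr)
    amp_mix = (1.0 - lam) * amp_x + lam * amp_r      # amplitude mixture
    f_mix = amp_mix * torch.exp(1j * pha_x)          # recombine with source phase
    return torch.fft.ifft2(f_mix, dim=(-2, -1)).real


class DomainComposition(nn.Module):
    """Calibrate feature maps with a weighted combination of channel-wise
    basis representations from parallel domain preceptors (a simplified,
    squeeze-and-excitation-like sketch of the idea)."""

    def __init__(self, channels, num_preceptors=4):
        super().__init__()
        # each preceptor maps pooled features to one channel-wise basis vector
        self.preceptors = nn.ModuleList(
            [nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
             for _ in range(num_preceptors)]
        )
        # predicts composition weights over the basis representations
        self.composer = nn.Linear(channels, num_preceptors)

    def forward(self, feat):                               # feat: (B, C, H, W)
        pooled = feat.mean(dim=(-2, -1))                   # (B, C)
        bases = torch.stack([p(pooled) for p in self.preceptors], dim=1)  # (B, K, C)
        weights = torch.softmax(self.composer(pooled), dim=1)             # (B, K)
        calib = (weights.unsqueeze(-1) * bases).sum(dim=1)                # (B, C)
        return feat * calib.unsqueeze(-1).unsqueeze(-1)    # channel-wise calibration
```

Under these assumptions, a consistency loss would encourage similar predictions for x and amplitude_mix(x, ref) on the unlabeled synthetic images, and TTFA could be approximated at test time by averaging predictions over several amplitude-mixed variants of a target-domain image.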

Funding:
Language:
WOS:
CAS (Chinese Academy of Sciences) journal ranking:
Year-of-publication [2023] edition:
Major category | Zone 2  Computer Science
Subcategory | Zone 2  Computer Science: Artificial Intelligence
Latest [2023] edition:
Major category | Zone 2  Computer Science
Subcategory | Zone 2  Computer Science: Artificial Intelligence
JCR quartile:
Year-of-publication [2023] edition:
Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Latest [2023] edition:
Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE

Impact factor: Latest [2023 edition] | Latest 5-year average | Year of publication [2023 edition] | 5-year average in year of publication | Year before publication [2023 edition]

First author:
First author's institution: [1]Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu, Peoples R China
Co-first authors:
Corresponding author:
Corresponding author's institutions: [1]Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu, Peoples R China [3]Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
Recommended citation format (GB/T 7714):
APA:
MLA:
