
Complementary information mutual learning for multimodality medical image segmentation

Document Details

Affiliations: [1] School of Computer Science and Technology, East China Normal University, Shanghai 200062, China. [2] School of Data Science, The Chinese University of Hong Kong, Shenzhen; Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518172, China. [3] Huashan Hospital, Fudan University, Shanghai 200040, China. [4] School of Software Engineering, Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai 200092, China.

Abstract:
Radiologists must use medical images of multiple modalities for tumor segmentation and diagnosis because of the limitations of medical imaging technology and the diversity of tumor signals, which has driven the development of multimodal learning in medical image segmentation. However, redundancy among modalities creates challenges for existing subtraction-based joint learning methods, such as misjudging the importance of modalities, ignoring modality-specific information, and increasing cognitive load. These issues ultimately decrease segmentation accuracy and increase the risk of overfitting. This paper presents the complementary information mutual learning (CIML) framework, which mathematically models and addresses the negative impact of inter-modal redundant information. CIML adopts the idea of addition and removes inter-modal redundancy through inductive-bias-driven task decomposition and message-passing-based redundancy filtering. CIML first decomposes the multimodal segmentation task into multiple subtasks based on expert prior knowledge, minimizing the information dependence between modalities. It then introduces a scheme in which each modality extracts information from the other modalities additively through message passing. To ensure that the extracted information is non-redundant, redundancy filtering is cast as complementary information learning, inspired by the variational information bottleneck; this learning procedure can be solved efficiently by variational inference and cross-modal spatial attention. Numerical results on the verification task and standard benchmarks indicate that CIML efficiently removes redundant information between modalities and outperforms state-of-the-art methods in validation accuracy and segmentation quality.
Notably, message-passing-based redundancy filtering allows neural-network visualization techniques to expose the knowledge relationships among modalities, which aids interpretability. Copyright © 2024 Elsevier Ltd. All rights reserved.
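The cross-modal spatial attention mentioned in the abstract can be sketched as below. This is a minimal, illustrative simplification, not the paper's implementation: the function name, the feature shapes, and the softmax-over-sender-locations formulation are all assumptions for the sketch.

```python
import numpy as np

def cross_modal_spatial_attention(query_feat, key_feat, value_feat):
    """Let one modality additively extract information from another.

    query_feat: (C, H, W) features of the receiving modality.
    key_feat, value_feat: (C, H, W) features of the sending modality.
    Returns a (C, H, W) "message" passed to the receiving modality.
    """
    C, H, W = query_feat.shape
    q = query_feat.reshape(C, -1)   # (C, HW)
    k = key_feat.reshape(C, -1)     # (C, HW)
    v = value_feat.reshape(C, -1)   # (C, HW)

    # Affinity between every receiver location and every sender location.
    scores = q.T @ k / np.sqrt(C)                     # (HW, HW)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)          # softmax over sender locations

    # Each receiver location aggregates sender features by attention weight.
    msg = (attn @ v.T).T                              # (C, HW)
    return msg.reshape(C, H, W)

# Toy example: one MRI modality attends to another.
rng = np.random.default_rng(0)
t1 = rng.standard_normal((8, 4, 4))
t2 = rng.standard_normal((8, 4, 4))
message = cross_modal_spatial_attention(t1, t2, t2)
print(message.shape)
```

In the additive scheme described above, such a message would be combined with the receiving modality's own features, with the redundancy-filtering objective encouraging the message to carry only complementary (non-redundant) information.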

Chinese Academy of Sciences (CAS) journal ranking (latest, 2023 edition):
Major category | Tier 1 | Computer Science
Subcategories | Tier 2 | Computer Science: Artificial Intelligence; Tier 2 | Neuroscience
First author's affiliation: [1] School of Computer Science and Technology, East China Normal University, Shanghai 200062, China.

Last updated: 2025-01-01

Copyright © 2020 Sichuan Cancer Hospital. Technical support: Chongqing Juhe Technology Co., Ltd. Address: 55 Renmin South Road Section 4, Chengdu.