
Multi-modality medical image fusion by edge supervising and multi-scale attention features extraction

Document Details

Resource Type:
WOS Category:

Indexed in: SCIE

Affiliations: [1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Peoples R China [2] Sichuan Canc Hosp, Chengdu 610000, Peoples R China
Source:
ISSN:

Keywords: Medical image fusion; Deep learning; Supervised learning; Unsupervised learning

Abstract:
With the development of deep learning, multimodal medical image fusion (MMIF) has achieved both efficiency and real-time performance. However, most existing deep learning-based fusion methods focus primarily on the overall network architecture, often overlooking the intrinsic characteristics of the source images. From the perspective of SPECT and PET imaging and the continuity of biological information, the edge regions of these functional images should receive greater attention during fusion. Furthermore, the pseudocolor should be separated before fusion and reintroduced afterward. Because functional images such as SPECT and PET typically suffer from low clarity, directly fusing them without proper processing may obscure texture details in the resulting image. To address these challenges, we propose an end-to-end encoder-decoder network for multimodal medical image fusion, termed EFCNet. The encoder comprises three main components: a smooth edge extraction module (SEEM), a multi-scale attention module (MSAM), and the E-Fusion module. The decoder reconstructs the fused image from these features. Specifically, SEEM extracts and smooths the edge information of functional source images (SPECT and PET), thereby mitigating the issues mentioned above. MSAM captures both local details and global contextual features while adaptively emphasizing more informative channels. E-Fusion then performs effective fusion of the extracted local and global features. Notably, our model is trained on a single dataset to obtain the pretrained weights, yet it achieves impressive results when tested on other datasets, demonstrating the strong generalization capability of the proposed method. The implementation of our proposed method is available on GitHub at https://github.com/VCMHE/EFCNet.
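The abstract's pipeline (edge extraction and smoothing for the functional image, multi-scale features with channel attention, a fusion step, then a decoder) can be sketched as below. This is a minimal illustrative skeleton, not the authors' EFCNet: the module internals (Sobel gradients plus blurring for SEEM, squeeze-and-excitation attention for MSAM, concatenation plus a 1x1 convolution for E-Fusion) are assumptions chosen to match the textual description.

```python
# Illustrative sketch of the SEEM / MSAM / E-Fusion / decoder pipeline
# described in the abstract. All module internals are assumptions, not
# the published EFCNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEEM(nn.Module):
    """Smooth edge extraction: fixed Sobel gradients followed by a light blur."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # two fixed filters: horizontal and vertical gradient
        self.register_buffer("sobel", torch.stack([kx, kx.t()]).unsqueeze(1))

    def forward(self, x):                       # x: (B, 1, H, W) functional image
        g = F.conv2d(x, self.sobel, padding=1)  # gradient responses
        edges = g.pow(2).sum(1, keepdim=True).sqrt()
        return F.avg_pool2d(edges, 3, stride=1, padding=1)  # smoothed edge map

class MSAM(nn.Module):
    """Multi-scale convolutions plus squeeze-and-excitation channel attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5))
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(3 * out_ch, 3 * out_ch, 1),
            nn.Sigmoid())

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return feats * self.se(feats)           # reweight informative channels

class EFusion(nn.Module):
    """Fuse the two modality feature maps: concatenation + 1x1 convolution."""
    def __init__(self, ch):
        super().__init__()
        self.mix = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, fa, fb):
        return self.mix(torch.cat([fa, fb], dim=1))

class FusionNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.seem = SEEM()
        self.enc_struct = MSAM(1, ch)           # structural image branch (e.g. MRI)
        self.enc_func = MSAM(2, ch)             # functional image + its edge map
        self.fuse = EFusion(3 * ch)             # each MSAM emits 3*ch channels
        self.decoder = nn.Sequential(
            nn.Conv2d(3 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, struct, func):            # both (B, 1, H, W); pseudocolor
        func_in = torch.cat([func, self.seem(func)], dim=1)  # already separated
        fused = self.fuse(self.enc_struct(struct), self.enc_func(func_in))
        return self.decoder(fused)              # single-channel fused image

net = FusionNet()
mri = torch.rand(1, 1, 64, 64)
pet = torch.rand(1, 1, 64, 64)   # luminance channel after pseudocolor removal
out = net(mri, pet)
print(out.shape)
```

Feeding the functional image's smoothed edge map back into its encoder branch mirrors the abstract's point that edge regions of SPECT/PET images deserve extra attention; in the real method the pseudocolor would be reattached to the fused luminance output afterward.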

Language:
WOS:
CAS Journal Ranking:
Publication-year [2025] edition:
Major category | Tier 4, Computer Science
Subcategories | Tier 3, Engineering: Electrical & Electronic; Tier 4, Computer Science: Hardware; Tier 4, Computer Science: Theory & Methods
Latest [2025] edition:
Major category | Tier 4, Computer Science
Subcategories | Tier 3, Engineering: Electrical & Electronic; Tier 4, Computer Science: Hardware; Tier 4, Computer Science: Theory & Methods
JCR Quartile:
Publication-year [2024] edition:
Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE; Q2 COMPUTER SCIENCE, THEORY & METHODS; Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Latest [2024] edition:
Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE; Q2 COMPUTER SCIENCE, THEORY & METHODS; Q2 ENGINEERING, ELECTRICAL & ELECTRONIC

Impact Factor: Latest [2024 edition] | Latest 5-year average | Publication year [2024 edition] | Publication-year 5-year average | Year before publication [2024 edition]

First author:
First author affiliation: [1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Peoples R China
Corresponding author:
Recommended citation (GB/T 7714):
APA:
MLA:

Updated: 2025-12-01

Copyright © 2020 Sichuan Cancer Hospital. Technical support: Chongqing Juhe Technology Co., Ltd. Address: No. 55, Section 4, Renmin South Road, Chengdu.