
Automatic segmentation of organs-at-risk from head-and-neck CT using separable convolutional neural network with hard-region-weighted loss

Record Details

Indexed in: SCIE, EI

Affiliations: [a]School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; [b]SenseTime Research, Shenzhen, China; [c]Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China; [d]SenseTime Research, Shanghai, China

Keywords: Convolutional neural network; Intensity transform; Medical image segmentation; Uncertainty

Abstract:
Accurate segmentation of Organs-at-Risk (OAR) from Head and Neck (HAN) Computed Tomography (CT) images with uncertainty information is critical for effective planning of radiation therapy for Nasopharyngeal Carcinoma (NPC) treatment. Despite the state-of-the-art performance achieved by Convolutional Neural Networks (CNNs) for the segmentation task, existing methods do not provide uncertainty estimation of the segmentation results for treatment planning, and their accuracy is still limited by the low contrast of soft tissues in CT, the highly imbalanced sizes of OARs, and large inter-slice spacing. To address these problems, we propose a novel framework for accurate OAR segmentation with reliable uncertainty estimation. First, we propose a Segmental Linear Function (SLF) to transform the intensity of CT images, making multiple organs more distinguishable than existing simple window width/level-based methods. Second, we introduce a novel 2.5D network (named 3D-SepNet) specially designed for clinical CT scans with anisotropic spacing. Third, we propose a novel hardness-aware loss function that pays attention to hard voxels for accurate segmentation. We also use an ensemble of models trained with different loss functions and intensity transforms to obtain robust results, which yields segmentation uncertainty without extra effort. Our method won third place in the HAN OAR segmentation task of the StructSeg 2019 challenge, achieving a weighted average Dice of 80.52% and a 95% Hausdorff Distance of 3.043 mm.
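The SLF described above is a piecewise-linear intensity mapping. As a minimal sketch (not the authors' implementation; the HU breakpoints and target intensities below are purely illustrative), it can be written with `numpy.interp`, which interpolates linearly between chosen breakpoints and clamps values outside the range:

```python
import numpy as np

def segmental_linear_transform(ct_hu, breakpoints, targets):
    """Piecewise-linear intensity transform: map each HU breakpoint to a
    target intensity and interpolate linearly in between. Values outside
    the breakpoint range are clamped to the endpoint targets."""
    return np.interp(ct_hu, breakpoints, targets)

# Hypothetical breakpoints: devote most of the output range to the
# soft-tissue window while compressing air and bone.
hu_breaks = np.array([-1000.0, 0.0, 40.0, 80.0, 1500.0])
out_vals  = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
image = segmental_linear_transform(np.array([-1000.0, 40.0, 2000.0]),
                                   hu_breaks, out_vals)
# image -> [0.0, 0.5, 1.0]
```

Compared with a single window width/level, the uneven slopes let one transform keep several organ-relevant HU ranges distinguishable at once.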
Experimental results show that 1) our SLF intensity transform helps to improve the accuracy of OAR segmentation from CT images; 2) with only 1/3 of the parameters of 3D UNet, our 3D-SepNet obtains better segmentation results for most OARs; 3) the proposed hard-voxel weighting strategy used for training effectively improves segmentation accuracy; 4) the segmentation uncertainty obtained by our method correlates strongly with mis-segmentations, which has the potential to support more informed decisions in clinical practice. Our code is available at https://github.com/HiLab-git/SepNet. © 2021 Elsevier B.V.
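The abstract notes that the model ensemble yields segmentation uncertainty "without extra effort". A common way to realize this (a generic sketch under that assumption, not the paper's exact formulation) is to average the ensemble's softmax outputs and take the voxel-wise entropy of the mean probability as the uncertainty map:

```python
import numpy as np

def ensemble_uncertainty(prob_maps):
    """Voxel-wise segmentation and uncertainty from an ensemble.

    prob_maps: array of shape (M, C, ...) -- M models, C classes,
    trailing spatial dimensions. Returns (segmentation, entropy):
    the argmax of the mean class probability, and the entropy of the
    mean probability at each voxel (high entropy = models disagree).
    """
    mean_p = prob_maps.mean(axis=0)                       # (C, ...)
    seg = mean_p.argmax(axis=0)
    entropy = -(mean_p * np.log(mean_p + 1e-8)).sum(axis=0)
    return seg, entropy

# Toy example: 3 models, 2 classes, 2 voxels.
probs = np.array([
    [[0.9, 0.5], [0.1, 0.5]],
    [[0.8, 0.4], [0.2, 0.6]],
    [[0.9, 0.6], [0.1, 0.4]],
])
seg, unc = ensemble_uncertainty(probs)
# Voxel 0: models agree -> low entropy; voxel 1: they disagree -> high entropy.
```

Voxels with high entropy are exactly those where the differently trained models disagree, which is why such maps tend to correlate with mis-segmented regions.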

Grant numbers: 81771921, 61901084

CAS Journal Ranking:
2021 edition (year of publication):
Major category: Zone 2, Computer Science
Subcategory: Zone 2, Computer Science, Artificial Intelligence
Latest [2023] edition:
Major category: Zone 2, Computer Science
Subcategory: Zone 2, Computer Science, Artificial Intelligence
JCR Quartile:
2021 edition (year of publication): Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Latest [2023] edition: Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE


First author affiliation: [a]School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China

