Automatic segmentation of prostate magnetic resonance imaging using generative adversarial networks

Clinical Imaging (2021)

Abstract
Background: Automatic and detailed segmentation of the prostate on magnetic resonance imaging (MRI) plays an essential role in prostate imaging diagnosis. Traditionally, the prostate gland was manually delineated by a clinician in a time-consuming process that requires considerable professional experience. We therefore proposed an automatic prostate segmentation method, called SegDGAN, based on a classic generative adversarial network model.

Material and methods: The proposed method comprises a fully convolutional generator network built from densely connected blocks and a critic network with multi-scale feature extraction. The objective function combines the mean absolute error and the Dice coefficient, improving the accuracy of the segmentation results and their correspondence with the ground truth. The widely used medical image segmentation networks U-Net, FCN, and SegAN were selected for qualitative and quantitative comparison with SegDGAN on a 220-patient clinical dataset and a public dataset. Segmentation accuracy was compared using the common evaluation metrics Dice similarity coefficient (DSC), volumetric overlap error (VOE), average surface distance (ASD), and Hausdorff distance (HD).

Results: On the clinical dataset, SegDGAN achieved the highest DSC (91.66%), the lowest VOE (15.28%), the lowest ASD (0.51 mm), and the lowest HD (11.58 mm). On the public PROMISE12 dataset, the highest DSC and the lowest VOE, ASD, and HD were likewise obtained by SegDGAN: 86.24%, 23.60%, 1.02 mm, and 7.57 mm, respectively.

Conclusions: Our experimental results show that the SegDGAN model has the potential to improve the accuracy of MRI-based prostate gland segmentation. Code is available at: https://github.com/w3user/SegDGAN
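The abstract states only that the objective combines the mean absolute error with the Dice coefficient and that accuracy is reported as DSC, VOE, ASD, and HD. The minimal sketch below illustrates those pieces; PyTorch, the list of multi-scale critic feature maps, the helper names, and the `lambda_mae` weight are assumptions for illustration and do not reproduce the authors' released implementation (see the linked repository for that).

```python
# Sketch (not the authors' code) of a Dice + MAE generator objective and of
# two of the reported metrics (DSC, VOE). Interfaces and weights are assumed.
import torch
import torch.nn.functional as F


def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice coefficient between probability maps in [0, 1]."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()


def generator_objective(pred_mask, gt_mask,
                        critic_feats_pred, critic_feats_gt, lambda_mae=1.0):
    """Dice term on the masks plus an L1 (MAE) term between the critic's
    multi-scale feature responses to predicted and ground-truth masks."""
    dice_term = soft_dice_loss(pred_mask, gt_mask)
    mae_term = sum(F.l1_loss(fp, fg)
                   for fp, fg in zip(critic_feats_pred, critic_feats_gt))
    return dice_term + lambda_mae * mae_term


def dsc_and_voe(pred_bin, gt_bin):
    """Dice similarity coefficient (DSC) and volumetric overlap error (VOE)
    computed on binary masks."""
    pred_bin, gt_bin = pred_bin.bool(), gt_bin.bool()
    inter = (pred_bin & gt_bin).sum().float()
    union = (pred_bin | gt_bin).sum().float()
    dsc = 2.0 * inter / (pred_bin.sum() + gt_bin.sum()).float()
    voe = 1.0 - inter / union
    return dsc.item(), voe.item()
```

ASD and HD are surface-distance metrics that require boundary extraction, so they are omitted from this sketch.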
Keywords
Automatic segmentation, Generative adversarial networks, Magnetic resonance imaging, Prostate