Learning 3D non-rigid deformation based on an unsupervised deep learning for PET/CT image registration.

Proceedings of SPIE (2019)

Abstract
This study proposes a novel method to learn a three-dimensional (3D) non-rigid deformation for automatic image registration between positron emission tomography (PET) and computed tomography (CT) scans obtained from the same patient. The proposed scheme comprises two modules: (1) a low-resolution displacement vector field (LR-DVF) estimator, which employs a 3D deep convolutional network (ConvNet) to directly estimate the voxel-wise displacement (a 3D vector field) between the PET/CT images; and (2) a 3D spatial transformer and re-sampler, which warps the PET images to match the anatomical structures in the CT images using the estimated 3D vector field. The parameters of the ConvNet are learned from a number of PET/CT image pairs through an unsupervised learning method. The normalized cross correlation (NCC) between the PET/CT images is employed as the similarity metric to guide an end-to-end learning process, with a regularization term that constrains the 3D deformations to be smooth. A dataset containing 170 PET/CT scans is used in the experiments with 10-fold cross-validation, and 22,338 3D patches are sampled from the dataset. In each fold, 3D patches from 153 patients (90%) are used to train the parameters, while the whole-body volumes of the remaining 17 patients (10%) are used to test the image registration performance. The experimental results demonstrate that the image registration accuracy (the mean NCC) increases from 0.402 (before registration) to 0.567 on PET/CT scans using the proposed scheme. We also compare the performance of our scheme with previous work (DIRNet), and the advantages of our scheme are confirmed by promising results.
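For illustration only, below is a minimal PyTorch-style sketch of the unsupervised objective described in the abstract: a small 3D ConvNet predicts a displacement vector field, a spatial transformer (grid resampler) warps the PET volume toward the CT volume, and training minimizes the negative NCC plus a smoothness penalty on the field. This is not the authors' implementation; the module and parameter names (LRDVFEstimator, smoothness_weight, the toy network depth) are assumptions, and the displacements are assumed to be expressed in normalized [-1, 1] grid coordinates.

```python
# Sketch only: unsupervised 3D registration loss (negative NCC + smoothness),
# assuming PyTorch; names and network depth are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LRDVFEstimator(nn.Module):
    """Toy 3D ConvNet mapping a concatenated PET/CT pair to a 3-channel DVF."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, 3, 3, padding=1),  # voxel-wise 3D displacement
        )

    def forward(self, pet, ct):
        return self.net(torch.cat([pet, ct], dim=1))


def warp(volume, dvf):
    """Spatial transformer: resample `volume` (N,1,D,H,W) with displacements
    `dvf` (N,3,D,H,W) given in normalized grid coordinates."""
    n, _, d, h, w = volume.shape
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    # grid_sample expects the last grid dimension ordered as (x, y, z)
    identity = torch.stack([xx, yy, zz], dim=-1).to(volume).expand(n, -1, -1, -1, -1)
    grid = identity + dvf.permute(0, 2, 3, 4, 1)
    return F.grid_sample(volume, grid, mode="bilinear", align_corners=True)


def ncc(a, b, eps=1e-5):
    """Global normalized cross correlation between two volumes."""
    a = a - a.mean(dim=(2, 3, 4), keepdim=True)
    b = b - b.mean(dim=(2, 3, 4), keepdim=True)
    num = (a * b).sum(dim=(2, 3, 4))
    den = torch.sqrt((a * a).sum(dim=(2, 3, 4)) * (b * b).sum(dim=(2, 3, 4)) + eps)
    return (num / den).mean()


def smoothness(dvf):
    """Mean squared finite-difference gradient of the DVF (regularization term)."""
    dz = dvf[:, :, 1:] - dvf[:, :, :-1]
    dy = dvf[:, :, :, 1:] - dvf[:, :, :, :-1]
    dx = dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]
    return dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()


def registration_loss(pet, ct, model, smoothness_weight=0.01):
    """Negative NCC between warped PET and CT, plus a smoothness penalty."""
    dvf = model(pet, ct)
    warped_pet = warp(pet, dvf)
    return -ncc(warped_pet, ct) + smoothness_weight * smoothness(dvf), warped_pet
```

Because every step in this sketch is differentiable, the network can be trained end to end without ground-truth deformations, which matches the unsupervised setting described in the abstract.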
Keywords
image registration,deep learning,convolutional network,PET/CT imaging