
Artifact-prompt Based Method for Simultaneous Sparse-view and Metal Artifact Reduction in CT Images

Shi Bao-Shun, Shu Yuan-Fei, Jiang Ke, Su Yue-Ming

Citation: Shi Bao-Shun, Shu Yuan-Fei, Jiang Ke, Su Yue-Ming. Artifact-prompt based method for simultaneous sparse-view and metal artifact reduction in CT images. Acta Automatica Sinica, 2025, 51(8): 1000−1010. doi: 10.16383/j.aas.c240462


doi: 10.16383/j.aas.c240462 cstr: 32138.14.j.aas.c240462

Artifact-prompt Based Method for Simultaneous Sparse-view and Metal Artifact Reduction in CT Images

Funds: Supported by National Natural Science Foundation of China (62371414, 62301057), Hebei Natural Science Foundation (F2025203070) and Hebei Key Laboratory Project (202250701010046)
More Information
    Author Bio:

    SHI Bao-Shun Associate professor at the School of Information Science and Engineering, Yanshan University. His research interest covers medical image processing, deep dictionary networks, and computer vision. Corresponding author of this paper. E-mail: shibaoshun@ysu.edu.cn

    SHU Yuan-Fei Master student at the School of Information Science and Engineering, Yanshan University. Her research interest covers computer vision and medical image processing. E-mail: syfei@stumail.ysu.edu.cn

    JIANG Ke Ph.D. candidate at the School of Information Science and Engineering, Yanshan University. Her research interest covers medical image processing and deep dictionary networks. E-mail: jiangke@stumail.ysu.edu.cn

    SU Yue-Ming Lecturer at the School of Information, Beijing Wuzi University. Her research interest covers digital image processing and compressed sensing. E-mail: suyueming@bwu.edu.cn

  • Abstract: The joint task of sparse-view CT reconstruction and metal artifact reduction aims to reconstruct high-quality CT images from few-view projection data contaminated by metal traces. Existing sparse-view reconstruction and metal artifact reduction methods typically rely on CT images or projection data, but they suffer from the difficulty of obtaining clinical projection data and from limited correction accuracy. To address these problems, this paper proposes an image-domain method based on an artifact-prompt Transformer that performs sparse-view CT reconstruction and metal artifact reduction simultaneously, using only the artifact-affected CT image. The method takes the artifact regions as prompts and fuses the prompt features into the features extracted by the Transformer, yielding an artifact-prompt Transformer architecture. Guided by the artifact-region feature prompts, the architecture exploits the global contextual correlation between artifact and non-artifact regions to improve correction accuracy. To handle multiple artifact-correction problems, an artifact-region estimation network is built on artifact-contaminated CT images to estimate the artifact regions, and a local-global information interaction network, composed of a local information extraction module, an artifact-region attention module, and a channel attention fusion module, is designed to fuse local and global information. Experimental results show that the proposed method achieves high-accuracy CT reconstruction while effectively removing metal artifacts.
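The core idea of the abstract — letting an estimated artifact map act as a prompt that steers global attention, so that artifact pixels can borrow context from clean pixels — can be illustrated with a minimal single-head attention sketch. This is not the paper's implementation: the function name, the additive query bias, and the single-head numpy form are all simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def artifact_prompt_attention(feat, mask):
    """Single-head self-attention whose queries are biased by an
    artifact-region prompt (a simplified stand-in for the paper's
    prompt-feature fusion), so artifact pixels can draw global context
    from non-artifact pixels.

    feat : (N, C) array of pixel features (flattened spatial dims).
    mask : (N,) array in [0, 1]; 1 marks estimated artifact pixels.
    """
    n, c = feat.shape
    q = feat + mask[:, None]                         # artifact-aware queries
    k, v = feat, feat                                # keys/values from raw features
    attn = softmax(q @ k.T / np.sqrt(c), axis=-1)    # (N, N) global correlations
    return attn @ v                                  # globally mixed features
```

In the paper, the artifact map would come from the artifact-region estimation network rather than being given, and the fusion uses dedicated attention and channel-fusion modules rather than a plain additive bias.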
  • Fig.  1  Comparison of the reconstructed images in different situations ((a) The original CT image; (b) The reconstructed CT image under the sparse-view sampling condition; (c) The reconstructed CT image with metallic implants under the sparse-view sampling condition)

    Fig.  2  The network architecture of the artifact-prompt based Transformer

    Fig.  3  Visualization of correlation maps in ARAM for two key points (red and yellow) between artifact and non-artifact regions ((a) Artifact image; (b) Attention map for the red point; (c) Attention map for the yellow point)

    Fig.  4  Performance comparison of different methods under 60 projection views

    Fig.  5  Performance comparison of different methods under 90 projection views

    Fig.  6  Performance comparison of different methods under 180 projection views
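Figs. 4−6 evaluate 60, 90, and 180 projection views; assuming a fully sampled scan of 360 uniformly spaced views (an assumption, not stated above), these correspond to the ×6, ×4, and ×2 angular subsampling factors used in Table 2. A minimal sketch of such sparse-view subsampling on a sinogram:

```python
import numpy as np

# Assumed fully sampled parallel-beam sinogram: 360 uniformly spaced
# projection angles (rows) x 512 detector bins (columns).
sinogram = np.zeros((360, 512))

def sparse_view_subsample(sino, factor):
    """Keep every `factor`-th projection angle of a fully sampled sinogram."""
    return sino[::factor]

assert sparse_view_subsample(sinogram, 6).shape[0] == 60    # Fig. 4 setting
assert sparse_view_subsample(sinogram, 4).shape[0] == 90    # Fig. 5 setting
assert sparse_view_subsample(sinogram, 2).shape[0] == 180   # Fig. 6 setting
```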

    Table  1  Comparison of PSNR (dB) and SSIM values in the ablation experiments of each module

    Network      Large metal          Small metal          Average
                 PSNR     SSIM        PSNR     SSIM        PSNR     SSIM
    Network 1    35.78    0.9636      40.81    0.9765      38.23    0.9689
    Network 2    38.07    0.9745      42.92    0.9834      40.30    0.9778
    Network 3    38.82    0.9769      43.65    0.9850      41.08    0.9800
    Network 4    39.06    0.9776      43.60    0.9852      41.08    0.9803
    Network 5    39.30    0.9786      43.84    0.9856      41.41    0.9812
    Network 6    38.99    0.9741      43.27    0.9828      41.09    0.9775
    Network 7    39.35    0.9791      44.07    0.9863      41.50    0.9818

    Table  2  Comparison of metal artifact reduction performance under three different sparse-view settings, measured by PSNR (dB), SSIM and RMSE

    Method             PSNR                       SSIM                         RMSE
                       ×6      ×4      ×2         ×6      ×4      ×2           ×6       ×4       ×2
    FBP[4]             14.24   15.10   22.34      0.1202  0.1180  0.2312      517.92   468.80   205.85
    RED-CNN[45]        33.75   35.55   39.10      0.9036  0.9244  0.9591       54.86    44.78    30.03
    FBPConvNet[46]     33.99   36.14   38.87      0.8553  0.8991  0.9439       55.67    43.98    32.93
    DDNet[5]           34.17   36.14   39.54      0.9079  0.9237  0.9574       51.88    41.38    28.11
    CNNMAR[36]         36.01   37.56   40.03      0.9391  0.9521  0.9708       42.38    35.49    27.15
    DuDoTrans[11]      35.05   37.04   40.29      0.9087  0.9357  0.9642       46.89    37.51    26.05
    MetaInv-Net[9]     37.38   37.64   40.05      0.9606  0.9613  0.9776       36.02    35.24    27.65
    FreeSeed[27]       34.45   35.34   37.27      0.8727  0.8801  0.9113       49.73    44.83    35.84
    MEPNet[39]         38.93   39.43   41.04      0.9611  0.9639  0.9608       30.31    30.28    24.95
    APFormer           41.50   41.91   42.80      0.9818  0.9820  0.9851       23.43    22.11    19.86
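Table 2 reports both PSNR (dB) and RMSE; the two are directly related once the image's peak intensity is fixed. A minimal sketch of both metrics — note the `data_range` for CT images depends on the chosen intensity window, so the values below are purely illustrative:

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, data_range):
    """Peak signal-to-noise ratio in dB; `data_range` is the peak
    intensity of the reference image (window-dependent for CT)."""
    return float(20 * np.log10(data_range / rmse(x, y)))

# Illustrative check: a uniform error of 0.1 on a unit-range image
# gives RMSE 0.1 and PSNR 20 dB.
ref = np.zeros((8, 8))
rec = np.full((8, 8), 0.1)
assert abs(rmse(ref, rec) - 0.1) < 1e-12
assert abs(psnr(ref, rec, 1.0) - 20.0) < 1e-9
```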

    Table  3  Comparison of the model parameters and computational efficiency of different methods

    Method             Parameters (M)    Inference time per image (s)
    RED-CNN[45]        1.85              0.45
    FBPConvNet[46]     34.56             0.41
    DDNet[5]           0.17              0.33
    CNNMAR[36]         2.75              0.48
    DuDoTrans[11]      0.60              1.01
    MetaInv-Net[9]     13.39             1.38
    FreeSeed[27]       9.00              0.58
    MEPNet[39]         4.72              1.25
    APFormer           34.13             0.72
    [1] de Basea M B, Thierry-Chef I, Harbron R, Hauptmann M, Byrnes G, Bernier M O, et al. Risk of hematological malignancies from CT radiation exposure in children, adolescents and young adults. Nature Medicine, 2023, 29(12): 3111−3119
    [2] Lee H, Lee J, Kim H, Cho B, Cho S. Deep-neural-network-based sinogram synthesis for sparse-view CT image reconstruction. IEEE Transactions on Radiation and Plasma Medical Sciences, 2019, 3(2): 109−119
    [3] Zhou B, Zhou S K, Duncan J S, Liu C. Limited view tomographic reconstruction using a cascaded residual dense spatial-channel attention network with projection data fidelity layer. IEEE Transactions on Medical Imaging, 2021, 40(7): 1792−1804
    [4] Pan X C, Sidky E Y, Vannier M. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? Inverse Problems, 2009, 25(12): Article No. 1230009
    [5] Zhang Z C, Liang X K, Dong X, Xie Y Q, Cao G H. A sparse-view CT reconstruction method based on combination of DenseNet and deconvolution. IEEE Transactions on Medical Imaging, 2018, 37(6): 1407−1417
    [6] Sun C, Liu Y T, Yang H W. Degradation-aware deep learning framework for sparse-view CT reconstruction. Tomography, 2021, 7(4): 932−949
    [7] Xia W J, Yang Z Y, Zhou Q Z, Lu Z X, Wang Z X, Zhang Y. A transformer-based iterative reconstruction model for sparse-view CT reconstruction. In: Proceedings of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022). Singapore, Singapore: Springer, 2022. 790−800
    [8] Cheng W L, Wang Y, Li H W, Duan Y P. Learned full-sampling reconstruction from incomplete data. IEEE Transactions on Computational Imaging, 2020, 6: 945−957
    [9] Zhang H M, Liu B D, Yu H Y, Dong B. MetaInv-Net: Meta inversion network for sparse view CT image reconstruction. IEEE Transactions on Medical Imaging, 2021, 40(2): 621−634
    [10] Pan J Y, Zhang H Y, Wu W F, Gao Z F, Wu W W. Multi-domain integrative Swin transformer network for sparse-view tomographic reconstruction. Patterns, 2022, 3(6): Article No. 100498
    [11] Wang C, Shang K, Zhang H M, Li Q, Zhou S K. DuDoTrans: Dual-domain transformer for sparse-view CT reconstruction. In: Proceedings of the 5th International Workshop on Machine Learning for Medical Image Reconstruction. Singapore, Singapore: Springer, 2022. 84−94
    [12] Ghani M U, Karl W C. Fast enhanced CT metal artifact reduction using data domain deep learning. IEEE Transactions on Computational Imaging, 2020, 6: 181−193
    [13] Zhang Y B, Yu H Y. Convolutional neural network based metal artifact reduction in X-ray computed tomography. IEEE Transactions on Medical Imaging, 2018, 37(6): 1370−1381
    [14] Wang H, Li Y X, He N J, Ma K, Meng D Y, Zheng Y F. DICDNet: Deep interpretable convolutional dictionary network for metal artifact reduction in CT images. IEEE Transactions on Medical Imaging, 2022, 41(4): 869−880
    [15] Wang H, Li Y X, Meng D Y, Zheng Y F. Adaptive convolutional dictionary network for CT metal artifact reduction. In: Proceedings of the 31st International Joint Conference on Artificial Intelligence. Vienna, Austria: IJCAI, 2022. 1401−1407
    [16] Wang H, Xie Q, Li Y X, Huang Y W, Meng D Y, Zheng Y F. Orientation-shared convolution representation for CT metal artifact learning. In: Proceedings of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022). Singapore, Singapore: Springer, 2022. 665−675
    [17] Lyu Y Y, Lin W A, Liao H F, Lu J J, Zhou S K. Encoding metal mask projection for metal artifact reduction in computed tomography. In: Proceedings of the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2020). Lima, Peru: Springer, 2020. 147−157
    [18] Wang T, Lu Z X, Yang Z Y, Xia W J, Hou M Z, Sun H Q, et al. IDOL-Net: An interactive dual-domain parallel network for CT metal artifact reduction. IEEE Transactions on Radiation and Plasma Medical Sciences, 2022, 6(8): 874−885
    [19] Li B Y, Liu X, Hu P, Wu Z Q, Lv J C, Peng X. All-in-one image restoration for unknown corruption. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17431−17441
    [20] Potlapalli V, Zamir S W, Khan S, Khan F S. PromptIR: Prompting for all-in-one blind image restoration. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2023. Article No. 3121
    [21] Yazdanpanah A P, Regentova E E. Sparse-view CT reconstruction using curvelet and TV-based regularization. In: Proceedings of the 13th International Conference on Image Analysis and Recognition. Póvoa de Varzim, Portugal: Springer, 2016. 672−677
    [22] Okamoto T, Ohnishi T, Haneishi H. Artifact reduction for sparse-view CT using deep learning with band patch. IEEE Transactions on Radiation and Plasma Medical Sciences, 2022, 6(8): 859−873
    [23] Sun X Q, Li X R, Chen P. An ultra-sparse view CT imaging method based on X-ray2CTNet. IEEE Transactions on Computational Imaging, 2022, 8: 733−742
    [24] Guan B, Yang C L, Zhang L, Niu S Z, Zhang M H, Wang Y H, et al. Generative modeling in sinogram domain for sparse-view CT reconstruction. IEEE Transactions on Radiation and Plasma Medical Sciences, 2024, 8(2): 195−207
    [25] Song Y, Shen L Y, Xing L, Ermon S. Solving inverse problems in medical imaging with score-based generative models. In: Proceedings of the 10th International Conference on Learning Representations. Virtual Event: ICLR, 2022. 1−18
    [26] Ding C, Zhang Q C, Wang G, Ye X J, Chen Y M. Learned alternating minimization algorithm for dual-domain sparse-view CT reconstruction. In: Proceedings of the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). Vancouver, Canada: Springer, 2023. 173−183
    [27] Ma C L, Li Z L, Zhang J P, Zhang Y, Shan H M. FreeSeed: Frequency-band-aware and self-guided network for sparse-view CT reconstruction. In: Proceedings of the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). Vancouver, Canada: Springer, 2023. 250−259
    [28] Verburg J M, Seco J. CT metal artifact reduction method correcting for beam hardening and missing projections. Physics in Medicine and Biology, 2012, 57(9): 2803−2818
    [29] Liao H F, Lin W A, Huo Z M, Vogelsang L, Sehnert W J, Zhou S K, et al. Generative mask pyramid network for CT/CBCT metal artifact reduction with joint projection-sinogram correction. In: Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019). Shenzhen, China: Springer, 2019. 77−85
    [30] Shi B S, Zhang S L, Fu Z R. Artifact region-aware transformer: Global context helps CT metal artifact reduction. IEEE Signal Processing Letters, 2024, 31: 1249−1253
    [31] Lin W A, Liao H F, Peng C, Sun X H, Zhang J D, Luo J B, et al. DuDoNet: Dual domain network for CT metal artifact reduction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 10504−10513
    [32] Wang T, Xia W J, Huang Y Q, Sun H, Liu Y, Chen H, et al. DAN-net: Dual-domain adaptive-scaling non-local network for CT metal artifact reduction. Physics in Medicine and Biology, 2021, 66(15): Article No. 155009
    [33] Wang H, Li Y X, Zhang H M, Chen J W, Ma K, Meng D Y, et al. InDuDoNet: An interpretable dual domain network for CT metal artifact reduction. In: Proceedings of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). Strasbourg, France: Springer, 2021. 107−118
    [34] Wang H, Li Y X, Zhang H M, Meng D Y, Zheng Y F. InDuDoNet+: A deep unfolding dual domain network for metal artifact reduction in CT images. Medical Image Analysis, 2023, 85: Article No. 102729
    [35] Shi B S, Zhang S L, Jiang K, Lian Q S. Coupling model- and data-driven networks for CT metal artifact reduction. IEEE Transactions on Computational Imaging, 2024, 10: 415−428
    [36] Ketcha M D, Marrama M, Souza A, Uneri A, Wu P W, Zhang X X, et al. Sinogram + image domain neural network approach for metal artifact reduction in low-dose cone-beam computed tomography. Journal of Medical Imaging, 2021, 8(5): Article No. 052103
    [37] Zhou B, Chen X C, Zhou S K, Duncan J S, Liu C. DuDoDR-Net: Dual-domain data consistent recurrent network for simultaneous sparse view and metal artifact reduction in computed tomography. Medical Image Analysis, 2022, 75: Article No. 102289
    [38] Zhou B, Chen X C, Xie H D, Zhou S K, Duncan J S, Liu C. DuDoUFNet: Dual-domain under-to-fully-complete progressive restoration network for simultaneous metal artifact reduction and low-dose CT reconstruction. IEEE Transactions on Medical Imaging, 2022, 41(12): 3587−3599
    [39] Wang H, Zhou M H, Wei D, Li Y X, Zheng Y F. MEPNet: A model-driven equivariant proximal network for joint sparse-view reconstruction and metal artifact reduction in CT images. In: Proceedings of the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). Vancouver, Canada: Springer, 2023. 109−120
    [40] Shi B S, Jiang K, Zhang S L, Lian Q S, Qin Y W, Zhao Y S. Mud-Net: Multi-domain deep unrolling network for simultaneous sparse-view and metal artifact reduction in computed tomography. Machine Learning: Science and Technology, 2024, 5(1): Article No. 015010
    [41] Bahnemiri S G, Ponomarenko M, Egiazarian K. Learning-based noise component map estimation for image denoising. IEEE Signal Processing Letters, 2022, 29: 1407−1411
    [42] Wang L G, Dong X Y, Wang Y Q, Ying X Y, Lin Z P, An W, et al. Exploring sparsity in image super-resolution for efficient inference. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 4915−4924
    [43] Guo L Q, Huang S Y, Liu D, Cheng H, Wen B H. ShadowFormer: Global context helps shadow removal. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence. Washington, USA: AAAI Press, 2023. 710−718
    [44] Yan K, Wang X S, Lu L, Zhang L, Harrison A P, Bagheri M, et al. Deep lesion graphs in the wild: Relationship learning and organization of significant radiology image findings in a diverse large-scale lesion database. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 9261−9270
    [45] Chen H, Zhang Y, Kalra M K, Lin F, Chen Y, Liao P X, et al. Low-dose CT with a residual encoder-decoder convolutional neural network. IEEE Transactions on Medical Imaging, 2017, 36(12): 2524−2535
    [46] Jin K H, McCann M T, Froustey E, Unser M. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 2017, 26(9): 4509−4522
Publication history
  • Received: 2024-07-01
  • Accepted: 2025-06-18
  • Published online: 2025-07-17
