

Medical Small Lesion Segmentation Network Based on Physical Saliency Feature Perception

Ren Xiang-Yang, Jiao Bo-Yang, Yang Zhen, Jia Yu-Heng, Guo Wei-Feng, Liang Jing

Citation: Ren Xiang-Yang, Jiao Bo-Yang, Yang Zhen, Jia Yu-Heng, Guo Wei-Feng, Liang Jing. Medical small lesion segmentation network based on physical saliency feature perception. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c250623

doi: 10.16383/j.aas.c250623 cstr: 32138.14.j.aas.c250623
Funds: Supported by National Natural Science Foundation of China (62206251, 62476253, 62576094, U23A20340, 625B2066), Natural Science Foundation of Henan Province (262300421218, 262300421635), Henan Province Science and Technology Research and Development Joint Fund (Industrial Category) Major Project (245101610001), and Henan Medical Researcher Overseas Training Program (HNMOT2025063)

Author biographies:

    REN Xiang-Yang Senior engineer at the First Affiliated Hospital of Zhengzhou University. His research interests include medical image processing, computer vision, pattern recognition, and object detection. E-mail: xyren@zzu.edu.cn

    JIAO Bo-Yang Master's student at the First Clinical Medical College, Zhengzhou University. His research interests include medical image processing, computer vision, and pattern recognition. E-mail: jiaoboyang@gs.zzu.edu.cn

    YANG Zhen Ph.D. candidate at the School of Artificial Intelligence and Robotics, Hunan University. His research interests include computer vision, pattern recognition, and object detection. E-mail: zhenyangssy@gmail.com

    JIA Yu-Heng Associate professor at the School of Computer Science and Engineering, Southeast University. His research interests include machine learning and data representation. E-mail: yhjia@seu.edu.cn

    GUO Wei-Feng Distinguished professor at the School of Electrical and Information Engineering, Zhengzhou University. His research interests include computational intelligence, machine learning on graph signals, and biomedical big data mining. E-mail: guowf@zzu.edu.cn

    LIANG Jing Professor at the School of Electrical and Information Engineering, Zhengzhou University. Her research interests include evolutionary computation, swarm intelligence, machine learning, and ensemble learning. Corresponding author of this paper. E-mail: liangjing@zzu.edu.cn

  • Abstract: Accurately segmenting early-stage small lesions is crucial for disease diagnosis and treatment, yet the performance of existing methods drops markedly on small lesions, whose features are sparse and easily overwhelmed by background interference. Inspired by the physics of X-ray attenuation in computed tomography (CT) imaging, we observe that small lesions present an intensity distribution that is bright at the center and fades toward the edges, with contours that closely match a two-dimensional Gaussian distribution. We therefore propose a radiation-attenuation-guided saliency-aware segmentation network (RAP-Net). RAP-Net embeds the two-dimensional Gaussian distribution induced by ray attenuation into a deep learning architecture as a correlation-filter convolution kernel, and builds a multi-scale perceptual feature network tailored to small lesions; its core salient feature perception and extraction module uses multi-scale Gaussian dilated convolutions to model the attenuation principle precisely, enabling deep mining of sparse features and suppression of background clutter. Experiments show that RAP-Net raises the Dice coefficient and IoU by at least 18.05% and 19.55% on small kidney stone and small liver calcification segmentation tasks, significantly surpassing existing mainstream methods.
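The abstract describes embedding a 2D Gaussian profile, derived from ray attenuation, as a correlation-filter kernel applied at multiple dilation rates. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation; the kernel size, sigma, and dilation rates are assumed values.

```python
import numpy as np

def gaussian_kernel_2d(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Discrete 2D Gaussian, normalized to sum to 1; models the
    center-bright, edge-fading intensity profile of a small lesion."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def dilate_kernel(kernel: np.ndarray, rate: int) -> np.ndarray:
    """Insert zeros between taps to emulate a dilated (atrous) kernel."""
    if rate == 1:
        return kernel
    size = kernel.shape[0]
    out = np.zeros(((size - 1) * rate + 1,) * 2)
    out[::rate, ::rate] = kernel
    return out

def correlate2d_same(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Plain 'same'-padded cross-correlation (a correlation filter)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multiscale_gaussian_response(img: np.ndarray, rates=(1, 2, 3)) -> np.ndarray:
    """Max response over several dilation rates: a simple scale-selective
    detector for Gaussian-shaped blobs of different sizes."""
    base = gaussian_kernel_2d(5, 1.0)
    responses = [correlate2d_same(img, dilate_kernel(base, r)) for r in rates]
    return np.max(np.stack(responses), axis=0)
```

On a synthetic image containing a single bright point, the response peaks at that point, and taking the maximum over dilation rates makes the detector respond to center-bright blobs across a range of scales.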
  • Fig. 1  The small lesion segmentation task studied in this paper and the research motivation

    Fig. 2  Schematic diagram of X-ray attenuation principles in CT imaging

    Fig. 3  Architecture diagram of the proposed RAP-Net

    Fig. 4  Schematic diagram of salient feature extraction

    Fig. 5  Sample images from the MSL dataset

    Fig. 6  Results of segmenting small kidney stones in the MSL dataset using different methods

    Fig. 7  Results of segmenting small liver calcifications in the MSL dataset using different methods

    Fig. 8  Segmentation results of the proposed method for small lesions in complex CT images

    Fig. 9  Pixel-level confusion matrices of RAP-Net on the two medical small lesion segmentation tasks of the MSL dataset

    Fig. 10  RAP-Net segmentation results for liver/kidney tumors

    Fig. 11  Visualization results of the ablation experiments

    Fig. 12  Visualization results of the ablation experiments

    Table 1  The Dice coefficient and IoU of different methods in the small kidney stone segmentation task

    Method                 Dice (%)   IoU (%)   Params (M)
    nnU-Net [30]           63.85      51.34     30.60
    FA-Net [33]            61.35      46.19     34.15
    Swin-Unet [36]         64.79      49.32     27.20
    STC-UNet [37]          61.47      52.29     41.06
    VM-UNet [38]           58.90      44.12     27.43
    Zig-RiR [39]           65.48      55.15     24.58
    MedSAM [40]            59.64      47.71     93.74
    Ours (3-layer MSPF)    86.44      77.91     0.34
    Ours (5-layer MSPF)    86.65      78.05
    Ours (7-layer MSPF)    87.43      78.51

    Table 2  The Dice coefficient and IoU of different methods in the small liver calcification segmentation task

    Method                 Dice (%)   IoU (%)   Params (M)
    nnU-Net [30]           53.03      44.45     30.60
    FA-Net [33]            44.12      35.70     34.15
    Swin-Unet [36]         37.81      38.44     27.20
    STC-UNet [37]          43.12      31.85     41.06
    VM-UNet [38]           27.81      20.19     27.43
    Zig-RiR [39]           66.59      56.02     24.58
    MedSAM [40]            55.13      45.06     93.74
    Ours (3-layer MSPF)    84.64      75.57     0.34
    Ours (5-layer MSPF)    84.96      75.84
    Ours (7-layer MSPF)    85.15      76.02

    Table 3  The Dice coefficient and IoU of the ablation experiments in the small kidney stone segmentation task

    Method                    Dice (%)   IoU (%)
    VGG-16 [59]               44.25      31.40
    ResNet-101 [60]           51.02      42.67
    DenseNet-121 [61]         51.49      39.81
    Conventional convolution  40.97      28.13
    Ours                      85.35      76.86

    Table 4  The Dice coefficient and IoU of the ablation experiments in the small liver calcification segmentation task

    Method                    Dice (%)   IoU (%)
    VGG-16 [59]               41.32      30.13
    ResNet-101 [60]           51.12      40.13
    DenseNet-121 [61]         54.73      41.51
    Conventional convolution  39.97      27.23
    Ours                      83.81      74.46
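The metrics reported in Tables 1-4 are the Dice coefficient and IoU. A minimal sketch of their standard definitions on binary masks (not the authors' evaluation code; the empty-mask convention of returning 1.0 is an assumption):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice = 2|P∩G| / (|P|+|G|); IoU = |P∩G| / |P∪G| for binary masks.
    Both masks empty is treated as a perfect match (returns 1.0, 1.0)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2.0 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```

Note that Dice is always greater than or equal to IoU for the same prediction, which is a useful sanity check when reading segmentation tables.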
  • [1] Zhang K J, Zhang L, Pan H W. UoloNet: Based on Multi-tasking enhanced small target medical segmentation model. Artificial Intelligence Review, 2024, 57(2): 31 doi: 10.1007/s10462-023-10671-5
    [2] Koleilat T, Asgariandehkordi H, Rivaz H, Xiao Y M. MedCLIP-SAMv2: Towards universal Text-Driven medical image segmentation. Medical Image Analysis, 2025, 106: 103749 doi: 10.1016/j.media.2025.103749
    [3] He A, Wang K, Li T, Du C, Xia S, Fu H. H2Former: An efficient hierarchical hybrid transformer for medical image segmentation. IEEE Transactions on Medical Imaging, 2023, 42(9): 2763−2775 doi: 10.1109/TMI.2023.3264513
    [4] Zhou X, Chen T. BSBP-RWKV: Background suppression with boundary preservation for efficient medical image segmentation. In: Proceedings of 32nd ACM International Conference on Multimedia. New York, USA: Association for Computing Machinery, 2024. 4938-4946
    [5] Mou L, Zhao Y T, Fu H Z, Liu Y H, Cheng J, Zheng Y L, et al. CS2-Net: Deep learning segmentation of curvilinear structures in medical imaging. Medical Image Analysis, 2021, 67: 101874 doi: 10.1016/j.media.2020.101874
    [6] Gu Z W, Cheng J, Fu H Z, Zhou K, Hao H Y, Zhao Y T, et al. CE-Net: Context encoder network for 2D medical image segmentation. IEEE Transactions on Medical Imaging, 2019, 38(10): 2281−2292 doi: 10.1109/TMI.2019.2903562
    [7] Zhang N, Yu L, Zhang D, Wu W, Tian S, Kang X, et al. CT-Net: Asymmetric compound branch transformer for medical image segmentation. Neural Networks, 2024, 170: 298−311 doi: 10.1016/j.neunet.2023.11.034
    [8] Lei Y J, Wu Z J, Li Z Y, Yang Y R, Liang Z M. BP-CapsNet: An image-based deep learning method for medical diagnosis. Applied Soft Computing, 2023, 146: 110683 doi: 10.1016/j.asoc.2023.110683
    [9] Patro K K, Allam J P, Neelapu B C, Tadeusiewicz R, Acharya U R, Hammad M, et al. Application of kronecker convolutions in deep learning technique for automated detection of kidney stones with coronal CT images. Information Sciences, 2023, 640: 119005 doi: 10.1016/j.ins.2023.119005
    [10] Lin J, Huang X R, Zhou H Y, Wang Y Q, Zhang Q N. Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images. Medical Image Analysis, 2023, 89: 102929 doi: 10.1016/j.media.2023.102929
    [11] Shaker A, Maaz M, Rasheed H, Khan S, Yang M H, Khan F S. UNETR++: Delving into efficient and accurate 3D medical image segmentation. IEEE Transactions on Medical Imaging, 2024, 43(9): 3377−3390 doi: 10.1109/TMI.2024.3398728
    [12] Wei J H, Zhu G J, Fan Z, Liu J C, Rong Y B, Mo J J, et al. Genetic U-Net: Automatically designed deep networks for retinal vessel segmentation using a genetic algorithm. IEEE Transactions on Medical Imaging, 2022, 41(2): 292−307 doi: 10.1109/TMI.2021.3111679
    [13] Fang X, Yan P K. Multi-organ segmentation over partially labeled datasets with multi-scale feature abstraction. IEEE Transactions on Medical Imaging, 2020, 39(11): 3619−3629 doi: 10.1109/TMI.2020.3001036
    [14] Zhang T L, Wang X Z. Anchorwise fuzziness modeling in convolution–transformer neural network for left atrium image segmentation. IEEE Transactions on Fuzzy Systems, 2024, 32(2): 398−408 doi: 10.1109/TFUZZ.2023.3298904
    [15] Kumar A, Fulham M, Feng D, Kim J. Co-Learning feature fusion maps from PET-CT images of lung cancer. IEEE Transactions on Medical Imaging, 2020, 39(1): 204−217 doi: 10.1109/TMI.2019.2923601
    [16] Zhou Y, Ahmed T S, Wang M, Newman E A, Schmetterer L, Fu H Z, et al. Masked vascular structure segmentation and completion in retinal images. IEEE Transactions on Medical Imaging, 2025, 44(6): 2492−2503 doi: 10.1109/TMI.2025.3538336
    [17] Wu Y C, Xia Y. Vessel-Net: Retinal vessel segmentation under multi-path supervision. In: Proceedings of Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Shenzhen, China: Springer, 2019. 264-272
    [18] Xiang D H, Zhang B, Lu Y X, Deng S M. Modality-specific segmentation network for lung tumor segmentation in PET-CT images. IEEE Journal of Biomedical and Health Informatics, 2023, 27(3): 1237−1248 doi: 10.1109/JBHI.2022.3186275
    [19] Yuan Y C, Zhang L, Wang L T, Huang H Y. Multi-level attention network for retinal vessel segmentation. IEEE Journal of Biomedical and Health Informatics, 2022, 26(1): 312−323 doi: 10.1109/JBHI.2021.3089201
    [20] Ma Y X, Wang S, Hua Y, Ma R H, Song T, Xue Z G, et al. Perceptual data augmentation for biomedical coronary vessel segmentation. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2023, 20(4): 2494−2505 doi: 10.1109/TCBB.2022.3188148
    [21] Hatamizadeh A, Hosseini H, Patel N, Choi J, Pole C C, Hoeferlin C M, et al. RAVIR: A dataset and methodology for the semantic segmentation and quantitative analysis of retinal arteries and veins in infrared reflectance imaging. IEEE Journal of Biomedical and Health Informatics, 2022, 26(7): 3272−3283 doi: 10.1109/JBHI.2022.3163352
    [22] Hu K, Jiang S, Zhang Y, Li X Y, Gao X P. Joint-Seg: Treat foveal avascular zone and retinal vessel segmentation in OCTA images as a joint task. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1−13
    [23] McKetty M H. The AAPM/RSNA physics tutorial for residents. X-ray attenuation. RadioGraphics, 1998, 18(1): 151−163 doi: 10.1148/radiographics.18.1.9460114
    [24] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 3431-3440
    [25] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Proceedings of Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer, 2015. 234-241
    [26] Zhou Z W, Rahman Siddiquee M M, Tajbakhsh N, Liang J M. UNet++: A nested U-Net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, 2018. 3−11
    [27] Alom M Z, Hasan M, et al. Nuclei segmentation with recurrent residual convolutional neural networks based U-Net (R2U-Net). In: Proceedings of IEEE National Aerospace and Electronics Conference. Dayton, USA: IEEE, 2018. 228-233.
    [28] Jha D, Riegler M A, Johansen D, Halvorsen P, Johansen H D. DoubleU-Net: A deep convolutional neural network for medical image segmentation. In: Proceedings of IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS). Rochester, USA: IEEE, 2020. 558-564
    [29] Xiao X, Lian S, Luo Z M, Li S Z. Weighted Res-UNet for high-quality retina vessel segmentation. In: Proceedings of 9th International Conference on Information Technology in Medicine and Education (ITME). Hangzhou, China: IEEE, 2018. 327-331
    [30] Isensee F, Jaeger P F, Kohl S A A, Petersen J, Maier-Hein K H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 2021, 18(2): 203−211 doi: 10.1038/s41592-020-01008-z
    [31] Oktay O, Schlemper J, et al. Attention U-Net: Learning where to look for the pancreas. arXiv: 1804.03999, 2018.
    [32] Fan D P, Ji G P, Zhou T, Chen G, Fu H Z, Shen J B, et al. PraNet: Parallel reverse attention network for polyp segmentation. In: Proceedings of Medical Image Computing and Computer Assisted Intervention. Lima, Peru: Springer, 2020. 263-273
    [33] Tomar N K, Jha D, Riegler M A, Johansen H D, Johansen D, Rittscher J, et al. FANet: A feedback attention network for improved biomedical image segmentation. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(11): 9375−9388 doi: 10.1109/TNNLS.2022.3159394
    [34] Ding W P, Wang H P, Huang J S, Ju H R, Geng Y, Lin C T, et al. FTransCNN: Fusing transformer and a CNN based on fuzzy logic for uncertain medical image segmentation. Information Fusion, 2023, 99: 101880 doi: 10.1016/j.inffus.2023.101880
    [35] Wang H N, Cao P, Wang J Q, Zaiane O R. UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In: Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2022. 36: 2441-2449
    [36] Cao H, Wang Y Y, Chen J, Jiang D S, Zhang X P, Tian Q, et al. Swin-Unet: Unet-like pure transformer for medical image segmentation. In: Proceedings of European Conference on Computer Vision. Tel Aviv, Israel: Springer, 2022. 205-218
    [37] Hu W, Yang S, Guo W, Xiao N, Yang X, Ren X. STC-UNet: Renal tumor segmentation based on enhanced feature extraction at different network levels. BMC Med Imaging, 2024, 24(1): 179 doi: 10.1186/s12880-024-01359-5
    [38] Ruan J C, Xiang S C, Xie M Y, Liu T, Fu Y Z. VM-UNet: Vision mamba unet for medical image segmentation. ACM Transactions on Multimedia Computing, Communications and Applications, 2025.
    [39] Chen T X, Zhou X D, Tan Z T, Wu Y, Wang Z Y, Ye Z, et al. Zig-RiR: Zigzag RWKV-in-RWKV for efficient medical image segmentation. IEEE Transactions on Medical Imaging, 2025, 44(8): 3245−3257 doi: 10.1109/TMI.2025.3561797
    [40] Ma J, He Y T, Li F F, Han L, You C Y, Wang B. Segment anything in medical images. Nature Communications, 2024, 15(1): 654 doi: 10.1038/s41467-024-44824-z
    [41] Jiao R S, Liu Q P, Zhang Y, Pu B Z, Xue B S, Cheng Y, et al. RECISTSurv: Hybrid multi-task transformer for hepatocellular carcinoma response and survival evaluation. IEEE Transactions on Image Processing, 2025, 34: 3873−3888 doi: 10.1109/TIP.2025.3579200
    [42] Zhao L, Wang T, Chen Y, Zhang X, Tang H, Lin F, et al. A novel framework for segmentation of small targets in medical images. Scientific Reports, 2025, 15(1): 9924 doi: 10.1038/s41598-025-94437-9
    [43] Ahmadi M, Biswas D, Lin M, Vrionis F D, Hashemi J, Tang Y. Physics-informed machine learning for advancing computational medical imaging: integrating data-driven approaches with fundamental physical principles. Artificial Intelligence Review, 2025, 58(10): 1−49 doi: 10.1007/s10462-025-11303-w
    [44] Wang X L, Girshick R, Gupta A, He K M. Non-local neural networks. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 7794-7803
    [45] Huang Z L, Wang X G, Huang L C, Huang C, Wei Y C, Liu W Y. CCNet: Criss-cross attention for semantic segmentation. In: Proceedings of IEEE/CVF International Conference on Computer Vision. Seoul, Korea: IEEE, 2019. 603-612
    [46] Ruan J C, Xiang S C, Xie M Y, Liu T, Fu Y Z. MALUNet: A multi-attention and light-weight unet for skin lesion segmentation. In: Proceedings of IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Las Vegas, USA: IEEE, 2022. 1150-1156
    [47] Wu Y, Liao K, Chen J, Wang J, Chen D Z, Gao H, et al. D-Former: A u-shaped dilated transformer for 3D medical image segmentation. Neural Computing and Applications, 2023, 35(2): 1931−1944 doi: 10.1007/s00521-022-07859-1
    [48] Imtiaz T, Fattah S A, Kung S Y. BAWGNet: Boundary aware wavelet guided network for the nuclei segmentation in histopathology images. Computers in Biology and Medicine, 2023, 165: 107378 doi: 10.1016/j.compbiomed.2023.107378
    [49] Bernard O, Lalande A, Zotti C, Cervenansky F, Yang X, Heng P A, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved?. IEEE Transactions on Medical Imaging, 2018, 37(11): 2514−2525 doi: 10.1109/TMI.2018.2837502
    [50] Xu Y S, Tang J Q, Men A D, Chen Q C. EviPrompt: A training-free evidential prompt generation method for adapting segment anything model in medical images. IEEE Transactions on Image Processing, 2024, 33: 6204−6215 doi: 10.1109/TIP.2024.3482175
    [51] Zhou S H, Nie D, Adeli E, Yin J P, Lian J, Shen D G. High-resolution encoder–decoder networks for low-contrast medical image segmentation. IEEE Transactions on Image Processing, 2020, 29: 461−475 doi: 10.1109/TIP.2019.2919937
    [52] Qiu Z X, Hu Y, Chen X S, Zeng D, Hu Q Y, Liu J. Rethinking dual-stream super-resolution semantic learning in medical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(1): 451−464 doi: 10.1109/TPAMI.2023.3322735
    [53] Zhang T Y, Zheng S M, Cheng J, Jia X, Bartlett J, Cheng X X, et al. Structure and intensity unbiased translation for 2D medical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(12): 10060−10075 doi: 10.1109/TPAMI.2024.3434435
    [54] Bilic P, Christ P, Li H B, Vorontsov E, Ben-Cohen A, Kaissis G, et al. The liver tumor segmentation benchmark (LiTS). Medical Image Analysis, 2023, 84: 102680 doi: 10.1016/j.media.2022.102680
    [55] Lyu F, Ye M, Ma A J, Yip T C, Wong G L, Yuen P C. Learning from synthetic CT images via test-time training for liver tumor segmentation. IEEE Transactions on Medical Imaging, 2022, 41(9): 2510−2520 doi: 10.1109/TMI.2022.3166230
    [56] Heller N, Sathianathen N. The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv: 1904.00445, 2019.
    [57] Liao Miao, Yang Rui-Xin, Zhao Yu-Qian, Di Shuan-Hu, Yang Zhen. Multi-organ segmentation from abdominal CT images based on CE-TransNet. Acta Automatica Sinica, 2025, 51(6): 1371−1387
    [58] Jia Xi-Bin, Guo Xiong, Wang Luo, Yang Da-Wei, Yang Zheng-Han. A few-shot medical image segmentation network with iterative boundary refinement. Acta Automatica Sinica, 2024, 50(10): 1988−2001 doi: 10.16383/j.aas.c220994
    [59] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv: 1409.1556, 2014.
    [60] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 770-778
    [61] Huang G, Liu Z, Van Der Maaten L, Weinberger K Q. Densely connected convolutional networks. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 4700-4708
Publication history
  • Received:  2025-11-11
  • Accepted:  2026-03-19
  • Published online:  2026-04-27
