• Chinese Core Journals
  • EI
  • China Science and Technology Core Journals
  • Scopus
  • CSCD
  • INSPEC (Science Abstracts, UK)


Cross-category Spacecraft Keypoints Detection Method with Visual Feature Prompts

Zhou Dong, Ma Wei-Zhao, Sun Guang-Hui, Hu Yu-Hui, He Zi-Peng, Zhang Bing

Citation: Zhou Dong, Ma Wei-Zhao, Sun Guang-Hui, Hu Yu-Hui, He Zi-Peng, Zhang Bing. Cross-category spacecraft keypoints detection method with visual feature prompts. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c250472


doi: 10.16383/j.aas.c250472 cstr: 32138.14.j.aas.c250472
Funds: Supported by the Young Scientists Fund of the National Natural Science Foundation of China (62403162)
More Information
    Author Bio:

    ZHOU Dong Associate Professor at the School of Astronautics, Harbin Institute of Technology. He received his Ph.D. degree from the Department of Control Science and Engineering, Harbin Institute of Technology, in 2023. His research interests include space visual perception, embodied intelligence, and deep reinforcement learning. E-mail: dongzhou@hit.edu.cn

    MA Wei-Zhao Master's student at the School of Astronautics, Harbin Institute of Technology. He received his bachelor's degree from Southwest Jiaotong University in 2024. His main research interest is spacecraft visual pose estimation. E-mail: weizhaoma@stu.hit.edu.cn

    SUN Guang-Hui Professor at the School of Astronautics, Harbin Institute of Technology. He received his Ph.D. degree from the Department of Control Science and Engineering, Harbin Institute of Technology, in 2010. His research interests include space robotic vision, trajectory planning, and compliance control. Corresponding author of this paper. E-mail: guanghuisun@hit.edu.cn

    HU Yu-Hui Ph.D. candidate at the School of Astronautics, Harbin Institute of Technology. He received his master's degree in Control Science and Engineering from the School of Astronautics, Harbin Institute of Technology, in 2022. His research interests include spacecraft visual pose estimation and space robot trajectory planning. E-mail: huyuhui@hit.edu.cn

    HE Zi-Peng Assistant Engineer at the China Academy of Space Technology. His main research interest is spacecraft visual pose estimation. E-mail: hzp_rocket@163.com

    ZHANG Bing Assistant Engineer at the Deep Space Exploration Laboratory. His main research interest is intelligent deep space exploration. E-mail: bz154964@gmail.com


  • Abstract: Spacecraft visual pose estimation is the core technology of intelligent on-orbit servicing, and it typically adopts a two-stage scheme that combines keypoint detection with pose solving. However, existing spacecraft keypoint detection methods are usually trained on visual images of a single spacecraft, so they cannot generalize to other types of spacecraft targets, which severely limits the wider deployment of on-orbit servicing. To address this, a cross-category spacecraft keypoint detection method based on visual feature prompts is proposed. For a new target spacecraft of an unseen category, the method only requires one support image and the corresponding keypoint prompts to accurately predict the locations of the target spacecraft's keypoints in a query image. To further validate the effectiveness of the proposed method, a multi-spacecraft pose estimation dataset containing multiple spacecraft, 2D keypoint annotations, and 3D pose annotations is constructed on a virtual simulation platform. Extensive experiments on this dataset show that the proposed method performs well on the cross-category spacecraft keypoint detection task and significantly outperforms current mainstream keypoint detection methods. Moreover, combined with the classical PnP algorithm, the method achieves high-accuracy pose estimation for arbitrary spacecraft. The code and dataset are open-sourced at https://github.com/Dongzhou-1996/CSKDet.
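The abstract's two-stage pipeline (detect 2D keypoints, then solve a PnP problem against the known 3D keypoint model) can be sketched as follows. This is a minimal NumPy illustration of the second stage using a direct linear transform (DLT) solver on noise-free correspondences; the function name `solve_pnp_dlt` and all numerical values are illustrative assumptions, not the paper's implementation, which pairs CSKDet with a classical PnP solver.

```python
import numpy as np

def solve_pnp_dlt(pts3d, pts2d, K):
    """Recover the camera pose [R|t] from >= 6 exact 2D-3D keypoint
    correspondences via the direct linear transform (DLT)."""
    # Normalize pixel coordinates with the camera intrinsics K.
    uv1 = np.column_stack([pts2d, np.ones(len(pts2d))]) @ np.linalg.inv(K).T
    u, v = uv1[:, 0], uv1[:, 1]
    # Each correspondence contributes two linear equations in the
    # 12 entries of the 3x4 projection matrix P = [R|t].
    rows = []
    for (X, Y, Z), ui, vi in zip(pts3d, u, v):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -ui*X, -ui*Y, -ui*Z, -ui])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -vi*X, -vi*Y, -vi*Z, -vi])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)  # null-space vector, up to sign and scale
    # Fix the sign so the first point lies in front of the camera,
    # and the scale so the third rotation row has unit norm.
    if (P[2] @ np.append(pts3d[0], 1.0)) < 0:
        P = -P
    P /= np.linalg.norm(P[2, :3])
    # Project the left 3x3 block onto the nearest rotation matrix.
    U, _, Vt3 = np.linalg.svd(P[:, :3])
    return U @ Vt3, P[:, 3]  # R, t
```

In practice the detected keypoints are noisy, so a robust solver (e.g. RANSAC around an EPnP/iterative refinement) would replace this plain DLT.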
  • Fig. 1  The overall architecture of the CSKDet method

    Fig. 2  The definition of keypoints of spacecraft in the SPE dataset

    Fig. 3  Visualization of keypoint detection results

    Fig. 4  Keypoint detection results of CSKDet on the SPEED dataset

    Table 1  Comparison of the characteristics of different visual sensors

    Visual sensor  Observation range  Energy consumption  Sensing accuracy  Hardware cost
    LiDAR
    Monocular camera
    Stereo camera
    RGB-D camera
    RGB-T camera

    Table 2  The training hyperparameters of CSKDet

    Parameter  Value
    Input image size  256 × 256 pixels
    Keypoint heatmap size  64 × 64 pixels
    Number of support images  1, 3, 5
    Encoder  ResNet50
    Residual decoder layers  3
    Optimizer  AdamW
    Cosine annealing period  20 epochs
    Initial learning rate  0.005
    Epochs with backbone frozen  60
    Epochs with backbone unfrozen  20
    Learning rate after unfreezing  0.001
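Table 2's input size (256 × 256) and heatmap size (64 × 64) imply a 4× stride between heatmap and image coordinates. The usual way such keypoint heatmaps are decoded into image-space coordinates (per-channel argmax, then rescaling by the stride) can be sketched as below; the function name and the absence of sub-pixel refinement are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def decode_heatmaps(heatmaps, stride=4):
    """Decode (N, H, W) keypoint heatmaps into (N, 2) image-space
    (x, y) coordinates via per-channel argmax scaled by the stride."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    # Map 64x64 heatmap peaks back to the 256x256 input image.
    return np.stack([xs, ys], axis=1).astype(np.float64) * stride

# e.g. a single 64x64 heatmap whose peak sits at (x=10, y=20)
hm = np.zeros((1, 64, 64))
hm[0, 20, 10] = 1.0
print(decode_heatmaps(hm))  # -> [[40. 80.]]
```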

    Table 3  The evaluation results of spacecraft keypoint detection

    Method  Spacecraft 1  Spacecraft 2  Spacecraft 3  Spacecraft 4  Mean estimation error
    HRNet[25]  68.88  70.04  70.67  70.33  69.99
    ResUNet[26]  5.86  38.29  33.99  32.39  25.42
    CSKDet ($K$=1)  11.25  28.34  25.16  19.02  19.02
    CSKDet ($K$=3)  9.35  27.63  24.23  14.25  17.52
    CSKDet ($K$=5)  7.33  25.27  17.51  11.95  14.17
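The per-spacecraft scores in Table 3 are mean keypoint estimation errors in pixels. A sketch of the usual definition of such a metric (mean Euclidean distance between predicted and ground-truth keypoints); this formulation and the function name are assumptions about the metric, not quoted from the paper:

```python
import numpy as np

def mean_keypoint_error(pred, gt):
    """Mean Euclidean distance (pixels) between predicted and
    ground-truth keypoints, both of shape (num_keypoints, 2)."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

pred = np.array([[40.0, 80.0], [100.0, 96.0]])
gt = np.array([[43.0, 84.0], [100.0, 96.0]])
print(mean_keypoint_error(pred, gt))  # -> 2.5
```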

    Table 4  The evaluation results of spacecraft pose estimation

    Method  Spacecraft 1  Spacecraft 2  Spacecraft 3  Spacecraft 4  Mean
    (each entry: position error (dimensionless) / attitude error (rad))
    HRNet[25]  3.34 / 2.21  5.82 / 2.33  2.48 / 2.74  3.57 / 2.18  3.59 / 2.36
    ResUNet[26]  0.02 / 0.14  0.31 / 1.11  0.18 / 1.07  0.18 / 0.85  0.15 / 0.72
    DMANet[30]  1.13 / 0.99  2.63 / 1.82  1.40 / 2.74  3.02 / 2.10  2.05 / 1.91
    CSKDet ($K$=1)  0.04 / 0.23  0.19 / 0.68  0.17 / 0.72  0.06 / 0.35  0.10 / 0.46
    CSKDet ($K$=3)  0.03 / 0.21  0.15 / 0.66  0.15 / 0.67  0.05 / 0.30  0.09 / 0.43
    CSKDet ($K$=5)  0.02 / 0.16  0.12 / 0.54  0.09 / 0.48  0.04 / 0.24  0.06 / 0.33
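Table 4 reports a dimensionless position error and an attitude error in radians. A common definition for these metrics in satellite pose estimation benchmarks such as [31] is the translation error normalized by the ground-truth distance, plus the geodesic angle between estimated and ground-truth orientation quaternions; the exact formulation below is an assumption, not taken from the paper:

```python
import numpy as np

def pose_errors(t_est, t_gt, q_est, q_gt):
    """Normalized position error (dimensionless) and attitude error
    (rad) between estimated and ground-truth poses. Quaternions are
    unit-norm, in (w, x, y, z) order."""
    e_pos = np.linalg.norm(t_est - t_gt) / np.linalg.norm(t_gt)
    # abs(dot) handles the q / -q double cover of rotations.
    e_att = 2.0 * np.arccos(np.clip(abs(np.dot(q_est, q_gt)), -1.0, 1.0))
    return float(e_pos), float(e_att)

t_gt = np.array([0.0, 0.0, 10.0])
t_est = np.array([0.0, 0.3, 10.4])           # 0.5 m off at 10 m range
q_gt = np.array([1.0, 0.0, 0.0, 0.0])        # identity rotation
q_est = np.array([np.cos(0.05), np.sin(0.05), 0.0, 0.0])  # 0.1 rad about x
e_pos, e_att = pose_errors(t_est, t_gt, q_est, q_gt)
print(round(e_pos, 3), round(e_att, 3))  # -> 0.05 0.1
```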

    Table 5  The ablation study results

    Method  Mean estimation error
    Full CSKDet ($K$=3)  17.52
    Without the visual feature prompt module  36.95
    Without the cross-fusion layer  28.36
    With a 1-layer residual decoder  24.55

    Table 6  The computational cost of models

    Method  Mean estimation error  Parameters (MB)  Inference time (ms)
    HRNet[25]  69.99  28.54  22.83
    ResUNet[26]  25.42  97.28  7.52
    CSKDet ($K$=1)  19.02  109.65  25.85
    CSKDet ($K$=3)  17.52  109.65  49.01
    CSKDet ($K$=5)  14.17  109.65  64.28
  • [1] Ding X L, Chen Y T, Wang C C, et al. Research status and prospect of space robot operation technology. Acta Aeronautica et Astronautica Sinica, 2025, 46(6): 531556 (in Chinese) doi: 10.7527/S1000-6893.2024.31556
    [2] Zhao L L, Li X A, Zhao J D, Liu H. Development strategy of space robots for autonomous repair and maintenance of spacecraft. Chinese Engineering Sciences, 2024, 26(1): 149−159 (in Chinese) doi: 10.15302/J-SSCAE-2024.01.014
    [3] Zhou D, Sun G, Lei W, et al. Space noncooperative object active tracking with deep reinforcement learning. IEEE Transactions on Aerospace and Electronic Systems, 2022, 58(6): 4902−4916 doi: 10.1109/TAES.2022.3211246
    [4] Zhu W S, Mou J Z, Li S, et al. A review of spacecraft pose estimation based on deep learning. Journal of Astronautics, 2023, 44(11): 1633−1644 (in Chinese) doi: 10.3873/j.issn.1000-1328.2023.11.002
    [5] Pauly L, Rharbaoui W, Shneider C, et al. A survey on deep learning-based monocular spacecraft pose estimation: Current state, limitations and prospects. Acta Astronautica, 2023, 212: 339−360 doi: 10.1016/j.actaastro.2023.08.001
    [6] Liu J, Lu Z, Chen L, et al. Occlusion-aware 6D pose estimation with depth-guided graph encoding and cross-semantic fusion for robotic grasping. 2025 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2025: 5011−5017.
    [7] Wang Z, Zhang Z, Sun X, et al. Revisiting monocular satellite pose estimation with transformer. IEEE Transactions on Aerospace and Electronic Systems, 2022, 58(5): 4279−4294 doi: 10.1109/TAES.2022.3161605
    [8] Chang L, Liu J, Chen Z, et al. Stereo vision-based relative position and attitude estimation of non-cooperative spacecraft. Aerospace, 2021, 8(8): 230 doi: 10.3390/aerospace8080230
    [9] AlDahoul N, Karim H A, Momo M A. RGB-D based multimodal convolutional neural networks for spacecraft recognition. 2021 IEEE international conference on image processing challenges (ICIPC). IEEE, 2021: 1-5.
    [10] Rondao D, Aouf N, Richardson M A. ChiNet: Deep recurrent convolutional learning for multimodal spacecraft pose estimation. IEEE Transactions on Aerospace and Electronic Systems, 2022, 59(2): 937−949 doi: 10.1109/taes.2022.3193085
    [11] Li P, Wang M, Zhou D, et al. A pose measurement method of a non-cooperative spacecraft based on point cloud feature. 2020 Chinese Control And Decision Conference (CCDC). IEEE, 2020: 4977−4982.
    [12] Vasconcelos J, Gaggi S, Amaral T, et al. Close-Proximity Operations Design, Analysis, and Validation for Non-Cooperative Targets with an Application to the ClearSpace-1 Mission. Aerospace, 2025, 12(1): 67 doi: 10.3390/aerospace12010067
    [13] Rehman K, Fareed N, Chu H J. NASA ICESat-2: Space-Borne LiDAR for Geological Education and Field Mapping of Aeolian Sand Dune Environments. Remote Sensing, 2023, 15(11): 2882 doi: 10.3390/rs15112882
    [14] Pensado E A, de Santos L M G, Sanjurjo-Rivo M, et al. Deep Learning Based Target Pose Estimation Using LiDAR Measurements in Active Debris Removal Operations. IEEE Transactions on Aerospace and Electronic Systems, 2023, 59(5): 5658−5670 doi: 10.1109/taes.2023.3262505
    [15] Sharma S, Beierle C, D'Amico S. Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks. 2018 IEEE Aerospace Conference. IEEE, 2018: 1-12.
    [16] Wang J, Li Z, Sun C, et al. Satellite Pose Set Estimation by Uncertainty-Guided Conformal Keypoint Detection. IEEE Transactions on Neural Networks and Learning Systems, 2025.
    [17] Thomas Despond F, Ulrich S. Real-time stereovision-based spacecraft pose determination using convolutional neural networks. Journal of Spacecraft and Rockets, 2025, 62(1): 269−279 doi: 10.2514/1.A35973
    [18] Zhang H, Zheng Y, Wang Y. A pose estimation method based on RGB-D system in the process of attaching asteroids. 2024 36th Chinese Control and Decision Conference (CCDC). IEEE, 2024: 4674−4680.
    [19] Zhang Z, Zhou D, Sun G, et al. DFTI: Dual-branch fusion network based on transformer and inception for space noncooperative objects. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1−11 doi: 10.1109/tim.2024.3403182
    [20] Lowe D G. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 2004, 60(2): 91−110 doi: 10.1023/B:VISI.0000029664.99615.94
    [21] Bay H, Tuytelaars T, Van Gool L. Surf: Speeded up robust features. European conference on computer vision. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006: 404-417.
    [22] Zhang X, Jiang Z, Zhang H, et al. Vision-based pose estimation for textureless space objects by contour points matching. IEEE Transactions on Aerospace and Electronic Systems, 2018, 54(5): 2342−2355 doi: 10.1109/TAES.2018.2815879
    [23] Sharma S, Ventura J, D’Amico S. Robust model-based monocular pose initialization for noncooperative spacecraft rendezvous. Journal of Spacecraft and Rockets, 2018, 55(6): 1414−1429 doi: 10.2514/1.A34124
    [24] Chen B, Cao J, Parra A, et al. Satellite pose estimation with deep landmark regression and nonlinear pose refinement. Proceedings of the IEEE/CVF international conference on computer vision workshops. 2019
    [25] Sun K, Xiao B, Liu D, et al. Deep high-resolution representation learning for human pose estimation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 5693−5703.
    [26] Cosmas K, Kenichi A. Utilization of FPGA for onboard inference of landmark localization in CNN-based spacecraft pose estimation. Aerospace, 2020, 7(11): 159 doi: 10.3390/aerospace7110159
    [27] Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical image computing and computer-assisted intervention. Cham: Springer international publishing, 2015: 234-241.
    [28] Gao X, Liao Y, Zhou H. Pose Estimation and Simulation of Non-Cooperative Spacecraft Based on Feature Points Detection. 2024 IEEE 25th China Conference on System Simulation Technology and its Application (CCSSTA). IEEE, 2024: 12−16.
    [29] Proenca P F, Gao Y. Deep learning for spacecraft pose estimation from photorealistic rendering. 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 6007-6013.
    [30] Zhang Z, Hu Y, Zhou D, et al. DMANet: Dense Multi-scale Attention Network for Space Non-cooperative Object Pose Estimation. Transactions of Nanjing University of Aeronautics & Astronautics, 2024, 41(1).
    [31] Kisantal M, Sharma S, Park T H, et al. Satellite pose estimation challenge: Dataset, competition design, and results. IEEE Transactions on Aerospace and Electronic Systems, 2020, 56(5): 4083−4098 doi: 10.1109/TAES.2020.2989063
Publication history
  • Available online: 2026-03-16