

基于“形态−感知−动作”仿生机理的机器人自适应力控抓取方法

赵洲 耿明强 何秋实 何赟鑫 蔡明达 周翔宇 罗晶

引用本文: 赵洲, 耿明强, 何秋实, 何赟鑫, 蔡明达, 周翔宇, 罗晶. 基于“形态−感知−动作”仿生机理的机器人自适应力控抓取方法. 自动化学报, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c250453
Citation: Zhao Zhou, Geng Ming-Qiang, He Qiu-Shi, He Yun-Xin, Cai Ming-Da, Zhou Xiang-Yu, Luo Jing. Robot adaptive force control grasping method based on bionic mechanism of “shape-perception-action”. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c250453


doi: 10.16383/j.aas.c250453 cstr: 32138.14.j.aas.c
基金项目: 国家自然科学基金(62203341), 鹭江创新实验室自主部署科技项目(25FV0CZZ03), 湖北省自然科学基金(2024AFB245, 2024AFB614), 中央高校基本科研业务费专项资金(CCNU25ai023)资助
详细信息
    作者简介:

    赵洲:华中师范大学计算机学院讲师. 2023年获得法国巴黎索邦大学计算机科学与技术专业博士学位. 主要研究方向为机器人智能感知. E-mail: zhaozhou@ccnu.edu.cn

    耿明强:华中师范大学计算机学院硕士研究生. 2025年获得华中师范大学软件工程专业学士学位. 主要研究方向为机器人技术. E-mail: gengmq@mails.ccnu.edu.cn

    何秋实:中国人民武装警察部队警官学院教员. 2025年获得四川大学管理学专业硕士学位. 主要研究方向为军事指挥, 管理科学与工程, 无人系统智能感知与决策. E-mail: 365618827@qq.com

    何赟鑫:华中师范大学计算机学院硕士研究生. 2023年获得华东交通大学计算机科学与技术专业学士学位. 主要研究方向为机器人技术. E-mail: heyunxin@mails.ccnu.edu.cn

    蔡明达:武汉理工大学自动化学院硕士研究生. 2024年获得长沙理工大学测控技术与仪器专业学士学位. 主要研究方向为人机协作. E-mail: 1977503459@qq.com

    周翔宇:武汉理工大学自动化学院硕士研究生. 2021年获得武汉理工大学电气工程及其自动化专业学士学位. 主要研究方向为人机协作. E-mail: xiangyuzhou@whut.edu.cn

    罗晶:武汉理工大学自动化学院教授. 2020年获得中国广州华南理工大学和英国伦敦帝国理工学院联合博士学位. 主要研究方向为机器人, 远程操作, 可穿戴设备和人机交互. 本文通信作者. E-mail: ljing_ac@whut.edu.cn

Robot Adaptive Force Control Grasping Method Based on Bionic Mechanism of “Shape-Perception-Action”

Funds: Supported by National Natural Science Foundation of China (62203341), Fujian Ocean Innovation Center (25FV0CZZ03), Hubei Provincial Natural Science Foundation (2024AFB245, 2024AFB614), and Fundamental Research Funds for the Central Universities (CCNU25ai023)
More Information
    Author Bio:

    ZHAO Zhou Lecturer at the School of Computer Science, Central China Normal University. He received his Ph.D. degree in computer science and technology from Sorbonne University, Paris, France, in 2023. His main research interest is robot intelligent perception

    GENG Ming-Qiang Master's student at the School of Computer Science, Central China Normal University. He received his bachelor's degree in software engineering from Central China Normal University in 2025. His main research interest is robotic technology

    HE Qiu-Shi Instructor at the Officers College of PAP. He received his master's degree in management from Sichuan University in 2025. His research interests include military command, management science and engineering, and intelligent perception and decision-making of unmanned systems

    HE Yun-Xin Master's student at the School of Computer Science, Central China Normal University. He received his bachelor's degree in computer science and technology from East China Jiaotong University in 2023. His main research interest is robotic technology

    CAI Ming-Da Master's student at the School of Automation, Wuhan University of Technology. He received his bachelor's degree in instrumentation and measurement from Changsha University of Science and Technology in 2024. His main research interest is human-robot collaboration

    ZHOU Xiang-Yu Master's student at the School of Automation, Wuhan University of Technology. He received his bachelor's degree in electrical engineering and automation from Wuhan University of Technology in 2021. His main research interest is human-robot collaboration

    LUO Jing Professor at the School of Automation, Wuhan University of Technology. He received his joint Ph.D. degree from South China University of Technology, Guangzhou, China, and Imperial College London, UK, in 2020. His research interests include robotics, teleoperation, wearable devices, and human-machine interaction. Corresponding author of this paper

  • Abstract: With the rapid development of robotics, the demand for fine-grained perception keeps growing, yet existing robots still fall short of human-level dexterous manipulation. In fine grasping tasks, constant-force grasping strategies are limited: too large a grasping force easily damages the object, while too small a force makes the grasp unstable. To address this problem, an adaptive force-control grasping method based on visual and tactile fusion is proposed. The method consists of a vision module, a tactile module, and a grasping strategy: the vision module predicts the target grasp position; during the contact phase, the tactile module uses a vision-based tactile sensor to recover the tactile depth and estimate the contact area and normal force; deformation is then judged from the maximum depth change rate and the inter-frame mean squared error, which triggers the grasping-force adjustment strategy and realizes a bionic feedback grasping mechanism of "progressive force increase, deformation detection, and force backoff". Experimental results show that the method raises the overall grasping success rate on a variety of everyday objects from 87.50% to 98.75% and achieves zero damage when grasping fragile objects.
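A minimal Python sketch of the deformation check and force-adjustment step described in the abstract is given below. The function names, thresholds, and force step are illustrative assumptions rather than values reported in the paper; only the two deformation cues (maximum depth change rate and inter-frame mean squared error) and the increase/back-off logic follow the text.

```python
import numpy as np

def deformation_detected(depth_prev, depth_curr, dt,
                         rate_thresh=0.5, mse_thresh=0.02):
    """Flag deformation from two consecutive tactile depth maps (H x W, in mm).

    rate_thresh (mm/s) and mse_thresh are illustrative; they would need to be
    tuned on the actual vision-based tactile sensor.
    """
    # Maximum depth change rate between frames.
    max_rate = np.max(np.abs(depth_curr - depth_prev)) / dt
    # Inter-frame mean squared error of the depth maps.
    mse = np.mean((depth_curr - depth_prev) ** 2)
    return bool(max_rate > rate_thresh or mse > mse_thresh)

def adjust_grip_force(f_est, f_desired, deformed, step=0.2):
    """One step of the progressive-increase / deformation-check / back-off loop."""
    if deformed:
        return max(f_est - step, 0.0)   # back off to protect the object
    if f_est < f_desired:
        return f_est + step             # keep increasing toward the target force
    return f_est                        # hold once the target force is reached
```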
  • 图  1  人类手指与视触觉传感器

    Fig.  1  Human finger and visual tactile sensor

    图  2  基于“形态−感知−动作”仿生机理的机器人自适应力控抓取方法

    Fig.  2  Robot adaptive force control grasping method based on bionic mechanism of “shape-perception-action”

    图  3  平面抓取位姿表示

    Fig.  3  Plane grasping pose representation

    图  4  视觉模块神经网络结构

    Fig.  4  Neural network structure of visual module

    图  5  基于触觉反馈的抓取策略状态转移

    Fig.  5  Grasping strategy state transition based on tactile feedback

    图  6  基于触觉反馈的自适应力控抓取策略流程图

    Fig.  6  Flow chart of adaptive force control grasping strategy based on tactile feedback

    图  7  实验平台

    Fig.  7  Experimental platform

    图  8  以固定力抓取几何规则物体的触觉图像((a)纸杯; (b)塑料瓶; (c)易拉罐;(d)玻璃瓶; (e)橡皮; (f)胶棒)

    Fig.  8  Grasping tactile images of geometrically regular objects with the fixed force ((a) Paper cup; (b) Plastic bottle; (c) Can; (d) Glass bottle; (e) Rubber; (f) Glue stick)

    图  9  不同材料瓶子在不同抓取力下的深度值与估计法向力

    Fig.  9  Depth value and estimated normal force of bottles of different materials under different grasping forces

    图  10  以固定力抓取几何不规则物体的触觉图像((a) $ \sim $ (c)从三个位置抓取螺丝刀; (d) $ \sim $ (f)从三个位置抓取弹力球)

    Fig.  10  Grasping tactile images of geometrically irregular objects with the fixed force ((a) $ \sim $ (c) Grasp the screwdriver at three positions; (d) $ \sim $ (f) Grasp the elastic ball at three positions)

    图  11  实物抓取流程

    Fig.  11  Physical grasping process

    图  12  多物体连续抓取

    Fig.  12  Multi-object continuous grasping

    图  13  误差热力图

    Fig.  13  Error heatmap

    表  1  抓取过程状态表

    Table  1  Grasp process state table

    State Description
    $ {\boldsymbol{APPROACH}} $ The manipulator approaches the target pose planned by the vision module.
    $ {\boldsymbol{CONTACT}}\_{\boldsymbol{DETECT}} $ The gripper closes gradually until contact is detected by the tactile sensor.
    $ {\boldsymbol{FORCE}}\_{\boldsymbol{BUILDUP}} $ The gripper keeps closing step by step while the grasping force $ F_{est} $ and the deformation metrics are estimated in real time.
    $ {\boldsymbol{ADJUST}} $ If deformation or an abnormal force is detected, the force is adjusted (reduced or held) and, if necessary, the grasp pose is corrected or the grasp is retracted.
    $ {\boldsymbol{HOLD}} $ Once a stable grasp is reached ($ F_{est}\approx F_d $ with no deformation), the holding state is entered and the transport task is carried out.
    $ {\boldsymbol{RELEASE/ABORT}} $ The grasp is released or aborted when the task is completed or an anomaly occurs.
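Table 1 describes what is essentially a small finite-state machine. The sketch below mirrors its six states in Python; the transition conditions are paraphrased from the table's descriptions (and from the strategy shown in Fig. 5 and Fig. 6) and are an assumption about how they could be wired together, not code from the paper.

```python
from enum import Enum, auto

class GraspState(Enum):
    """Grasping states listed in Table 1."""
    APPROACH = auto()
    CONTACT_DETECT = auto()
    FORCE_BUILDUP = auto()
    ADJUST = auto()
    HOLD = auto()
    RELEASE_ABORT = auto()

def next_state(state, at_target, contact, deformed, force_ok, task_done):
    """Single transition step; conditions are illustrative paraphrases of Table 1."""
    if task_done:
        return GraspState.RELEASE_ABORT
    if state is GraspState.APPROACH:
        return GraspState.CONTACT_DETECT if at_target else state
    if state is GraspState.CONTACT_DETECT:
        return GraspState.FORCE_BUILDUP if contact else state
    if state is GraspState.FORCE_BUILDUP:
        if deformed:                       # deformation or force anomaly
            return GraspState.ADJUST
        return GraspState.HOLD if force_ok else state
    if state is GraspState.ADJUST:
        return state if deformed else GraspState.FORCE_BUILDUP
    return state                           # HOLD: keep holding until the task ends
```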

    表  2  不同数据集下视觉模块的性能比较(%)

    Table  2  Performance comparison of vision module under different datasets (%)

    Metric Cornell RC_49 Jacquard
    J@1 93.26 90.32 87.10
    J@5 96.77 93.54 93.55

    表  3  实验采用的八类不同物体

    Table  3  Eight different objects used in the experiment

    Geometrically regular: paper cup, plastic bottle, can, glass bottle, rubber, glue stick
    Geometrically irregular: elastic ball, screwdriver

    表  4  抓取几何规则物体的触觉最大深度值与接触面积

    Table  4  Maximum tactile depth value and contact area of grasping geometrically regular objects

    (a) (b) (c) (d) (e) (f)
    $ d\_2 $ 1.11 1.30 1.21 1.53 1.21 1.53
    $ d\_5 $ 1.17 2.04 1.50 2.14 1.37 2.14
    $ {area}\_2 $ 11612 7234 10718 16573 17123 14842
    $ {area}\_5 $ 49591 13789 13195 23268 36630 19572
    Note: These correspond to the tactile images in Fig. 8. In $ d\_{2/5} $ and $ {area}\_{2/5} $, $ d $ denotes the average depth of the effective contact region, $ {area} $ denotes the contact area (the number of pixels whose depth exceeds the threshold), and 2/5 denotes the grasping force (in N).
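The note above defines $ d $ as the average depth over the effective contact region and $ {area} $ as the number of pixels whose depth exceeds a threshold. A minimal sketch of how these two quantities could be computed from a recovered tactile depth map follows; the function name and the threshold value are assumptions for illustration, since the actual threshold is not given in this excerpt.

```python
import numpy as np

def contact_metrics(depth_map, depth_thresh=0.1):
    """Compute the d / area quantities used in Tables 4 and 5.

    depth_map: H x W tactile depth map (mm) recovered by the tactile module.
    depth_thresh: per-pixel depth threshold (mm); illustrative value only.
    """
    contact_mask = depth_map > depth_thresh
    area = int(contact_mask.sum())          # contact area as a pixel count
    d = float(depth_map[contact_mask].mean()) if area else 0.0  # mean contact depth
    return d, area
```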

    表  5  抓取几何不规则物体的触觉最大深度值与接触面积

    Table  5  Maximum tactile depth value and contact area of grasping geometrically irregular objects

    (a) (b) (c) (d) (e) (f)
    $ d\_2 $ 1.00 1.50 3.86 0.91 1.65 1.04
    $ d\_5 $ 1.34 1.65 2.53 1.71 1.74 1.45
    $ {area}\_2 $ 1366 5208 6734 23658 10629 36063
    $ {area}\_5 $ 4105 8121 25567 25840 22529 57648
    Note: These correspond to the tactile images in Fig. 10. In $ d\_{2/5} $ and $ {area}\_{2/5} $, $ d $ denotes the average depth of the effective contact region, $ {area} $ denotes the contact area (the number of pixels whose depth exceeds the threshold), and 2/5 denotes the grasping force (in N).

    表  6  自适应力控与固定力抓取策略的实验结果对比

    Table  6  Comparison of experimental results between adaptive force control and fixed force grasping strategy

    Object | Success rate (%) (Adaptive / Fixed) | Applied force (N) (Adaptive / Fixed) | Damage rate (%) (Adaptive / Fixed) | Slip rate (%) (Adaptive / Fixed) | Vision module prediction failure rate (%)
    Rubber | 100 / 100 | 4 / 4 | 0 / 0 | 0 / 0 | 30
    Glue stick | 100 / 100 | 4 / 4 | 0 / 0 | 0 / 0 | 0
    Elastic ball | 100 / 100 | 1 / 4 | 0 / 0 | 0 / 0 | 0
    Paper cup | 100 / 0 | 1 / – | 0 / 100 | 0 / 0 | 10
    Glass bottle | 100 / 100 | 4 / 4 | 0 / 0 | 0 / 0 | 10
    Plastic bottle | 100 / 100 | 2.58 / 4 | 0 / 0 | 0 / 0 | 10
    Can | 90 / 100 | 2.44 / 4 | 0 / 0 | 10 / 0 | 20
    Screwdriver | 100 / 100 | 4 / 4 | 0 / 0 | 0 / 0 | 10
    Total | 98.75 / 87.5 | – / – | 0 / 12.5 | 1.25 / 0 | 11.25
    Note: "–" indicates that no valid applied-force data were produced because the fixed-force method failed completely on the paper cup; "Adaptive" refers to the proposed adaptive force-control grasping method and "Fixed" to the gripper's default grasping force; every object was grasped 10 times in all experiments, and the statistics are reported as percentages.
Publication history
  • Received: 2025-09-04
  • Accepted: 2025-12-24
  • Published online: 2026-01-16
