

EPnL: An Efficient and Accurate Algorithm to the PnL Problem

Wang Ping, He Wei-Long, Zhang Ai-Hua, Yao Peng-Peng, Xu Gui-Li

Citation: Wang Ping, He Wei-Long, Zhang Ai-Hua, Yao Peng-Peng, Xu Gui-Li. EPnL: An efficient and accurate algorithm to the PnL problem. Acta Automatica Sinica, 2022, 48(10): 2600−2610. doi: 10.16383/j.aas.c200927


doi: 10.16383/j.aas.c200927


Funds: Supported by National Natural Science Foundation of China (62001198, 62073161, 61866021), State Key Laboratory of Synthetical Automation for Process Industries (PAL-N201808), International Cooperation Science and Technology Program of Gansu Province (18YF1WA068), and Gansu Province Science Foundation for Youths (20JR10RA186)
More Information
    Author Bio:

    WANG Ping Lecturer at the College of Electrical and Information Engineering, Lanzhou University of Technology. He received his Ph.D. degree from Nanjing University of Aeronautics and Astronautics in 2019. His research interest covers computer vision, machine learning and signal processing. Corresponding author of this paper. E-mail: pingwangsky@163.com

    HE Wei-Long Master student at the College of Electrical and Information Engineering, Lanzhou University of Technology. His research interest covers machine vision and vision measurement. E-mail: heweilongd@163.com

    ZHANG Ai-Hua Professor at the College of Electrical and Information Engineering, Lanzhou University of Technology. She received her Ph.D. degree from Xi'an Jiaotong University in 2005. Her research interest covers detection technology, pattern recognition and intelligent systems. E-mail: zhangaihua@lut.edu.cn

    YAO Peng-Peng Ph.D. candidate at the Institute of Textiles and Clothing, Hong Kong Polytechnic University, China. His research interest covers multi-spectral color measurement, camera calibration and image retrieval. E-mail: p.p.yao@connect.polyu.hk

    XU Gui-Li Professor at the College of Automation Engineering, Nanjing University of Aeronautics and Astronautics. He received his Ph.D. degree from Jiangsu University in 2002. His research interest covers photoelectric detection, computer vision and intelligent systems. E-mail: guilixu2002@163.com

  • Abstract: Existing algorithms for the Perspective-n-Line (PnL) problem cannot achieve high accuracy and high efficiency at the same time. To overcome this limitation, an algorithm named EPnL is proposed that combines both. The method first transforms the PnL problem into the problem of finding the intersection points of a system of quadric equations, and then, exploiting the fact that the components of a unit quaternion cannot all be zero at the same time, parameterizes the rotation matrix of the PnL problem by cases. Finally, to avoid the low reliability and efficiency of general-purpose optimization methods, EPnL exploits the structural information of the quadric system itself: the higher-order terms are parameterized by the lower-order terms, so that solving the quadric system reduces to solving a univariate polynomial. Experiments show that, compared with existing algorithms, the proposed method achieves high accuracy while retaining high efficiency.
    1) https://sites.google.com/view/ping-wang-homepage
    2) http://www.robots.ox.ac.uk/~vgg/
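
    A minimal sketch of the line-correspondence constraints behind the PnL problem may help make the abstract concrete. The notation below ($\mathbf{P}_i$, $\mathbf{d}_i$, $\mathbf{n}_i$) is the standard textbook formulation and not necessarily the exact parameterization used in the paper. For the $i$-th 3D line, let $\mathbf{P}_i$ be a point on the line and $\mathbf{d}_i$ its direction, both in world coordinates, and let $\mathbf{n}_i$ be the normal of the interpretation plane spanned by the camera center and the observed image line. The unknown rotation $\mathbf{R}$ and translation $\mathbf{t}$ must satisfy

    $$\mathbf{n}_i^{\top}\mathbf{R}\,\mathbf{d}_i = 0, \qquad \mathbf{n}_i^{\top}\left(\mathbf{R}\,\mathbf{P}_i + \mathbf{t}\right) = 0, \qquad i = 1, \cdots, n$$

    Writing $\mathbf{R}$ in terms of a unit quaternion $\mathbf{q} = (q_0, q_1, q_2, q_3)$ with $q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$ makes every entry of $\mathbf{R}$ quadratic in $\mathbf{q}$; after eliminating $\mathbf{t}$, which appears linearly, the constraints become a system of quadric equations in $\mathbf{q}$. Because the four components of a unit quaternion cannot all vanish at once, the system can be normalized case by case with respect to a nonzero component, which is the classification step mentioned in the abstract.
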
  • Fig.  1  PnL problem

    Fig.  2  The mean and median of rotation and translation errors when the number of lines varies

    Fig.  3  The mean and median of rotation and translation errors when the noise level varies

    Fig.  4  The mean and median of rotation and translation errors in the minimal case (n = 3)

    Fig.  5  The computational efficiency of the compared methods

    Fig.  6  Images from the VGG dataset

    Table  1  Comparison of the number of solutions

    Ref. [9]   Ref. [12]   Ref. [13]   Ref. [14]   Proposed method
       15         27          15          60            14

    Table  2  The mean of rotation and translation errors for each method on the VGG dataset

    Dataset                Model-House   Corridor   Merton-College-Ⅰ   Merton-College-Ⅱ   Merton-College-Ⅲ   University-Library   Wadham-College
    Number of images       10            11         3                  3                  3                  3                    5
    AlgLS        Δθ [°]    0.4220        0.1983     3.6200             55.8037            3.7495             1.8838               60.0517
                 ΔT [m]    0.0384        0.0888     1.1504             14.1879            1.3683             0.9519               9.8801
    DLT-Lines    Δθ [°]    0.8651        0.1104     0.0869             0.2117             0.1751             0.1736               0.1343
                 ΔT [m]    0.0834        0.0415     0.0274             0.1224             0.0625             0.0751               0.0809
    LPnL-Bar-LS  Δθ [°]    0.4135        0.1178     0.0241             0.0261             0.0652             0.3642               0.1526
                 ΔT [m]    0.0403        0.0440     0.0099             0.0149             0.0233             0.1632               0.0909
    RPnL         Δθ [°]    0.5521        0.3652     1.0870             0.3249             1.7528             2.9731               0.4200
                 ΔT [m]    0.0631        0.1150     0.3215             0.1660             0.9121             1.5613               0.1909
    ASPnL        Δθ [°]    0.2265        0.0911     0.1141             0.1515             1.5584             3.6662               0.4227
                 ΔT [m]    0.0162        0.0298     0.0314             0.0600             0.5571             1.6683               0.1955
    SRPnL        Δθ [°]    0.2258        158.952    0.4381             0.1151             36.4034            4.1848               0.0880
                 ΔT [m]    0.0160        17.557     0.1064             0.0495             3.9398             2.0632               0.0407
    EPnL         Δθ [°]    0.2265        0.0969     0.0306             0.0170             0.0504             0.0871               0.0808
                 ΔT [m]    0.0162        0.0252     0.0097             0.0123             0.0147             0.0343               0.0375
  • [1] Yu Kun, Cong Min-Yu, Duan Jia-Jia, Li Xiang-Yu. Monocular visual navigation method for capture point of docking ring. Chinese Journal of Scientific Instrument, 2018, 39(12): 228−236 (in Chinese)
    [2] Ma Yan-Yang, Ye Zi-Hao, Liu Kun-Hua, Chen Long. Event-based visual localization and mapping algorithms: A survey. Acta Automatica Sinica, 2020, 46(x): 1−11 (in Chinese)
    [3] Yu Yu-Feng, Zhao Hui-Jing. Off-road localization using monocular camera and nodding LiDAR. Acta Automatica Sinica, 2019, 45(9): 1791−1798 (in Chinese)
    [4] Hao Jie, Li Gao-Feng, Sun Lei, Lu Xiang, Zhang Sen, Liu Jing-Tai. Relative-pose-of-markers based calibration method for a deformable manipulator. Acta Automatica Sinica, 2018, 44(8): 1413−1424 (in Chinese)
    [5] Alhaija H A, Mustikovela S K, Mescheder L, Geiger A, Rother C. Augmented reality meets computer vision: Efficient data generation for urban driving scenes. International Journal of Computer Vision, 2018, 126(9): 961−972. doi: 10.1007/s11263-018-1070-x
    [6] Griva I, Nash S G, Sofer A. Linear and Nonlinear Optimization. Philadelphia: SIAM, 2009. 355−478
    [7] Abdel-Aziz Y I, Karara H M, Hauck M. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Photogrammetric Engineering and Remote Sensing, 2015, 81(2): 103−107. doi: 10.14358/PERS.81.2.103
    [8] Přibyl B, Zemčík P, Čadík M. Camera pose estimation from lines using Plücker coordinates. In: Proceedings of the 2015 British Machine Vision Conference. Swansea, United Kingdom: 2015. 45: 1−12
    [9] Xu C, Zhang L, Cheng L, Koch R. Pose estimation from line correspondences: A complete analysis and a series of solutions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1209−1222. doi: 10.1109/TPAMI.2016.2582162
    [10] Přibyl B, Zemčík P, Čadík M. Absolute pose estimation from line correspondences using direct linear transformation. Computer Vision and Image Understanding, 2017, 161: 130−144. doi: 10.1016/j.cviu.2017.05.002
    [11] Ansar A, Daniilidis K. Linear pose estimation from points or lines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(5): 578−589. doi: 10.1109/TPAMI.2003.1195992
    [12] Mirzaei F M, Roumeliotis S I. Globally optimal pose estimation from line correspondences. In: Proceedings of the 2011 IEEE International Conference on Robotics and Automation. Shanghai, China: 2011. 5581−5588
    [13] Zhang L, Xu C, Lee K M, Koch R. Robust and efficient pose estimation from line correspondences. In: Proceedings of the 2012 Asian Conference on Computer Vision. Heidelberg, Berlin: 2012. 217−230
    [14] Wang P, Xu G L, Cheng Y H, Yu Q D. Camera pose estimation from lines: A fast, robust and general method. Machine Vision and Applications, 2019, 30: 603−614. doi: 10.1007/s00138-019-01012-0
    [15] Yu Q D, Xu G L, Zhang L M, Shi J C. A consistently fast and accurate algorithm for estimating camera pose from point correspondences. Measurement, 2020, 172: 108914
    [16] Zheng Y Q, Kuang Y B, Sugimoto S, Astrom K, Okutomi M. Revisiting the PnP problem: A fast, general and optimal solution. In: Proceedings of the 2013 International Conference on Computer Vision. Sydney, Australia: 2013. 2344−2351
    [17] Kukelova Z, Bujnak M, Pajdla T. Automatic generator of minimal problem solvers. In: Proceedings of the 2008 European Conference on Computer Vision. Heidelberg, Berlin: 2008. 302−315
    [18] Press W, Teukolsky S, Vetterling W, Flannery B. Numerical Recipes: The Art of Scientific Computing (3rd edition). Cambridge: Cambridge University Press, 2007
    [19] Wang P, Xu G L, Cheng Y H. A novel algebraic solution to the perspective-three-line pose problem. Computer Vision and Image Understanding, 2020, 191: 102711. doi: 10.1016/j.cviu.2018.08.005
Publication history
  • Received:  2020-11-09
  • Accepted:  2021-03-05
  • Published online:  2021-05-12
  • Issue date:  2022-10-14
