目标跟踪中基于IoU和中心点距离预测的尺度估计

李绍明 储珺 冷璐 涂序继

李绍明, 储珺, 冷璐, 涂序继. 目标跟踪中基于IoU和中心点距离预测的尺度估计. 自动化学报, 2021, 48(x): 1001−1014 doi: 10.16383/j.aas.c210356
Li Shao-Ming, Chu Jun, Leng Lu, Tu Xu-Ji. Accurate scale estimation with IoU and distance between centroids for object tracking. Acta Automatica Sinica, 2021, 48(x): 1001−1014 doi: 10.16383/j.aas.c210356


doi: 10.16383/j.aas.c210356
基金项目: 国家自然科学基金(62162045)资助, 江西省科技支撑计划项目20192BBE50073 资助
详细信息
    作者简介:

    李绍明:南昌航空大学软件学院硕士研究生. 主要研究方向为计算机视觉, 目标跟踪. E-mail: thorn_mo1905@163.com

    储珺:南昌航空大学软件学院教授. 主要研究方向为计算机视觉, 模式识别. E-mail: chuj@nchu.edu.cn

    冷璐:南昌航空大学软件学院教授. 主要研究方向为图像处理,生物特征模板保护和生物特征识别. E-mail: leng@nchu.edu.cn

    涂序继:南昌航空大学软件学院讲师. 主要研究方向为计算机视觉和图像处理. E-mail: 71068@nchu.edu.cn

Accurate Scale Estimation with IoU and Distance between Centroids for Object Tracking

Funds: Supported by the National Natural Science Foundation of China (62162045) and the Jiangxi Provincial Science and Technology Key Project (20192BBE50073)
More Information
    Author Bio:

    LI Shao-Ming  Master student at the School of Software, Nanchang Hangkong University. His research interests include computer vision and object tracking.

    CHU Jun  Professor at the School of Software, Nanchang Hangkong University. His research interests include computer vision and pattern recognition.

    LENG Lu  Professor at the School of Software, Nanchang Hangkong University. His research interests include image processing, biometric template protection and biometric recognition.

    TU Xu-Ji  Lecturer at the School of Software, Nanchang Hangkong University. His research interests include computer vision and image processing.

  • Abstract: IoU (intersection over union)-based scale estimation for object tracking trains a scale-regression model to predict the overlap between candidate boxes and the ground-truth box in a video frame, and at inference refines the initial bounding box by maximizing the predicted IoU to obtain the target scale. This paper analyzes the gradient-update process of IoU-based scale estimation in detail and finds that both training and inference use IoU as the only measure, without any constraint on the distance between the centers of the predicted and ground-truth boxes; as a result, the template becomes contaminated when the appearance model is updated, and localization drifts when foreground and background are classified. Based on this observation, a new measure, NDIoU (normalized distance IoU), that combines IoU with the center-point distance is constructed, a new scale estimation method is built on it, and the method is embedded in a discriminative tracking framework: during training, NDIoU serves as the label and a loss function with a center-distance constraint supervises the network; during online inference, the target scale is refined by maximizing NDIoU, which provides more accurate samples for updating the appearance model. Compared with mainstream trackers on seven datasets, the proposed method achieves the best overall performance among all compared algorithms. In particular, on GOT-10k it reaches 65.4%, 78.7% and 53.4% in AO, $SR_{0.5}$ and $SR_{0.75}$, exceeding the baseline by 4.3, 7.0 and 4.2 percentage points, respectively.
    1)  Manuscript received April 24, 2021; accepted November 2, 2021. Supported by the National Natural Science Foundation of China (62162045) and the Jiangxi Provincial Science and Technology Key Project (20192BBE50073). Recommended by Associate Editor
    2)  1. Institute of Computer Vision, School of Software, Nanchang Hangkong University, Nanchang 330063, China  2. Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang 330063, China
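The abstract describes NDIoU as IoU combined with a normalized distance between the centers of the predicted and ground-truth boxes. The exact normalization used in the paper is not reproduced on this page, so the sketch below is illustrative only: it assumes a DIoU-style penalty (squared center distance divided by the squared diagonal of the smallest enclosing box, cf. [27]), and the function `ndiou` and the sample boxes are hypothetical. It shows how two candidates with nearly the same IoU, one centered and one shifted (the situation of Fig. 1), receive different NDIoU scores.

```python
# Illustrative NDIoU-style score: IoU penalized by the normalized distance
# between box centers. The normalization below (squared center distance over
# the squared diagonal of the smallest enclosing box, as in DIoU [27]) is an
# assumption for illustration; the paper's exact definition may differ.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ndiou(a, b):
    """IoU minus the squared center distance normalized by the enclosing-box diagonal."""
    cax, cay = (a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0
    cbx, cby = (b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0
    center_dist2 = (cax - cbx) ** 2 + (cay - cby) ** 2
    # Smallest box enclosing both, whose diagonal normalizes the distance.
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
    return iou(a, b) - center_dist2 / diag2

gt      = (50.0, 50.0, 150.0, 150.0)   # ground-truth box
centred = (59.2, 59.2, 140.8, 140.8)   # shrunk but centered candidate
shifted = (70.0, 50.0, 170.0, 150.0)   # same-size candidate shifted by 20 px

# Both candidates overlap the ground truth by roughly the same IoU, but only
# the shifted one is penalized by the center-distance term.
print(iou(gt, centred), iou(gt, shifted))      # ~0.666  ~0.667
print(ndiou(gt, centred), ndiou(gt, shifted))  # ~0.666  ~0.650
```

Under a score of this form, maximizing NDIoU during box refinement favors candidates whose centers stay aligned with the target, which is exactly the constraint the abstract says the IoU-only criterion lacks.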
  • 图  1  IoU相同, 但中心点距离不同的情况. 其中, 红色代表候选的边界框, 绿色代表真实边界框

    Fig.  1  Same IoU but different distances between centroids. Red represents the candidate bounding box, and green represents the ground-truth box

    图  2  标准化中心点之间的距离

    Fig.  2  Normalized distance between centroids

    图  3  IoU和中心点距离对应视频帧数的统计

    Fig.  3  Statistics of the number of video frames corresponding to IoU and distances between centroids

    图  4  在视频序列dinosaur上跟踪的结果可视化

    Fig.  4  Visualization of tracking results on the video sequence dinosaur

    图  5  本文方法(ASEID)在OTB-100上与相关方法的比较

    Fig.  5  Comparison of the proposed method (ASEID) with related algorithms on OTB-100

    图  6  OTB-100 数据集不同挑战性因素影响下的成功率图

    Fig.  6  Success plots on sequences with different challenging attributes on OTB-100 dataset

    图  7  OTB-100 数据集不同挑战性因素影响下的准确率图

    Fig.  7  Precision plots on sequences with different challenging attributes on OTB-100 dataset

    图  8  本文方法与相关方法的可视化比较

    Fig.  8  Visualization comparison of the proposed method and related trackers

    图  9  OTB-100数据集中的失败案例. 绿色框代表真实框, 红色框代表本文算法的预测框.

    Fig.  9  Failure cases in OTB-100. The green bounding box is ground truth, and the red box represents the prediction box.

    图  10  GOT-10k数据集中的失败案例. 在GOT-10k的测试集中, 由于只能拿到测试视频序列的第一帧的真实框, 因此第一帧的标记代表被跟踪目标.

    Fig.  10  Failure cases in GOT-10k. In GOT-10k test set, only the ground truth in the first frame of the test video sequence can be obtained. Therefore, the ground truth of the first frame represents the tracked target.

    表  1  OTB-100上的消融实验

    Table  1  Ablation study on OTB-100

    Method                AUC (%)    Precision (%)    Norm.Pre (%)    FPS
    Multi-scale search    68.4       88.8             83.8            21
    IoU                   68.4       89.4             84.2            35
    NDIoU                 69.8       91.3             87.3            35

    表  2  在UAV123上和SOTA算法的比较

    Table  2  Comparison with SOTA trackers on UAV123

    Tracker                  AUC (%)    Precision (%)    Norm.Pre (%)
    SiamBAN[33]              63.1       83.3             –
    CGACD[34]                63.3       83.3             –
    POST[35]                 62.9       80.0             –
    MetaRTT[36]              56.9       80.9             –
    ECO[28]                  52.4       74.1             66.8
    UPDT[32]                 54.2       76.8             70.9
    DaSiamRPN[37]            56.9       78.1             74.2
    ATOM[9]                  63.2       84.4             79.1
    DiMP50 (baseline)[14]    64.3       85.0             80.5
    ASEID (ours)             64.5       86.1             81.6

    表  3  在VOT2018上与SOTA方法的比较

    Table  3  Comparison with SOTA trackers on VOT2018

    Tracker                  EAO      Robustness    Accuracy
    DRT[38]                  0.356    0.201         0.519
    RCO[39]                  0.376    0.155         0.507
    UPDT[32]                 0.378    0.184         0.536
    DaSiamRPN[37]            0.383    0.276         0.586
    MFT[39]                  0.385    0.140         0.505
    LADCF[40]                0.389    0.159         0.503
    ATOM[9]                  0.401    0.204         0.590
    SiamRPN++[16]            0.414    0.234         0.600
    DiMP50 (baseline)[14]    0.440    0.153         0.597
    PrDiMP50[15]             0.442    0.165         0.618
    ASEID (ours)             0.454    0.153         0.615

    表  4  在GOT-10k上与SOTA方法的比较

    Table  4  Comparison with SOTA trackers on GOT-10k

    Tracker                  SR0.50 (%)    SR0.75 (%)    AO (%)
    DCFST[30]                68.3          44.8          59.2
    PrDiMP50[15]             73.8          54.3          63.4
    KYS[17]                  75.1          51.5          63.6
    SiamFC++[13]             69.5          47.9          59.5
    D3S[41]                  67.6          46.2          59.7
    Ocean[12]                72.1          –             61.1
    ROAM[31]                 46.6          16.4          43.6
    ATOM[9]                  63.4          40.2          55.6
    DiMP50 (baseline)[14]    71.7          49.2          61.1
    ASEID (ours)             78.7          53.4          65.4

    表  5  在LaSOT上与SOTA方法的比较

    Table  5  Comparison with SOTA trackers on LaSOT

    Tracker                  Precision (%)    Success (AUC) (%)
    ASRCF[6]                 33.7             35.9
    POST[35]                 46.3             48.1
    Ocean[12]                56.6             56.0
    GlobalT[42]              52.7             52.1
    SiamRPN++[16]            56.9             49.6
    ROAM[31]                 44.5             44.7
    ATOM[9]                  50.5             51.4
    DiMP50 (baseline)[14]    56.9             56.9
    ASEID (ours)             57.5             57.2

    表  6  在TrackingNet上与SOTA方法的比较

    Table  6  Comparison with SOTA trackers on TrackingNet

    Tracker                  AUC (%)    Precision (%)    Norm.Pre (%)
    MDNet[29]                60.6       56.5             70.5
    ECO[28]                  55.4       49.2             61.8
    DaSiamRPN[37]            63.8       59.1             73.3
    D3S[41]                  72.8       66.4             –
    ROAM[31]                 67.0       62.3             –
    CGACD[34]                71.1       69.3             –
    ATOM[9]                  70.3       64.8             77.1
    DiMP50 (baseline)[14]    74.0       68.7             80.1
    ASEID (ours)             75.3       71.1             81.9

    表  7  在TC128上与SOTA算法比较

    Table  7  Comparison with SOTA trackers on TC128

    Tracker                  AUC (%)
    POST[35]                 56.3
    MetaRTT[36]              59.7
    ASRCF[6]                 60.3
    UDT[43]                  54.1
    TADT[44]                 56.2
    Re2EMA[45]               52.1
    RTMDNet[46]              56.3
    MLT[47]                  49.8
    DiMP50 (baseline)[14]    61.2
    ASEID (ours)             63.2
    Precision (%) (reported for eight of the ten trackers): 78.1, 80.0, 82.5, 71.7, 69.5, 78.8, 81.0, 84.2
  • [1] Wu Y, Lim J, and Yang M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834–1848. doi: 10.1109/TPAMI.2014.2388226
    [2] 孟琭, 杨旭. 目标跟踪算法综述[J]. 自动化学报, 2019, 45(07): 1244-1260

    Meng Lu, Yang Xu. A review of target tracking algorithms[J]. ACTA AUTOMATICA SINICA, 2019, 45(07): 1244-1260
    [3] 尹宏鹏, 陈波, 柴毅, 刘兆栋. 基于视觉的目标检测与跟踪综述[J]. 自动化学报, 2016, 42(10): 1466-1489

    Yin Hong-Peng, Chen Bo, Chai Yi, Liu Zhao-Dong. A review of object detection and tracking based on vision[J]. ACTA AUTOMATICA SINICA, 2016, 42(10): 1466-1489
    [4] 谭建豪, 郑英帅, 王耀南, 马小萍. 基于中心点搜索的无锚框全卷积孪生跟踪器[J]. 自动化学报, 2021, 47(04): 801-812

    Tan Jian-Hao, Zheng Ying-Shuai, Wang Yao-Nan, Ma Xiao-Ping. AFST: Anchor-free fully convolutional siamese tracker with searching center[J]. ACTA AUTOMATICA SINICA, 2021, 47(04): 801-812
    [5] Danelljan M, Hager G, Khan F S. Learning spatially regularized correlation filters for visual tracking[C]. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 4310–4318.
    [6] Dai K, Wang D, Lu H, Sun C, and Li J. Visual tracking via adaptive spatially-regularized correlation filters[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 4670–4679.
    [7] Danelljan M, Hager G, Khan F S, and Felsberg M. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(8): 1561–1575. doi: 10.1109/TPAMI.2016.2609928
    [8] Li Y and Zhu J. A scale adaptive kernel correlation filter tracker with feature integration[C]. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014. 254–265.
    [9] Danelljan M, Bhat G, Khan F S, and Felsberg M. ATOM: Accurate tracking by overlap maximization[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 4660–4669.
    [10] Li B, Yan J, Wu W, Zhu Z, and Hu X. High performance visual tracking with siamese region proposal network[C]. In: Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 8971–8980.
    [11] Wang Q, Bertinetto L, Hu W, and Torr P. Fast online object tracking and segmentation: a unifying approach[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 1328–1338.
    [12] Zhang Z, Peng H, Fu J, Li B, Hu W. Ocean: Object-aware anchor-free tracking[C]. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 771–787.
    [13] Xu Y, Wang Z, Li Z, Yuan Y, Yu G. SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 12549–12556.
    [14] Bhat G, Danelljan M, Gool L, and Timofte R. Learning discriminative model prediction for tracking[C]. In: Proceedings of the 2019 International Conference on Computer Vision. Seoul, Korea: IEEE, 2019. 6181–6190.
    [15] Danelljan M, Gool L, Timofte R. Probabilistic regression for visual tracking[C]. In: Proceedings of the 2020 IEEE Conference Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 7183–7192.
    [16] Li B, Wu W, Wang Q, Zhang F, Xing J, and Yan J. SiamRPN++: Evolution of siamese visual tracking with very deep networks[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 4282–4291.
    [17] Bhat G, Danelljan M, Gool L, Timofte R. Know your surroundings: exploiting scene information for object tracking[C]. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 205–221.
    [18] Girshick R. Fast R-CNN[C]. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 1440–1448.
    [19] Ren S, He K, Girshick R, and Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149.
    [20] Jiang B, Luo R, Mao J, Xiao T, and Jiang Y. Acquisition of localization confidence for accurate object detection[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 816–832.
    [21] Mueller M, Smith N, and Ghanem B. A benchmark and simulator for UAV tracking[C]. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, Netherlands: Springer, 2016. 445–461.
    [22] Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Zajc L, Vojir T, Bhat G, Lukezic A, Eldesokey A, Fernandez G, et al. The sixth visual object tracking VOT2018 challenge results[C]. In: Proceedings of the 15th European Conference on Computer Vision Workshop. Munich, Germany: Springer, 2018. 3–53.
    [23] Huang L, Zhao X, and Huang K. GOT-10k: A large high-diversity benchmark for generic object tracking in the wild[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(5): 1562–1577.
    [24] Fan H, Lin L, Yang F, Chu P, Deng G, Yu S, Bai H, Xu X, Liao C, and Ling H. LaSOT: A high-quality benchmark for large-scale single object tracking[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 5374–5383.
    [25] Muller M, Bibi A, Giancola S, Subaihi S, and Ghanem B. Trackingnet: A large-scale dataset and benchmark for object tracking in the wild[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 310–327.
    [26] Liang P, Blasch E, Lin H. Encoding color information for visual tracking: algorithms and benchmark[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5630–5644. doi: 10.1109/TIP.2015.2482905
    [27] Zheng Z, Wang P, Liu W, Li J, Ye R, Ren D. Distance-IoU loss: faster and better learning for bounding box regression[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020, 34(7): 12993–13000.
    [28] Danelljan M, Bhat G, Khan F S, and Felsberg M. ECO: Efficient convolution operators for tracking[C]. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 6931–6939.
    [29] Nam H, Han B. Learning multi-domain convolutional neural networks for visual tracking[C]. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 4293–4302.
    [30] Zheng L, Tang M, Chen Y, Wang J, Lu H. Learning feature embeddings for discriminant model based tracking[C]. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 759–775.
    [31] Yang T, Xu P, Hu R, Chai H, Chan A. ROAM: Recurrently optimizing tracking model[C]. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 6718–6727.
    [32] Bhat G, Johnander J, Danelljan M, Khan F S, and Felsberg M. Unveiling the power of deep tracking[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 493–509.
    [33] Chen Z, Zhong B, Li G, Zhang S, and Ji R. Siamese box adaptive network for visual tracking[C]. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 6668–6677.
    [34] Du F, Liu P, Zhao W, Tang X. Correlation-guided attention for corner detection based visual tracking[C]. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 6836–6845.
    [35] Wang N, Zhou W, Qi G, Li H. POST: POlicy-based switch tracking[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 12184–12191.
    [36] Jung I, You K, Noh H, Cho M, Han B. Real-Time object tracking via meta-learning: efficient model adaptation and one-shot channel pruning[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 11205–11212.
    [37] Zhu Z, Wang Q, Li B, Wei W, Yan J, Hu W. Distractor-aware siamese networks for visual object tracking[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 101–117.
    [38] Sun C, Wang D, Lu H, Yang M. Correlation tracking via joint discrimination and reliability learning[C]. In: Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 489–497.
    [39] Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Zajc L C, Vojir T, Bhat G, Lukezic A, Eldesokey A, Fernandez G, et al. The sixth visual object tracking VOT2018 challenge results[C]. In: Proceedings of the 15th European Conference on Computer Vision Workshop. Munich, Germany: Springer, 2018. 3–53.
    [40] Xu T, Feng Z, Wu X, and Kittler J. Learning adaptive discriminative correlation filters via temporal consistency preserving spatial feature selection for robust visual tracking[Online]. ArXiv Preprint ArXiv: 1807.11348, 2018.
    [41] Lukezic A, Matas J, Kristan M. D3S – A discriminative single shot segmentation tracker[C]. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 7133–7142.
    [42] Huang L, Zhao X, Huang K. GlobalTrack: A simple and strong baseline for long-term tracking[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 11037–11044.
    [43] Wang N, Song Y, Ma C, Zhou W, Liu W. Unsupervised deep tracking[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 1308–1317.
    [44] Li X, Ma C, Wu B, He Z, Yang M. Target-aware deep tracking[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 1369–1378.
    [45] Huang J and Zhou W. Re2EMA: Regularized and reinitialized exponential moving average for target model update in object tracking[C]. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Honolulu, USA: AAAI, 2019. 8457–8464.
    [46] Jung I, Song J, Baek M, Han B. Real-time MDNet[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 89–104.
    [47] Choi J, Kwon J, Lee K. Deep meta learning for real-time target-aware visual tracking[C]. In: Proceedings of the 2019 International Conference on Computer Vision. Seoul, Korea: IEEE, 2019. 911–920.
    [48] Paszke A, Gross S, Massa F, et al. PyTorch: An imperative style, high-performance deep learning library[C]. In: Proceedings of the 2019 Neural Information Processing Systems. Vancouver, Canada: MIT Press, 2019.
    [49] Danelljan M, Bhat G. PyTracking: Visual tracking library based on PyTorch [Online], available: https://github.com/visionml/pytracking, 2019.
    [50] Lin T, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, Perona P, Ramanan D, Dollar P, and Zitnick C. Microsoft COCO: Common objects in context[C]. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014. 740–755.
Publication history
  • Received:  2021-04-24
  • Accepted:  2021-11-02
  • Available online:  2021-11-29
