
Cross-domain Person Re-identification on Adaptive Fusion Network

Guo Ying-Chun, Feng Fang, Yan Gang, Hao Xiao-Ke

Citation: Guo Ying-Chun, Feng Fang, Yan Gang, Hao Xiao-Ke. Cross-domain person re-identification on adaptive fusion network. Acta Automatica Sinica, 2022, 48(11): 2744−2756 doi: 10.16383/j.aas.c220083


Cross-domain Person Re-identification on Adaptive Fusion Network

doi: 10.16383/j.aas.c220083

Funds: Supported by National Natural Science Foundation of China (60302018, 61806071, 62102129) and Natural Science Foundation of Hebei Province (F2019202381, F2019202464)
More Information
    Author Bio:

    GUO Ying-Chun Associate professor at the School of Artificial Intelligence, Hebei University of Technology. She received her Ph.D. degree in signal and information processing from Tianjin University in 2006. Her research interest covers digital image processing and computer vision. E-mail: gyc@scse.hebut.edu.cn

    FENG Fang Master student at the School of Artificial Intelligence, Hebei University of Technology. His main research interest is person re-identification. E-mail: fengfang0901@163.com

    YAN Gang Associate professor at the School of Artificial Intelligence, Hebei University of Technology. He received his Ph.D. degree in microelectronics and solid state electronics from Hebei University of Technology in 2019. His research interest covers image processing and intelligent surveillance. Corresponding author of this paper. E-mail: yangang@scse.hebut.edu.cn

    HAO Xiao-Ke Associate professor at the School of Artificial Intelligence, Hebei University of Technology. He received his Ph.D. degree in computer science and technology from Nanjing University of Information Science and Technology in 2017. His research interest covers machine learning, medical image analysis and bioinformatics. E-mail: haoxiaoke@scse.hebut.edu.cn

  • Abstract: Unsupervised cross-domain person re-identification aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain, and has attracted wide attention for its practicality and effectiveness. Clustering-based methods generate pseudo labels to optimize the model and therefore outperform other approaches; however, because they depend heavily on the accuracy of the clustering pseudo labels while neglecting pseudo-label noise, the noise keeps amplifying as the network iterates and weakens the model's robustness. To address this problem, this paper proposes an adaptive fusion network, in which a dual-network structure learns jointly and the learned knowledge is fused into a fusion network; an adaptive fusion strategy is designed to distinguish the learning abilities of the two networks; meanwhile, a fine-grained style conversion module processes the target-domain dataset to reduce the sensitivity of pedestrian images to camera changes. On the person re-identification benchmarks Market1501, DukeMTMC-ReID and MSMT17, comparative experiments against mainstream methods under the mean average precision (mAP) and Rank-n metrics validate the effectiveness of the proposed method.
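
As a rough illustration of the dual-network fusion idea described in the abstract (the paper's actual fusion rule and adaptive coefficient are defined in the full text, not reproduced here), the following PyTorch sketch blends the weights of two jointly trained peer networks into a fusion network; net_a, net_b, alpha and build_model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of dual-network weight fusion (an illustrative assumption,
# not the authors' code): two peer networks with identical architecture
# are blended into a fusion network with coefficient alpha.
import copy
import torch

@torch.no_grad()
def fuse(net_a, net_b, fusion_net, alpha: float):
    """Copy alpha * net_a + (1 - alpha) * net_b into fusion_net.

    alpha in [0, 1] plays the role of the adaptive coefficient; how the
    paper derives it from the two networks' learning abilities is not
    reproduced here.
    """
    for p_f, p_a, p_b in zip(fusion_net.parameters(),
                             net_a.parameters(),
                             net_b.parameters()):
        p_f.copy_(alpha * p_a + (1.0 - alpha) * p_b)

# Hypothetical usage: start the fusion network as a copy of one peer and
# re-fuse after every epoch with a freshly estimated alpha.
#   net_a, net_b = build_model(), build_model()   # build_model is assumed
#   fusion_net = copy.deepcopy(net_a)
#   fuse(net_a, net_b, fusion_net, alpha=0.6)
```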
  • Fig. 1  Adaptive fusion network model

    Fig. 2  Fine-grained style conversion module

    Fig. 3  Evaluation of different values of $\lambda_{id}$

    Fig. 4  Evaluation of different values of $\lambda_{tri}$

    Fig. 5  Evaluation of different numbers of clusters
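
Figs. 3 and 4 sweep the loss weights $\lambda_{id}$ and $\lambda_{tri}$. In re-ID these conventionally scale an identity (cross-entropy) term and a triplet term [28]; below is a hedged sketch of such a weighted objective, assuming a plain linear combination (the exact composition is defined in the full text, and the margin value is assumed, not taken from the paper).

```python
# Sketch of a weighted re-ID objective matching the lambda_id / lambda_tri
# symbols swept in Figs. 3 and 4 (illustrative assumption only).
import torch.nn.functional as F
from torch.nn import TripletMarginLoss

triplet = TripletMarginLoss(margin=0.3)  # 0.3 is a common re-ID margin (assumed)

def total_loss(logits, labels, anchor, positive, negative,
               lambda_id=1.0, lambda_tri=1.0):
    """L = lambda_id * L_id + lambda_tri * L_tri."""
    l_id = F.cross_entropy(logits, labels)       # identity classification loss
    l_tri = triplet(anchor, positive, negative)  # metric-learning triplet loss
    return lambda_id * l_id + lambda_tri * l_tri
```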

    Table 1  Parameter counts of the proposed adaptive fusion network model

    Parameter                       Value
    Total parameters                23512128 × 2
    Trainable parameters            23512128 × 2
    Parameter size (MB)             89.69 × 2
    Estimated total size (MB)       1199.45 × 2
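
The sizes in Table 1 are internally consistent: 23512128 float32 parameters occupy 23512128 × 4 bytes ≈ 89.69 MB, and the ×2 reflects the two peer networks of the dual-network structure. A generic PyTorch sketch of this accounting (not the paper's code):

```python
# Generic parameter accounting of the kind reported in Table 1.
def count_parameters(model):
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    size_mb = sum(p.numel() * p.element_size()
                  for p in model.parameters()) / 2**20  # bytes -> MB
    return total, trainable, size_mb
```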

    Table 2  Comparison with the state-of-the-art methods on Market1501 and DukeMTMC-ReID (%)

                              Duke-to-Market                   Market-to-Duke
    Method                    mAP   Rank-1  Rank-5  Rank-10    mAP   Rank-1  Rank-5  Rank-10
    BUC[34]                   38.3  66.2    79.6    84.5       27.5  47.4    62.6    68.4
    SSL[35]                   37.8  71.7    87.4    37.8       28.6  52.5    63.5    68.9
    MMFA[36]                  27.4  56.7    75.0    81.8       24.7  45.3    59.8    66.3
    TJ-AIDL[11]               26.5  58.2    74.8    81.1       23.0  44.3    59.6    65.0
    D-MMD[12]                 75.1  89.5    95.6    97.1       62.7  79.3    89.3    92.0
    TAL-MIRN[37]              40.0  73.1    —       86.3       41.3  63.5    —       76.7
    ATNet[13]                 25.6  55.7    73.2    79.4       24.9  45.1    59.5    64.2
    SPGAN + LMP[14]           26.7  57.7    75.8    82.4       26.2  46.4    62.3    68.0
    HHL[16]                   31.4  62.2    78.8    84.0       27.2  46.9    61.0    66.7
    ECN[17]                   43.0  75.1    87.6    91.6       40.4  63.3    75.8    80.4
    MAR[18]                   67.7  81.9    87.3    40.0       67.1  79.8    84.2    48.0
    UDAP[38]                  53.7  75.8    89.5    93.2       49.0  68.4    80.1    83.5
    PCB-PAST[39]              54.6  78.4    —       —          54.3  72.4    —       —
    SSG[20]                   58.3  80.0    90.0    92.4       53.4  73.0    80.6    83.2
    AD-Cluster[21]            68.3  86.7    94.4    96.5       54.1  72.6    82.5    85.5
    MMT-500[23]               71.2  87.7    94.9    96.9       63.1  76.8    88.0    92.2
    MEB-Net[40]               76.0  89.9    96.0    97.5       66.1  79.6    88.3    92.2
    SILC[41]                  61.8  80.7    90.1    93.0       50.3  68.5    80.2    85.4
    DRDL[42]                  42.7  76.8    88.5    91.6       43.2  65.3    76.9    82.2
    PREST[43]                 62.4  82.5    92.1    94.9       56.1  74.4    83.7    85.9
    SpCL[44]                  76.7  90.3    96.2    97.7       68.8  82.9    90.1    92.5
    MLOL[45]                  70.9  86.6    93.1    95.1       69.8  83.1    90.8    93.0
    UNRN[46]                  78.1  91.9    96.1    97.8       69.1  82.0    90.7    93.5
    Proposed method           79.1  91.8    97.1    98.2       68.5  80.7    90.1    92.6
    Proposed + uncertainty    79.9  92.3    97.4    98.3       69.8  82.1    90.5    93.1
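
mAP and Rank-n in Tables 2 and 3 are the standard re-ID retrieval metrics. A simplified single-query sketch follows (it omits the full protocol's removal of same-camera "junk" gallery images):

```python
import numpy as np

def single_query_metrics(good_mask, ks=(1, 5, 10)):
    """Rank-k hits and average precision for one query.

    good_mask: boolean array over the gallery, already sorted by descending
    similarity to the query; True marks a same-identity match.
    """
    hits = np.flatnonzero(good_mask)             # ranks of the correct matches
    rank_k = {k: bool(hits.size and hits[0] < k) for k in ks}
    # Precision evaluated at each correct match's position.
    precisions = np.arange(1, hits.size + 1) / (hits + 1)
    ap = float(precisions.mean()) if hits.size else 0.0
    return rank_k, ap
```

mAP is then the mean of the per-query APs, and Rank-k the fraction of queries whose first correct match appears within the top k.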

    Table 3  Comparison with the state-of-the-art methods on MSMT17 (%)

                              Duke-to-MSMT17                   Market-to-MSMT17
    Method                    mAP   Rank-1  Rank-5  Rank-10    mAP   Rank-1  Rank-5  Rank-10
    ECN[17]                   10.2  30.2    41.5    46.8        8.5  25.3    36.3    42.1
    SSG[20]                   13.3  32.2    —       51.2       13.2  31.6    —       49.6
    MMT-1500[23]              23.3  50.1    63.9    69.8       22.9  49.2    63.1    68.8
    SILC[41]                  12.6  33.1    45.2    48.0       10.9  27.8    38.1    45.8
    TAL-MIRN[37]              14.2  39.0    —       51.5       11.2  30.9    —       43.5
    DRDL[42]                  14.9  42.0    53.7    59.1       14.7  38.6    51.4    57.1
    PREST[43]                 18.5  43.8    57.5    63.6       15.9  37.8    51.8    57.8
    SpCL[44]                  26.5  53.1    65.8    70.5       25.4  51.6    64.3    69.7
    MLOL[45]                  22.4  48.3    60.7    66.1       21.7  46.9    59.4    64.7
    UNRN[46]                  26.2  54.9    67.3    70.6       25.3  52.4    64.7    69.7
    Proposed method           30.2  60.4    73.3    77.9       29.4  59.6    72.8    77.5
    Proposed + uncertainty    30.8  61.0    73.9    78.3       30.6  61.0    73.7    78.0

    Table 4  Ablation experiments on Market1501 and DukeMTMC-ReID (%)

                                   Duke-to-Market                   Market-to-Duke
    Method                         mAP   Rank-1  Rank-5  Rank-10    mAP   Rank-1  Rank-5  Rank-10
    Direct transfer                31.8  61.9    76.4    82.2       29.9  46.2    61.9    68.0
    Baseline                       53.5  76.0    88.1    91.9       48.2  66.4    79.8    84.0
    Proposed w/F                   74.3  90.2    95.8    97.6       62.9  77.1    87.9    91.5
    Proposed w/(F + T)             77.6  91.5    96.8    98.1       66.3  79.0    89.6    92.3
    Proposed w/(F + T + A)         78.2  91.7    96.9    98.1       66.9  79.9    89.7    92.2
    Proposed w/(F + T + S)         78.9  91.2    96.8    98.0       67.5  80.3    89.9    92.4
    Proposed w/(F + T + A + S)     79.1  91.8    97.1    98.2       68.5  80.7    90.1    92.6

    Table 5  Comparison of clustering algorithms

                              Duke-to-Market                                  Market-to-Duke
    Method                    mAP (%)  R-1 (%)  R-5 (%)  R-10 (%)  Time (s)   mAP (%)  R-1 (%)  R-5 (%)  R-10 (%)  Time (s)
    Mini-Batch k-means        79.1     91.8     97.1     98.2      811        68.5     80.7     90.1     92.6      908
    k-means                   79.3     91.8     97.2     98.1      1472       68.8     80.9     90.1     92.6      1669
    DBSCAN                    80.1     92.3     97.4     98.4      3224       69.9     82.1     90.7     92.9      3643
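
Table 5 quantifies a quality/runtime trade-off: DBSCAN scores highest but runs roughly four times slower than Mini-Batch k-means. Below is a scikit-learn sketch of producing pseudo labels from extracted features with either family; eps, min_samples, n_clusters and the stand-in feature matrix are assumptions, not the paper's settings.

```python
# Pseudo-label generation with the two clustering families of Table 5
# (parameter values assumed, not the paper's settings).
import numpy as np
from sklearn.cluster import DBSCAN, MiniBatchKMeans
from sklearn.preprocessing import normalize

features = np.random.rand(12936, 2048).astype(np.float32)  # stand-in; 12936 = Market1501 train size
features = normalize(features)  # L2-normalize so Euclidean distance tracks cosine

# Fast: Mini-Batch k-means with a fixed cluster count (cf. the Fig. 5 sweep).
km_labels = MiniBatchKMeans(n_clusters=500, batch_size=1024).fit_predict(features)

# Slower but stronger in Table 5: density-based DBSCAN; label -1 marks
# outliers, which are typically discarded before pseudo-supervised training.
db_labels = DBSCAN(eps=0.6, min_samples=4).fit_predict(features)
keep = db_labels != -1
```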
  • [1] Ye Yu, Wang Zheng, Liang Chao, Han Zhen, Chen Jun, Hu Rui-Min. A survey on multi-source person re-identification. Acta Automatica Sinica, 2020, 46(9): 1869-1884 (in Chinese)
    [2] Ye M, Shen J B, Lin G J, Xiang T, Shao L, Hoi S C H. Deep learning for person re-identification: A survey and outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(6): 2872-2893 doi: 10.1109/TPAMI.2021.3054775
    [3] Li You-Jiao, Zhuo Li, Zhang Jing, Li Jia-Feng, Zhang Hui. A survey of person re-identification. Acta Automatica Sinica, 2018, 44(9): 1554-1568 (in Chinese)
    [4] Bai S, Bai X, Tian Q. Scalable person re-identification on supervised smoothed manifold. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 3356−3365
    [5] Luo Hao, Jiang Wei, Fan Xing, Zhang Si-Peng. A survey on deep learning based person re-identification. Acta Automatica Sinica, 2019, 45(11): 2032-2049 (in Chinese)
    [6] Zhang Yun-Peng, Wang Hong-Yuan, Zhang Ji, Chen Li, Wu Lin-Yu, Gu Jia-Hui, et al. One-shot video-based person re-identification based on neighborhood center iteration strategy. Journal of Software, 2021, 32(12): 4025-4035 (in Chinese)
    [7] Liu Yi-Min, Jiang Jian-Guo, Qi Mei-Bin, Liu Hao, Zhou Hua-Jie. Video-based person re-identification method based on GAN and pose estimation. Acta Automatica Sinica, 2020, 46(3): 576-584 (in Chinese)
    [8] Wang M L, Lai B S, Huang J Q, Gong X J, Hua X S. Camera-aware proxies for unsupervised person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 2764-2772 doi: 10.1609/aaai.v35i4.16381
    [9] Wu Y M, Wu X T, Li X, Tian J. MGH: Metadata guided hypergraph modeling for unsupervised person re-identification. In: Proceedings of the 29th ACM International Conference on Multimedia. Virtual Event, China: ACM, 2021. 1571−1580
    [10] Chen H, Lagadec B, Bremond F. ICE: Inter-instance contrastive encoding for unsupervised person re-identification. In: Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE, 2021. 14940−14949
    [11] Wang J Y, Zhu X T, Gong S G, Li W. Transferable joint attribute-identity deep learning for unsupervised person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 2275−2284
    [12] Mekhazni D, Bhuiyan A, Ekladious G, Granger E. Unsupervised domain adaptation in the dissimilarity space for person re-identification. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 159−174
    [13] Liu J W, Zha Z J, Chen D, Hong R C, Wang M. Adaptive transfer network for cross-domain person re-identification. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 7195−7204
    [14] Deng W J, Zheng L, Ye Q X, Kang G L, Yang Y, Jiao J B. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 994−1003
    [15] Wei L H, Zhang S L, Gao W, Tian Q. Person transfer GAN to bridge domain gap for person re-identification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 79−88
    [16] Zhong Z, Zheng L, Li S Z, Yang Y. Generalizing a person retrieval model hetero- and homogeneously. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 172−188
    [17] Zhong Z, Zheng L, Luo Z M, Li S Z, Yang Y. Invariance matters: Exemplar memory for domain adaptive person re-identification. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 598−607
    [18] Yu H X, Zheng W S, Wu A C, Guo X W, Gong S G, Lai J H. Unsupervised person re-identification by soft multilabel learning. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 2143−2152
    [19] Saito K, Watanabe K, Ushiku Y, Harada T. Maximum classifier discrepancy for unsupervised domain adaptation. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 3723−3732
    [20] Fu Y, Wei Y C, Wang G S, Zhou Y Q, Shi H H, et al. Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification. In: Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE, 2019. 6111−6120
    [21] Zhai Y P, Lu S J, Ye Q X, Shan X B, Chen J, Ji R R, et al. AD-Cluster: Augmented discriminative clustering for domain adaptive person re-identification. In: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 9018−9027
    [22] Yang F X, Li K, Zhong Z, Luo Z M, Sun X, Cheng H, et al. Asymmetric co-teaching for unsupervised cross-domain person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12597-12604 doi: 10.1609/aaai.v34i07.6950
    [23] Ge Y X, Chen D P, Li H S. Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification. arXiv: 2001.01526, 2020
    [24] Wang W H, Zhao F, Liao S C, Shao L. Attentive WaveBlock: Complementarity-enhanced mutual networks for unsupervised domain adaptation in person re-identification and beyond. IEEE Transactions on Image Processing, 2022, 31: 1532-1544 doi: 10.1109/TIP.2022.3140614
    [25] Bertocco G C, Andaló F, Rocha A. Unsupervised and self-adaptative techniques for cross-domain person re-identification. IEEE Transactions on Information Forensics and Security, 2021, 16: 4419-4434 doi: 10.1109/TIFS.2021.3107157
    [26] Sheng K K, Li K, Zheng X W, Liang J, Dong W M, Huang F Y, et al. On evolving attention towards domain adaptation. arXiv: 2103.13561, 2021
    [27] Zheng Z D, Yang X D, Yu Z D, Zheng L, Yang Y, Kautz J. Joint discriminative and generative learning for person re-identification. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 2133−2142
    [28] Hermans A, Beyer L, Leibe B. In defense of the triplet loss for person re-identification. arXiv: 1703.07737, 2017
    [29] Zheng L, Shen L Y, Tian L, Wang S J, Wang J D, Tian Q. Scalable person re-identification: A benchmark. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 1116−1124
    [30] Zheng Z D, Zheng L, Yang Y. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 3774−3782
    [31] Felzenszwalb P F, Girshick R B, McAllester D, Ramanan D. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(9): 1627-1645 doi: 10.1109/TPAMI.2009.167
    [32] Ristani E, Solera F, Zou R, Cucchiara R, Tomasi C. Performance measures and a data set for multi-target, multi-camera tracking. In: Proceedings of the European Conference on Computer Vision. Amsterdam, Netherlands: Springer, 2016. 17−35
    [33] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S A, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015, 115(3): 211-252 doi: 10.1007/s11263-015-0816-y
    [34] Lin Y T, Dong X Y, Zheng L, Yan Y, Yang Y. A bottom-up clustering approach to unsupervised person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 8738-8745 doi: 10.1609/aaai.v33i01.33018738
    [35] Lin Y T, Xie L X, Wu Y, Yan C G, Tian Q. Unsupervised person re-identification via softened similarity learning. In: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 3387−3396
    [36] Lin S, Li H L, Li C T, Kot A C. Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification. In: Proceedings of the 29th British Machine Vision Conference. Newcastle, UK: 2018.
    [37] Li H F, Dong N, Yu Z T, Tao D P, Qi G Q. Triple adversarial learning and multi-view imaginative reasoning for unsupervised domain adaptation person re-identification. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(5): 2814-2830 doi: 10.1109/TCSVT.2021.3099943
    [38] Song L C, Wang C, Zhang L F, Du B, Zhang Q, Huang C, et al. Unsupervised domain adaptive re-identification: Theory and practice. Pattern Recognition, 2020, 102: Article No. 107173 doi: 10.1016/j.patcog.2019.107173
    [39] Zhang X Y, Cao J W, Shen C H, You M Y. Self-training with progressive augmentation for unsupervised cross-domain person re-identification. In: Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE, 2019. 8221−8230
    [40] Zhai Y P, Ye Q X, Lu S J, Jia M X, Ji R R, Tian Y H. Multiple expert brainstorming for domain adaptive person re-identification. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 594−611
    [41] Ainam J P, Qin K, Owusu J W, Lu G M. Unsupervised domain adaptation for person re-identification with iterative soft clustering. Knowledge-Based Systems, 2021, 212: Article No. 106644 doi: 10.1016/j.knosys.2020.106644
    [42] Li H F, Xu K X, Li J X, Lu G M, Xu Y, Yu Z T, et al. Dual-stream reciprocal disentanglement learning for domain adaptation person re-identification. arXiv: 2106.13929, 2021
    [43] Zhang H, Cao H H, Yang X, Deng C, Tao D C. Self-training with progressive representation enhancement for unsupervised cross-domain person re-identification. IEEE Transactions on Image Processing, 2021, 30: 5287-5298 doi: 10.1109/TIP.2021.3082298
    [44] Ge Y X, Zhu F, Chen D P, Zhao R, Li H S. Self-paced contrastive learning with hybrid memory for domain adaptive object re-ID. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020. 11309−11321
    [45] Sun J, Li Y F, Chen H J, Peng Y H, Zhu J L. Unsupervised cross domain person re-identification by multi-loss optimization learning. IEEE Transactions on Image Processing, 2021, 30: 2935-2946 doi: 10.1109/TIP.2021.3056889
    [46] Zheng K C, Lan C L, Zeng W J, Zhan Z Z, Zha Z J. Exploiting sample uncertainty for domain adaptive person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3538-3546 doi: 10.1609/aaai.v35i4.16468
    [47] Zheng Z D, Yang Y. Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation. International Journal of Computer Vision, 2021, 129(4): 1106-1120 doi: 10.1007/s11263-020-01395-y
Figures (5) / Tables (5)
Publication History
  • Received:  2022-01-28
  • Accepted:  2022-07-30
  • Available online:  2022-08-29
  • Published:  2022-11-22
