基于边缘计算的工业视频网络智能感知: 挑战与进展

涂静正 温晓婧 陈彩莲 关新平

涂静正, 温晓婧, 陈彩莲, 关新平. 基于边缘计算的工业视频网络智能感知: 挑战与进展. 自动化学报, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240668
引用本文: 涂静正, 温晓婧, 陈彩莲, 关新平. 基于边缘计算的工业视频网络智能感知: 挑战与进展. 自动化学报, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240668
Tu Jing-Zheng, Wen Xiao-Jing, Chen Cai-Lian, Guan Xin-Ping. Edge computing based intelligent perception for industrial video network: Challenge and progress. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240668
Citation: Tu Jing-Zheng, Wen Xiao-Jing, Chen Cai-Lian, Guan Xin-Ping. Edge computing based intelligent perception for industrial video network: Challenge and progress. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240668

基于边缘计算的工业视频网络智能感知: 挑战与进展

doi: 10.16383/j.aas.c240668 cstr: 32138.14.j.aas.c24066
基金项目: 国家自然科学基金(62025305, 62432009, 62227811)资助
    作者简介:

    涂静正:上海交通大学自动化系博士研究生. 主要研究方向为工业网络系统的视频智能分析与边缘计算. E-mail: tujingzheng@sjtu.edu.cn

    温晓婧:上海交通大学自动化系助理研究员. 主要研究方向为感知−通信−计算协同设计, 工业网络系统, 边缘计算及工业网络切片. E-mail: xiaojingwen@sjtu.edu.cn

    陈彩莲:上海交通大学自动化系特聘教授. 主要研究方向为无线传感器网络与工业应用, 计算智能, 分布式状态感知与优化, 智能交通中车联网及应用. E-mail: cailianchen@sjtu.edu.cn

    关新平:上海交通大学自动化系讲席教授. 主要研究方向为工业信息物理融合系统, 智能工厂中无线网络及应用, 水下传感器网络. 本文通信作者. E-mail: xpguan@sjtu.edu.cn

Edge Computing Based Intelligent Perception for Industrial Video Network: Challenge and Progress

Funds: Supported by National Natural Science Foundation of China (62025305, 62432009, 62227811)
    Author Bio:

    TU Jing-Zheng Ph. D. candidate at the Department of Automation, Shanghai Jiao Tong University. Her research interest covers video intelligent analytics and edge computing of industrial network systems

    WEN Xiao-Jing Assistant professor at the Department of Automation, Shanghai Jiao Tong University. Her research interest covers codesign of perception, communication and computation, industrial network systems, edge computing and industrial network slicing

    CHEN Cai-Lian Distinguished professor at the Department of Automation, Shanghai Jiao Tong University. Her research interest covers wireless sensor networks and industrial applications, computational intelligence, distributed situation awareness and optimization, internet of vehicles and applications in intelligent transportation

    GUAN Xin-Ping Chair Professor in the Department of Automation, Shanghai Jiao Tong University. His research interest covers industrial cyber-physical systems, wireless networking and applications in smart factory, and underwater sensor networks. Corresponding author of this paper

  • 摘要: 工业视频网络是由工业网络系统现场层的视觉感知终端组成的网络, 是实现工业网络系统泛在感知的重要基石. 通过支持边缘计算层和现场设备层之间的交互和物联, 工业视频网络将独立的视觉传感器单元无线连接、边缘处理, 以实现空间分散下的协作监控和精确感知. 它具有感知维度高, 网络动态性强, 感知与传输、计算、存储紧密耦合等突出特性. 如何在计算、网络、存储资源受限环境下实现终端压缩提纯、边缘协作处理、云端敏捷分析, 是这类系统研究的新挑战. 本文首先简述工业视频网络的定义和主要特征; 其次分析工业视频网络智能感知面临的挑战和关键问题; 然后综述基于边缘计算的工业视频网络感知关键技术的研究进展; 最后对工业视频网络智能感知的未来研究方向和潜在应用前景进行总结和展望.
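
下述代码为一个示意性草图(并非文中任何算法的实现, 各相机的候选分辨率、精度与带宽数值均为假设), 用于直观说明摘要中"感知与传输、计算资源紧密耦合"的含义: 在共享上行带宽预算内为多个相机各选择一档视频配置, 以最大化整体感知精度.

```python
from itertools import product

# 假设数据: 每个相机的候选配置为 (分辨率, 估计感知精度, 所需上行带宽 Mbps)
cameras = {
    "cam1": [("1080p", 0.92, 8.0), ("720p", 0.88, 4.0), ("360p", 0.74, 1.5)],
    "cam2": [("1080p", 0.90, 8.0), ("720p", 0.85, 4.0), ("360p", 0.70, 1.5)],
    "cam3": [("1080p", 0.95, 8.0), ("720p", 0.91, 4.0), ("360p", 0.80, 1.5)],
}
BANDWIDTH_BUDGET = 12.0  # 共享上行带宽预算 (Mbps), 假设值

def best_configuration(cameras, budget):
    """穷举所有配置组合, 在带宽约束内使精度之和最大 (小规模示意, 非可扩展的在线算法)."""
    best, best_acc = None, -1.0
    for combo in product(*cameras.values()):
        bandwidth = sum(cfg[2] for cfg in combo)
        accuracy = sum(cfg[1] for cfg in combo)
        if bandwidth <= budget and accuracy > best_acc:
            best, best_acc = dict(zip(cameras, combo)), accuracy
    return best, best_acc

config, total_acc = best_configuration(cameras, BANDWIDTH_BUDGET)
for cam, (res, acc, bw) in config.items():
    print(f"{cam}: {res}, 估计精度 {acc:.2f}, 带宽 {bw} Mbps")
print(f"精度之和: {total_acc:.2f}")
```

实际系统中(如文中综述的相机网络配置方法)还需进一步考虑时延约束、网络动态性与在线求解的可扩展性, 此处仅为概念性示例.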
  • 图  1  工业视频网络的整体架构

    Fig.  1  The overall architecture of industrial video network

    图  2  工业视频网络关键技术

    Fig.  2  The crucial technologies of industrial video network

    图  3  视频边缘协作缓存机制的示意图

    Fig.  3  Illustration of collaborative video edge caching mechanism

    图  4  基于感知敏感值的视频空间分片步骤

    Fig.  4  The steps of perception sensitivity based video tiling

    图  5  基于OMAF标准的全景视频主要处理流程

    Fig.  5  Main processing procedure of 360-degree videos based on the OMAF standard

    图  6  三元组上的角度约束示意图

    Fig.  6  Illustration of the angular constraint on a triplet

    图  7  网络分解策略对比

    Fig.  7  Network decomposition strategy comparison

    图  8  卷积层和全连接层的并行运行

    Fig.  8  Parallel running of convolutional layers and fully-connected layers

    图  9  不同视频分辨率对边缘感知算法精度和响应时延的影响

    Fig.  9  Effect of different video resolutions on accuracy and response latency of edge perception algorithm

    图  10  EdgeLeague框架示意图

    Fig.  10  Illustration of the overall architecture of EdgeLeague

    图  11  不同时延限制对精度的影响

    Fig.  11  The impact of different latency restrictions on the accuracy

    图  12  不同带宽对时延的影响

    Fig.  12  The impact of different bandwidths on the latency

    图  13  工业视频网络在飞机总装产线现场监控中的应用

    Fig.  13  Application of industrial video network to field monitoring of aircraft assembly line

    图  14  工业视频网络的技术趋势图

    Fig.  14  Technical trend charts of industrial video networks

    表  1  映射方法对比

    Table  1  Comparison of projection methods

    映射方法 | 像素比例 | 优点 | 缺点
    经纬图映射 | 1.57 | 映射方法简单 | 南北两极存在畸变
    立方体映射 | 1.91 | 每个面失真小 | 多面拼接不连续
    八面体映射 | 1.10 | 映射面积小 | 多面拼接不连续
    二十面体映射 | 1.21 | 映射面积小 | 多面拼接不连续
    柱状映射 | 1.09 | 映射面积小 | 存在空白区域
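表 1 中的"像素比例"可理解为投影展开平面面积与单位球面面积之比. 下面给出一个简短的验证性计算草图(仅为示意): 经纬图映射与立方体映射两行的数值可由几何关系直接得到, 其余映射的比例还取决于具体展开布局, 此处不作计算.

```python
import math

R = 1.0                               # 单位球半径
sphere_area = 4 * math.pi * R ** 2    # 球面面积 4πR²

# 经纬图映射 (equirectangular): 展开平面宽 2πR, 高 πR
erp_area = (2 * math.pi * R) * (math.pi * R)
print(f"经纬图映射像素比例: {erp_area / sphere_area:.2f}")   # π/2 ≈ 1.57

# 立方体映射 (cubemap): 单位球的外接立方体边长为 2R, 共 6 个面
cmp_area = 6 * (2 * R) ** 2
print(f"立方体映射像素比例: {cmp_area / sphere_area:.2f}")   # 6/π ≈ 1.91
```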

    表  2  在VehicleID上不同图片污染程度下的DFR-ST算法性能对比

    Table  2  Performance comparison of DFR-ST Algorithm with different image contamination degrees on VehicleID

    污染程度 | 测试集数量 = 800 (mAP / Top-1 / Top-5) | 测试集数量 = 1600 (mAP / Top-1 / Top-5) | 测试集数量 = 2400 (mAP / Top-1 / Top-5)
    0% | 87.76 / 82.15 / 95.39 | 84.94 / 79.33 / 92.76 | 83.18 / 77.93 / 89.52
    5% | 84.47 / 78.83 / 91.41 | 81.50 / 75.99 / 88.32 | 80.09 / 74.77 / 85.48
    10% | 82.55 / 76.92 / 89.71 | 79.36 / 73.72 / 86.88 | 78.72 / 73.49 / 85.04
    20% | 69.18 / 61.13 / 78.73 | 68.35 / 61.37 / 76.98 | 64.38 / 56.63 / 73.70
    30% | 65.75 / 58.62 / 73.69 | 61.53 / 53.59 / 71.10 | 58.24 / 50.46 / 67.29
    40% | 61.61 / 54.12 / 70.35 | 59.77 / 51.84 / 69.21 | 54.81 / 46.84 / 63.73
    50% | 60.66 / 52.47 / 69.93 | 56.06 / 47.75 / 65.64 | 53.70 / 45.35 / 63.24
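表 2 中的 mAP 与 Top-k 是重识别检索任务的通用评价指标. 为便于理解其含义, 下面给出按通用定义计算这两类指标的简化示例(其中的检索命中序列为假设数据, 与表中结果无关).

```python
import numpy as np

def average_precision(ranked_hits):
    """ranked_hits: 按相似度降序排列的 0/1 命中序列, 返回该查询的平均精度 AP."""
    hits = np.asarray(ranked_hits, dtype=float)
    if hits.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precision_at_k * hits).sum() / hits.sum())

def top_k_accuracy(all_ranked_hits, k):
    """Top-k: 前 k 个检索结果中至少出现一次正确匹配的查询所占比例."""
    return float(np.mean([1.0 if any(hits[:k]) else 0.0 for hits in all_ranked_hits]))

# 两个假设查询的检索命中序列 (1 表示检索结果与查询为同一目标)
queries = [
    [0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0],
]
mean_ap = np.mean([average_precision(q) for q in queries])
print(f"mAP = {mean_ap:.4f}, Top-1 = {top_k_accuracy(queries, 1):.2f}, "
      f"Top-5 = {top_k_accuracy(queries, 5):.2f}")
```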

    表  3  增量学习的策略对比

    Table  3  Incremental learning policy comparison

    方法 | 最佳平均精度 | 推荐正则器 | 推荐模型
    基于回放的方法
    iCaRL[156] | 48.55 | Dropout | S/B/W
    基于正则化的方法
    LwF[162] | 48.11 | L2 | W
    EBLL算法[163] | 48.17 | L2 | W
    EWC[164] | 45.13 | | S
    基于参数隔离的方法
    PackNet[166] | 55.96 | Dropout/L2 | S/B/W
    HAT[167] | 44.19 | L2 | B/W
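表 3 中基于正则化的方法(如 LwF[162])的核心思想是用知识蒸馏约束新模型在旧类别上的输出, 以缓解灾难性遗忘. 下面给出该蒸馏损失项的一个简化 PyTorch 风格草图(概念性示意, 并非文献中的完整实现, 温度系数与权重均为假设值).

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits_old_classes, old_logits, T=2.0):
    """让新模型在旧类别上的输出逼近旧模型给出的软标签 (LwF 式蒸馏项)."""
    soft_targets = F.softmax(old_logits / T, dim=1)
    log_probs = F.log_softmax(new_logits_old_classes / T, dim=1)
    # 乘以 T² 以保持梯度量级
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)

def incremental_loss(new_logits, labels, old_logits, num_old_classes, lam=1.0):
    """总损失 = 新任务交叉熵 + 旧任务蒸馏正则项."""
    ce = F.cross_entropy(new_logits, labels)                    # 新数据上的监督损失
    kd = distillation_loss(new_logits[:, :num_old_classes],     # 仅取旧类别对应的输出
                           old_logits)
    return ce + lam * kd

# 假设的批量数据: 8 个样本, 旧类别 5 个, 新增类别 3 个 (共 8 类)
new_logits = torch.randn(8, 8)          # 新模型输出
old_logits = torch.randn(8, 5)          # 冻结的旧模型在旧类别上的输出
labels = torch.randint(0, 8, (8,))
print(incremental_loss(new_logits, labels, old_logits, num_old_classes=5).item())
```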
  • [1] 关新平, 陈彩莲, 杨博, 华长春, 吕玲, 朱善迎. 工业网络系统的感知-传输-控制一体化: 挑战和进展. 自动化学报, 2019, 45(1): 25−36

    Guan Xin-Ping, Chen Cai-Lian, Yang Bo, Hua Chang-Chun, Lv Ling, Zhu Shan-Ying. Towards the integration of sensing, transmission and control for industrial network systems: Challenges and recent developments. Acta Automatica Sinica, 2019, 45(1): 25−36
    [2] 张颖. 工业网络系统的控制-传输联合性能分析 [硕士学位论文], 上海交通大学, 中国, 2020

    Zhang Ying. Performance analysis of the combined control-transmission of industrial network system: From the viewpoint of control-transmission cost [Master dissertation], Shanghai Jiao Tong University, China, 2020
    [3] Zhang Y J, Xu Q M, Guan X P, Chen C L, Li M Y. Wireless/wired integrated transmission for industrial cyber-physical systems: Risk-sensitive co-design of 5G and TSN protocols. Science China Information Sciences, 2022, 65(1): Article No. 110204 doi: 10.1007/s11432-020-3344-8
    [4] Jung S H, Cho Y S, Park R S, Kim J M, Jung H K, Chung Y S. High-resolution millimeter-wave ground-based SAR imaging via compressed sensing. IEEE Transactions on Magnetics, 2018, 54(3): Article No. 9400504.
    [5] Zhao J, Liu Q, Wang X, Mao S W. Scheduled sequential compressed spectrum sensing for wideband cognitive radios. IEEE Transactions on Mobile Computing, 2018, 17(4): 913−926 doi: 10.1109/TMC.2017.2744621
    [6] Shi W Z, Caballero J, Huszár F, Totz J, Aitken A P, Bishop R, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 1874–1883
    [7] Chen A P, Chen Z, Zhang G L, Mitchell K, Yu J Y. Photo-realistic facial details synthesis from single image. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE, 2019. 9428–9438
    [8] Agustsson E, Minnen D, Toderici G, Mentzer F. Multi-realism image compression with a conditional generator. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 22324–22333
    [9] 中国团体标准. 工业互联网 汽车制造业 5G+汽车外观缺陷在线检测系统 通用要求与测试方法, T/ZJEI 003-2023, 2023

    Chinese Group Standards. Industrial internet-automotive manufacturing-general requirements and test methods for on-line 5G-based automotive appearance defect detection applications, T/ZJEI 003-2023, 2023
    [10] Yuan X X, Cai Z C. ICHV: A new compression approach for industrial images. IEEE Transactions on Industrial Informatics, 2022, 18(7): 4427−4435 doi: 10.1109/TII.2021.3125375
    [11] Li J F, Liu X Y, Gao Y Q, Zhuo L, Zhang J. BARRN: A blind image compression artifact reduction network for industrial IoT systems. IEEE Transactions on Industrial Informatics, 2023, 19(9): 9479−9490 doi: 10.1109/TII.2022.3228777
    [12] Zhang Z, Lung C H, St-Hilaire M, Lambadaris I. An SDN-based caching decision policy for video caching in information-centric networking. IEEE Transactions on Multimedia, 2020, 22(4): 1069−1083 doi: 10.1109/TMM.2019.2935683
    [13] Zhang Y C, Li P M, Zhang Z L, Bai B, Zhang G, Wang W D, et al. AutoSight: Distributed edge caching in short video network. IEEE Network, 2020, 34(3): 194−199 doi: 10.1109/MNET.001.1900345
    [14] Zhang R X, Huang T C, Sun L F. A long-short-term fusion approach for video cache. In: Proceedings of the IEEE Global Communications Conference. Taipei, China: IEEE, 2020. 1–6
    [15] Chiang Y, Hsu C H, Wei H Y. Collaborative social-aware and QoE-driven video caching and adaptation in edge network. IEEE Transactions on Multimedia, 2021, 23: 4311−4325 doi: 10.1109/TMM.2020.3040532
    [16] Maniotis P, Bourtsoulatze E, Thomos N. Tile-based joint caching and delivery of 360° videos in heterogeneous networks. IEEE Transactions on Multimedia, 2020, 22(9): 2382−2395 doi: 10.1109/TMM.2019.2957993
    [17] Li T, Braud T, Li Y, Hui P. Lifecycle-aware online video caching. IEEE Transactions on Mobile Computing, 2021, 20(8): 2624−2636 doi: 10.1109/TMC.2020.2984364
    [18] Zheng Z W, Wang W, Shan H G, Zhang Z Y. Frame-level video caching and transmission scheduling via stochastic learning. In: Proceedings of the IEEE Global Communications Conference (GLOBECOM). Madrid, Spain: IEEE, 2021. 1–6
    [19] Zhang A Y, Li Q, Chen Y, Ma X T, Zou L H, Jiang Y, et al. Video super-resolution and caching-An edge-assisted adaptive video streaming solution. IEEE Transactions on Broadcasting, 2021, 67(4): 799−812 doi: 10.1109/TBC.2021.3071010
    [20] Huang X Y, He L J, Wang L J, Li F. Towards 5G: Joint optimization of video segment caching, transcoding and resource allocation for adaptive video streaming in a multi-access edge computing network. IEEE Transactions on Vehicular Technology, 2021, 70(10): 10909−10924 doi: 10.1109/TVT.2021.3108152
    [21] Qu Z H, Ye B L, Tang B, Guo S, Lu S L, Zhuang W H. Cooperative caching for multiple bitrate videos in small cell edges. IEEE Transactions on Mobile Computing, 2020, 19(2): 288−299 doi: 10.1109/TMC.2019.2893917
    [22] Shi W X, Wang C, Jiang Y, Li Q, Shen G B, Muntean G M. CoLEAP: Cooperative learning-based edge scheme with caching and prefetching for DASH video delivery. IEEE Transactions on Multimedia, 2021, 23: 3631−3645 doi: 10.1109/TMM.2020.3029893
    [23] Wang F X, Wang F, Liu J C, Shea R, Sun L F. Intelligent video caching at network edge: A multi-agent deep reinforcement learning approach. In: Proceedings of the IEEE Conference on Computer Communications. Toronto, Canada: IEEE, 2020. 2499–2508
    [24] Yeo H, Jung Y, Kim J, Shin J, Han D S. Neural adaptive content-aware internet video delivery. In: Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation. Carlsbad, USA: USENIX Association, 2018. 645–661
    [25] Zhang Y J, Zhang Y X, Wu Y, Tao Y, Bian K G, Zhou P. Improving quality of experience by adaptive video streaming with super-resolution. In: Proceedings of the IEEE Conference on Computer Communications. Toronto, Canada: IEEE, 2020. 1957–1966
    [26] Qian F, Han B, Xiao Q Y, Gopalakrishnan V. Flare: Practical viewport-adaptive 360-degree video streaming for mobile devices. In: Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. New Delhi, India: ACM, 2018. 99–114
    [27] He J, Qureshi M A, Qiu L L, Li J, Li F, Han L. Rubiks: Practical 360-degree streaming for smartphones. In: Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services. Munich, Germany: ACM, 2018. 482–494
    [28] Zhang Y X, Zhao P Y, Bian K G, Liu Y X, Song L Y, Li X M. DRL360: 360-degree video streaming with deep reinforcement learning. In: Proceedings of the IEEE Conference on Computer Communications. Paris, France: IEEE, 2019. 1252–1260
    [29] Zhang Y X, Guan Y S, Bian K G, Liu Y X, Tuo H, Song L Y, et al. EPASS360: QoE-aware 360-degree video streaming over mobile devices. IEEE Transactions on Mobile Computing, 2021, 20(7): 2338−2353 doi: 10.1109/TMC.2020.2978187
    [30] Xiao M B, Zhou C, Liu Y, Chen S Q. OpTile: Toward optimal tiling in 360-degree video streaming. In: Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM, 2017. 708–716
    [31] Zhou C, Xiao M B, Liu Y. ClusTile: Toward minimizing bandwidth in 360-degree video streaming. In: Proceedings of the IEEE Conference on Computer Communications. Honolulu, USA: IEEE, 2018. 962–970
    [32] Guan Y, Zheng C Y, Zhang X G, Guo Z M, Jiang J C. Pano: Optimizing 360° video streaming with a better understanding of quality perception. In: Proceedings of the ACM Special Interest Group on Data Communication. Beijing, China: ACM, 2019. 394–407
    [33] Xie L, Xu Z M, Ban Y X, Zhang X G, Guo Z M. 360ProbDASH: Improving QoE of 360 video streaming using tile-based http adaptive streaming. In: Proceedings of the 25th ACM International Conference on Multimedia. Mountain View, USA: ACM, 2017. 315–323
    [34] Zou J N, Li C L, Liu C M, Yang Q, Xiong H K, Steinbach E. Probabilistic tile visibility-based server-side rate adaptation for adaptive 360-degree video streaming. IEEE Journal of Selected Topics in Signal Processing, 2020, 14(1): 161−176 doi: 10.1109/JSTSP.2019.2956716
    [35] Xu Y Y, Dong Y B, Wu J R, Sun Z Z, Shi Z R, Yu J Y. Gaze prediction in dynamic 360° immersive videos. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 5333–5342
    [36] Nguyen A, Yan Z S, Nahrstedt K. Your attention is unique: Detecting 360-degree video saliency in head-mounted display for head movement prediction. In: Proceedings of the 26th ACM International Conference on Multimedia. Seoul, Republic of Korea: ACM, 2018. 1190–1198
    [37] Fan C L, Yen S C, Huang C Y, Hsu C H. Optimizing fixation prediction using recurrent neural networks for 360° video streaming in head-mounted virtual reality. IEEE Transactions on Multimedia, 2020, 22(3): 744−759 doi: 10.1109/TMM.2019.2931807
    [38] Hu H, Xu Z M, Zhang X G, Guo Z M. Optimal viewport-adaptive 360-degree video streaming against random head movement. In: Proceedings of the IEEE International Conference on Communications (ICC). Shanghai, China: IEEE, 2019. 1–6
    [39] Ban Y X, Xie L, Xu Z M, Zhang X G, Guo Z M, Wang Y. CUB360: Exploiting cross-users behaviors for viewport prediction in 360 video adaptive streaming. In: Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). San Diego, USA: IEEE, 2018. 1–6
    [40] Xie L, Zhang X G, Guo Z M. CLS: A cross-user learning based system for improving QoE in 360-degree video adaptive streaming. In: Proceedings of the 26th ACM International Conference on Multimedia. Seoul, Republic of Korea: ACM, 2018. 564–572
    [41] Graf M, Timmerer C, Mueller C. Towards bandwidth efficient adaptive streaming of omnidirectional video over HTTP: Design, implementation, and evaluation. In: Proceedings of the 8th ACM on Multimedia Systems Conference. Taipei, China: ACM, 2017. 261–271
    [42] Mahzari A, Nasrabadi A T, Samiei A, Prakash R. FoV-aware edge caching for adaptive 360° video streaming. In: Proceedings of the 26th ACM International Conference on Multimedia. Seoul, Republic of Korea: ACM, 2018. 173–181
    [43] Corbillon X, de Simone F, Simon G, Frossard P. Dynamic adaptive streaming for multi-viewpoint omnidirectional videos. In: Proceedings of the 9th ACM Multimedia Systems Conference. Amsterdam, Netherlands: ACM, 2018. 237–249
    [44] Palash M, Popescu V, Sheoran A, Fahmy S. Robust 360° video streaming via non-linear sampling. In: Proceedings of the IEEE Conference on Computer Communications. Vancouver, Canada: IEEE, 2021. 1–10
    [45] Chen X D, Tan T X, Cao G H. Popularity-aware 360-degree video streaming. In: Proceedings of the IEEE Conference on Computer Communications. Vancouver, Canada: IEEE, 2021. 1–10
    [46] Tu J Z, Chen C L, Yang Z W, Li M Y, Xu Q M, Guan X P. PSTile: Perception-sensitivity-based 360° tiled video streaming for industrial surveillance. IEEE Transactions on Industrial Informatics, 2023, 19(9): 9777−9789 doi: 10.1109/TII.2022.3216812
    [47] Bewley A, Ge Z Y, Ott L, Ramos F, Upcroft B. Simple online and realtime tracking. In: Proceedings of the IEEE International Conference on Image Processing (ICIP). Phoenix, USA: IEEE, 2016. 3464–3468
    [48] Corbillon X, de Simone F, Simon G. 360-degree video head movement dataset. In: Proceedings of the 8th ACM on Multimedia Systems Conference. Taipei, China: ACM, 2017. 199–204
    [49] Wu C L, Tan Z H, Wang Z, Yang S Q. A dataset for exploring user behaviors in VR spherical video streaming. In: Proceedings of the 8th ACM on Multimedia Systems Conference. Taipei, China: ACM, 2017. 193–198
    [50] Cheng Q, Shan H G, Zhuang W H, Yu L, Zhang Z Y, Quek T Q S. Design and analysis of MEC- and proactive caching-based 360° mobile VR video streaming. IEEE Transactions on Multimedia, 2022, 24: 1529−1544 doi: 10.1109/TMM.2021.3067205
    [51] Jiang J C, Sekar V, Zhang H. Improving fairness, efficiency, and stability in HTTP-based adaptive video streaming with Festive. IEEE/ACM Transactions on Networking, 2014, 22(1): 326−340 doi: 10.1109/TNET.2013.2291681
    [52] Huang T Y, Johari R, McKeown N, Trunnell M, Watson M. A buffer-based approach to rate adaptation: Evidence from a large video streaming service. In: Proceedings of the ACM conference on SIGCOMM. Chicago, USA: ACM, 2014. 187–198
    [53] Spiteri K, Urgaonkar R, Sitaraman R K. BOLA: Near-optimal bitrate adaptation for online videos. In: Proceedings of the 35th Annual IEEE International Conference on Computer Communications. San Francisco, USA: IEEE, 2016. 1–9
    [54] Yin X Q, Jindal A, Sekar V, Sinopoli B. A control-theoretic approach for dynamic adaptive video streaming over HTTP. In: Proceedings of the ACM Conference on Special Interest Group on Data Communication. London, UK: ACM, 2015. 325–338
    [55] Sun Y, Yin X Q, Jiang J C, Sekar V, Lin F Y, Wang N S, et al. CS2P: Improving video bitrate selection and adaptation with data-driven throughput prediction. In: Proceedings of the ACM SIGCOMM Conference. Florianopolis, Brazil: ACM, 2016. 272–285
    [56] Mao H Z, Netravali R, Alizadeh M. Neural adaptive video streaming with Pensieve. In: Proceedings of the ACM Special Interest Group on Data Communication. Los Angeles, USA: ACM, 2017. 197–210
    [57] Xu L, Xu Q M, Tu J Z, Zhang J L, Zhang Y Z, Chen C L, et al. Learning-based scalable scheduling and routing co-design with stream similarity partitioning for time-sensitive networking. IEEE Internet of Things Journal, 2022, 9(15): 13353−13363 doi: 10.1109/JIOT.2022.3143829
    [58] Xu L, Xu Q M, Chen C L, Zhang Y Z, Wang S L, Guan X P. Efficient task-network scheduling with task conflict metric in time-sensitive networking. IEEE Transactions on Industrial Informatics, 2024, 20(2): 1528−1538 doi: 10.1109/TII.2023.3278883
    [59] Wang X L, Zhang J L, Chen C L, He J P, Ma Y H, Guan X P. Trust-AoI-aware codesign of scheduling and control for edge-enabled IIoT systems. IEEE Transactions on Industrial Informatics, 2024, 20(2): 2833−2842 doi: 10.1109/TII.2023.3299040
    [60] Wen X J, Chen C L, Ren C, Ma Y H, Li M Y, Lv L, et al. Age-of-task-aware co-design of sampling, scheduling, and control for industrial IoT systems. IEEE Internet of Things Journal, 2024, 11(3): 4227−4242 doi: 10.1109/JIOT.2023.3300921
    [61] Li M Y, Chen C L, Wu H Q, Guan X P, Shen X M. Edge-assisted spectrum sharing for freshness-aware industrial wireless networks: A learning-based approach. IEEE Transactions on Wireless Communications, 2022, 21(9): 7737−7752 doi: 10.1109/TWC.2022.3160857
    [62] Wen X J, Chen C L, Li M Y, Lv L, Guan X P. Age-of-task aware sampling rate optimization in edge-assisted industrial network systems. In: Proceedings of the IEEE Global Communications Conference (GLOBECOM). Madrid, Spain: IEEE, 2021. 1–6
    [63] Zhang Y J, Xu Q M, Chen C L, Li M Y, Guan X P, Quek T Q S. Seamless scheduling for NFV-enabled 5G-TSN network: A full-path AoI based method. IEEE Transactions on Industrial Informatics, 2024, 20(12): 13513−13525 doi: 10.1109/TII.2024.3396299
    [64] Li M Y, Guo S T, Chen C, Chen C L, Liao X F, Guan X P. DecAge: Decentralized flow scheduling for industrial 5G and TSN integrated networks. IEEE Transactions on Network Science and Engineering, 2024, 11(1): 543−555 doi: 10.1109/TNSE.2023.3301879
    [65] Liu X N, Yin H, Lin C, Liu Y, Chen Z J, Xiao X. Performance analysis and industrial practice of peer-assisted content distribution network for large-scale live video streaming. In: Proceedings of the 22nd International Conference on Advanced Information Networking and Applications (AINA 2008). Gino-wan, Japan: IEEE, 2008. 568–574
    [66] Kanzaki H, Schubert K, Bambos N. Video streaming schemes for industrial IoT. Journal of Reliable Intelligent Environments, 2017, 3(4): 233−241 doi: 10.1007/s40860-017-0051-0
    [67] Wang Z D, Zheng L, Liu Y X, Li Y L, Wang S J. Towards real-time multi-object tracking. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 107–122
    [68] Chen L, Ai H Z, Zhuang Z J, Shang C. Real-time multiple people tracking with deeply learned candidate selection and person re-identification. In: Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). San Diego, USA: IEEE, 2018. 1–6
    [69] Wang G A, Wang Y Z, Zhang H T, Gu R S, Hwang J N. Exploit the connectivity: Multi-object tracking with TrackletNet. In: Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM, 2019. 482–490
    [70] Sun S J, Akhtar N, Song H S, Mian A, Shah M. Deep affinity network for multiple object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(1): 104−119
    [71] Chopra S, Hadsell R, LeCun Y. Learning a similarity metric discriminatively, with application to face verification. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). San Diego, USA: IEEE, 2005. 539–546
    [72] Hadsell R, Chopra S, LeCun Y. Dimensionality reduction by learning an invariant mapping. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). New York, USA: IEEE, 2006. 1735–1742
    [73] Schroff F, Kalenichenko D, Philbin J. FaceNet: A unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 815–823
    [74] Hermans A, Beyer L, Leibe B. In defense of the triplet loss for person re-identification. arXiv preprint arXiv: 1703.07737, 2017.
    [75] Wang J, Zhou F, Wen S L, Liu X, Lin Y Q. Deep metric learning with angular loss. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2612–2620
    [76] Chari V, Lacoste-Julien S, Laptev I, Sivic J. On pairwise costs for network flow multi-object tracking. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 5537–5545
    [77] Zhang L, Li Y, Nevatia R. Global data association for multi-object tracking using network flows. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, USA: IEEE, 2008. 1–8
    [78] Wen L Y, Du D W, Li S K, Bian X, Lv S W. Learning non-uniform hypergraph for multi-object tracking. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Honolulu, USA: AAAI, 2019. 8981–8988
    [79] Chiu H K, Prioletti A, Li J, Bohg J. Probabilistic 3D multi-object tracking for autonomous driving. arXiv preprint arXiv: 2001.05673, 2020.
    [80] Weng X S, Wang J R, Held D, Kitani K. 3D multi-object tracking: A baseline and new evaluation metrics. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, USA: IEEE, 2020. 10359–10366
    [81] Chiu H K, Li J, Ambrus R, Bohg J. Probabilistic 3D multi-modal, multi-object tracking for autonomous driving. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Xi'an, China: IEEE, 2021. 14227–14233
    [82] Bergmann P, Meinhardt T, Leal-Taixé L. Tracking without bells and whistles. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE, 2019. 941–951
    [83] Zaech J N, Liniger A, Dai D X, Danelljan M, van Gool L. Learnable online graph representations for 3D multi-object tracking. IEEE Robotics and Automation Letters, 2022, 7(2): 5103−5110 doi: 10.1109/LRA.2022.3145952
    [84] Yin T W, Zhou X Y, Krähenbühl P. Center-based 3D object detection and tracking. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 11779–11788
    [85] Kratz L, Nishino K. Tracking with local spatio-temporal motion patterns in extremely crowded scenes. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE, 2010. 693–700
    [86] Wen L Y, Li W B, Yan J J, Lei Z, Yi D, Li S Z. Multiple target tracking based on undirected hierarchical relation hypergraph. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE, 2014. 1282–1289
    [87] Hu W M, Li X, Luo W H, Zhang X Q, Maybank S, Zhang Z F. Single and multiple object tracking using Log-Euclidean Riemannian subspace and block-division appearance model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(12): 2420−2440 doi: 10.1109/TPAMI.2012.42
    [88] Cai L, Zeng H Q, Zhu J Q, Cao J W, Wang Y T, Ma K K. Cascading scene and viewpoint feature learning for pedestrian gender recognition. IEEE Internet of Things Journal, 2021, 8(4): 3014−3026 doi: 10.1109/JIOT.2020.3021763
    [89] Abdel-Basset M, Hawash H, Chakrabortty R K, Ryan M, Elhoseny M, Song H B. ST-DeepHAR: Deep learning model for human activity recognition in IoHT applications. IEEE Internet of Things Journal, 2021, 8(6): 4969−4979 doi: 10.1109/JIOT.2020.3033430
    [90] Yang X H, Liu W F, Zhang D L, Liu W, Tao D C. Targeted attention attack on deep learning models in road sign recognition. IEEE Internet of Things Journal, 2021, 8(6): 4980−4990 doi: 10.1109/JIOT.2020.3034899
    [91] Yang S C, Wen Y, He L H, Zhou M C. Sparse common feature representation for undersampled face recognition. IEEE Internet of Things Journal, 2021, 8(7): 5607−5618 doi: 10.1109/JIOT.2020.3031390
    [92] Huang Z T, Yu Y K, Xu J W, Ni F, Le X Y. PF-Net: Point fractal network for 3D point cloud completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 7659–7667
    [93] Yu Y K, Huang Z T, Li F, Zhang H D, Le X Y. Point encoder GAN: A deep learning model for 3D point cloud inpainting. Neurocomputing, 2020, 384: 192−199 doi: 10.1016/j.neucom.2019.12.032
    [94] Le X Y, Mei J H, Zhang H D, Zhou B Y, Xi J T. A learning-based approach for surface defect detection using small image datasets. Neurocomputing, 2020, 408: 112−120 doi: 10.1016/j.neucom.2019.09.107
    [95] Tang S Y, Andriluka M, Andres B, Schiele B. Multiple people tracking by lifted multicut and person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 3701–3710
    [96] Xiang J, Zhang G S, Hou J H, Sang N, Huang R. Multiple target tracking by learning feature representation and distance metric jointly. arXiv preprint arXiv: 1802.03252, 2018.
    [97] Tu J Z, Chen C L, Xu Q M, Yang B, Guan X P. Resource-efficient visual multiobject tracking on embedded device. IEEE Internet of Things Journal, 2022, 9(11): 8531−8543 doi: 10.1109/JIOT.2021.3115102
    [98] Milan A, Schindler K, Roth S. Multi-target tracking by discrete-continuous energy minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 2054−2068 doi: 10.1109/TPAMI.2015.2505309
    [99] Chen Y H, Emer J, Sze V. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. In: Proceedings of the ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). Seoul, Korea (South): IEEE, 2016. 367–379
    [100] Pantho J H, Bhowmik P, Bobda C. Towards an efficient CNN inference architecture enabling in-sensor processing. Sensors, 2021, 21(6): Article No. 1955 doi: 10.3390/s21061955
    [101] Kim K D, Park E, Yoo S, Choi T, Yang L, Shin D. Compression of deep convolutional neural networks for fast and low power mobile applications. In: Proceedings of the 4th International Conference on Learning Representations. San Juan, Puerto Rico: ICLR, 2016. 1–6
    [102] Lebedev V, Ganin Y, Rakhuba M, Oseledets I V, Lempitsky V S. Speeding-up convolutional neural networks using fine-tuned CP-Decomposition. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, USA: ICLR, 2015. 1–6
    [103] Tai C, Xiao T, Wang X G, E W N. Convolutional neural networks with low-rank regularization. In: Proceedings of the 4th International Conference on Learning Representations. San Juan, Puerto Rico: ICLR, 2016. 1–6
    [104] Mathur A, Lane N D, Bhattacharya S, Boran A, Forlivesi C, Kawsar F. DeepEye: Resource efficient local execution of multiple deep vision models using wearable commodity hardware. In: Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. Niagara Falls, USA: ACM, 2017. 68–81
    [105] Peng X S, Murphey Y L, Liu R R, Li Y X. Driving maneuver early detection via sequence learning from vehicle signals and video images. Pattern Recognition, 2020, 103: Article No. 107276 doi: 10.1016/j.patcog.2020.107276
    [106] Wan S H, Ding S T, Chen C. Edge computing enabled video segmentation for real-time traffic monitoring in internet of vehicles. Pattern Recognition, 2022, 121: Article No. 108146 doi: 10.1016/j.patcog.2021.108146
    [107] Bashir R M S, Shahzad M, Fraz M M. VR-PROUD: Vehicle re-identification using progressive unsupervised deep architecture. Pattern Recognition, 2019, 90: 52−65 doi: 10.1016/j.patcog.2019.01.008
    [108] Liao S C, Hu Y, Zhu X Y, Li S Z. Person re-identification by local maximal occurrence representation and metric learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 2197–2206
    [109] Zheng L, Shen L Y, Tian L, Wang S J, Wang J D, Tian Q. Scalable person re-identification: A benchmark. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015. 1116–1124
    [110] Yang Z Y, Soltanian-Zadeh S, Farsiu S. BiconNet: An edge-preserved connectivity-based approach for salient object detection. Pattern Recognition, 2022, 121: Article No. 108231 doi: 10.1016/j.patcog.2021.108231
    [111] Zhao Z J, Zhao S Y, Shen J B. Real-time and light-weighted unsupervised video object segmentation network. Pattern Recognition, 2021, 120: Article No. 108120 doi: 10.1016/j.patcog.2021.108120
    [112] Xu Y Q, Jung C, Chang Y K. Head pose estimation using deep neural networks and 3D point clouds. Pattern Recognition, 2022, 121: Article No. 108210 doi: 10.1016/j.patcog.2021.108210
    [113] Dong J, Yang W K, Yao Y Z, Porikli F. Knowledge memorization and generation for action recognition in still images. Pattern Recognition, 2021, 120: Article No.108188 doi: 10.1016/j.patcog.2021.108188
    [114] Wang Y C, Chen Z Z, Wu F, Wang G. Person re-identification with cascaded pairwise convolutions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1470–1478
    [115] Zhong Z, Zheng L, Luo Z M, Li S Z, Yang Y. Invariance matters: Exemplar memory for domain adaptive person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 598–607
    [116] Ristani E, Tomasi C. Features for multi-target multi-camera tracking and re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 6036–6046
    [117] Chen W H, Chen X T, Zhang J G, Huang K Q. Beyond triplet loss: A deep quadruplet network for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 1320–1329
    [118] Yi D, Lei Z, Liao S C, Li S Z. Deep metric learning for person re-identification. In: Proceedings of the 22nd International Conference on Pattern Recognition. Stockholm, Sweden: IEEE, 2014. 34–39
    [119] Chen G Y, Lin C Z, Ren L L, Lu J W, Zhou J. Self-critical attention learning for person re-identification. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE, 2019. 9636–9645
    [120] Qian X L, Fu Y W, Jiang Y G, Xiang T, Xue X Y. Multi-scale deep learning architectures for person re-identification. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 5409–5418
    [121] Liu J W, Zha Z J, Xie H T, Xiong Z W, Zhang Y D. Ca3Net: Contextual-attentional attribute-appearance network for person re-identification. In: Proceedings of the 26th ACM International Conference on Multimedia. Seoul, Republic of Korea: ACM, 2018. 737–745
    [122] Fu Y, Wei Y C, Zhou Y Q, Shi H H, Huang G, Wang X C, et al. Horizontal pyramid matching for person re-identification. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Honolulu, USA: AAAI, 2019. 8295–8302
    [123] Yang W J, Huang H J, Zhang Z, Chen X T, Huang K Q, Zhang S. Towards rich feature discovery with class activation maps augmentation for person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 1389–1398
    [124] Zhang Z, Zhang H J, Liu S. Person re-identification using heterogeneous local graph attention networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 12131–12140
    [125] Kalayeh M M, Basaran E, Gökmen M, Kamasak M E, Shah M. Human semantic parsing for person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1062–1071
    [126] Liu C, Chang X J, Shen Y D. Unity style transfer for person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 6886–6895
    [127] Liu J W, Zha Z J, Hong R C, Wang M, Zhang Y D. Deep adversarial graph attention convolution network for text-based person search. In: Proceedings of the 27th ACM International Conference on Multimedia. Nice, France: ACM, 2019. 665–673
    [128] Liu H, Jie J Q, Jayashree K, Qi M B, Jiang J G, Yan S C, et al. Video-based person re-identification with accumulative motion context. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(10): 2788−2802 doi: 10.1109/TCSVT.2017.2715499
    [129] Li J N, Zhang S L, Huang T J. Multi-scale 3D convolution network for video based person re-identification. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Honolulu, USA: AAAI, 2019. 8618–8625
    [130] Chen D P, Li H S, Xiao T, Yi S, Wang X G. Video person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1169–1178
    [131] Hou R B, Chang H, Ma B P, Huang R, Shan S G. BiCnet-TKS: Learning efficient spatial-temporal representation for video person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 2014–2023
    [132] Liu J W, Zha Z J, Wu W, Zheng K C, Sun Q B. Spatial-temporal correlation and topology learning for person re-identification in videos. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 4368–4377
    [133] Tu J Z, Chen C L, Huang X L, He J P, Guan X P. DFR-ST: Discriminative feature representation with spatio-temporal cues for vehicle re-identification. Pattern Recognition, 2022, 131: Article No. 108887 doi: 10.1016/j.patcog.2022.108887
    [134] Zhou Y, Shao L. Viewpoint-aware attentive multi-view inference for vehicle re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 6489–6498
    [135] Zhang H Y, Ananthanarayanan G, Bodik P, Philipose M, Bahl P, Freedman M. Live video analytics at scale with approximation and delay-tolerance. In: Proceedings of the 14th USENIX Conference on Networked Systems Design and Implementation. Boston, USA: USENIX Association, 2017. 377–392
    [136] Zhang W Y, Li S G, Liu L Y, Jia Z H, Zhang Y Y, Raychaudhuri D. Hetero-Edge: Orchestration of real-time vision applications on heterogeneous edge clouds. In: Proceedings of the IEEE Conference on Computer Communications. Paris, France: IEEE, 2019. 1270–1278
    [137] Yang P, Lv F, Wu W, Zhang N, Yu L, Shen X S. Edge coordinated query configuration for low-latency and accurate video analytics. IEEE Transactions on Industrial Informatics, 2020, 16(7): 4855−4864 doi: 10.1109/TII.2019.2949347
    [138] Jiang J C, Ananthanarayanan G, Bodik P, Sen S, Stoica I. Chameleon: Scalable adaptation of video analytics. In: Proceedings of the Conference of the ACM Special Interest Group on Data Communication. Budapest, Hungary: ACM, 2018. 253–266
    [139] Wang S B, Yang S S, Zhao C. SurveilEdge: Real-time video query based on collaborative cloud-edge deep learning. In: Proceedings of the IEEE Conference on Computer Communications. Toronto, Canada: IEEE, 2020. 2519–2528
    [140] Ran X K, Chen H L, Zhu X D, Liu Z M, Chen J S. DeepDecision: A mobile deep learning framework for edge video analytics. In: Proceedings of the IEEE Conference on Computer Communications. Honolulu, USA: IEEE, 2018. 1421–1429
    [141] Liu L Y, Li H Y, Gruteser M. Edge assisted real-time object detection for mobile augmented reality. In: Proceedings of the 25th Annual International Conference on Mobile Computing and Networking. Los Cabos, Mexico: ACM, 2019. Article No. 25
    [142] Wang C, Zhang S, Chen Y, Qian Z Z, Wu J, Xiao M J. Joint configuration adaptation and bandwidth allocation for edge-based real-time video analytics. In: Proceedings of the IEEE Conference on Computer Communications. Toronto, Canada: IEEE, 2020. 257–266
    [143] Tan T X, Cao G H. FastVA: Deep learning video analytics through edge processing and NPU in mobile. In: Proceedings of the IEEE Conference on Computer Communications. Toronto, Canada: IEEE, 2020. 1947–1956
    [144] Huynh L N, Lee Y, Balan R K. DeepMon: Mobile GPU-based deep learning framework for continuous vision applications. In: Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. Niagara Falls, USA: ACM, 2017. 82–95
    [145] Xu M W, Zhu M Z, Liu Y X, Lin F X, Liu X Z. DeepCache: Principled cache for mobile deep vision. In: Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. New Delhi, India: ACM, 2018. 129–144
    [146] Fang B Y, Zhang X, Zhang M. NestDNN: Resource-aware multi-tenant on-device deep learning for continuous mobile vision. In: Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. New Delhi, India: ACM, 2018. 115–127
    [147] Wang X L, Chen C L, He J P, Zhu S Y, Guan X P. Learning-based online transmission path selection for secure estimation in edge computing systems. IEEE Transactions on Industrial Informatics, 2021, 17(5): 3577−3587 doi: 10.1109/TII.2020.3012090
    [148] Long C C, Cao Y, Jiang T, Zhang Q. Edge computing framework for cooperative video processing in multimedia IoT systems. IEEE Transactions on Multimedia, 2018, 20(5): 1126−1139 doi: 10.1109/TMM.2017.2764330
    [149] Yang K P, Shan H G, Sun T X, Hu H J, Wu Y X, Yu L, et al. Reinforcement learning-based mobile edge computing and transmission scheduling for video surveillance. IEEE Transactions on Emerging Topics in Computing, 2022, 10(2): 1142−1156
    [150] Tu J Z, Xu Q M, Chen C L. CANS: Communication limited camera network self-configuration for intelligent industrial surveillance. In: Proceedings of the 47th Annual Conference of the IEEE Industrial Electronics Society. Toronto, Canada: IEEE, 2021. 1–6
    [151] Tu J Z, Chen C L, Xu Q M, Guan X P. EdgeLeague: Camera network configuration with dynamic edge grouping for industrial surveillance. IEEE Transactions on Industrial Informatics, 2023, 19(5): 7110−7121 doi: 10.1109/TII.2022.3205938
    [152] 范绍帅, 吴剑波, 田辉. 面向能量受限工业物联网设备的联邦学习资源管理. 通信学报, 2022, 43(8): 65−77 doi: 10.11959/j.issn.1000-436x.2022126

    Fan Shao-Shuai, Wu Jian-Bo, Tian Hui. Federated learning resource management for energy-constrained industrial IoT devices. Journal on Communications, 2022, 43(8): 65−77 doi: 10.11959/j.issn.1000-436x.2022126
    [153] Ali S, Li Q M, Yousafzai A. Blockchain and federated learning-based intrusion detection approaches for edge-enabled industrial IoT networks: A survey. Ad Hoc Networks, 2024, 152: Article No. 103320 doi: 10.1016/j.adhoc.2023.103320
    [154] Qian Z, Feng Y M, Dai C L, Li W, Li G H. Mobility-aware proactive video caching based on asynchronous federated learning in mobile edge computing systems. Applied Soft Computing, 2024, 162: Article No. 111795 doi: 10.1016/j.asoc.2024.111795
    [155] Liu F Y, Ye M, Du B. Domain generalized federated learning for person re-identification. Computer Vision and Image Understanding, 2024, 241: Article No. 103969 doi: 10.1016/j.cviu.2024.103969
    [156] Rebuffi S A, Kolesnikov A, Sperl G, Lampert C H. iCaRL: Incremental classifier and representation learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 5533–5542
    [157] Castro F M, Marín-Jiménez M, Guil N, Schmid C, Alahari K. End-to-end incremental learning. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 241–257
    [158] Wu Y, Chen Y P, Wang L J, Ye Y C, Liu Z C, Guo Y D, et al. Large scale incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 374–382
    [159] Lopez-Paz D, Ranzato M. Gradient episodic memory for continual learning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 6470–6479
    [160] Aljundi R, Lin M, Goujaud B, Bengio Y. Gradient based sample selection for online continual learning. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2019. Article No. 1058
    [161] Shin H, Lee J K, Kim J H, Kim J. Continual learning with deep generative replay. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 2994–3003
    [162] Li Z Z, Hoiem D. Learning without forgetting. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 614–629
    [163] Rannen A, Aljundi R, Blaschko M B, Tuytelaars T. Encoder based lifelong learning. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 1329–1337
    [164] Liu X L, Masana M, Herranz L, van de Weijer J, López A M, Bagdanov A D. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In: Proceedings of the 24th International Conference on Pattern Recognition (ICPR). Beijing, China: IEEE, 2018. 2262–2268
    [165] Dhar P, Singh R V, Peng K C, Wu Z Y, Chellappa R. Learning without memorizing. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 5133–5141
    [166] Guizilini V, Ambrus R, Pillai S, Raventos A, Gaidon A. 3D packing for self-supervised monocular depth estimation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 2482–2491
    [167] Serra J, Suris D, Miron M, Karatzoglou A. Overcoming catastrophic forgetting with hard attention to the task. In: Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: PMLR, 2018. 4548–4557
    [168] Singh P, Mazumder P, Rai P, Namboodiri V P. Rectification-based knowledge retention for continual learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 15277–15286
    [169] Yan S P, Xie J W, He X M. DER: Dynamically expandable representation for class incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 3013–3022
    [170] Tao X Y, Hong X P, Chang X Y, Dong S L, Wei X, Gong Y H. Few-shot class-incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 12180–12189
    [171] Zhang C, Song N, Lin G S, Zheng Y, Pan P, Xu Y H. Few-shot incremental learning with continually evolved classifiers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 12450–12459
    [172] Dong N, Zhang Y Q, Ding M L, Lee G H. Incremental-DETR: Incremental few-shot object detection via self-supervised learning. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence. Washington, USA: AAAI, 2023. 543–551
    [173] Yin L, Perez-Rua J M, Liang K J. Sylph: A hypernetwork framework for incremental few-shot object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 9025–9035
    [174] Cermelli F, Mancini M, Buló S, Ricci E, Caputo B. Modeling the background for incremental and weakly-supervised semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(12): 10099−10113 doi: 10.1109/TPAMI.2021.3133954
    [175] Ren C, Chen C L, Wen X J, Ma Y H, Zhu S Y, Guan X P. Joint design of communication and computing for digital-twin-enabled aircraft final assembly. IEEE Internet of Things Journal, 2023, 10(18): 15872−15886 doi: 10.1109/JIOT.2023.3266278
    [176] Ren C, Chen C L, Li P Z, Wen X J, Ma Y H, Guan X P. Digital-twin-enabled task scheduling for state monitoring in aircraft testing process. IEEE Internet of Things Journal, 2024, 11(16): 26751−26765 doi: 10.1109/JIOT.2024.3373669
    [177] Li Y J, Liu W, Zhang Y, Zhang W L, Gao C Y, Chen Q H, et al. Interactive real-time monitoring and information traceability for complex aircraft assembly field based on digital twin. IEEE Transactions on Industrial Informatics, 2023, 19(9): 9745−9756 doi: 10.1109/TII.2023.3234618
    [178] Zhang Q, Zheng S G, Yu C J, Wang Q, Ke Y L. Digital thread-based modeling of digital twin framework for the aircraft assembly system. Journal of Manufacturing Systems, 2022, 65: 406−420 doi: 10.1016/j.jmsy.2022.10.004
    [179] Jin J, Hu J S, Li C Y, Shi Z H, Lei P, Tian W. A digital twin system of reconfigurable tooling for monitoring and evaluating in aerospace assembly. Journal of Manufacturing Systems, 2023, 68: 56−71 doi: 10.1016/j.jmsy.2023.03.004
    [180] 张景龙, 陈彩莲, 许齐敏, 林美涵, 卢宣兆, 陈营修. 面向工业互联网的异构时间敏感数据流协同传输机制设计. 中国科学: 技术科学, 2022, 52(1): 138−151 doi: 10.1360/SST-2021-0394

    Zhang Jing-Long, Chen Cai-Lian, Xu Qi-Min, Lin Mei-Han, Lu Xuan-Zhao, Chen Ying-Xiu. Design of coordinated transmission mechanism of heterogeneous time-sensitive data flow for industrial internet. Scientia Sinica Technologica, 2022, 52(1): 138−151 doi: 10.1360/SST-2021-0394
    [181] 汪世豪. 基于激光雷达与工业相机数据融合的路面感知算法研究 [硕士学位论文], 重庆理工大学, 中国, 2022

    Wang Shi-Hao. Research on road sensing algorithm based on data fusion of lidar and industrial camera [Master dissertation], Chongqing University of Technology, China, 2022
    [182] 胡永利, 朴星霖, 孙艳丰, 尹宝才. 多源异构感知数据融合方法及其在目标定位跟踪中的应用. 中国科学: 信息科学, 2013, 43(10): 1288−1306 doi: 10.1360/112013-120

    Hu Yong-Li, Piao Xing-Lin, Sun Yan-Feng, Yin Bao-Cai. Multi-source heterogeneous data fusion method and its application in object positioning and tracking. Scientia Sinica Informationis, 2013, 43(10): 1288−1306 doi: 10.1360/112013-120
    [183] 裴郁杉, 唐雄燕, 黄蓉, 李瑞华. 通信感知计算融合在工业互联网中的愿景与关键技术. 邮电设计技术, 2022(3): 14−18 doi: 10.12045/j.issn.1007-3043.2022.03.003

    Pei Yu-Shan, Tang Xiong-Yan, Huang Rong, Li Rui-Hua. Vision and key technologies of communication sensing and computing integration in industrial internet. Designing Techniques of Posts and Telecommunications, 2022(3): 14−18 doi: 10.12045/j.issn.1007-3043.2022.03.003
    [184] Lin M H, Xu Q M, Lu X Z, Zhang J L, Chen C L. Control and transmission co-design for industrial cps integrated with time-sensitive networking. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC). Prague, Czech Republic: IEEE, 2022. 229–234
    [185] 杨林瑶, 陈思远, 王晓, 张俊, 王成红. 数字孪生与平行系统: 发展现状、对比及展望. 自动化学报, 2019, 45(11): 2001−2031

    Yang Lin-Yao, Chen Si-Yuan, Wang Xiao, Zhang Jun, Wang Cheng-Hong. Digital twins and parallel systems: State of the art, comparisons and prospect. Acta Automatica Sinica, 2019, 45(11): 2001−2031
出版历程
  • 收稿日期:  2024-10-08
  • 录用日期:  2025-03-23
  • 网络出版日期:  2025-06-06
