
基于信息融合的智能网联汽车安全交互决策

黄昭彦 杨烁 吴建华 范佳琦 田炜 殷翔 方浩 褚洪庆 高炳钊

引用本文: 黄昭彦, 杨烁, 吴建华, 范佳琦, 田炜, 殷翔, 方浩, 褚洪庆, 高炳钊. 基于信息融合的智能网联汽车安全交互决策. 自动化学报, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240680
Citation: Huang Zhao-Yan, Yang Shuo, Wu Jian-Hua, Fan Jia-Qi, Tian Wei, Yin Xiang, Fang Hao, Chu Hong-Qing, Gao Bing-Zhao. Safety interactive decision-making for intelligent connected vehicles based on information fusion. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240680


doi: 10.16383/j.aas.c240680 cstr: 32138.14.j.aas.c240680
基金项目: 国家重点研发计划(2023YFB2504400), 国家自然科学基金(62373289, 62473291), 中央高校基本科研业务费专项资金资助
详细信息
    作者简介:

    黄昭彦:同济大学汽车学院博士研究生. 主要研究方向为自动驾驶安全决策与规划. E-mail: huangzhaoyan@tongji.edu.cn

    杨烁:宾夕法尼亚大学电气与系统工程系博士后. 主要研究方向为控制理论, 形式化方法. E-mail: yangs1@seas.upenn.edu

    吴建华:同济大学汽车学院硕士研究生. 主要研究方向为端到端自动驾驶和视觉-语言-行动模型. E-mail: 2332980@tongji.edu.cn

    范佳琦:同济大学上海自主智能无人系统科学中心博士研究生. 主要研究方向为自动驾驶场景理解和视觉语言模型. E-mail: fanjq@tongji.edu.cn

    田炜:同济大学汽车学院副教授. 主要研究方向为自动驾驶感知与预测技术. E-mail: tian-wei@tongji.edu.cn

    殷翔:上海交通大学自动化与感知学院教授. 主要研究方向为系统与控制理论, 自主系统和可信人工智能. E-mail: yinxiang@sjtu.edu.cn

    方浩:北京理工大学自动化学院教授. 主要研究方向为多智能体协同决策与控制, 智能无人系统的多传感器融合SLAM和可信群体智能中的形式化方法. E-mail: fangh@bit.edu.cn

    褚洪庆:同济大学汽车学院副教授. 主要研究方向为网联新能源汽车经济性驾驶策略, 人类驾驶数据引导的汽车安全决策和数据机理混合增强的车辆运动控制. E-mail: chuhongqing@tongji.edu.cn

    高炳钊:同济大学汽车学院教授. 主要研究方向为汽车动力传动系统优化, 汽车控制与智能化. 本文通信作者. E-mail: gaobz@tongji.edu.cn

Safety Interactive Decision-making for Intelligent Connected Vehicles Based on Information Fusion

Funds: Supported by National Key Research and Development Program of China (2023YFB2504400), National Natural Science Foundation of China (62373289, 62473291), and the Fundamental Research Funds for the Central Universities
More Information
    Author Bio:

    HUANG Zhao-Yan Ph.D. candidate at the School of Automotive Studies, Tongji University. His research interest covers safety decision-making and planning for autonomous driving. He contributed equally to this paper

    YANG Shuo Postdoctoral researcher at the Department of Electrical and Systems Engineering, University of Pennsylvania. His research interest covers control theory and formal methods

    WU Jian-Hua Master's student at the School of Automotive Studies, Tongji University. His research interest covers end-to-end autonomous driving and vision-language-action models

    FAN Jia-Qi Ph.D. candidate at the Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University. Her research interest covers scene understanding for autonomous driving and vision-language models

    TIAN Wei Associate professor at the School of Automotive Studies, Tongji University. His research interest covers perception and prediction technologies for autonomous driving

    YIN Xiang Professor at the School of Automation and Intelligent Sensing, Shanghai Jiao Tong University. His research interest covers systems and control theory, autonomous systems, and trustworthy AI

    FANG Hao Professor at the School of Automation, Beijing Institute of Technology. His research interest covers multi-agent cooperative decision-making and control, multi-sensor fusion SLAM for intelligent unmanned systems, and formal methods in trustworthy swarm intelligence

    CHU Hong-Qing Associate professor at the School of Automotive Studies, Tongji University. His research interest covers economic driving strategies for connected new energy vehicles, human driving data-guided vehicle safety decision-making, and data-mechanism hybrid enhanced vehicle motion control

    GAO Bing-Zhao Professor at the School of Automotive Studies, Tongji University. His main research interest covers vehicle power transmission optimization, vehicle control and intelligence. Corresponding author of this paper

  • Abstract: In open traffic scenarios, intelligent connected vehicles still face key bottlenecks such as weak safety trustworthiness and insufficient interaction capability. With the development of artificial intelligence (AI) and breakthroughs in deep learning, AI models have achieved remarkable results in the field of autonomous driving and can be applied to scene understanding and reasoning. This paper surveys research on information fusion-based safety interactive decision-making for intelligent connected vehicles: it first reviews work on traffic perception and understanding in open scenarios, then discusses decision-making and planning models with social interaction attributes, and finally summarizes safety verification techniques targeting AI-model hallucination. By combining these three strands, the strong capabilities of AI models can be fully exploited to achieve the skills of a "proficient driver", while safety guarantee techniques compensate for the models' tendency to "occasionally make mistakes". This combination is expected to address the long-tail safety problem of autonomous driving and to further advance autonomous driving technology.
    1) Data source: 《盖世汽车研究院Robotaxi产业研究报告(2023版)》 (Gasgoo Automotive Research Institute, Robotaxi Industry Research Report, 2023 Edition)
  • 图  1  开放场景交通感知和理解结构[21][29][34]

    Fig.  1  Open scene traffic perception and understanding structure[21][29][34]

    图  2  基于强化学习/模仿学习的端到端自动驾驶体系结构

    Fig.  2  An end-to-end autonomous driving architecture based on reinforcement learning/imitation learning

    图  3  端到端自动驾驶规划框架

    Fig.  3  An end-to-end autonomous driving planning architecture

    图  4  自监督学习范式下具有社会交互属性的决策规划模型

    Fig.  4  Decision planning model with social interaction attribute under self-supervised learning paradigm

    图  5  基于信息融合的智能网联汽车安全交互决策技术路线

    Fig.  5  An information fusion-based safety interaction decision-making technical roadmap for intelligent connected vehicles

  • [1] 新华社. 目前全国已开放智能网联汽车测试道路里程超过15000公里[N]. 人民日报海外版, 2023-06-23(01)

    Xinhua News Agency. The country has opened more than 15,000 kilometers of test roads for intelligent connected vehicles. People's Daily Overseas Edition, 2023-06-23(01)
    [2] Abdel-Aty M, Ding S. A matched case-control analysis of autonomous vs human-driven vehicle accidents. Nature Communications, 2024, 15: 4931 doi: 10.1038/s41467-024-48526-4
    [3] Document for Full Self-Driving Capability; https://www.tesla.com/support/full-self-driving-subscriptions
    [4] 2023 Disengagement Reports (California Department of Motor Vehicles, 2024); https://www.dmv.ca.gov/portal/vehicle-industry-services/autonomous-vehicles/disengagement-reports/.
    [5] FSD Community Tracker; https://www.teslafsdtracker.com/home
    [6] Overview of motor vehicle traffic crashes in 2022. (National Highway Traffic Safety Administration, 2024).
    [7] Kusano K D, Scanlon J M, Chen Y H, McMurry T L, Chen R, Gode T, et al. Comparison of Waymo rider-only crash data to human benchmarks at 7.1 million miles. arXiv preprint arXiv: 2312.12675, 2023.
    [8] Road vehicles – Safety of the intended functionality. 2019-01, https://www.iso.org/standard/70939.html.
    [9] Feng S, Sun H, Yan X, Zhu H, Zou Z, Shen S, et al. Dense reinforcement learning for safety validation of autonomous vehicles. Nature, 2023, 615(7953): 620−627 doi: 10.1038/s41586-023-05732-2
    [10] Bozga M, Iosif R, Sifakis J. Verification of component-based systems with recursive architectures. Theoretical Computer Science, 2023, 940: 146−175 doi: 10.1016/j.tcs.2022.10.022
    [11] Wang W S, Wang L T, Zhang C Y, Liu C L, Sun L J. Social interactions for autonomous driving: A review and perspectives. Foundations and Trends in Robotics, 2022, 10(3-4): 198−376 doi: 10.1561/2300000078
    [12] Li D, Huang Y L, Qian L X. Potential adoption of robotaxi service: The roles of perceived benefits to multiple stakeholders and environmental awareness. Transport Policy, 2022, 126: 120−135
    [13] The Select Committee on Artificial Intelligence of the National Science and Technology Council. The National Artificial Intelligence R&D Strategic Plan 2023 Update, 2023-05
    [14] Philion J, Fidler S. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3D. In: Proceedings of the European Conference on Computer Vision (ECCV). Glasgow, UK: Springer International Publishing, 2020: 194-210.
    [15] Reading C, Harakeh A, Chae J, Waslander S L. Categorical depth distribution network for monocular 3D object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021: 8555-8564.
    [16] Huang J J, Huang G, Zhu Z, Ye Y, Du D L. High-performance multi-camera 3D object detection in bird-eye-view. arXiv preprint arXiv: 2112.11790, 2021.
    [17] Pan B, Sun J, Leung H Y T, Andonian A, Zhou B L. Cross-view semantic segmentation for sensing surroundings. IEEE Robotics and Automation Letters, 2020, 5(3): 4867−4873 doi: 10.1109/LRA.2020.3004325
    [18] Roddick T, Cipolla R. Predicting semantic map representations from images using pyramid occupancy networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 11138-11147.
    [19] Gong S, Ye X Q, Tan X, Wang D J, Ding E, Zhou Y, et al. GitNet: Geometric prior-based transformation for birds-eye-view segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV). Cham: Springer Nature Switzerland, 2022: 396-411.
    [20] Wang Y, Guizilini V C, Zhang T, Wang Y, Zhao H, Solomon J. DETR3D: 3D object detection from multi-view images via 3D-to-2D queries. In: Proceedings of Conference on Robot Learning. PMLR, 2022: 180-191.
    [21] Li Z, Wang W H, Li H Y, Xie E Z, Sima C H, Lu T, et al. BEVFormer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In: Proceedings of the European Conference on Computer Vision (ECCV). Cham: Springer Nature Switzerland, 2022: 1-18.
    [22] Yang C Y, Chen Y T, Tian H, Tao C X, Zhu X Z, Zhang Z X, et al. BEVFormer v2: Adapting modern image backbones to bird's-eye-view recognition via perspective supervision. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023: 17830-17839.
    [23] Liu Y F, Wang T C, Zhang X Y, Sun J. PETR: Position embedding transformation for multi-view 3D object detection. In: Proceedings of the European Conference on Computer Vision (ECCV). Cham: Springer Nature Switzerland, 2022: 531-548.
    [24] Liu Y F, Yan J J, Jia F, Li S L, Gao A, Wang T C, et al. PETRv2: A unified framework for 3D perception from multi-camera images. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023: 3262-3272.
    [25] Lang A H, Vora S, Caesar H, Zhou L B, Yang J, Beijbom O. PointPillars: Fast encoders for object detection from point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). California, United States: IEEE, 2019: 12697-12705.
    [26] Chen X Z, Ma H, Wan J, Li B, Xia T. Multi-view 3D object detection network for autonomous driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Hawaii, United States: IEEE, 2017: 1907-1915.
    [27] Zhou Y, Tuzel O. VoxelNet: End-to-end learning for point cloud based 3D object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Utah, United States: IEEE, 2018: 4490-4499.
    [28] 秦超, 王亚飞, 张宇超, 殷承良. 基于极端稀疏激光点云和RGB图像的3D目标检测. 激光与光电子学进展, 2022, 59(18): 447−458

    Qin Chao, Wang Ya-Fei, Zhang Yu-Chao, Yin Cheng-Liang. 3D object detection based on extremely sparse laser point cloud and RGB images. Laser & Optoelectronics Progress, 2022, 59(18): 447−458
    [29] Liu Z J, Tang H T, Amini A, Yang X Y, Mao H Z, Rus D L, et al. BEVFusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). London, United Kingdom: IEEE, 2023: 2774-2781.
    [30] Li Y W, Chen Y L, Qi X J, Li Z M, Sun J, Jia J Y. Unifying voxel-based representation with transformer for 3D object detection. Advances in Neural Information Processing Systems, 2022, 35: 18442−18455
    [31] Pang S, Morris D, Radha H. CLOCs: Camera-LiDAR object candidates fusion for 3D object detection. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV, USA: IEEE, 2020: 10386-10393.
    [32] Li Q, Wang Y, Wang Y L, Zhao H. HDMapNet: An online HD map construction and evaluation framework. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Philadelphia, PA, USA: IEEE, 2022: 4628-4634.
    [33] 吴绍斌, 耿家琳, 吴超, 闫泽新, 陈恺宇. 基于多帧信息的多传感器融合三维目标检测. 北京理工大学学报, 2023, 43(12): 1282−1289

    Wu Shao-Bin, Geng Jia-Lin, Wu Chao, Yan Ze-Xin, Chen Kai-Yu. Multi-sensor fusion 3D object detection based on multi-frame information. Transactions of Beijing Institute of Technology, 2023, 43(12): 1282−1289
    [34] Huang J J, Huang G. BEVDet4D: Exploit temporal cues in multi-camera 3D object detection. arXiv preprint arXiv: 2203.17054, 2022.
    [35] Qin Z Q, Chen J Y, Chen C, Chen X Z, Li X. UniFormer: Unified multi-view fusion transformer for spatial-temporal representation in Bird's-Eye-View. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023: 8690-8699.
    [36] Sun W, Lin X, Shi Y, Zhang C, Wu H, Zheng S. SparseDrive: End-to-End autonomous driving via sparse scene representation. arXiv preprint arXiv: 2405.19620, 2024.
    [37] Weng X, Ivanovic B, Wang Y, Wang Y, Pavone M. PARA-Drive: Parallelized Architecture for Real-time Autonomous Driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, United States: IEEE, 2024: 15449-15458.
    [38] Li P, Cui D. Does End-to-End autonomous driving really need perception tasks? arXiv preprint arXiv: 2409.18341, 2024.
    [39] Kuefler A, Morton J, Wheeler T, Kochenderfer M. Imitating driver behavior with generative adversarial networks. In: Proceedings of 2017 IEEE Intelligent Vehicles Symposium (IV). Los Angeles, CA, USA: IEEE, 2017: 204-211.
    [40] Lu C, Wang H J, Lv C, Gong J W, Xi J Q, Cao D P. Learning driver-specific behavior for overtaking: A combined learning framework. IEEE Transactions on Vehicular Technology, 2018, 67(8): 6788−6802 doi: 10.1109/TVT.2018.2820002
    [41] Acerbo F S, Swevers J, Tuytelaars T, Son T D. Evaluation of MPC-based imitation learning for human-like autonomous driving. IFAC-PapersOnLine, 2023, 56(2): 4871−4876 doi: 10.1016/j.ifacol.2023.10.1257
    [42] Ahmedov H B, Yi D W, Sui J. Application of a brain-inspired deep imitation learning algorithm in autonomous driving. Software Impacts, 2021, 10: 100165 doi: 10.1016/j.simpa.2021.100165
    [43] 徐优志. 自动驾驶车辆高速道路环境下超车行为决策研究. 北京理工大学, 中国, 2016

    Xu You-Zhi. Decision-making modeling of overtaking behavior for autonomous vehicles on freeway environment [Master thesis], Beijing Institute of Technology, China, 2016
    [44] 陈鸿军. 基于模仿学习的智能车辆行为决策与运动控制方法研究. 国防科技大学, 中国, 2019

    Chen Hong-Jun. Research on behavior decision-making and motion control methods based on imitation learning for intelligent vehicles [Master thesis], National University of Defense Technology, China, 2019
    [45] Xu H Z, Gao Y, Yu F, Darrell T. End-to-end learning of driving models from large-scale video datasets. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Hawaii, United States: IEEE, 2017: 2174-2182.
    [46] Bhattacharyya R, Wulfe B, Phillips D J, Kuefler A, Morton J, Senanayake R, et al. Modeling human driving behavior through generative adversarial imitation learning. IEEE Transactions on Intelligent Transportation Systems, 2022, 24(3): 2874−2887
    [47] Li Y Z, Song J M, Ermon S. InfoGAIL: Interpretable imitation learning from visual demonstrations. Advances in Neural Information Processing Systems, 2017, 30
    [48] Wang H J, Gao H B, Yuan S H, Zhao H F, Wang K L, Wang X L, et al. Interpretable decision-making for autonomous vehicles at highway on-ramps with latent space reinforcement learning. IEEE Transactions on Vehicular Technology, 2021, 70(9): 8707−8719 doi: 10.1109/TVT.2021.3098321
    [49] Codevilla F, Müller M, López A, Koltun V, Dosovitskiy A. End-to-end driving via conditional imitation learning. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD, Australia: IEEE, 2018: 4693-4700.
    [50] Liang X D, Wang T R, Yang L N, Xing E. CIRL: Controllable imitative reinforcement learning for vision-based self-driving. In: Proceedings of the European Conference on Computer Vision (ECCV). 2018: 584 -599.
    [51] Chen C Y, Seff A, Kornhauser A, Xiao J X. DeepDriving: Learning affordance for direct perception in autonomous driving. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2015: 2722-2730.
    [52] Kong J, Pfeiffer M, Schildbach G, Borrelli F. Kinematic and dynamic vehicle models for autonomous driving control design. In: Proceedings of 2015 IEEE Intelligent Vehicles Symposium (IV). Seoul, Korea (South): IEEE, 2015: 1094-1099.
    [53] Casas S, Sadat A, Urtasun R. Mp3: A unified model to map, perceive, predict and plan. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021: 14403-14412.
    [54] Chitta K, Prakash A, Jaeger B, Yu Z H, Renz K, Geiger A. TransFuser: Imitation with transformer-based sensor fusion for autonomous driving. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(11): 12878−12895 doi: 10.1109/TPAMI.2022.3200245
    [55] Hu Y H, Yang J Z, Chen L, Li K Y, Sima C H, Zhu X Z, et al. Planning-oriented autonomous driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023: 17853-17862.
    [56] Toromanoff M, Wirbel E, Moutarde F. End-to-end model-free reinforcement learning for urban driving using implicit affordances. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 7153-7162.
    [57] Wen L, Duan J L, Li S E, Xu S B, Peng H. Safe reinforcement learning for autonomous vehicles through parallel constrained policy optimization. In: Proceedings of 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). Rhodes, Greece: IEEE, 2020: 1-7.
    [58] Chitta K, Prakash A, Geiger A. NEAT: Neural attention fields for end-to-end autonomous driving. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2021: 15793-15803.
    [59] Wu P H, Jia X S, Chen L, Yan J C, Li H Y, Qiao Y. Trajectory-guided control prediction for end-to-end autonomous driving: A simple yet strong baseline. Advances in Neural Information Processing Systems, 2022, 35: 6119−6132
    [60] Prakash A, Chitta K, Geiger A. Multi-modal fusion transformer for end-to-end autonomous driving. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2021: 7077-7087.
    [61] Jia X S, Wu P H, Chen L, Xie J W, He C H, Yan J C, et al. Think Twice before driving: Towards scalable decoders for End-to-End autonomous driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023: 21983-21994.
    [62] Chen L, Wu P H, Chitta K, Jaeger B, Geiger A, Li H Y. End-to-end autonomous driving: Challenges and frontiers. arXiv preprint arXiv: 2306.16927, 2023.
    [63] Ngiam J, Caine B, Vasudevan V, Zhang Z D, Chiang H T-L, Ling J, et al. Scene transformer: A unified architecture for predicting multiple agent trajectories. arXiv preprint arXiv: 2106.08417, 2021.
    [64] Renz K, Chitta K, Mercea O B, Koepke A, Akata Z, Geiger A. PlanT: Explainable planning transformers via object-level representations. In: Proceedings of Conference on Robot Learning (CoRL). PMLR, 2023: 459-470.
    [65] Zhang K P, Feng X L, Wu L, He Z B. Trajectory prediction for autonomous driving using spatial-temporal graph attention transformer. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(11): 22343−22353 doi: 10.1109/TITS.2022.3164450
    [66] Ye T J, Jing W, Hu C Y, Huang S K, Gao L P, Li F Z, et al. FusionAD: Multi-modality fusion for prediction and planning tasks of autonomous driving. arXiv preprint arXiv: 2308.01006, 2023.
    [67] Jin B, Liu X Y, Zheng Y P, Li P F, Zhao H, Zhang T, et al. ADAPT: Action-aware driving caption transformer. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). London, United Kingdom: IEEE, 2023: 7554-7561.
    [68] Liu H X, Feng S. Curse of rarity for autonomous vehicles. Nature Communications, 2024, 15: 4808 doi: 10.1038/s41467-024-49194-0
    [69] Yan X, Zhang H, Cai Y, Guo J, Qiu W, Gao B, et al. Forging vision foundation models for autonomous driving: Challenges, methodologies, and opportunities. arXiv preprint arXiv: 2401.08045, 2024.
    [70] Gao H, Wang Z, Li Y, Long K, Yang M, Shen Y. A survey for foundation models in autonomous driving. arXiv preprint arXiv: 2402.01105, 2024.
    [71] Tian X, Gu J, Li B, Liu Y, Wang Y, Zhao Z, et al. DriveVLM: The convergence of autonomous driving and large vision-language models. arXiv preprint arXiv: 2402.12289, 2024.
    [72] Bai Z, Wang P, Xiao T, He T, Han Z, Zhang Z, et al. Hallucination of multimodal large language models: A survey. arXiv preprint arXiv: 2404.18930, 2024.
    [73] Chen T, Kornblith S, Norouzi M, Hinton G. A simple framework for contrastive learning of visual representations. In: Proceedings of International Conference on Machine Learning. Glasgow, UK: PMLR, 2020: 1597-1607.
    [74] Misra I, Maaten L. Self-supervised learning of pretext-invariant representations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 6707-6717.
    [75] Luo C X, Yang X D, Yuille A. Self-supervised pillar motion learning for autonomous driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021: 3183-3192.
    [76] Karlsson R, Carballo A, Fujii K, Ohtani K, Takeda K. Predictive world models from real-world partial observations. In: Proceedings of 2023 IEEE International Conference on Mobility, Operations, Services and Technologies (MOST). Detroit, MI, USA: IEEE, 2023: 152-166.
    [77] Kingma D P, Welling M. Auto-encoding variational bayes. arXiv preprint arXiv: 1312.6114, 2013.
    [78] Liao Y Y, Xie J, Geiger A. KITTI-360: A novel dataset and benchmarks for urban scene understanding in 2D and 3D. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(3): 3292−3310
    [79] Hu A, Russell L, Yeo H, Murez Z, Fedoseev G, Kendall A, et al. GAIA-1: A generative world model for autonomous driving. arXiv preprint arXiv: 2309.17080, 2023.
    [80] Wang X F, Zhu Z, Huang G, Chen X Z, Zhu J G, Lu J W. DriveDreamer: Towards real-world-driven world models for autonomous driving. arXiv preprint arXiv: 2309.09777, 2023.
    [81] Zhang L J, Xiong Y W, Yang Z, Casas S, Hu R, Urtasun R. Learning unsupervised world models for autonomous driving via discrete diffusion. arXiv preprint arXiv: 2311.01017, 2023.
    [82] Chang H W, Zhang H, Jiang L, Liu C, Freeman W T. MaskGIT: Masked generative image transformer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, United States: IEEE, 2022: 11315-11325.
    [83] Van Den Oord A, Vinyals O, Kavukcuoglu K. Neural discrete representation learning. Advances in Neural Information Processing Systems, 2017, 30
    [84] Gu T P, Chen G Y, Li J L, Lin C Z, Rao Y M, Zhou J, et al. Stochastic trajectory prediction via motion indeterminacy diffusion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, United States: IEEE, 2022: 17113-17122.
    [85] Dabral R, Mughal M H, Golyanik V, Theobalt C. Mofusion: A framework for denoising-diffusion-based motion synthesis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023: 9760-9770.
    [86] Li Z Y, Liang H W, Wang H Q, Zheng X K, Wang J, Zhou P F. A multi-modal vehicle trajectory prediction framework via conditional diffusion model: A coarse-to-fine approach. Knowledge-Based Systems, 2023, 280: 110990 doi: 10.1016/j.knosys.2023.110990
    [87] Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B. High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, United States: IEEE, 2022: 10684-10695.
    [88] Chen K, Xie E, Chen Z, Wang Y, Hong L, Li Z, et al. GeoDiffusion: Text-prompted geometric control for object detection data generation. arXiv preprint arXiv: 2306.04607, 2023.
    [89] Zheng W, Song R, Guo X, Zhang C, Chen L. GenAD: Generative end-to-end autonomous driving. arXiv preprint arXiv: 2402.11502, 2024.
    [90] Belta C, Yordanov B, Gol E A. Formal methods for discrete-time dynamical systems. Cham: Springer International Publishing, 2017.
    [91] Pek C, Manzinger S, Koschi M, Althoff M. Using online verification to prevent autonomous vehicles from causing accidents. Nature Machine Intelligence, 2020, 2(9): 518−528 doi: 10.1038/s42256-020-0225-y
    [92] Yin X, Gao B, Yu X. Formal synthesis of controllers for safety-critical autonomous systems: developments and challenges. Annual Reviews in Control, 2024, accepted.
    [93] Donze A, Ferrere T, Maler O. Efficient robust monitoring for STL. Lecture Notes in Computer Science, 2013, 8044: 264−279
    [94] Donze A, Maler O. Robust satisfaction of temporal logic over real-valued signals. Lecture Notes in Computer Science, 2010, 6246(1): 92−106
    [95] Fainekos G E, Pappas G J. Robustness of temporal logic specifications for continuous-time signals. Theoretical Computer Science, 2009, 410(42): 4262−4291 doi: 10.1016/j.tcs.2009.06.021
    [96] Deshmukh J V, Donze A, Ghosh S, Jin X Q, Juniwal G, Seshia S A. Robust online monitoring of signal temporal logic. Formal Methods in System Design, 2017, 51(1): 5−30 doi: 10.1007/s10703-017-0286-7
    [97] Zhang Z Y, Arcaini P, Xie X. Online reset for signal temporal logic monitoring. IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems, 2022, 41(11): 4421−4432
    [98] Yu W, Zhao C, Wang H, Liu J, Ma X, Yang Y, et al. Online legal driving behavior monitoring for self-driving vehicles. Nature Communications, 2024, 15: 408 doi: 10.1038/s41467-024-44694-5
    [99] Sahin Y E, Quirynen R, Cairano S D. Autonomous vehicle decision-making and monitoring based on signal temporal logic and Mixed-Integer Programming. In: Proceedings of American Control Conference (ACC). Denver, USA: IEEE, 2020: 454-459.
    [100] Arechiga N. Specifying safety of autonomous vehicles in signal temporal logic. In: Proceedings of 2019 IEEE Intelligent Vehicles Symposium (IV). Paris, France: IEEE, 2019: 58-63.
    [101] Hekmatnejad M, Yaghoubi S, Dokhanchi A, Amor H B, Shrivastava A, Karam L, et al. Encoding and monitoring responsibility sensitive safety rules for automated vehicles in signal temporal logic. In: Proceedings of the 17th ACM-IEEE International Conference on Formal Methods and Models for System Design. California, United States: ACM, 2019: 1-11.
    [102] Qin X, Deshmukh J V. Clairvoyant monitoring for signal temporal logic. Lecture Notes in Computer Science, 2020, 12288: 178−195
    [103] Yu X Y, Dong W J, Li S Y, Yin X. Model predictive monitoring of dynamical systems for signal temporal logic specifications. Automatica, 2024, 160: 111445 doi: 10.1016/j.automatica.2023.111445
    [104] Bassem G, Vinayak S P. Quantitative robustness for signal temporal logic with time-freeze quantifiers. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023, 42(12): 4436−4449 doi: 10.1109/TCAD.2023.3283296
    [105] Zhong B Z, Jordan C, Provost J. Extending signal temporal logic with quantitative semantics by intervals for robust monitoring of Cyber-Physical Systems. ACM Transactions on Cyber-Physical Systems, 2022, 5(2): 1−25
    [106] Finkbeiner B, Fränzle M, Kohn F. A truly robust signal temporal logic: Monitoring safety properties of interacting Cyber-Physical Systems under uncertain observation. Algorithms, 2022, 15(4): 126 doi: 10.3390/a15040126
    [107] Salamati A, Soudjani S, Zamani M. Data-driven verification of stochastic linear systems with signal temporal logic constraints. Automatica, 2021, 131: 109781 doi: 10.1016/j.automatica.2021.109781
    [108] Yu X Y, Dong W J, Yin X, Li S Y. Online monitoring of dynamic systems for signal temporal logic specifications with model information. In: Proceedings of 2022 IEEE 61st Conference on Decision and Control (CDC). Cancun, Mexico: IEEE, 2022: 1553-1559.
    [109] Yang S, Pappas G J, Mangharam R, Lindemann L. Safe perception-based control under stochastic sensor uncertainty using conformal prediction. In: Proceedings of 2023 IEEE 62nd Conference on Decision and Control (CDC). Singapore, Singapore: IEEE, 2023: 6072-6078
    [110] Lindemann L, Cleaveland M, Shim G, Pappas G J. Safe planning in dynamic environments using conformal prediction. IEEE Robotics and Automation Letters, 2023, 8(8): 5116−5123 doi: 10.1109/LRA.2023.3292071
    [111] Lekeufack J, Angelopoulos A A, Bajcsy A, Jordan M I, Malik J. Conformal decision theory: Safe autonomous decisions from imperfect predictions. arXiv preprint arXiv: 2310.05921, 2023.
    [112] Cleaveland M, Lee I, Pappas G J, Lindemann L. Conformal prediction regions for time series using linear complementarity programming. arXiv preprint arXiv: 2304.01075, 2023.
    [113] Dixit A, Lindemann L, Wei S X, Cleaveland M, Pappas G J, Burdick J W. Adaptive conformal prediction for motion planning among dynamic agents. In: Proceedings of The 5th Annual Learning for Dynamics and Control Conference. PMLR, 2023: 300-314.
    [114] Yu X Y, Zhao Y Q, Yin X, Lindemann L. Signal temporal logic control synthesis among uncontrollable dynamic agents with conformal prediction. arXiv preprint arXiv: 2312.04242, 2023.
    [115] Lindemann L, Qin X, Deshmukh J V, Pappas G J. Conformal prediction for STL runtime verification. In: Proceedings of the ACM/IEEE 14th International Conference on Cyber-Physical Systems (with CPS-IoT Week 2023). New York, USA: ACM, 2023: 142-153.
    [116] Sinha R, Schmerling E, Pavone M. Closing the loop on runtime monitors with fallback-safe MPC. In: Proceedings of 2023 IEEE 62nd Conference on Decision and Control (CDC). Singapore, Singapore: IEEE, 2023: 6533-6540.
    [117] Yoo C, Belta C. Control with probabilistic signal temporal logic. arXiv preprint arXiv: 1510.08474, 2015.
    [118] 陈杰, 吕梓亮, 黄鑫源, 洪奕光. 非线性系统的安全分析与控制: 障碍函数方法. 自动化学报, 2023, 49(3): 1−13

    Chen Jie, Lv Zi-Liang, Huang Xin-Yuan, Hong Yi-Guang. Safety analysis and safety-critical control of nonlinear systems: Barrier function approach. Acta Automatica Sinica, 2023, 49(3): 1−13
    [119] Ames A D, Xu X R, Grizzle J W, Tabuada P. Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control, 2016, 62(8): 3861−3876
    [120] Xu X R, Grizzle J W, Tabuada P, Ames A D. Correctness guarantees for the composition of lane keeping and adaptive cruise control. IEEE Transactions on Automation Science and Engineering, 2017, 15(3): 1216−1229
    [121] Xiao W, Belta C. High-order control barrier functions. IEEE Transactions on Automatic Control, 2021, 67(7): 3655−3662
    [122] Lyu Y W, Luo W H, Dolan J M. Probabilistic safety-assured adaptive merging control for autonomous vehicles. In: Proceedings of 2021 IEEE International Conference on Robotics and Automation (ICRA). Xi'an, China: IEEE, 2021: 10764-10770.
    [123] Lyu Y W, Luo W H, Dolan J M. Adaptive safe merging control for heterogeneous autonomous vehicles using parametric control barrier functions. In: Proceedings of 2022 IEEE Intelligent Vehicles Symposium (IV). Aachen, Germany: IEEE, 2022: 542-547.
    [124] He S Y, Zeng J, Sreenath K. Autonomous racing with multiple vehicles using a parallelized optimization with safety guarantee using control barrier functions. In: Proceedings of 2022 IEEE International Conference on Robotics and Automation (ICRA). Philadelphia, USA: IEEE, 2022: 3444-3451.
    [125] Rosolia U, Borrelli F. Learning how to autonomously race a car: a predictive control approach. IEEE Transactions on Control Systems Technology, 2019, 28(6): 2713−2719
    [126] Alshiekh M, Bloem R, Ehlers R, Könighofer B, Niekum S, Topcu U. Safe reinforcement learning via shielding. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2018, 32(1).
    [127] Jansen N, Könighofer B, Junges S, Serban A C, Bloem R. Safe reinforcement learning via probabilistic shields. arXiv preprint arXiv: 1807.06096, 2018.
    [128] Hunt N, Fulton N, Magliacane S, Hoang T N, Das S, Solar-Lezama A. Verifiably safe exploration for end-to-end reinforcement learning. In: Proceedings of the 24th International Conference on Hybrid Systems: Computation and Control (HSCC). ACM, 2021: 1-11.
    [129] Yin J, Dawson C, Fan C C, Tsiotras P. Shield model predictive path integral: A computationally efficient robust MPC method using Control Barrier Functions. IEEE Robotics and Automation Letters, 2023, 8(11): 7106−7113 doi: 10.1109/LRA.2023.3315211
    [130] Kochdumper N, Krasowski H, Wang X, Bak S, Althoff M. Provably safe reinforcement learning via action projection using reachability analysis and polynomial zonotopes. IEEE Open Journal of Control Systems, 2023, 2: 79−92 doi: 10.1109/OJCSYS.2023.3256305
    [131] Hsu K C, Ren A Z, Nguyen D P, Majumdar A, Fisac J F. Sim-to-Lab-to-Real: Safe reinforcement learning with shielding and generalization guarantees. Artificial Intelligence, 2023, 314: 103811 doi: 10.1016/j.artint.2022.103811
    [132] Wolff E M, Murray R M. Optimal control of nonlinear systems with temporal logic specifications. In: Proceedings of Robotics Research: The 16th International Symposium ISRR. Cham: Springer International Publishing, 2016: 21-37.
    [133] Aasi E, Vasile C I, Belta C. A control architecture for provably-correct autonomous driving. In: Proceedings of American Control Conference (ACC). New Orleans, LA, USA: IEEE, 2021: 2913-2918.
    [134] Charitidou M, Dimarogonas D V. Receding horizon control with online barrier function design under signal temporal logic specifications. IEEE Transactions on Automatic Control, 2022, 68(6): 3545−3556
    [135] Meng Y, Fan C. Signal temporal logic neural predictive control. IEEE Robotics and Automation Letters, 2023, 8(11): 7719−7726 doi: 10.1109/LRA.2023.3315536
    [136] Shi W T, Luo X, Hong J L, Zhao C L, Gao B Z, Chen H. Accelerating Model Predictive Control with neural network optimizer. In: Proceedings of 2023 7th CAA International Conference on Vehicular Control and Intelligence (CVCI). Changsha, China: IEEE, 2023: 1-7.
    [137] 陈仲瑶, 方浩. 基于线性时序逻辑的智能体不确定行为规划. 中国科学: 技术科学, 2020, 50(05): 516−525 doi: 10.1360/SST-2019-0292

    Chen Zhong-Yao, Fang Hao. Probabilistic action planning based on linear temporal logic. SCIENTIA SINICA Technologica, 2020, 50(05): 516−525 doi: 10.1360/SST-2019-0292
    [138] Song Y, Romero A, Müller M, Koltun V, Scaramuzza D. Reaching the limit in autonomous racing: Optimal control versus reinforcement learning. Science Robotics, 2023, 8(82): eadg1462 doi: 10.1126/scirobotics.adg1462
    [139] 田戴荧, 方浩, 杨庆凯. 信号时序逻辑约束下基于终点回溯的高效规划. 无人系统技术, 2021, 4(01): 44−50

    Tian Dai-Ying, Fang Hao, Yang Qing-Kai. Efficient planning based on destination backtracking under Signal Temporal Logic constraints. Unmanned Systems Technology, 2021, 4(01): 44−50
    [140] 殷翔, 任晓华, 李少远. 基于强化学习的机器人复杂时序逻辑任务路径规划方法. 中国专利, CN114355947A, 2022-04-15

    Yin Xiang, Ren Xiao-Hua, Li Shao-Yuan. Path planning method of complex temporal logic tasks for robots based on reinforcement learning. China Patent, CN114355947A, 2022-04-15
    [141] Lee K M B, Yoo C, Fitch R. Signal temporal logic synthesis as probabilistic inference. In: Proceedings of 2021 IEEE International Conference on Robotics and Automation (ICRA). Xi'an, China: IEEE, 2021: 5483-5489.
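
The abstract and the technical roadmap in Fig. 5 argue for pairing strong but occasionally erring AI driving models with runtime safety-guarantee techniques such as online monitoring and shielding (e.g., [98]−[103], [126]−[131]). As a rough illustration of that pairing only, not the authors' method, the sketch below wraps a hypothetical learned planner in a simple headway-based safety filter; all names (propose_acceleration, SafetyShield) and thresholds (2 s headway, -4 m/s^2 fallback braking) are assumptions made for the example.

```python
# Minimal sketch of the "learned planner + runtime safety shield" idea.
# Hypothetical example; not the implementation of any cited work.
from dataclasses import dataclass

@dataclass
class VehicleState:
    position: float  # longitudinal position [m]
    speed: float     # speed [m/s]

def propose_acceleration(ego: VehicleState, lead: VehicleState) -> float:
    """Stand-in for a learned (imitation/RL) policy; it may occasionally err."""
    return 1.0 if lead.position - ego.position > 30.0 else 0.0

class SafetyShield:
    """Overrides proposals that would violate a simple time-headway rule."""
    def __init__(self, min_headway_s: float = 2.0, dt: float = 0.1):
        self.min_headway_s = min_headway_s
        self.dt = dt

    def filter(self, ego: VehicleState, lead: VehicleState, accel: float) -> float:
        # Predict one step ahead under the proposed acceleration.
        next_speed = max(ego.speed + accel * self.dt, 0.0)
        next_gap = (lead.position + lead.speed * self.dt) - (ego.position + next_speed * self.dt)
        headway = next_gap / max(next_speed, 1e-3)
        # Keep the proposal if the rule holds; otherwise fall back to hard braking.
        return accel if headway >= self.min_headway_s else -4.0

ego, lead = VehicleState(0.0, 15.0), VehicleState(25.0, 10.0)
a_raw = propose_acceleration(ego, lead)
a_safe = SafetyShield().filter(ego, lead, a_raw)
print(f"proposed a = {a_raw:.1f} m/s^2, shielded a = {a_safe:.1f} m/s^2")
```

In this toy run the learned proposal keeps cruising even though the predicted time headway falls below 2 s, so the shield substitutes the braking fallback; this is the division of labor between AI capability and formal safety guarantees that the survey advocates.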
Publication history
  • Received:  2024-10-18
  • Accepted:  2025-04-03
  • Published online:  2025-06-19
