Omnidirectional Vision-based Self-localization by Using Large-scale Metric-topological 3D Map
Abstract (translated): For large-scale indoor environments, omnidirectional vision-based self-localization of a mobile robot is studied. A hierarchical metric-topological 3D map is proposed to manage wide-area environmental features; 3D local environmental features at different levels and global topological attributes are defined, and the method for applying the hierarchical map is given. An imaging model of the omnidirectional vision sensor and its uncertainty propagation method are constructed so that the probabilistic elements of the map can be used effectively in the system. Curve edge features corresponding to environmental elements are extracted by a random-point prediction-and-search method. A hierarchical estimation method with feedback is used at the fusion center to globally fuse the local estimates generated from multiple observed features. An interactive mobile robot self-localization system is implemented with a hierarchical logic architecture. Experiments analyze the convergence and localization accuracy of the system under different initial poses and observation conditions in real environments, and online environment perception and self-localization during motion are accomplished under occlusion by dynamic obstacles. The experimental results demonstrate the reliability and practicality of the proposed method.
Keywords:
- Self-localization
- Hybrid metric-topological 3D map
- Omnidirectional vision
- Human-machine interaction
Abstract: For large-scale indoor environments, a novel metric-topological 3D map is proposed for robot self-localization based on omnidirectional vision. The local metric map defines geometrical elements hierarchically according to their environmental feature levels, while the topological part of the global map connects adjacent local maps. We design a nonlinear omnidirectional camera model that projects the probabilistic map elements with uncertainty manipulation, so that image features can be extracted in the vicinity of the corresponding projected curves. For the self-localization task, a human-machine interaction system is developed with a hierarchical logic; its fusion center adopts a feedback hierarchical fusion method to fuse the local estimates generated from multiple observations. Finally, a series of experiments demonstrates the reliable and practical performance of our system.
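The uncertainty propagation mentioned above, projecting a probabilistic map element through a nonlinear camera model so that feature search can be restricted to the vicinity of the projected curve, can be sketched with standard first-order (Jacobian-based) covariance propagation. This is a minimal illustration, not the paper's method: the simple pinhole-style projection, the numerical Jacobian, and all numbers below are placeholder assumptions standing in for the actual omnidirectional sensor model.

```python
import numpy as np

def project(point, f=300.0, c=(320.0, 240.0)):
    """Placeholder pinhole-style projection standing in for the
    paper's nonlinear omnidirectional camera model."""
    x, y, z = point
    return np.array([f * x / z + c[0], f * y / z + c[1]])

def numerical_jacobian(fn, p, eps=1e-6):
    """Central-difference Jacobian of fn evaluated at p."""
    p = np.asarray(p, dtype=float)
    m = len(fn(p))
    J = np.zeros((m, len(p)))
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        J[:, i] = (fn(p + dp) - fn(p - dp)) / (2 * eps)
    return J

# A probabilistic map element: mean 3D position and covariance
# (illustrative values, not from the paper).
mean = np.array([0.5, -0.2, 2.0])
cov = np.diag([0.01, 0.01, 0.04])

# First-order propagation: Sigma_image = J * Sigma_world * J^T.
J = numerical_jacobian(project, mean)
cov_img = J @ cov @ J.T
```

The resulting 2x2 image covariance `cov_img` defines an uncertainty ellipse around the projected element, which is one common way to bound the image region in which the corresponding edge feature is searched.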