Event Camera Based Synthetic Aperture Imaging

Yu Lei, Liao Wei, Zhou You-Long, Yang Wen, Xia Gui-Song

Citation: Yu Lei, Liao Wei, Zhou You-Long, Yang Wen, Xia Gui-Song. Event camera based synthetic aperture imaging. Acta Automatica Sinica, 2020, 45(x): 1−14 doi: 10.16383/j.aas.c200388


doi: 10.16383/j.aas.c200388
About the authors:

    Yu Lei: Associate Professor at the School of Electronic Information, Wuhan University. Research interests: sparse signal processing, image processing, and neuromorphic visual perception. Corresponding author of this paper. E-mail: ly.wd@whu.edu.cn

    Liao Wei: Graduate student at the School of Electronic Information, Wuhan University. Research interest: digital image processing. E-mail: 2016301200164@whu.edu.cn

    Zhou You-Long: Graduate student at the School of Electronic Information, Wuhan University. Research interest: image reconstruction with event cameras. E-mail: zhouyl2019@whu.edu.cn

    Yang Wen: Professor at the School of Electronic Information, Wuhan University. Research interests: image processing and machine vision, multimodal information perception and fusion. E-mail: yangwen@whu.edu.cn

    Xia Gui-Song: Professor at the School of Computer Science, Wuhan University. Research interests: computer vision, pattern recognition and intelligent systems, and remote sensing image interpretation. E-mail: guisong.xia@whu.edu.cn

Event Camera Based Synthetic Aperture Imaging

Funds: Supported by National Natural Science Foundation of China (61871297), Fundamental Research Funds for the Central Universities of China (2042020kf0019)
  • Abstract: Synthetic aperture imaging (SAI) acquires information about a target from multiple viewpoints, emulating a camera with a large aperture and a shallow depth of field. The technique can therefore blur out foreground occluders and image targets hidden behind them. Under dense occlusion or extreme lighting, however, SAI with conventional cameras fails to image the occluded target effectively, owing to heavy interference from the occluders and the cameras' limited dynamic range. Exploiting the low latency and high dynamic range of event cameras, this paper proposes an event camera based SAI method. An event camera produces asynchronous event data with extremely low latency, so it can observe the scene from a continuum of viewpoints and thereby suppress the interference caused by dense occluders, while its high dynamic range handles imaging under extreme lighting. By analyzing the relationship between scene brightness changes and the events output by the camera, the occluded target is reconstructed from the refocused events, realizing event camera based SAI. Experiments show that, compared with the traditional method, the proposed method substantially improves the contrast, sharpness, peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) of images reconstructed under dense occlusion. Under extreme lighting, it also effectively avoids over-/under-exposure and reconstructs a clear image of the occluded target.
  • Fig.  1  Comparison of traditional camera based SAI and event camera based SAI. The first column shows the experimental scene and the target image. Columns 2, 3, and 4 compare traditional camera based SAI with the proposed event camera based SAI under dense occlusion, extremely high light, and extremely low light, respectively.

    Fig.  2  Traditional camera based SAI[4]
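Traditional SAI of this kind refocuses a set of views onto a chosen depth plane by shifting each image in proportion to its camera baseline and then averaging, so that anything off the focal plane is blurred out. The following is a minimal one-dimensional sketch of the idea; the fronto-parallel setup, integer-pixel shifts, and the function name `refocus` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def refocus(views, baselines, depth, focal=1.0):
    """Shift-and-average synthetic aperture refocusing (sketch).

    views     : list of HxW grayscale frames taken from different positions
    baselines : per-view horizontal camera offsets (same unit as depth)
    depth     : focus depth; parallax is proportional to focal*baseline/depth
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, b in zip(views, baselines):
        shift = int(round(focal * b / depth))  # integer-pixel parallax
        acc += np.roll(img, -shift, axis=1)    # warp this view onto the focal plane
    return acc / len(views)
```

Refocusing at the occluder's depth instead would sharpen the occluder and blur the target; the choice of focal plane selects what is imaged.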

    Fig.  3  Prototype of the event camera based synthetic aperture imaging system

    Fig.  4  Brightness changes trigger events in the event camera. The optical imaging system acts as a low-pass filter: an abrupt change in scene brightness becomes a continuous brightness change inside the camera, and the event camera responds by firing positive (bottom-right inset) or negative (top-right inset) events.
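An event camera such as the DVS[13] fires an event at a pixel whenever the log-intensity there has changed by a contrast threshold since the last event, with the polarity giving the sign of the change. A minimal per-pixel simulation of this firing rule (the threshold value and the function name are illustrative assumptions):

```python
def generate_events(log_intensity, times, C=0.2):
    """Simulate event firing for one pixel (sketch).

    An event (t, p) is emitted whenever the log-intensity deviates from
    the level at the last event by at least the contrast threshold C;
    polarity p = +1 for brightening, -1 for darkening.
    """
    events, ref = [], log_intensity[0]
    for t, x in zip(times[1:], log_intensity[1:]):
        while x - ref >= C:      # brightness rose by at least C
            ref += C
            events.append((t, +1))
        while ref - x >= C:      # brightness fell by at least C
            ref -= C
            events.append((t, -1))
    return events
```

A rising then falling brightness ramp thus yields a burst of positive events followed by negative ones, matching the two insets of the figure.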

    Fig.  5  Event streams output by the event camera under extreme lighting conditions. The first and second rows show the 3-D event stream and the event-count distribution under extremely high and extremely low light, respectively.

    Fig.  6  Two different types of occluders. Compared with the dense bushes, the cardboard has far fewer gaps, and the gaps are spaced farther apart.

    Fig.  7  Relationship between PSNR and the negative reconstruction threshold

    Fig.  8  Relationship between PSNR and the scale factor

    Fig.  9  Comparison of multi-depth SAI results under dense bush occlusion. The first row shows traditional SAI results, the second row results reconstructed with fixed thresholds, and the third row results reconstructed with adaptive thresholds.
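Because each event carries a timestamp, and the camera position at that time is known, events can be refocused individually: each event's pixel coordinate is shifted by the parallax its viewpoint induces at the chosen focus depth, and the shifted polarities are accumulated into an image. A minimal sketch of this idea; the (x, y, baseline, polarity) event format, integer shifts, and plain polarity accumulation are illustrative assumptions, not the paper's exact reconstruction:

```python
import numpy as np

def refocus_events(events, depth, shape, focal=1.0):
    """Accumulate refocused event polarities into an image (sketch).

    events : iterable of (x, y, baseline, polarity), where `baseline` is
             the camera offset at the event's timestamp
    depth  : focus depth; events from the same scene point at this depth
             map to the same pixel after the parallax shift
    """
    img = np.zeros(shape)
    for x, y, b, p in events:
        xs = x - int(round(focal * b / depth))  # undo depth-dependent parallax
        if 0 <= xs < shape[1]:
            img[y, xs] += p                     # accumulate polarity
    return img
```

Events from occluders off the focal plane land on scattered pixels instead of reinforcing one, which is why the occluder's contribution averages away at the target's depth.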

    Fig.  10  Comparison of SAI results under dense bush occlusion. The first and second rows correspond to the geometric object and the teddy bear, respectively.

    Fig.  11  Comparison of SAI results under occlusions of different densities. The first row shows extremely dense occlusion, the second row normally dense occlusion, and the third row sparse occlusion.

    Fig.  12  Comparison of SAI results under extreme lighting conditions. The first and second rows show the geometric object and the teddy bear under extremely high light; the third and fourth rows show them under extremely low light.

    Table  1  Quantitative comparison of multi-depth SAI results under dense bush occlusion

    Focus depth   Method                              PSNR       SSIM
    0.4 m         Traditional SAI                     16.72 dB   0.2167
    0.4 m         Fixed-threshold reconstruction      17.73 dB   0.2919
    0.4 m         Adaptive-threshold reconstruction   21.13 dB   0.3061
    0.8 m         Traditional SAI                     15.41 dB   0.2693
    0.8 m         Fixed-threshold reconstruction      15.42 dB   0.2782
    0.8 m         Adaptive-threshold reconstruction   21.73 dB   0.3136
    1.2 m         Traditional SAI                     17.62 dB   0.0916
    1.2 m         Fixed-threshold reconstruction      18.03 dB   0.1759
    1.2 m         Adaptive-threshold reconstruction   25.22 dB   0.4518
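The PSNR and SSIM values reported in these tables are standard full-reference image-quality metrics. A pure-NumPy sketch of both follows; the single global SSIM window is a simplification for illustration, since the commonly used SSIM averages the same statistic over local windows:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - img) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    """Single-window SSIM with the standard constants (sketch)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Higher is better for both metrics: PSNR penalizes pixel-wise error, while SSIM compares luminance, contrast, and structure.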

    Table  2  Quantitative comparison of SAI results under dense bush occlusion

    Object type        Method                              PSNR       SSIM
    Geometric object   Traditional SAI                     13.45 dB   0.2313
    Geometric object   Fixed-threshold reconstruction      17.57 dB   0.2671
    Geometric object   Adaptive-threshold reconstruction   18.03 dB   0.2646
    Teddy bear         Traditional SAI                     6.952 dB   0.1439
    Teddy bear         Fixed-threshold reconstruction      7.795 dB   0.2175
    Teddy bear         Adaptive-threshold reconstruction   9.199 dB   0.2334

    Table  3  Quantitative comparison of SAI results under occlusions of different densities

    Occlusion density   Method                              PSNR       SSIM
    Extremely dense     Traditional SAI                     11.20 dB   0.1476
    Extremely dense     Fixed-threshold reconstruction      17.14 dB   0.1884
    Extremely dense     Adaptive-threshold reconstruction   18.52 dB   0.1741
    Normally dense      Traditional SAI                     13.37 dB   0.4028
    Normally dense      Fixed-threshold reconstruction      18.54 dB   0.1840
    Normally dense      Adaptive-threshold reconstruction   19.51 dB   0.2037
    Sparse              Traditional SAI                     14.35 dB   0.5508
    Sparse              Fixed-threshold reconstruction      11.22 dB   0.3384
    Sparse              Adaptive-threshold reconstruction   14.19 dB   0.3912
  • [1] Gershun A. The light field. Journal of Mathematics and Physics, 1939, 18(1-4): 51−151 doi: 10.1002/sapm193918151
    [2] Levoy M, Hanrahan P. Light field rendering. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. New York, NY, USA: ACM, 1996. 31−42.
    [3] Zhang X, Zhang Y, Yang T, Yang Y. Synthetic aperture photography using a moving camera-IMU system. Pattern Recognition, 2017, 62: 175−188 doi: 10.1016/j.patcog.2016.07.019
    [4] Vaish V, Wilburn B, Joshi N, Levoy M. Using plane+parallax for calibrating dense camera arrays. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington, DC, USA: IEEE, 2004. 1: I−I.
    [5] Vaish V, Garg G, Talvala E, Antunez E, Wilburn B, Horowitz M, et al. Synthetic aperture focusing using a shear-warp factorization of the viewing transform. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)-Workshops. San Diego, CA, USA: IEEE, 2005. 129−129.
    [6] Xiang Yi-Yi, Liu Bin, Li Yan-Yan. Synthetic aperture imaging method based on confocal illumination. Acta Optica Sinica, 2020, 40(08): 73−79 (in Chinese)
    [7] Yang T, Zhang Y, Tong X, Zhang X, Yu R. A new hybrid synthetic aperture imaging model for tracking and seeing people through occlusion. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(9): 1461−1475 doi: 10.1109/TCSVT.2013.2242553
    [8] Yang T, Zhang Y, Tong X, Zhang X, Yu R. Continuously tracking and see-through occlusion based on a new hybrid synthetic aperture imaging model. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs, USA: IEEE, 2011.3409−3416.
    [9] Joshi N, Avidan S, Matusik W, Kriegman D J. Synthetic aperture tracking: tracking through occlusions. In: Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV). Rio de Janeiro, Brazil: IEEE, 2007.1−8.
    [10] Zhou Cheng-Hao, Wang Zhi-Le, Zhu Feng. Development status of large-aperture optical synthetic aperture imaging technology. Chinese Optics, 2017, 10(01): 25−38 (in Chinese)
    [11] Pei Z, Li Y, Ma M, Li J, Leng C, Zhang X, et al. Occluded-object 3D reconstruction using camera array synthetic aperture imaging. Sensors, 2019, 19(3): 607 doi: 10.3390/s19030607
    [12] Pei Z, Zhang Y, Chen X, Yang Y. Synthetic aperture imaging using pixel labeling via energy minimization. Pattern Recognition, 2013, 46(1): 174−187 doi: 10.1016/j.patcog.2012.06.014
    [13] Lichtsteiner P, Posch C, Delbruck T. A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 2008, 43(2): 566−576 doi: 10.1109/JSSC.2007.914337
    [14] Brandli C, Berner R, Yang M, Liu S, Delbruck T. A 240 × 180 130 dB 3 μs Latency Global Shutter Spatiotemporal Vision Sensor. IEEE Journal of Solid-State Circuits, 2014, 49(10): 2333−2341 doi: 10.1109/JSSC.2014.2342715
    [15] Isaksen A, McMillan L, Gortler S J. Dynamically reparameterized light fields. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press, New York, NY, USA: 2000.297−306.
    [16] Yang J C, Everett M, Buehler C, McMillan L. A real-time distributed light field camera. Rendering Techniques, 2002, 2002: 77−86
    [17] Wilburn B, Joshi N, Vaish V, Talvala E, Antunez E, et al. High performance imaging using large camera arrays. In: Proceedings of ACM SIGGRAPH 2005 Papers. Association for Computing Machinery, New York, NY, USA: ACM, 2005.765−776.
    [18] Vaish V, Levoy M, Szeliski R, Zitnick C L, Kang S B. Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York, NY, USA: IEEE, 2006.2: 2331−2338.
    [19] Pei Z, Zhang Y, Yang T, Zhang X, Yang Y. A novel multi-object detection method in complex scene using synthetic aperture imaging. Pattern Recognition, 2012, 45(4): 1637−1658 doi: 10.1016/j.patcog.2011.10.003
    [20] Maqueda A I, Loquercio A, Gallego G, Garcia N, Scaramuzza D. Event-based vision meets deep learning on steering prediction for self-driving cars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, Utah, USA: IEEE, 2018.5419−5427.
    [21] Zhu A Z, Atanasov N, Daniilidis K. Event-based visual inertial odometry. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, Hawaii, USA: IEEE, 2017.5391−5399.
    [22] Vidal A R, Rebecq H, Horstschaefer T, Scaramuzza D. Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios. IEEE Robotics and Automation Letters, 2018, 3(2): 994−1001 doi: 10.1109/LRA.2018.2793357
    [23] Cohen G, Afshar S, Morreale B, Bessell T, Wabnitz A, Rutten M, et al. Event-based sensing for space situational awareness. The Journal of the Astronautical Sciences, 2019, 66(2): 125−141 doi: 10.1007/s40295-018-00140-5
    [24] Kim H, Leutenegger S, Davison A J. Real-time 3D reconstruction and 6-DoF tracking with an event camera. In: Proceedings of the European Conference on Computer Vision (ECCV). Amsterdam, Netherlands: Springer, Cham, 2016.349−364.
    [25] Barua S, Miyatani Y, Veeraraghavan A. Direct face detection and video reconstruction from event cameras. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Lake Placid, NY, USA: IEEE, 2016.1−9.
    [26] Watkins Y, Thresher A, Mascarenas D, Kenyon G T. Sparse coding enables the reconstruction of high-fidelity images and video from retinal spike trains. In: Proceedings of the International Conference on Neuromorphic Systems (ICONS). New York, NY, USA: ACM, 2018.1−5.
    [27] Scheerlinck C, Barnes N, Mahony R. Continuous-time intensity estimation using event cameras. In: Proceedings of the Asian Conference on Computer Vision (ACCV). Perth, Australia: Springer, Cham, 2018.308−324.
    [28] Rebecq H, Ranftl R, Koltun V, Scaramuzza D. Events-to-video: Bringing modern computer vision to event cameras. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019.3857−3866.
    [29] Scheerlinck C, Rebecq H, Gehrig D, Barnes N, Mahony R, Scaramuzza D. Fast image reconstruction with an event camera. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Colorado, USA: IEEE, 2020.156−163.
    [30] Wang L, Ho Y S, Yoon K J. Event-based high dynamic range image and very high frame rate video generation using conditional generative adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019.10081−10090.
    [31] Goodman J W. Introduction To Fourier Optics. Colorado: Roberts and Company Publishers, 1995.
    [32] Hartley R, Zisserman A. Multiple view geometry in computer vision. Cambridge: Cambridge University Press, 2003.
    [33] Zhang Z. A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330−1334 doi: 10.1109/34.888718
    [34] Wang L, Kim T K, Yoon K J. EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020.8315−8325.
    [35] Li H, Li G, Liu H, Shi L. Super-resolution of spatiotemporal event-stream image. Neurocomputing, 2019, 335: 206−214 doi: 10.1016/j.neucom.2018.12.048
    [36] Choi J, Yoon K J. Learning to Super Resolve Intensity Images From Events. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020.2768−2776.
Publication history
  • Received: 2020-06-17
  • Accepted: 2020-09-14
