Paper Title

Event-based Stereo Depth Estimation from Ego-motion using Ray Density Fusion

Paper Authors

Suman Ghosh, Guillermo Gallego

Abstract

Event cameras are bio-inspired sensors that mimic the human retina by responding to brightness changes in the scene. They generate asynchronous spike-based outputs at microsecond resolution, providing advantages over traditional cameras, such as high dynamic range, low motion blur, and power efficiency. Most event-based stereo methods attempt to exploit the high temporal resolution of the cameras and the simultaneity of events across cameras to establish matches and estimate depth. By contrast, this work investigates how to estimate depth from stereo event cameras without explicit data association, by fusing back-projected ray densities, and demonstrates its effectiveness on head-mounted camera data recorded in an egocentric fashion. Code and video are available at https://github.com/tub-rip/dvs_mcemvs.
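The core idea of the abstract — fusing back-projected ray densities instead of matching events across cameras — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes each camera's events have already been back-projected into a disparity space image (DSI), a 3D grid counting how many event rays pass through each voxel, and it fuses the two grids with a per-voxel harmonic mean (one fusion choice in this line of work) before picking the densest depth plane per pixel.

```python
import numpy as np

def fuse_dsis_harmonic(dsi_left, dsi_right, eps=1e-6):
    """Fuse two ray-density grids (DSIs) with a per-voxel harmonic mean.

    The harmonic mean is high only where BOTH cameras accumulate many
    rays, which implicitly enforces stereo consistency without any
    explicit event-to-event matching.
    """
    return 2.0 * dsi_left * dsi_right / (dsi_left + dsi_right + eps)

def depth_from_dsi(fused_dsi, depth_values):
    """Per pixel, pick the depth plane with maximal fused ray density."""
    best_plane = np.argmax(fused_dsi, axis=0)  # (H, W) plane indices
    return depth_values[best_plane]            # (H, W) depth map

# Toy example: 4 depth planes, a 2x2 image, synthetic densities.
rng = np.random.default_rng(0)
depths = np.linspace(1.0, 4.0, 4)      # hypothetical depth planes (m)
dsi_l = rng.random((4, 2, 2))          # stand-in left-camera DSI
dsi_r = rng.random((4, 2, 2))          # stand-in right-camera DSI
fused = fuse_dsis_harmonic(dsi_l, dsi_r)
depth_map = depth_from_dsi(fused, depths)
```

Because the fusion operates on accumulated densities rather than individual events, it sidesteps the data-association step that most event-based stereo methods rely on.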
