Title

Event-Based Dense Reconstruction Pipeline

Authors

Kun Xiao, Guohui Wang, Yi Chen, Jinghong Nan, Yongfeng Xie

Abstract

Event cameras are a new type of sensor that differs from traditional cameras. Each pixel is triggered asynchronously by events: the triggering event is a change in the brightness irradiating the pixel. If the increase or decrease in brightness exceeds a certain threshold, an event is output. Compared with traditional cameras, event cameras offer high dynamic range and are free of motion blur. Since events are caused by the apparent motion of intensity edges, most 3D reconstructed maps consist only of scene edges, i.e., semi-dense maps, which is insufficient for some applications. In this paper, we propose a pipeline to realize event-based dense reconstruction. First, deep learning is used to reconstruct intensity images from events. Then, structure from motion (SfM) is used to estimate the camera intrinsics and extrinsics and a sparse point cloud. Finally, multi-view stereo (MVS) is used to complete the dense reconstruction.
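The event-generation model described in the abstract (an event fires whenever a pixel's brightness change crosses a threshold) can be sketched as follows. This is a minimal illustration only; the function name, array layout, and threshold value are assumptions, not taken from the paper.

```python
import numpy as np

def generate_events(log_frames, timestamps, threshold=0.2):
    """Sketch of the event-generation model (assumed names/values):
    a pixel emits an event when its log-brightness has changed by at
    least `threshold` since the last event at that pixel."""
    ref = np.array(log_frames[0], dtype=float)  # per-pixel reference brightness
    events = []                                 # (timestamp, x, y, polarity)
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        frame = np.asarray(frame, dtype=float)
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1  # ON / OFF event
            events.append((t, int(x), int(y), polarity))
            ref[y, x] = frame[y, x]                 # reset reference at fired pixel
    return events
```

Because each pixel keeps its own reference brightness and fires independently, the output is an asynchronous event stream rather than a sequence of full frames, which is what gives event cameras their high dynamic range and absence of motion blur.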

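The SfM and MVS stages of the proposed pipeline could be driven as below. The paper does not name a specific SfM/MVS tool; COLMAP is assumed here purely for illustration, and `image_dir` is taken to hold the intensity images already reconstructed from events by the deep-learning stage.

```python
import subprocess

def colmap_commands(image_dir, work_dir):
    """Build the CLI invocations for the SfM and MVS stages.
    COLMAP is an assumption; the paper names no specific tool."""
    db = f"{work_dir}/database.db"
    sparse = f"{work_dir}/sparse"
    dense = f"{work_dir}/dense"
    return [
        # SfM: estimate intrinsics, extrinsics, and a sparse point cloud
        ["colmap", "feature_extractor", "--database_path", db,
         "--image_path", image_dir],
        ["colmap", "exhaustive_matcher", "--database_path", db],
        ["colmap", "mapper", "--database_path", db,
         "--image_path", image_dir, "--output_path", sparse],
        # MVS: densify the sparse model into a dense point cloud
        ["colmap", "image_undistorter", "--image_path", image_dir,
         "--input_path", f"{sparse}/0", "--output_path", dense],
        ["colmap", "patch_match_stereo", "--workspace_path", dense],
        ["colmap", "stereo_fusion", "--workspace_path", dense,
         "--output_path", f"{dense}/fused.ply"],
    ]

def run_pipeline(image_dir, work_dir):
    # Run each stage in order, failing fast on any error.
    for cmd in colmap_commands(image_dir, work_dir):
        subprocess.run(cmd, check=True)
```

Separating command construction from execution keeps the stage list inspectable; the output directories (`sparse`, `dense`) must exist before `run_pipeline` is called.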