Paper Title
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
Paper Authors
Paper Abstract
Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively less explored for fisheye cameras. The standard bounding box fails on fisheye images due to the strong radial distortion, particularly in the image's periphery. In this work, we explore better representations for object detection in fisheye images, such as the oriented bounding box, ellipse, and generic polygon. We use the IoU metric to compare these representations against accurate instance segmentation ground truth. We design a novel curved bounding box model that has optimal properties for fisheye distortion models. We also design a curvature-adaptive perimeter sampling method for obtaining polygon vertices, improving the relative mAP score by 4.9% compared to uniform sampling. Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%. To the best of our knowledge, this is the first detailed study on object detection on fisheye cameras for autonomous driving scenarios. The dataset, comprising 10,000 images along with ground truth for all the object representations, will be made public to encourage further research. We summarize our work in a short video with qualitative results at https://youtu.be/iLkOzvJpL-A.
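The abstract compares representations (boxes, ellipses, polygons) by their IoU against instance-segmentation ground truth. The sketch below is not the paper's code, only a minimal illustration of how such an IoU can be computed for convex polygon representations: Sutherland–Hodgman clipping for the intersection, the shoelace formula for areas.

```python
def shoelace_area(poly):
    """Area of a simple polygon given as [(x, y), ...] in vertex order."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def clip_polygon(subject, clipper):
    """Sutherland-Hodgman: clip `subject` against convex, CCW `clipper`."""
    def inside(p, a, b):
        # True if p lies on or left of directed edge a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p1, p2, a, b):
        # Intersection of infinite lines p1-p2 and a-b
        (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, a, b
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / den
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / den
        return (px, py)

    output = list(subject)
    n = len(clipper)
    for i in range(n):
        a, b = clipper[i], clipper[(i + 1) % n]
        input_list, output = output, []
        if not input_list:
            break
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output

def polygon_iou(p, q):
    """IoU of two convex polygons (CCW vertex lists)."""
    inter_poly = clip_polygon(p, q)
    inter = shoelace_area(inter_poly) if inter_poly else 0.0
    union = shoelace_area(p) + shoelace_area(q) - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit squares offset by 0.5 in each axis overlap in a 0.25 area, giving IoU = 0.25 / 1.75 = 1/7. Evaluating against a segmentation mask, as in the paper, would rasterize the representation and intersect pixel sets instead, but the geometric version above conveys the same comparison.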