Paper Title
NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter
Paper Authors
Paper Abstract
Nano quadcopters are small, agile, and cheap platforms that are well suited for deployment in narrow, cluttered environments. Due to their limited payload, these vehicles are highly constrained in processing power, rendering conventional vision-based methods for safe and autonomous navigation incompatible. Recent machine learning developments promise high-performance perception at low latency, while dedicated edge computing hardware has the potential to augment the processing capabilities of these limited devices. In this work, we present NanoFlowNet, a lightweight convolutional neural network for real-time dense optical flow estimation on edge computing hardware. We draw inspiration from recent advances in semantic segmentation for the design of this network. Additionally, we guide the learning of optical flow using motion boundary ground truth data, which improves performance with no impact on latency. Validation results on the MPI-Sintel dataset show the high performance of the proposed network given its constrained architecture. Additionally, we successfully demonstrate the capabilities of NanoFlowNet by deploying it on the ultra-low power GAP8 microprocessor and by applying it to vision-based obstacle avoidance on board a Bitcraze Crazyflie, a 34 g nano quadcopter.
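The abstract states that optical flow learning is guided by motion boundary ground truth without adding latency. A minimal sketch of how such auxiliary supervision could be combined with a standard flow loss is given below, assuming a PyTorch-style training setup; the function and parameter names (`multitask_loss`, `lambda_mb`, the separate boundary head) are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the paper's code) of supervising dense optical flow together with
# a motion-boundary auxiliary task. All names and the weighting are hypothetical.
import torch
import torch.nn.functional as F


def multitask_loss(flow_pred, flow_gt, mb_logits, mb_gt, lambda_mb=0.5):
    """Combine an end-point-error flow loss with a motion-boundary term.

    flow_pred, flow_gt: (B, 2, H, W) predicted and ground-truth flow fields.
    mb_logits:          (B, 1, H, W) raw motion-boundary predictions.
    mb_gt:              (B, 1, H, W) binary motion-boundary ground truth.
    lambda_mb:          weight of the auxiliary term (assumed value).
    """
    # Average end-point error: L2 distance between predicted and true flow vectors.
    epe = torch.norm(flow_pred - flow_gt, p=2, dim=1).mean()
    # Binary cross-entropy on the motion-boundary map. Because this head is used
    # only during training, it can be dropped at inference, leaving latency unchanged.
    mb_loss = F.binary_cross_entropy_with_logits(mb_logits, mb_gt)
    return epe + lambda_mb * mb_loss
```

The design intent suggested by the abstract is that the boundary branch acts purely as a training-time regularizer, so the deployed network on the GAP8 runs the flow path alone.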