Paper Title
Accurate Fruit Localisation for Robotic Harvesting using High Resolution LiDAR-Camera Fusion
Paper Authors
Paper Abstract
Accurate depth sensing plays a crucial role in securing a high success rate of robotic harvesting in natural orchard environments. Solid-state LiDAR (SSL), a recently introduced LiDAR technique, can perceive high-resolution geometric information of a scene, which can potentially be utilised to obtain accurate depth information. Meanwhile, fusing the sensory information from the LiDAR and the camera can significantly enhance the sensing ability of harvesting robots. This work introduces a LiDAR-camera fusion-based visual sensing and perception strategy to perform accurate fruit localisation for a harvesting robot in apple orchards. Two state-of-the-art (SOTA) extrinsic calibration methods, target-based and targetless, are applied and evaluated to obtain an accurate extrinsic matrix between the LiDAR and the camera. With the extrinsic calibration, the point clouds and colour images are fused to perform fruit localisation using a one-stage instance segmentation network. Experiments show that LiDAR-camera fusion achieves better visual sensing quality in natural environments. Moreover, introducing LiDAR-camera fusion largely improves the accuracy and robustness of fruit localisation. Specifically, the standard deviations of fruit localisation using the LiDAR-camera system at 0.5 m, 1.2 m, and 1.8 m are 0.245 cm, 0.227 cm, and 0.275 cm, respectively. This measurement error is only one fifth of that of the RealSense D455. Lastly, we attach the visualised point clouds to demonstrate the highly accurate sensing method.
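To illustrate the fusion step described in the abstract, the sketch below projects a LiDAR point cloud into the image plane through the calibrated extrinsics and recovers a fruit's 3D position from an instance segmentation mask. This is a minimal sketch under stated assumptions, not the paper's implementation: the 4x4 extrinsic matrix `T_cam_lidar`, the 3x3 pinhole intrinsics `K`, the boolean fruit `mask`, and both function names are illustrative; in the paper, the mask would come from the one-stage instance segmentation network.

```python
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """Project LiDAR points (N, 3) into the image plane via the extrinsics.

    T_cam_lidar: 4x4 matrix mapping the LiDAR frame to the camera frame.
    K: 3x3 pinhole intrinsic matrix.
    Returns pixel coordinates (M, 2) and the corresponding camera-frame
    points (M, 3), keeping only points in front of the camera.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous coords (N, 4)
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]         # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]               # drop points behind the camera
    uv = (K @ pts_cam.T).T                             # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective divide
    return uv, pts_cam

def localise_fruit(points_xyz, T_cam_lidar, K, mask):
    """Estimate a fruit's 3D centre (camera frame) from the LiDAR points
    that project inside a boolean instance mask of shape (H, W)."""
    uv, pts_cam = project_lidar_to_image(points_xyz, T_cam_lidar, K)
    h, w = mask.shape
    px = np.round(uv).astype(int)                      # nearest-pixel lookup
    valid = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    px, pts = px[valid], pts_cam[valid]
    hits = mask[px[:, 1], px[:, 0]]                    # mask is indexed (row, col)
    if not hits.any():
        return None
    # Median over the masked points is robust to stray returns at the
    # fruit boundary (leaves, branches).
    return np.median(pts[hits], axis=0)
```

The median over masked points is one plausible design choice for rejecting boundary outliers; the paper's exact localisation rule may differ.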