Paper Title

Synthetic Data for Semantic Image Segmentation of Imagery of Unmanned Spacecraft

Paper Authors

William S. Armstrong, Spencer Drakontaidis, Nicholas Lui

Paper Abstract

Images of spacecraft photographed from other spacecraft operating in outer space are difficult to come by, especially at a scale typically required for deep learning tasks. Semantic image segmentation, object detection and localization, and pose estimation are well-researched areas with powerful results for many applications, and would be very useful in autonomous spacecraft operation and rendezvous. However, recent studies show that these strong results in broad and common domains may generalize poorly even to specific industrial applications on Earth. To address this, we propose a method for generating synthetic image data that are labelled for semantic segmentation, generalizable to other tasks, and provide a prototype synthetic image dataset consisting of 2D monocular images of unmanned spacecraft, in order to enable further research in the area of autonomous spacecraft rendezvous. We also present a strong benchmark result (Sørensen-Dice coefficient 0.8723) on these synthetic data, suggesting that it is feasible to train well-performing image segmentation models for this task, especially if the target spacecraft and its configuration are known.
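The Sørensen-Dice coefficient reported in the abstract is a standard overlap measure for segmentation masks. As a point of reference only (not code from the paper), below is a minimal NumPy sketch of how the coefficient is computed for a pair of binary masks; the function name `dice_coefficient`, the epsilon term, and the toy masks are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Sørensen-Dice coefficient between two binary segmentation masks.

    Returns a value in [0, 1]: 1.0 means the predicted mask perfectly
    overlaps the ground-truth mask, 0.0 means no overlap at all.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example (illustrative only): two 4x4 masks that mostly overlap.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(f"Dice: {dice_coefficient(pred, target):.4f}")  # 2*3 / (4+3) ≈ 0.8571
```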
