Paper Title
LighTN: Light-weight Transformer Network for Performance-overhead Tradeoff in Point Cloud Downsampling
Paper Authors
Paper Abstract
Compared with traditional task-irrelevant downsampling methods, task-oriented neural networks have shown improved performance in point cloud downsampling. Recently, the Transformer family of networks has shown a more powerful learning capacity on visual tasks. However, Transformer-based architectures potentially consume too many resources, which is usually unaffordable for the low-overhead task networks used in downsampling. This paper proposes a novel light-weight Transformer network (LighTN) for task-oriented point cloud downsampling, as an end-to-end and plug-and-play solution. In LighTN, a single-head self-correlation module is presented to extract refined global contextual features, where the three projection matrices are eliminated to save resource overhead and the symmetric correlation matrix keeps the output permutation invariant. Then, we design a novel downsampling loss function to guide LighTN to focus on critical point cloud regions with more uniform distribution and prominent point coverage. Furthermore, we introduce a feed-forward network scaling mechanism that enhances the learnable capacity of LighTN according to the expand-reduce strategy. Extensive experiments on classification and registration tasks demonstrate that LighTN achieves state-of-the-art performance with limited resource overhead.
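The abstract names two concrete mechanisms: a projection-free single-head self-correlation and an expand-reduce feed-forward scaling. The following is a minimal PyTorch sketch of one plausible reading, not the authors' released implementation; the tensor shapes, scaling constant, expansion ratio, and residual connection are assumptions for illustration.

```python
# Sketch of (1) single-head self-correlation: attention computed directly on
# the input features, with the three Q/K/V projection matrices removed, and
# (2) an expand-reduce feed-forward block. All hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def self_correlation(x: torch.Tensor) -> torch.Tensor:
    """x: (B, N, C) point features -> (B, N, C) global context features."""
    # X @ X^T is a symmetric correlation matrix; applying softmax row-wise
    # keeps the operation permutation-equivariant, so a symmetric pooling
    # afterwards (e.g. max over N) yields a permutation-invariant descriptor.
    scores = torch.bmm(x, x.transpose(1, 2)) / (x.shape[-1] ** 0.5)
    return torch.bmm(F.softmax(scores, dim=-1), x)


class ExpandReduceFFN(nn.Module):
    """Feed-forward block that expands the channel width, then reduces it."""

    def __init__(self, dim: int, ratio: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * ratio),  # expand
            nn.ReLU(inplace=True),
            nn.Linear(dim * ratio, dim),  # reduce
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)  # residual connection (an assumption)


# Usage: a batch of 2 clouds, 1024 points, 64-dim features each.
feats = torch.randn(2, 1024, 64)
ctx = self_correlation(feats)        # (2, 1024, 64)
out = ExpandReduceFFN(64)(ctx)       # (2, 1024, 64)
desc = out.max(dim=1).values         # permutation-invariant (2, 64)
```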