Paper Title
BiPointNet: Binary Neural Network for Point Clouds
Paper Authors
Paper Abstract
To alleviate the resource constraints of real-time point cloud applications running on edge devices, in this paper we present BiPointNet, the first model binarization approach for efficient deep learning on point clouds. We discover that the immense performance drop of binarized models for point clouds mainly stems from two challenges: aggregation-induced feature homogenization, which degrades information entropy, and scale distortion, which hinders optimization and invalidates scale-sensitive structures. With theoretical justification and in-depth analysis, our BiPointNet introduces Entropy-Maximizing Aggregation (EMA) to modulate the distribution before aggregation toward maximum information entropy, and Layer-wise Scale Recovery (LSR) to efficiently restore feature representation capacity. Extensive experiments show that BiPointNet outperforms existing binarization methods by convincing margins, even at a level comparable with its full-precision counterpart. We highlight that our techniques are generic, guaranteeing significant improvements on various fundamental tasks and mainstream backbones. Moreover, BiPointNet achieves an impressive 14.7x speedup and 18.9x storage saving on real-world resource-constrained devices.
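The two ideas in the abstract can be illustrated with a minimal NumPy sketch (this is not the authors' implementation; all names, constants, and the calibration procedure are illustrative assumptions). The first function shows an LSR-style binary linear layer, where a single per-layer scale restores output magnitude after both inputs and weights are binarized to {-1, +1}. The second part demonstrates aggregation-induced homogenization: max-pooling binarized features with a biased distribution yields an almost-constant output bit with near-zero entropy, while an EMA-style offset chosen so the pooled bit is +1 with probability about 0.5 restores its information entropy.

```python
import numpy as np

def binarize(x):
    # sign binarization to {-1, +1}; zeros map to +1
    return np.where(x >= 0, 1.0, -1.0)

def binary_linear_lsr(x, w, alpha):
    # Binary linear layer with Layer-wise Scale Recovery (sketch):
    # alpha is a single per-layer scale (learned in the real model,
    # passed in as a constant here) that restores output magnitude
    # after both inputs and weights are binarized.
    return alpha * (binarize(x) @ binarize(w).T)

def entropy_bits(p):
    # Shannon entropy (in bits) of a Bernoulli(p) variable
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

rng = np.random.default_rng(0)
n_points, n_trials = 1024, 2000
# per-point features with a positive mean, as after a biased layer
feats = rng.normal(loc=0.5, scale=1.0, size=(n_trials, n_points))

# Plain max aggregation of binarized features: the pooled bit is
# almost always +1, so its information entropy collapses.
p_plain = float((binarize(feats).max(axis=1) > 0).mean())

# EMA-style shift: pick an offset delta so each pooled bit is +1 with
# probability ~0.5, i.e. per-point P(x < delta) = 0.5 ** (1/n).
# Here delta is estimated from an independent calibration sample.
calib = rng.normal(loc=0.5, scale=1.0, size=200_000)
delta = np.quantile(calib, 0.5 ** (1.0 / n_points))
p_shift = float((binarize(feats - delta).max(axis=1) > 0).mean())

print(f"pooled-bit entropy without shift: {entropy_bits(p_plain):.3f} bits")
print(f"pooled-bit entropy with shift:    {entropy_bits(p_shift):.3f} bits")
```

In the sketch the offset is a fixed quantile of a calibration sample; in the paper's setting the shift and the layer-wise scale are parameters of the network, but the entropy argument they serve is the same.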