Paper Title

LAB: Learnable Activation Binarizer for Binary Neural Networks

Authors

Sieger Falkena, Hadi Jamali-Rad, Jan van Gemert

Abstract

Binary Neural Networks (BNNs) are receiving an upsurge of attention for bringing power-hungry deep learning towards edge devices. The traditional wisdom in this space is to employ sign() for binarizing feature maps. We argue and illustrate that sign() is a uniqueness bottleneck, limiting information propagation throughout the network. To alleviate this, we propose to dispense with sign(), replacing it with a learnable activation binarizer (LAB), allowing the network to learn a fine-grained binarization kernel per layer, as opposed to global thresholding. LAB is a novel, universal module that can be seamlessly integrated into existing architectures. To confirm this, we plug it into four seminal BNNs and show a considerable performance boost at the cost of a tolerable increase in latency and complexity. Finally, we build an end-to-end BNN (coined LAB-BNN) around LAB and demonstrate that it achieves performance on par with the state of the art on ImageNet.
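The abstract describes LAB only at a high level, so the PyTorch sketch below is an illustrative contrast under stated assumptions, not the paper's exact architecture. SignBinarizer implements the conventional approach: sign() in the forward pass with a straight-through estimator (STE) for gradients, i.e., a single global zero threshold. LearnableBinarizer is a hypothetical stand-in for a LAB-style module in which a per-layer, per-channel depthwise convolution kernel is learned before thresholding; the module names and the depthwise-convolution design are assumptions made for this sketch.

```python
# Minimal sketch: global sign() binarization vs. a hypothetical learnable,
# per-layer binarization kernel in the spirit of LAB.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through estimator in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped identity: let gradients through only where |x| <= 1.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


class SignBinarizer(nn.Module):
    """Conventional binarizer: one global zero threshold for every activation."""

    def forward(self, x):
        return BinarizeSTE.apply(x)


class LearnableBinarizer(nn.Module):
    """Hypothetical LAB-style binarizer (illustrative assumption, not the paper's
    exact design): a small learnable depthwise kernel per layer shapes the
    activations before thresholding, instead of a fixed global threshold."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # groups=channels -> one learnable binarization kernel per channel.
        self.kernel = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels, bias=True)

    def forward(self, x):
        return BinarizeSTE.apply(self.kernel(x))


if __name__ == "__main__":
    x = torch.randn(2, 16, 8, 8, requires_grad=True)
    for binarizer in (SignBinarizer(), LearnableBinarizer(16)):
        y = binarizer(x)
        assert set(y.unique().tolist()) <= {-1.0, 0.0, 1.0}  # outputs are binary
        y.sum().backward()  # gradients flow through the STE
```

The contrast this sketch aims to capture is that a learnable kernel lets each layer decide how its activations are binarized, whereas sign() applies the same zero threshold everywhere in the network.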
