Paper Title
Practical Network Acceleration with Tiny Sets
Paper Authors
Paper Abstract
Due to data privacy issues, accelerating networks with tiny training sets has become a critical need in practice. Previous methods mainly adopt filter-level pruning to accelerate networks with scarce training samples. In this paper, we reveal that dropping blocks is a fundamentally superior approach in this scenario: it enjoys a higher acceleration ratio and results in better latency-accuracy performance under the few-shot setting. To choose which blocks to drop, we propose a new concept, namely recoverability, which measures the difficulty of recovering the compressed network. Our recoverability metric is efficient and effective for choosing which blocks to drop. Finally, we propose an algorithm named PRACTISE to accelerate networks using only tiny sets of training images. PRACTISE outperforms previous methods by a significant margin: for a 22% latency reduction, it surpasses previous methods by 7% on average on ImageNet-1k. It also enjoys high generalization ability, working well under data-free or out-of-domain data settings, too. Our code is at https://github.com/DoctorKey/Practise.
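As a rough illustration of the block-dropping idea, the sketch below replaces one residual block of a ResNet with an identity mapping and scores each candidate block by how much the network's outputs change on a tiny calibration batch. This is a minimal sketch, not the paper's implementation: the output-error proxy stands in for the paper's recoverability measure (which accounts for how well the network can be recovered after adaptation), and the model, candidate list, and helper names are assumptions for the example.

```python
# Minimal sketch of few-shot block dropping (NOT the authors' PRACTISE code).
# Assumptions: torchvision's resnet18, a random tensor standing in for the
# tiny training set, and mean-squared logit change as a crude stand-in for
# the paper's recoverability measure.
import copy
import torch
import torch.nn as nn
from torchvision.models import resnet18

def drop_block(model, stage, idx):
    """Return a copy of `model` with one residual block replaced by identity."""
    pruned = copy.deepcopy(model)
    getattr(pruned, stage)[idx] = nn.Identity()
    return pruned

@torch.no_grad()
def output_error(model, pruned, images):
    """Proxy score: mean squared change in logits after dropping a block."""
    return (model(images) - pruned(images)).pow(2).mean().item()

model = resnet18(weights=None).eval()
images = torch.randn(8, 3, 224, 224)  # stands in for a tiny training set

# Only blocks whose input and output shapes match (stride 1, unchanged
# channel count) can be replaced by an identity mapping.
candidates = [("layer2", 1), ("layer3", 1), ("layer4", 1)]
scores = {c: output_error(model, drop_block(model, *c), images)
          for c in candidates}
best = min(scores, key=scores.get)
print(f"drop {best}: proxy score {scores[best]:.4f}")
```

In this toy version, the block with the smallest output change is dropped first; the paper's actual criterion instead estimates how hard the compressed network is to recover, which it shows is a better guide than raw output error.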