Paper Title

Toward Domain Generalized Pruning by Scoring Out-of-Distribution Importance

Paper Authors

Rizhao Cai, Haoliang Li, Alex Kot

Paper Abstract

Filter pruning has been widely used to compress convolutional neural networks and reduce computation costs at the deployment stage. Recent studies have shown that filter pruning techniques can achieve lossless compression of deep neural networks, removing redundant filters (kernels) without sacrificing accuracy. However, such evaluations assume that the training and testing data come from similar environmental conditions (independent and identically distributed), and how filter pruning affects cross-domain generalization (out-of-distribution) performance has been largely ignored. We conduct extensive empirical experiments and reveal that although intra-domain performance can be maintained after filter pruning, cross-domain performance degrades to a large extent. Since scoring a filter's importance is one of the central problems in pruning, we design an importance-scoring estimate that uses the variance of domain-level risks to account for the pruning risk on unseen distributions. As a result, we retain more domain-generalized filters. Experiments show that, under the same pruning ratio, our method achieves significantly better cross-domain generalization performance than the baseline filter pruning method. As a first attempt, our work sheds light on the joint problem of domain generalization and filter pruning research.
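The abstract only sketches the scoring idea at a high level. Below is a minimal PyTorch sketch of one plausible reading: each candidate filter is temporarily zeroed out, the empirical risk is measured separately on each training domain, and the mean risk is combined with the variance of domain-level risks. The helper names (`evaluate_risk`, `score_filter`) and the weighting factor `lam` are illustrative assumptions, not the authors' exact formulation.

```python
import torch

@torch.no_grad()
def evaluate_risk(model, loader, criterion, device="cpu"):
    # Average loss (empirical risk) of the model over one domain's data.
    model.eval()
    total, count = 0.0, 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        total += criterion(model(inputs), targets).item() * inputs.size(0)
        count += inputs.size(0)
    return total / max(count, 1)

@torch.no_grad()
def score_filter(model, conv, idx, domain_loaders, criterion,
                 lam=1.0, device="cpu"):
    # Temporarily zero out one filter of a Conv2d layer, measure the
    # empirical risk on each training domain, then restore the filter.
    saved = conv.weight[idx].clone()
    conv.weight[idx].zero_()
    risks = torch.tensor([evaluate_risk(model, dl, criterion, device)
                          for dl in domain_loaders])
    conv.weight[idx] = saved
    # Combine the mean domain risk with the variance across domains:
    # a high score means ablating this filter raises risk or makes it
    # uneven across domains, so the filter should be kept.
    return (risks.mean() + lam * risks.var()).item()
```

Under this reading, one would score every filter in a layer and prune those with the lowest scores, since their removal neither raises the average risk nor destabilizes it across domains.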
