Paper Title
Multi-View Attention Transfer for Efficient Speech Enhancement
Paper Authors
Paper Abstract
Recent deep learning models have achieved high performance in speech enhancement; however, it is still challenging to obtain a fast and low-complexity model without significant performance degradation. Previous knowledge distillation studies on speech enhancement could not solve this problem because their output distillation methods do not fit the speech enhancement task in some aspects. In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation, to obtain efficient speech enhancement models in the time domain. Based on the multi-view feature extraction model, MV-AT transfers multi-view knowledge of the teacher network to the student network without additional parameters. The experimental results show that the proposed method consistently improved the performance of student models of various sizes on the Valentini and deep noise suppression (DNS) datasets. MANNER-S-8.1GF with our proposed method, a lightweight model for efficient deployment, achieved 15.4x and 4.71x fewer parameters and floating-point operations (FLOPs), respectively, compared to the baseline model with similar performance.
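The abstract does not give the exact loss formulation of MV-AT. As a rough, hypothetical illustration of the general idea behind feature-based attention transfer (in the style of Zagoruyko and Komodakis), one can collapse intermediate feature maps into normalized attention maps and penalize the distance between teacher and student maps; because the maps share only the time dimension, no extra parameters are needed even when teacher and student differ in channel width. The function names and shapes below are assumptions, not the paper's implementation:

```python
import numpy as np

def attention_map(feat):
    """Collapse a (channels, time) feature map into a 1-D attention
    map by summing squared activations over channels, then L2-normalize."""
    a = np.sum(feat ** 2, axis=0)           # shape: (time,)
    return a / (np.linalg.norm(a) + 1e-12)  # unit norm for scale invariance

def attention_transfer_loss(teacher_feats, student_feats):
    """Mean squared distance between teacher and student attention maps,
    averaged over the chosen layer pairs. Channel counts may differ;
    only the time dimension must match."""
    losses = [np.mean((attention_map(t) - attention_map(s)) ** 2)
              for t, s in zip(teacher_feats, student_feats)]
    return float(np.mean(losses))

# Toy example: 3 layer pairs; teacher is wider (64 ch) than student (32 ch).
rng = np.random.default_rng(0)
teacher = [rng.standard_normal((64, 100)) for _ in range(3)]
student = [rng.standard_normal((32, 100)) for _ in range(3)]
print(attention_transfer_loss(teacher, student))  # small nonnegative scalar
```

In practice this distillation term would be added to the student's main enhancement loss during training; the "multi-view" aspect of MV-AT presumably applies such transfer across multiple feature views, which this single-view sketch does not capture.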