Paper Title

Group Communication with Context Codec for Lightweight Source Separation

Paper Authors

Yi Luo, Cong Han, Nima Mesgarani

Paper Abstract

Despite the recent progress on neural network architectures for speech separation, the balance between the model size, model complexity and model performance is still an important and challenging problem for the deployment of such models to low-resource platforms. In this paper, we propose two simple modules, group communication and context codec, that can be easily applied to a wide range of architectures to jointly decrease the model size and complexity without sacrificing the performance. A group communication module splits a high-dimensional feature into groups of low-dimensional features and captures the inter-group dependency. A separation module with a significantly smaller model size can then be shared by all the groups. A context codec module, containing a context encoder and a context decoder, is designed as a learnable downsampling and upsampling module to decrease the length of a sequential feature processed by the separation module. The combination of the group communication and the context codec modules is referred to as the GC3 design. Experimental results show that applying GC3 on multiple network architectures for speech separation can achieve on-par or better performance with as small as 2.5% model size and 17.6% model complexity, respectively.
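
To make the abstract's two modules concrete, here is a minimal PyTorch sketch of the GC3 idea. Everything below is an illustrative assumption rather than the paper's exact configuration: the class names, the use of a small LSTM across the group axis for inter-group communication, and the strided-convolution context codec are placeholders for whatever the paper actually instantiates.

```python
# Minimal sketch of GC3 (group communication + context codec), assuming PyTorch.
# Module choices (LSTM across groups, strided-conv codec) are illustrative only.
import torch
import torch.nn as nn


class GroupComm(nn.Module):
    """Split an N-dimensional feature into K groups of N/K dimensions and
    capture inter-group dependency with a small module, so that a single
    lightweight separation module can be shared across all groups."""

    def __init__(self, feature_dim: int, num_groups: int):
        super().__init__()
        assert feature_dim % num_groups == 0
        self.num_groups = num_groups
        self.group_dim = feature_dim // num_groups
        # A tiny LSTM applied across the K groups at every time step
        # (one plausible choice for the inter-group module).
        self.inter_group = nn.LSTM(self.group_dim, self.group_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feature_dim, time) -> grouped: (batch, K, N/K, time)
        B, N, T = x.shape
        g = x.view(B, self.num_groups, self.group_dim, T)
        # Treat the K groups as a sequence: (B*T, K, N/K)
        g = g.permute(0, 3, 1, 2).reshape(B * T, self.num_groups, self.group_dim)
        g, _ = self.inter_group(g)
        # Restore (batch, K, N/K, time); each group now carries cross-group
        # context and can be fed to one shared, much smaller separation module.
        return g.reshape(B, T, self.num_groups, self.group_dim).permute(0, 2, 3, 1)


class ContextCodec(nn.Module):
    """Learnable downsampling (context encoder) and upsampling (context
    decoder) so the separation module processes a shorter sequence."""

    def __init__(self, dim: int, context: int):
        super().__init__()
        # Strided conv compresses every `context` frames into one summary
        # frame; the transposed conv restores the original length.
        self.encode = nn.Conv1d(dim, dim, kernel_size=context, stride=context)
        self.decode = nn.ConvTranspose1d(dim, dim, kernel_size=context, stride=context)


if __name__ == "__main__":
    B, N, T, K, C = 2, 128, 100, 8, 4          # hypothetical sizes
    gc, cc = GroupComm(N, K), ContextCodec(N // K, C)
    feats = torch.randn(B, N, T)
    groups = gc(feats)                          # (B, K, N/K, T)
    merged = groups.reshape(B * K, N // K, T)   # fold groups into the batch
    short = cc.encode(merged)                   # (B*K, N/K, T/C): separator runs here
    restored = cc.decode(short)                 # back to (B*K, N/K, T)
    print(groups.shape, short.shape, restored.shape)
```

The point of the design is cost sharing: after group communication, all K groups pass through the same small separation module, and the context codec lets that module run on a sequence shortened by the context factor, which is how the reported reductions in model size and complexity become possible.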
