Paper Title


Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation

Authors

Po-Han Chi, Pei-Hung Chung, Tsung-Han Wu, Chun-Cheng Hsieh, Yen-Hao Chen, Shang-Wen Li, Hung-yi Lee

Abstract


For self-supervised speech processing, it is crucial to use pretrained models as speech representation extractors. In recent works, increasing the size of the model has been utilized in acoustic model training in order to achieve better performance. In this paper, we propose Audio ALBERT, a lite version of the self-supervised speech representation model. We use the representations with two downstream tasks: speaker identification and phoneme classification. We show that Audio ALBERT is capable of achieving competitive performance with those huge models in the downstream tasks while utilizing 91\% fewer parameters. Moreover, we use some simple probing models to measure how much information about the speaker and phoneme is encoded in latent representations. In probing experiments, we find that the intermediate latent representations encode richer information about both phoneme and speaker than the representation of the last layer.
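The "simple probing models" mentioned in the abstract are typically linear classifiers trained on frozen representations: if a linear probe on some layer's output classifies phonemes or speakers well, that layer encodes the corresponding information. The following is a minimal sketch of this idea, not the authors' code; the synthetic vectors stand in for real frame-level representations extracted from one AALBERT layer, and all names here are illustrative.

```python
import numpy as np

def train_linear_probe(X, y, n_classes, lr=0.5, epochs=300):
    """Train a linear (softmax) probe on frozen representations X.

    The probe is a single affine layer trained with cross-entropy;
    the representations themselves are never updated.
    """
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = p - onehot                            # softmax CE gradient
        W -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean(axis=0)
    return W, b

def probe_accuracy(W, b, X, y):
    """Fraction of frames the trained probe labels correctly."""
    return float((np.argmax(X @ W + b, axis=1) == y).mean())

# Synthetic stand-in for representations from one transformer layer:
# two classes (e.g. two speakers), separated along one direction.
rng = np.random.default_rng(42)
n, dim = 400, 32
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, dim))
X[:, 0] += 3.0 * y  # class-dependent shift makes the classes separable

W, b = train_linear_probe(X, y, n_classes=2)
acc = probe_accuracy(W, b, X, y)
print(f"probe accuracy: {acc:.2f}")
```

In the paper's setting, one such probe is trained per layer; comparing their accuracies across layers is how the authors conclude that intermediate layers carry richer phoneme and speaker information than the last layer.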
