Paper Title
Frame-level SpecAugment for Deep Convolutional Neural Networks in Hybrid ASR Systems
Paper Authors
Paper Abstract
Inspired by SpecAugment, a data augmentation method for end-to-end ASR systems, we propose a frame-level SpecAugment method (f-SpecAugment) to improve the performance of deep convolutional neural networks (CNNs) for hybrid HMM-based ASR systems. Similar to utterance-level SpecAugment, f-SpecAugment performs three transformations: time warping, frequency masking, and time masking. Instead of applying the transformations at the utterance level, f-SpecAugment applies them to each convolution window independently during training. We demonstrate that f-SpecAugment is more effective than utterance-level SpecAugment for deep CNN-based hybrid models. We evaluate the proposed f-SpecAugment on 50-layer Self-Normalizing Deep CNN (SNDCNN) acoustic models trained with up to 25000 hours of training data. We observe that f-SpecAugment reduces WER by 0.5-4.5% relative across different ASR tasks for four languages. As the benefits of augmentation techniques tend to diminish as training data size increases, the large-scale training reported here is important for understanding the effectiveness of f-SpecAugment. Our experiments demonstrate that even with 25k hours of training data, f-SpecAugment is still effective. We also demonstrate that the benefits of f-SpecAugment are approximately equivalent to doubling the amount of training data for deep CNNs.
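The core idea above — applying SpecAugment-style masking to each convolution window rather than once per utterance — can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the window length, hop, and mask-size parameters are hypothetical, and time warping is omitted for brevity.

```python
import numpy as np

def mask_window(window, max_freq_mask=8, max_time_mask=4, rng=None):
    """Apply frequency and time masking to one convolution window
    (shape: time x mel-bins). Mask sizes are illustrative assumptions,
    not the paper's actual hyperparameters."""
    rng = rng or np.random.default_rng()
    out = window.copy()
    n_time, n_freq = out.shape

    # Frequency masking: zero a random band of consecutive mel bins.
    f = rng.integers(0, max_freq_mask + 1)
    f0 = rng.integers(0, n_freq - f + 1)
    out[:, f0:f0 + f] = 0.0

    # Time masking: zero a random span of consecutive frames.
    t = rng.integers(0, max_time_mask + 1)
    t0 = rng.integers(0, n_time - t + 1)
    out[t0:t0 + t, :] = 0.0
    return out

def f_specaugment(utterance, win_len=21, hop=1, rng=None):
    """Frame-level SpecAugment sketch: slide a convolution window over
    the utterance and mask each window independently, in contrast to
    utterance-level SpecAugment, which masks the whole spectrogram once."""
    rng = rng or np.random.default_rng()
    windows = []
    for start in range(0, utterance.shape[0] - win_len + 1, hop):
        windows.append(mask_window(utterance[start:start + win_len], rng=rng))
    return np.stack(windows)  # shape: (num_windows, win_len, n_freq)
```

Because each window draws its own masks, two overlapping windows see different corrupted views of the same frames, which is the source of the extra regularization relative to a single utterance-level mask.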