Paper Title
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition
Paper Authors
Abstract
Owing to advances in machine learning and speech processing, speech emotion recognition has become a popular research topic in recent years. However, speech data cannot be protected when it is uploaded to and processed on servers in Internet-of-Things applications of speech emotion recognition. Furthermore, deep neural networks have proven vulnerable to adversarial perturbations that are indistinguishable to humans. Adversarial attacks generated from such perturbations can cause deep neural networks to predict emotional states incorrectly. We propose a novel federated adversarial learning framework that protects both the data and the deep neural networks. The proposed framework consists of i) federated learning for data privacy, and ii) adversarial training at the training stage and randomisation at the testing stage for model robustness. Experiments show that our proposed framework effectively keeps speech data on local devices and improves model robustness against a series of adversarial attacks.
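The two components named in the abstract can be illustrated with a minimal sketch: federated averaging so that raw speech never leaves the clients, a one-step FGSM adversarial example for the training-stage defence, and input noise for the testing-stage randomisation. Everything below is a toy stand-in, not the paper's implementation: the linear model `W`, the choice of FGSM as the training attack, and Gaussian noise as the randomisation are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Federated learning for data privacy (FedAvg-style sketch) ---
# Raw speech stays on each client; only model weights reach the server.
def fed_avg(client_weights):
    """Average locally trained weight vectors into one global model."""
    return np.mean(client_weights, axis=0)

# --- Adversarial training component ---
# Toy linear classifier standing in for the emotion-recognition DNN:
# 40-dim acoustic features -> 4 emotion classes (hypothetical sizes).
W = rng.normal(size=(40, 4))

def input_gradient(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input, linear model."""
    z = x @ W
    p = np.exp(z - z.max())
    p /= p.sum()
    onehot = np.zeros(4)
    onehot[y] = 1.0
    return W @ (p - onehot)

def fgsm(x, y, eps=0.1):
    """One-step FGSM example; perturbation bounded by eps in max-norm."""
    return x + eps * np.sign(input_gradient(x, y))

# --- Testing-stage randomisation ---
def randomise(x, sigma=0.05):
    """Add small random noise so a fixed perturbation no longer aligns
    with the model's input gradient."""
    return x + rng.normal(scale=sigma, size=x.shape)

# Clients train locally (here: random toy weights); the server averages.
clients = [rng.normal(size=160) for _ in range(3)]
global_w = fed_avg(clients)

x = rng.normal(size=40)
x_adv = fgsm(x, y=2)        # adversarial example used during training
x_test = randomise(x_adv)   # randomised input at inference time
```

In this sketch, training would alternate local updates on clean and FGSM-perturbed features before each `fed_avg` round, matching the abstract's split between a training-stage and a testing-stage defence.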