Title
Binaural Signal Representations for Joint Sound Event Detection and Acoustic Scene Classification
Authors
Abstract
Sound event detection (SED) and acoustic scene classification (ASC) are two widely researched audio tasks that constitute an important part of research on acoustic scene analysis. Considering the shared information between sound events and acoustic scenes, performing both tasks jointly is a natural part of a complex machine listening system. In this paper, we investigate the usefulness of several spatial audio features in training a joint deep neural network (DNN) model that performs SED and ASC. Experiments are performed on two different datasets containing binaural recordings and synchronized sound event and acoustic scene labels to analyse the differences between performing SED and ASC separately or jointly. The presented results show that the use of specific binaural features, mainly the Generalized Cross-Correlation with Phase Transform (GCC-PHAT) and the sines and cosines of inter-channel phase differences, results in better-performing models on both the separate and the joint tasks compared with baseline methods based on log-mel energies alone.
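The two binaural features named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the standard GCC-PHAT and phase-difference computations, not the paper's implementation; the frame length, zero-padding factor, and toy 5-sample delay are illustrative assumptions.

```python
import numpy as np

def gcc_phat(x, y):
    """GCC-PHAT between two equal-length frames (e.g. left/right
    binaural channels). Returns the correlation over lags
    -len(x) .. len(x)-1; the peak lag estimates the delay of y
    relative to x."""
    n = 2 * len(x)                 # zero-pad against circular wrap-around
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = Y * np.conj(X)
    R /= np.abs(R) + 1e-12         # phase transform: discard magnitude
    cc = np.fft.irfft(R, n=n)
    return np.roll(cc, len(x))     # move zero lag to the centre

# Toy check: the "right" channel is the "left" channel delayed by 5 samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
y = np.zeros_like(x)
y[5:] = x[:-5]

cc = gcc_phat(x, y)
lags = np.arange(-len(x), len(x))
print(lags[np.argmax(cc)])         # peak lag recovers the 5-sample delay

# Sines/cosines of the inter-channel phase differences, the other
# binaural feature mentioned in the abstract (per frequency bin).
ipd = np.angle(np.fft.rfft(y)) - np.angle(np.fft.rfft(x))
feat = np.stack([np.sin(ipd), np.cos(ipd)])
```

In a DNN pipeline these quantities would typically be computed per STFT frame and stacked alongside log-mel energies as additional input channels.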