Paper Title


A Joint Framework for Audio Tagging and Weakly Supervised Acoustic Event Detection Using DenseNet with Global Average Pooling

Paper Authors

Chieh-Chi Kao, Bowen Shi, Ming Sun, Chao Wang

Paper Abstract


This paper proposes a network architecture designed primarily for audio tagging, which can also be used for weakly supervised acoustic event detection (AED). The proposed network consists of a modified DenseNet as the feature extractor and a global average pooling (GAP) layer to predict frame-level labels at inference time. This architecture is inspired by the work of Zhou et al., a well-known framework that uses GAP to localize visual objects given only image-level labels. While most previous work on weakly supervised AED used recurrent layers with attention-based mechanisms to localize acoustic events, the proposed network directly localizes events using the feature map extracted by DenseNet, without any recurrent layers. In the audio tagging task of DCASE 2017, our method significantly outperforms the state-of-the-art method in F1 score by 5.3% (absolute) on the dev set and 6.0% on the eval set. For the weakly supervised AED task in DCASE 2018, our model outperforms the state-of-the-art method in event-based F1 by 8.1% (absolute) on the dev set and 0.5% on the eval set, using data augmentation and tri-training to leverage unlabeled data.
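The core mechanism the abstract describes — one classifier head that yields clip-level tags via GAP and frame-level event scores via a temporal class activation map — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the feature dimensions, class count, and function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tag_and_localize(feature_map, W, b):
    """Sketch of GAP-based tagging plus weak localization.

    feature_map: (T, C) array, T time frames x C channels from a CNN backbone
    W, b: linear classifier weights (C, K) and bias (K,) for K event classes
    """
    # Audio tagging: average the feature map over time (GAP),
    # then apply the linear classifier to get clip-level probabilities.
    clip_scores = sigmoid(feature_map.mean(axis=0) @ W + b)   # shape (K,)
    # Weakly supervised AED: apply the same classifier weights to every
    # frame (a temporal class activation map), giving per-frame scores
    # without any recurrent layers.
    frame_scores = sigmoid(feature_map @ W + b)               # shape (T, K)
    return clip_scores, frame_scores

# Toy usage with random features standing in for DenseNet output.
rng = np.random.default_rng(0)
feats = rng.standard_normal((240, 128))    # e.g. 240 frames, 128 channels
W = rng.standard_normal((128, 10)) * 0.1   # 10 hypothetical event classes
b = np.zeros(10)
clip_p, frame_p = tag_and_localize(feats, W, b)
print(clip_p.shape, frame_p.shape)
```

Because GAP is linear, averaging the features before the classifier is equivalent to averaging the per-frame logits before the sigmoid, which is why the same weights can serve both the tagging and the localization heads.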
