Paper Title
Visually-aware Acoustic Event Detection using Heterogeneous Graphs
Paper Authors
Paper Abstract
Perception of auditory events is inherently multimodal, relying on both audio and visual cues. Many existing multimodal approaches process each modality with a modality-specific model and then fuse the embeddings to encode the joint information. In contrast, we employ heterogeneous graphs to explicitly capture the spatial and temporal relationships between the modalities and to represent detailed information about the underlying signal. We use heterogeneous graph approaches to address the task of visually-aware acoustic event classification, as graphs offer a compact, efficient and scalable way to represent the data. Through heterogeneous graphs, we show that intra- and inter-modality relationships can be modelled efficiently at both spatial and temporal scales. Our model can easily be adapted to events of different scales through the relevant hyperparameters. Experiments on AudioSet, a large-scale benchmark, show that our model achieves state-of-the-art performance.
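As a rough illustration of the kind of audio-visual heterogeneous graph described in the abstract, the sketch below assembles one with PyTorch Geometric's HeteroData, using intra-modality (temporal) edges and inter-modality (audio-video) edges. The node counts, feature dimensions, edge-type names, and the nearest-frame alignment rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of building an audio-visual
# heterogeneous graph. Assumes torch and torch_geometric are installed.
import torch
from torch_geometric.data import HeteroData


def build_av_graph(audio_feats: torch.Tensor, video_feats: torch.Tensor) -> HeteroData:
    """audio_feats: [T_a, D_a] frame-level audio embeddings,
    video_feats:  [T_v, D_v] frame-level visual embeddings."""
    data = HeteroData()
    data['audio'].x = audio_feats  # one node per audio frame
    data['video'].x = video_feats  # one node per video frame

    # Intra-modality (temporal) edges: connect consecutive frames within a modality.
    t_a = audio_feats.size(0)
    src_a = torch.arange(t_a - 1)
    data['audio', 'temporal', 'audio'].edge_index = torch.stack([src_a, src_a + 1])

    t_v = video_feats.size(0)
    src_v = torch.arange(t_v - 1)
    data['video', 'temporal', 'video'].edge_index = torch.stack([src_v, src_v + 1])

    # Inter-modality edges: link each video frame to the temporally closest audio
    # frame (a simple alignment heuristic; the paper's actual edge rule may differ).
    v_idx = torch.arange(t_v)
    a_idx = torch.clamp((v_idx.float() * t_a / t_v).long(), max=t_a - 1)
    data['video', 'sync', 'audio'].edge_index = torch.stack([v_idx, a_idx])
    return data


# Example: 100 audio frames (128-d) and 25 video frames (512-d) for one clip.
graph = build_av_graph(torch.randn(100, 128), torch.randn(25, 512))
print(graph)
```

Such a graph could then be fed to a heterogeneous message-passing layer (e.g. PyTorch Geometric's HeteroConv) followed by pooling and a classifier; changing how many frames each node covers, or how far the temporal edges reach, is one way hyperparameters could control the scale of events the model captures.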