Title
Learning Acoustic Scattering Fields for Dynamic Interactive Sound Propagation
Authors
Abstract
We present a novel hybrid sound propagation algorithm for interactive applications. Our approach is designed for dynamic scenes and uses a neural network-based learned scattering field representation along with ray tracing to efficiently generate specular, diffuse, diffraction, and occlusion effects. We use geometric deep learning to approximate the acoustic scattering field using spherical harmonics. We train on a large 3D dataset and compare the accuracy of the learned fields against ground truth generated using an accurate wave-based solver. The additional overhead of computing the learned scattering field at runtime is small, and we demonstrate interactive performance by generating plausible sound effects in dynamic scenes with diffraction and occlusion. We demonstrate the perceptual benefits of our approach with an audio-visual user study.
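To illustrate the spherical-harmonic representation the abstract refers to, here is a minimal sketch of how a directional scattering field can be projected onto a low-order real spherical-harmonic basis and evaluated in an arbitrary direction. This is not the paper's method (there, a neural network predicts the coefficients from scene geometry); the basis order (degree 1), the quadrature resolution, and the function names `real_sh_basis`, `project`, and `eval_field` are illustrative assumptions.

```python
import math

def real_sh_basis(theta, phi):
    """Real spherical harmonics up to degree l = 1 (4 coefficients).
    theta is the polar angle, phi the azimuth."""
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    return [
        0.5 * math.sqrt(1.0 / math.pi),        # Y_0^0  (constant term)
        math.sqrt(3.0 / (4.0 * math.pi)) * y,  # Y_1^{-1}
        math.sqrt(3.0 / (4.0 * math.pi)) * z,  # Y_1^{0}
        math.sqrt(3.0 / (4.0 * math.pi)) * x,  # Y_1^{1}
    ]

def project(field, n_theta=64, n_phi=128):
    """Project a directional field (callable of theta, phi) onto the
    SH basis with a midpoint-rule quadrature over the sphere."""
    coeffs = [0.0] * 4
    dth = math.pi / n_theta
    dph = 2.0 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        w = math.sin(th) * dth * dph  # sphere area element
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            val = field(th, ph)
            for k, b in enumerate(real_sh_basis(th, ph)):
                coeffs[k] += val * b * w
    return coeffs

def eval_field(coeffs, theta, phi):
    """Reconstruct the field value in direction (theta, phi)
    from its spherical-harmonic coefficients."""
    return sum(c * b for c, b in zip(coeffs, real_sh_basis(theta, phi)))
```

For example, projecting a field that is loudest toward the +z axis, `lambda th, ph: 1.0 + math.cos(th)`, yields four coefficients from which `eval_field` recovers a value near 2 at the north pole and near 0 at the south pole. In the paper's pipeline these compact coefficient vectors are what the learned model produces, making the representation cheap to evaluate inside a ray tracer at runtime.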