Paper Title
Robust DNN Surrogate Models with Uncertainty Quantification via Adversarial Training
Paper Authors
Paper Abstract
For computational efficiency, surrogate models have been used to emulate mathematical simulators of physical or biological processes. High-speed emulation is crucial for conducting uncertainty quantification (UQ) when the simulation is repeated over many randomly sampled input points (a.k.a. the Monte Carlo method). In some cases, UQ is only feasible with a surrogate model. Recently, Deep Neural Network (DNN) surrogate models have gained popularity for their hard-to-match emulation accuracy. However, DNNs are well known to be prone to errors when input data are perturbed in particular ways, which is the very motivation for adversarial training. In the surrogate-modeling setting, the concern is less about a deliberate attack and more about the high sensitivity of the DNN's accuracy to input directions, an issue largely ignored by researchers who use emulation models. In this paper, we show the severity of this issue through empirical studies and hypothesis testing. Furthermore, we adopt methods from adversarial training to enhance the robustness of DNN surrogate models. Experiments demonstrate that our approaches significantly improve the robustness of the surrogate models without compromising emulation accuracy.
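
The abstract does not specify which adversarial-training method is used, so the following is only a minimal sketch of the general idea: FGSM-style adversarial training of a DNN surrogate on a regression task, written in PyTorch. The network architecture, the perturbation budget `epsilon`, the mixing weight between clean and adversarial losses, and the toy `simulator` target function are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumptions, not the authors' exact method): FGSM-style
# adversarial training of a DNN surrogate for a regression task.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an expensive simulator: y = sum(sin(x)) over a 4-D input.
def simulator(x):
    return torch.sin(x).sum(dim=1, keepdim=True)

x_train = torch.rand(2048, 4) * 2 * math.pi
y_train = simulator(x_train)

surrogate = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
epsilon = 0.05  # assumed perturbation budget

for epoch in range(200):
    # Loss on the clean (unperturbed) inputs.
    clean_loss = loss_fn(surrogate(x_train), y_train)

    # FGSM step: perturb inputs in the direction that most increases the loss.
    x_adv = x_train.clone().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(surrogate(x_adv), y_train), x_adv)
    x_adv = (x_train + epsilon * grad.sign()).detach()

    # Standard adversarial-training objective: labels are held fixed, assuming
    # the true simulator output changes little within the epsilon ball.
    adv_loss = loss_fn(surrogate(x_adv), y_train)
    loss = 0.5 * clean_loss + 0.5 * adv_loss

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Stronger attacks (e.g., multi-step PGD) can be substituted for the single FGSM step at higher training cost; the abstract does not say which variant the authors adopt.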