Paper Title
Noisy Learning for Neural ODEs Acts as a Robustness Locus Widening
Paper Authors
Paper Abstract
We investigate the problems and challenges of evaluating the robustness of Differential Equation-based (DE) networks against synthetic distribution shifts. We propose a novel and simple accuracy metric that can be used to evaluate intrinsic robustness and to validate dataset corruption simulators. We also propose methodology recommendations for evaluating the many facets of neural DEs' robustness and for rigorously comparing them with their discrete counterparts. We then use these criteria to evaluate a cheap data augmentation technique as a reliable way to demonstrate the natural robustness of neural ODEs against simulated image corruptions across multiple datasets.
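As a rough illustration of what such noise-based augmentation might look like in practice, the sketch below trains a toy neural ODE classifier with Gaussian input noise added to each batch. Everything here is an assumption for illustration only: the architecture, the fixed-step RK4 solver, the `noise_std` value, and the names `ODEFunc`, `NeuralODEClassifier`, and `train_step` are not taken from the paper.

```python
# Minimal sketch: neural ODE classifier trained with Gaussian input-noise
# augmentation ("noisy learning"). Illustrative assumptions throughout; not
# the paper's exact architecture, solver, or hyperparameters.
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Vector field f(t, h) parameterising dh/dt."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)


def rk4_integrate(func: nn.Module, h0: torch.Tensor, t0=0.0, t1=1.0, steps=10):
    """Fixed-step RK4 solver; a simple stand-in for an adaptive ODE solver."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        t = torch.tensor(t0 + i * dt)
        k1 = func(t, h)
        k2 = func(t + dt / 2, h + dt / 2 * k1)
        k3 = func(t + dt / 2, h + dt / 2 * k2)
        k4 = func(t + dt, h + dt * k3)
        h = h + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return h


class NeuralODEClassifier(nn.Module):
    """Encoder -> ODE block -> linear classification head."""

    def __init__(self, in_dim: int, hidden: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden)
        self.odefunc = ODEFunc(hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.tanh(self.encoder(x.flatten(1)))
        h = rk4_integrate(self.odefunc, h)
        return self.head(h)


def train_step(model, optimizer, x, y, noise_std=0.1):
    """One optimisation step with Gaussian input-noise augmentation."""
    model.train()
    x_noisy = x + noise_std * torch.randn_like(x)  # the "noisy learning" step
    loss = nn.functional.cross_entropy(model(x_noisy), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Dummy MNIST-sized batch, purely to show the training call.
    model = NeuralODEClassifier(in_dim=28 * 28, hidden=64, n_classes=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
    print(train_step(model, opt, x, y))
```

The noise injection is confined to `train_step`, so the same model can be trained with or without augmentation and then evaluated on clean and corrupted test sets to compare robustness.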