Paper Title
Robust Tracking against Adversarial Attacks
Paper Authors
Paper Abstract
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, few efforts have been made to construct robust deep tracking algorithms against adversarial attacks. Current studies on adversarial attack and defense mainly focus on single images. In this work, we first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks. To this end, we take temporal motion into consideration when generating lightweight perturbations over the estimated tracking results frame by frame. On the one hand, we add the temporal perturbations into the original video sequences as adversarial examples to greatly degrade the tracking performance. On the other hand, we sequentially estimate the perturbations from input sequences and learn to eliminate their effect for performance restoration. We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms. Extensive evaluations on benchmark datasets demonstrate that our defense method not only eliminates the large performance drops caused by adversarial attacks, but also achieves additional performance gains when deep trackers are not under adversarial attacks.
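To make the attack/defense idea concrete, the following is a minimal, illustrative sketch (not the paper's actual algorithm) of the two directions the abstract describes: an FGSM-style lightweight perturbation with a temporal carry-over term applied frame by frame, and a defense that subtracts an estimated perturbation to restore the frame. The function names, the momentum term, and the `eps` budget are all assumptions introduced here for illustration; in the real method the perturbation is learned from the tracker's loss on video sequences.

```python
import numpy as np

def attack_frame(frame, loss_grad, prev_pert=None, eps=8 / 255, momentum=0.9):
    """Illustrative sketch: FGSM-style perturbation with temporal carry-over.

    frame     -- current video frame, float array in [0, 1]
    loss_grad -- gradient of the tracking loss w.r.t. the frame pixels
    prev_pert -- perturbation from the previous frame (temporal motion cue)
    """
    step = eps * np.sign(loss_grad)  # single-step gradient-sign perturbation
    if prev_pert is None:
        pert = step
    else:
        # Blend in the previous frame's perturbation to model temporal continuity
        pert = momentum * prev_pert + (1 - momentum) * step
    pert = np.clip(pert, -eps, eps)            # keep the perturbation lightweight
    adv = np.clip(frame + pert, 0.0, 1.0)      # stay in the valid pixel range
    return adv, pert

def defend_frame(adv_frame, est_pert):
    """Illustrative defense: subtract an estimated perturbation to restore the frame."""
    return np.clip(adv_frame - est_pert, 0.0, 1.0)

# Usage sketch with random data standing in for a frame and a loss gradient
rng = np.random.default_rng(0)
frame = rng.random((8, 8, 3))
grad = rng.standard_normal((8, 8, 3))
adv, pert = attack_frame(frame, grad)
restored = defend_frame(adv, pert)
```

In this sketch the perturbation budget `eps` bounds the per-pixel change, so the adversarial frame stays visually close to the original while the defense, given a good perturbation estimate, recovers the frame up to boundary clipping.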