Paper Title
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders
Paper Authors
Paper Abstract
Visual-based recommender systems (VRSs) enhance recommendation performance by integrating users' feedback with the visual features of product images extracted from a deep neural network (DNN). Recently, human-imperceptible image perturbations, defined as \textit{adversarial attacks}, have been demonstrated to alter the recommendation performance of VRSs, e.g., pushing/nuking a category of products. However, although adversarial training techniques have proven successful at robustifying DNNs while preserving classification accuracy, to the best of our knowledge, two important questions have not been investigated yet: 1) How well can these defensive mechanisms protect the performance of VRSs? 2) What are the reasons behind ineffective/effective defenses? To answer these questions, we define a set of defense and attack settings, as well as recommender models, to empirically investigate the efficacy of defensive mechanisms. The results indicate alarming risks in protecting a VRS through DNN robustification. Our experiments shed light on the importance of visual features in very effective attack scenarios. Given the financial impact of VRSs on many companies, we believe this work might raise the need to investigate how to successfully protect visual-based recommenders. Source code and data are available at https://anonymous.4open.science/r/868f87ca-c8a4-41ba-9af9-20c41de33029/.
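The attack the abstract describes perturbs a product image within a human-imperceptible budget so that the visual features the recommender consumes shift in the attacker's favor. A minimal FGSM-style sketch of that idea, using a linear stand-in for the DNN feature extractor (the weights, epsilon budget, and push-the-score objective are illustrative assumptions, not the paper's actual setup):

```python
# Hedged sketch: an FGSM-style push attack on a linear stand-in for a
# DNN feature extractor. All concrete values below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(16)     # toy "visual feature" scoring direction
image = rng.random(16)          # flattened product image, pixels in [0, 1]

# Attacker goal: increase the item's score w @ image (push the product).
grad = w                        # d(score)/d(image) for a linear model

epsilon = 8 / 255               # human-imperceptible budget (assumption)
adversarial = np.clip(image + epsilon * np.sign(grad), 0.0, 1.0)

print(w @ adversarial >= w @ image)  # -> True: the score never decreases
```

Each pixel moves by at most epsilon, so the perturbation stays visually negligible while the score the recommender sees moves monotonically in the attacker's direction; adversarial training, the defense the paper evaluates, retrains the extractor on exactly such perturbed inputs.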