Paper Title

Revisiting Adversarially Learned Injection Attacks Against Recommender Systems

Paper Authors

Jiaxi Tang, Hongyi Wen, Ke Wang

Paper Abstract

Recommender systems play an important role in modern information and e-commerce applications. While increasing research is dedicated to improving the relevance and diversity of recommendations, the potential risks of state-of-the-art recommendation models are under-explored: these models could be subject to attacks from malicious third parties that inject fake user interactions to achieve their goals. This paper revisits the adversarially learned injection attack problem, where the injected fake user "behaviors" are learned locally by the attackers with their own model -- one that is potentially different from the model under attack, but shares similar properties to allow the attack to transfer. We found that most existing works in the literature suffer from two major limitations: (1) they do not solve the optimization problem precisely, making the attack less harmful than it could be, and (2) they assume perfect knowledge for the attack, leading to a lack of understanding of realistic attack capabilities. We demonstrate that generating fake users by solving the optimization problem exactly can lead to a much larger impact. Our experiments on a real-world dataset reveal important properties of the attack, including attack transferability and its limitations. These findings can inspire useful defensive methods against this kind of attack.
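
For concreteness, the threat model the abstract describes can be framed as a bi-level optimization: the attacker treats the fake users' interaction vectors as decision variables, trains a local surrogate recommender on the poisoned data (inner problem), and adjusts the fake interactions by gradient descent so that the retrained surrogate promotes a target item (outer problem). The following minimal PyTorch sketch illustrates this framing under stated assumptions; it is not the paper's implementation. The plain matrix-factorization surrogate, the random toy data, and all names and hyperparameters (fake_logits, target_item, the five unrolled inner training steps, etc.) are illustrative choices.

# Hypothetical sketch of an adversarially learned injection attack against a
# local surrogate recommender. Not the paper's code; all settings are toy.
import torch

torch.manual_seed(0)
n_users, n_items, dim = 100, 50, 8
n_fake, target_item = 5, 7

# Placeholder "public" interactions the attacker is assumed to observe.
real = (torch.rand(n_users, n_items) < 0.1).float()

# Relaxed (continuous) fake-user interaction vectors, learned by the attacker.
fake_logits = torch.zeros(n_fake, n_items, requires_grad=True)
outer_opt = torch.optim.Adam([fake_logits], lr=0.1)

for outer_step in range(20):
    fake = torch.sigmoid(fake_logits)
    poisoned = torch.cat([real, fake])  # dataset after injection

    # Inner problem: train a small matrix-factorization surrogate on the
    # poisoned data, unrolled so gradients flow back to the fake users.
    U = (torch.randn(n_users + n_fake, dim) * 0.1).requires_grad_()
    V = (torch.randn(n_items, dim) * 0.1).requires_grad_()
    for _ in range(5):
        recon = ((U @ V.T - poisoned) ** 2).mean()
        gU, gV = torch.autograd.grad(recon, [U, V], create_graph=True)
        U, V = U - 1.0 * gU, V - 1.0 * gV

    # Outer objective: make the retrained surrogate score the target item
    # highly for the real users.
    scores = U[:n_users] @ V.T
    attack_loss = -scores[:, target_item].mean()
    outer_opt.zero_grad()
    attack_loss.backward()
    outer_opt.step()

# Round the relaxed interactions to discrete fake "behaviors" to inject.
fake_users = (torch.sigmoid(fake_logits) > 0.5).int()
print(fake_users.sum(dim=1))  # items each fake user interacts with

The design point this sketch mirrors is the paper's first critique: the fake interactions are optimized through the surrogate's training rather than constructed heuristically, which is what "solving the optimization problem exactly" refers to. Attack transfer then hinges on the surrogate sharing properties with the (unknown) model under attack.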
