Paper Title
A Black-Box Attack Model for Visually-Aware Recommender Systems
Paper Authors
Paper Abstract
Due to the advances in deep learning, visually-aware recommender systems (RS) have recently attracted increased research interest. Such systems combine collaborative signals with images, usually represented as feature vectors output by pre-trained image models. Since item catalogs can be huge, recommendation service providers often rely on images that are supplied by the item providers. In this work, we show that relying on such external sources can make an RS vulnerable to attacks, where the goal of the attacker is to unfairly promote certain pushed items. Specifically, we demonstrate how a new visual attack model can effectively influence item scores and rankings in a black-box setting, i.e., without knowing the parameters of the model. The main underlying idea is to systematically create small, human-imperceptible perturbations of the pushed item's image and to devise appropriate gradient approximation methods to incrementally raise the pushed item's score. Experimental evaluations on two datasets show that the novel attack model is effective even when the contribution of the visual features to the overall performance of the recommender system is modest.
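The core mechanism the abstract describes, estimating gradients from score queries alone and taking small, bounded perturbation steps, can be illustrated with a minimal sketch. This is not the paper's actual method: the scoring function here is a hypothetical stand-in (a hidden linear model) for the recommender's item scorer, and the gradient estimator is a generic zeroth-order (NES-style) finite-difference scheme, assumed only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box scorer: the attacker can only query item scores.
# A hidden linear model stands in for the real visually-aware RS.
_hidden_w = rng.normal(size=64)

def query_score(image):
    return float(_hidden_w @ image)

def estimate_gradient(image, n_samples=50, sigma=0.01):
    # Zeroth-order gradient estimate from paired (antithetic) score queries.
    grad = np.zeros_like(image)
    for _ in range(n_samples):
        u = rng.normal(size=image.shape)
        delta = query_score(image + sigma * u) - query_score(image - sigma * u)
        grad += delta / (2.0 * sigma) * u
    return grad / n_samples

def attack(image, eps=0.05, step=0.01, iters=20):
    # Keep the total perturbation inside an L-inf ball of radius eps so it
    # stays small, while incrementally raising the pushed item's score.
    adv = image.copy()
    for _ in range(iters):
        g = estimate_gradient(adv)
        adv = adv + step * np.sign(g)              # sign-gradient ascent step
        adv = np.clip(adv, image - eps, image + eps)
    return adv

img = rng.uniform(0.0, 1.0, size=64)
adv = attack(img)
print(query_score(img), query_score(adv))  # the perturbed score is higher
```

The sign-step plus clipping keeps the perturbation within a fixed budget regardless of how many iterations run, which is what makes the change imperceptible while the score keeps climbing.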