Academic Paper

Revisiting Data Poisoning Attacks on Deep Learning Based Recommender Systems
Document Type
Conference
Source
2023 IEEE Symposium on Computers and Communications (ISCC), pp. 1261-1267, Jul. 2023
Subject
Aerospace
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Fields, Waves and Electromagnetics
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Deep learning
Computers
Sensitivity
Buildings
Robustness
Recommender systems
data poisoning attacks
adversarial machine learning
Language
English
ISSN
2642-7389
Abstract
Deep learning based recommender systems (DLRS) are one of the up-and-coming classes of recommender systems, and their robustness is crucial for building trustworthy recommender systems. However, recent studies have demonstrated that DLRS are vulnerable to data poisoning attacks. Specifically, an unpopular item can be promoted to regular users by injecting well-crafted fake user profiles into the victim recommender system. In this paper, we revisit data poisoning attacks on DLRS and find that state-of-the-art attacks suffer from two issues: they are user-agnostic, and they are either fake-user-unitary or target-item-agnostic, which reduces the effectiveness of promotion attacks. To bridge these two limitations, we propose an improved method, Generate Targeted Attacks (GTA), to implement targeted attacks on vulnerable users defined by user intent and sensitivity. We initialize the fake users by adding seed items to address their cold-start problem, so that we can implement targeted attacks. Our extensive experiments on two real-world datasets demonstrate the effectiveness of GTA.
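The abstract describes promotion attacks that inject fake user profiles containing seed items plus a target item. The sketch below illustrates, in general terms, how such poisoned interactions might be constructed for an implicit-feedback recommender; it is a hypothetical example, not the paper's GTA implementation, and all names (make_fake_profiles, seed_items, n_filler, etc.) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of a promotion-style data poisoning attack:
# each injected fake user interacts with a few "seed" items (to mimic a
# targeted user segment), some random filler items, and the target item.
import random
from typing import List, Tuple

def make_fake_profiles(
    n_fake_users: int,
    target_item: int,
    seed_items: List[int],
    n_filler: int,
    n_items: int,
    first_fake_uid: int,
) -> List[Tuple[int, int]]:
    """Return (user_id, item_id) interactions for the injected fake users."""
    interactions = []
    for offset in range(n_fake_users):
        uid = first_fake_uid + offset
        # Seed items anchor the fake user to the targeted user group,
        # mitigating the cold-start problem of freshly injected profiles.
        profile = set(seed_items)
        # Filler items add noise so the profile looks less suspicious.
        while len(profile) < len(seed_items) + n_filler:
            profile.add(random.randrange(n_items))
        profile.add(target_item)  # the item being promoted
        interactions.extend((uid, item) for item in profile)
    return interactions

# Example: inject 50 fake users promoting item 1234 into a 10,000-item catalog.
poisoned = make_fake_profiles(
    n_fake_users=50, target_item=1234, seed_items=[17, 42, 99],
    n_filler=20, n_items=10_000, first_fake_uid=100_000,
)
# `poisoned` would then be appended to the victim system's training interactions.
```

In this reading, the seed items play the role the abstract assigns them: they give otherwise cold-start fake users a recognizable interaction history so the promoted item is surfaced to the intended (vulnerable) user group rather than to users at random.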