Academic Paper

Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation
Document Type
Conference
Source
2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 288-301, Jun. 2023
Subject
Aerospace
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineering Profession
Robotics and Control Systems
Transportation
Keywords
Training
Privacy
Adaptation models
Federated learning
Perturbation methods
Predictive models
Data models
Language
English
ISSN
2158-3927
Abstract
Membership inference (MI) attacks are more diverse in a Federated Learning (FL) setting, because the adversary may be an FL client, the server, or an external attacker. Existing defenses against MI attacks rely on perturbing either the model's output predictions or the training process. However, output perturbations are ineffective in an FL setting, because a malicious server can access the model before output perturbation is applied, while training perturbations struggle to achieve good utility. This paper proposes a novel defense, called CIP, that fortifies FL against MI attacks via client-level input perturbation during both the training and inference procedures. The key insight is to shift each client's local data distribution via a personalized perturbation, yielding a shifted model. CIP achieves a good balance between privacy and utility: our evaluation shows that CIP reduces attack success to random guessing while causing at most a 0.7% accuracy drop.
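The mechanism the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation; the class name `CIPClient`, the uniform-noise perturbation, and the seeding-by-client-id scheme are all illustrative assumptions. The sketch only shows the core idea: each client derives a fixed, personalized perturbation and applies the same shift to its inputs at both training and inference time, so the model it trains and queries sees a consistently shifted local distribution.

```python
import random

class CIPClient:
    """Illustrative sketch (not the authors' code): a client-level
    input perturbation, personalized per client and reused unchanged
    for both training and inference."""

    def __init__(self, client_id, input_dim, scale=0.1):
        # Personalized perturbation: deterministic per client, so the
        # same shift is applied throughout training and inference.
        rng = random.Random(client_id)
        self.delta = [rng.uniform(-scale, scale) for _ in range(input_dim)]

    def perturb(self, x):
        # Shift an input sample by this client's fixed perturbation,
        # moving the client's local data distribution.
        return [xi + di for xi, di in zip(x, self.delta)]

# Three clients each shift the same sample differently.
clients = [CIPClient(cid, input_dim=4) for cid in range(3)]
sample = [1.0, 2.0, 3.0, 4.0]
shifted = [c.perturb(sample) for c in clients]
```

Because the perturbation is derived deterministically from the client's identity, re-instantiating a client reproduces the same shift, which is what lets the shifted model remain usable at inference time.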