Academic Article

Two-Level Privacy-Preserving Framework: Federated Learning for Attack Detection in the Consumer Internet of Things
Document Type
Periodical
Source
IEEE Transactions on Consumer Electronics, 70(1):4258-4265, Feb. 2024
Subject
Power, Energy and Industry Applications
Components, Circuits, Devices and Systems
Fields, Waves and Electromagnetics
Privacy
Security
Data privacy
Cryptography
Data models
Computational modeling
Servers
FL
PHE
ConsumerIoT
privacy
attack detection
Language
English
ISSN
0098-3063 (Print)
1558-4127 (Electronic)
Abstract
As the adoption of Consumer Internet of Things (CIoT) devices surges, so do concerns about security vulnerabilities and privacy breaches. Given their integration into daily life and their data collection capabilities, it is crucial to proactively safeguard user privacy against unauthorized access and potential leaks. Federated learning, an advanced machine learning paradigm, offers a promising solution: it inherently prioritizes privacy by circumventing the need for centralized data collection, thereby bolstering security. Yet federated learning opens up avenues for adversaries to extract critical information from the machine learning model through data leakage and model inference attacks targeted at the central server. In response to this concern, we present an innovative two-level privacy-preserving framework in this paper. The framework synergistically combines federated learning with partially homomorphic encryption, which we favor over alternatives such as fully homomorphic encryption and differential privacy. Our preference for partially homomorphic encryption is based on its superior balance between computational efficiency and model performance, an advantage that becomes particularly relevant given the intense computational demands of fully homomorphic encryption and the sacrifice in model accuracy often associated with differential privacy. Incorporating partially homomorphic encryption augments federated learning's privacy assurance by introducing an additional protective layer: its homomorphic properties enable the central server to aggregate and compute over the encrypted local models without decryption, protecting sensitive information from potential exposure. Empirical results substantiate the efficacy of the proposed framework, which significantly reduces attack prediction error rates and false alarms compared with conventional methods. Moreover, through security analysis, we show that the proposed framework provides stronger privacy than existing methods that deploy federated learning for attack detection.
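To illustrate the aggregation step the abstract describes, the sketch below shows how an additively homomorphic scheme lets a central server average encrypted client updates without ever decrypting them. This is a minimal, hypothetical example rather than the authors' implementation: it assumes the Paillier cryptosystem via the Python `phe` library, plain FedAvg-style averaging with equal client weights, and toy one-dimensional parameter vectors; the paper's actual model architecture, cipher parameters, and aggregation rule may differ.

```python
# Minimal sketch of PHE-based secure aggregation for federated learning.
# Assumptions (not taken from the paper): Paillier via the `phe` library
# (pip install phe), equal-weight FedAvg, and toy 1-D parameter vectors.

import numpy as np
from phe import paillier

# A key authority (or the clients jointly) generates the keypair; the
# central server only ever sees the public key and ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)


def client_encrypt(local_weights):
    """Client side: encrypt each local model parameter before upload."""
    return [public_key.encrypt(float(w)) for w in local_weights]


def server_aggregate(encrypted_updates):
    """Server side: average encrypted updates without decrypting.

    Paillier is additively homomorphic, so ciphertexts can be summed and
    scaled by a plaintext constant directly.
    """
    num_clients = len(encrypted_updates)
    aggregated = encrypted_updates[0]
    for update in encrypted_updates[1:]:
        aggregated = [a + b for a, b in zip(aggregated, update)]
    return [c * (1.0 / num_clients) for c in aggregated]


if __name__ == "__main__":
    # Three toy clients, each holding a 4-parameter "model".
    local_models = [np.random.randn(4) for _ in range(3)]

    encrypted = [client_encrypt(m) for m in local_models]
    encrypted_global = server_aggregate(encrypted)

    # Decryption happens outside the server (e.g., back at the clients).
    global_model = [private_key.decrypt(c) for c in encrypted_global]
    print("Plain FedAvg :", np.mean(local_models, axis=0))
    print("PHE FedAvg   :", np.round(global_model, 6))
```

Because the server operates only on ciphertexts, a compromised aggregator learns neither the individual client updates nor the aggregated model, which is the additional protective layer the two-level framework refers to.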