Academic Paper

A Survey of Security Protection Methods for Deep Learning Model
Document Type
Periodical
Source
IEEE Transactions on Artificial Intelligence, 5(4):1533-1553, Apr. 2024
Subject
Computing and Processing
Data models
Security
Training
Computational modeling
Data privacy
Artificial intelligence
Mobile handsets
deep learning (DL)
defense method
security
ISSN
2691-4581
Abstract
In recent years, deep learning (DL) models have attracted widespread attention. Owing to its strong representation-learning capability, DL has been successfully applied in fields such as object detection, superresolution reconstruction, speech recognition, and natural language processing, bringing high efficiency to industrial production and daily life. Meanwhile, new technologies such as the Internet of Things and 6G have been proposed, leading to exponential growth in data volume. DL models currently suffer from security issues, such as privacy risks during data collection and defense problems during model training and deployment. Sensitive data from users and special institutions that are used directly as training data for DL models may lead to information leakage and serious privacy problems. In addition, DL models face many malicious attacks in the real world, such as poisoning attacks, exploratory attacks, and adversarial attacks, which cause model security problems. Therefore, this article discusses ways of ensuring the security and data privacy of DL models under diversified attack methods, as well as ways of ensuring the privacy of edge mobile devices equipped with pretrained deep neural networks. Furthermore, this article analyzes the privacy and security of DL models on typical deployment platforms, such as server/cloud, edge mobile device, and web browser, and then summarizes future research directions.
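To make the adversarial-attack threat mentioned in the abstract concrete, the following is a minimal, self-contained sketch of a gradient-sign (FGSM-style) evasion attack. It is not from the surveyed article: the logistic-regression "model" and all weights here are hypothetical, chosen only so the input gradient has a simple closed form; real attacks target deep networks via automatic differentiation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb x along the sign of the loss gradient w.r.t. the input.

    For a logistic model p = sigmoid(w.x + b) with binary cross-entropy
    loss, dL/dx = (p - y) * w, so the FGSM step is x + eps * sign(dL/dx).
    """
    p = sigmoid(w @ x + b)       # model's confidence in class 1
    grad_x = (p - y) * w         # closed-form input gradient of the loss
    return x + eps * np.sign(grad_x)

# Hypothetical model parameters and a clean input of true class y = 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.1, 0.4])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
# The small perturbation increases the loss, so the model's confidence
# in the true class drops for x_adv relative to x.
```

Defenses surveyed in this area (e.g., adversarial training) typically fold such perturbed examples back into the training set so the model learns to resist them.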