Academic Article

Differentially Private Federated Learning for Multitask Objective Recognition
Document Type
Periodical
Source
IEEE Transactions on Industrial Informatics, 20(5):7269-7281, May 2024
Subject
Power, Energy and Industry Applications
Signal Processing and Analysis
Computing and Processing
Communication, Networking and Broadcast Technologies
Task analysis
Computational modeling
Optimization
Federated learning
Deep learning
Training
Servers
Differential privacy (DP)
federated learning (FL)
multiobjective optimization (MOO)
objective recognition
Language
English
ISSN
1551-3203
1941-0050
Abstract
Many machine learning models are naturally multitask, involving both regression and classification tasks, and can be trained with a multitask network to yield a more generalized model by exploiting correlated features. When such models are deployed on Internet-of-Things devices, computational efficiency and data privacy pose significant challenges to developing a federated learning (FL) algorithm that achieves both high learning performance and strong privacy protection. In this article, a new FL framework is proposed for a class of multitask learning problems with a hard parameter-sharing model, in which the learning tasks are reformulated as a multiobjective optimization problem for better performance. Specifically, the stochastic multiple gradient descent approach and differential privacy are integrated into this FL algorithm to achieve a Pareto optimality that obtains a good tradeoff among the different learning tasks while providing data protection. The outstanding performance of this algorithm is demonstrated by empirical experiments on the MultiMNIST, Chinese city parking, and Cityscapes datasets.
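The abstract describes combining per-task gradients via multiple gradient descent (MGDA) and sanitizing the resulting update with differential privacy before it leaves the client. The sketch below is not the authors' implementation; it is a minimal illustration, assuming a two-task case with the closed-form min-norm MGDA weighting and Gaussian-mechanism noise. The names mgda_two_task, dp_sanitize, clip_norm, and noise_mult are illustrative, not taken from the paper.

```python
# Minimal sketch (illustrative only): two-task min-norm gradient combination
# followed by clipping and Gaussian noise, as in DP federated updates.
import numpy as np

def mgda_two_task(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Closed-form min-norm combination of two task gradients.

    Returns d = a*g1 + (1-a)*g2 with a in [0, 1] chosen to minimize ||d||,
    giving a common descent direction toward a Pareto-stationary point.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:                      # identical gradients: nothing to balance
        return g1
    a = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return a * g1 + (1.0 - a) * g2

def dp_sanitize(g: np.ndarray, clip_norm: float, noise_mult: float,
                rng: np.random.Generator) -> np.ndarray:
    """Clip the update to bound its sensitivity, then add Gaussian noise."""
    norm = float(np.linalg.norm(g))
    g = g * min(1.0, clip_norm / (norm + 1e-12))
    return g + rng.normal(0.0, noise_mult * clip_norm, size=g.shape)

# Toy usage: one local client step on a shared parameter vector.
rng = np.random.default_rng(0)
theta = np.zeros(10)
g_task1 = rng.normal(size=10)             # stand-in gradient for task 1 loss
g_task2 = rng.normal(size=10)             # stand-in gradient for task 2 loss
update = dp_sanitize(mgda_two_task(g_task1, g_task2),
                     clip_norm=1.0, noise_mult=0.5, rng=rng)
theta -= 0.1 * update                     # noisy update is what the server would aggregate
```

In this sketch the min-norm weight balances the two objectives so neither task's loss is increased in expectation, while clipping and noise addition are what make the shared update differentially private; the actual algorithm in the paper extends this idea to more tasks and to the full federated training loop.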