Academic Paper

Efficient Privacy-Preserving Stochastic Nonconvex Optimization
Document Type
Working Paper
Source
Subject
Computer Science - Machine Learning
Computer Science - Cryptography and Security
Mathematics - Optimization and Control
Statistics - Machine Learning
Language
Abstract
While many solutions for privacy-preserving convex empirical risk minimization (ERM) have been developed, privacy-preserving nonconvex ERM remains a challenge. We study nonconvex ERM, which takes the form of minimizing a finite sum of nonconvex loss functions over a training set. We propose a new differentially private stochastic gradient descent algorithm for nonconvex ERM that achieves strong privacy guarantees efficiently, and we provide a tight analysis of its privacy and utility guarantees, as well as its gradient complexity. Our algorithm reduces gradient complexity while improving on the best previous utility guarantee, given by Wang et al. (NeurIPS 2017). Our experiments on benchmark nonconvex ERM problems demonstrate superior performance in terms of both training cost and utility gains compared with previous differentially private methods under the same privacy budgets.
Comment: 29 pages, 5 figures, 3 tables. This version corrects a miscalculation in the previous proof, resulting in an improved utility bound for the algorithm
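The abstract describes a differentially private SGD algorithm for nonconvex ERM. As background, the standard DP-SGD template (per-example gradient clipping followed by Gaussian noise injection) can be sketched as below; this is a minimal illustrative sketch of that general template, not the paper's specific algorithm, and the function name and parameters (`clip_norm`, `noise_multiplier`) are assumptions chosen for the example.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One sketch of a DP-SGD update (illustrative, not the paper's exact method).

    Each per-example gradient is clipped to L2 norm `clip_norm` to bound
    sensitivity, the clipped gradients are averaged, and Gaussian noise
    proportional to `noise_multiplier * clip_norm / batch_size` is added
    before taking a gradient step.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise scale matches the sensitivity of the averaged, clipped gradient.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)
```

With `noise_multiplier=0` this reduces to ordinary clipped SGD, which makes the clipping behavior easy to check in isolation; the privacy guarantee comes from choosing `noise_multiplier` according to the privacy budget and the composition analysis.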