Academic Article

Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
Document Type
Periodical
Source
IEEE Transactions on Information Theory, 65(2):742-769, Feb. 2019
Subject
Communication, Networking and Broadcast Technologies
Signal Processing and Analysis
Optimization
Training
Biological neural networks
Data models
Numerical models
Convergence
Nonconvex optimization
over-parametrized neural networks
random matrix theory
Language
English
ISSN
0018-9448 (print)
1557-9654 (electronic)
Abstract
In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime, where the number of observations is smaller than the number of parameters in the model. We show that with quadratic activations, the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data set of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model in which the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.
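The realizable setup described in the abstract can be mocked up in a few lines. The sketch below is not code from the paper; the dimensions, step size, initialization scale, and fixed output weights are illustrative assumptions. It draws i.i.d. Gaussian inputs, generates labels from a planted one-hidden-layer network with quadratic activations, and fits an over-parameterized student (k*d = 1,200 weights against n = 200 observations) with plain gradient descent on the least-squares loss.

import numpy as np

rng = np.random.default_rng(0)

d, k_planted, k, n = 20, 3, 60, 200          # input dim, planted width, student width, sample count (illustrative)
X = rng.standard_normal((n, d))              # inputs drawn i.i.d. from a standard Gaussian

W_star = rng.standard_normal((k_planted, d)) / np.sqrt(d)
y = ((X @ W_star.T) ** 2).sum(axis=1)        # planted labels: sum_j (w*_j^T x_i)^2

W = 0.1 * rng.standard_normal((k, d)) / np.sqrt(d)   # over-parameterized student, small random init
v = np.ones(k)                               # output weights held fixed for simplicity
lr, steps = 1e-2, 3000

for _ in range(steps):
    Z = X @ W.T                              # pre-activations, shape (n, k)
    pred = (Z ** 2) @ v                      # quadratic activation followed by a linear read-out
    resid = pred - y
    grad_W = ((resid[:, None] * (2.0 * Z * v)).T @ X) / n   # gradient of the mean squared error w.r.t. W
    W -= lr * grad_W

print("final training loss:", 0.5 * np.mean(((X @ W.T) ** 2 @ v - y) ** 2))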