Journal article

Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order o(1/k^2)
Document Type
redif-article
Source
Springer, Computational Optimization and Applications. 83(2):615-649
Language
English
Abstract
The accelerated gradient method initiated by Nesterov is now recognized as one of the most powerful tools for solving smooth convex optimization problems. This method significantly improves the convergence rate of the function values, from the $O(1/k)$ of the standard gradient method down to $O(1/k^2)$. In this paper, we present two generalized variants of Nesterov's accelerated proximal gradient method for solving composite convex optimization problems in which the objective function is the sum of a smooth convex function and a nonsmooth convex part. We show that, with suitable choices of the parameter sequences, the convergence rate of the function values of the proposed method is actually of order $o(1/k^2)$. In particular, when the objective function is p-uniformly convex for $p>2$, the convergence rate is of order $O\left(\ln k / k^{2p/(p-2)}\right)$, and the convergence is linear if the objective function is strongly convex. As a by-product, we derive a forward–backward algorithm generalizing the one by Attouch–Peypouquet (SIAM J Optim 26(3):1824–1834, 2016), which produces a convergent sequence whose function values converge with rate $o(1/k^2)$. Initial computational experiments for solving linear inverse problems with $l_1$-regularization demonstrate the capabilities of the proposed algorithms.
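
For orientation, the sketch below shows a standard FISTA-style accelerated proximal gradient iteration for an $l_1$-regularized linear inverse problem, the setting used in the paper's experiments. It is a minimal illustration, not the paper's generalized variants: the extrapolation parameter follows the classical Nesterov/FISTA update, whereas the paper's contribution lies in more general parameter sequences that yield the $o(1/k^2)$ rate. The function names (`soft_threshold`, `accelerated_proximal_gradient`) and the synthetic test problem are purely illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def accelerated_proximal_gradient(A, b, lam, num_iters=500):
    """FISTA-style accelerated proximal gradient for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Illustrative only; uses the classical t-update, not the
    generalized parameter schemes of the paper."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(num_iters):
        grad = A.T @ (A @ y - b)           # gradient (forward) step at the extrapolated point
        x_new = soft_threshold(y - grad / L, lam / L)    # proximal (backward) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Example usage on a small synthetic sparse recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = accelerated_proximal_gradient(A, b, lam=0.1)
```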