Academic Article

Practical Acceleration of the Condat–Vũ Algorithm
Document Type: Working Paper
Subject: Mathematics – Optimization and Control
Abstract
The Condat–Vũ algorithm is a widely used primal-dual method for optimizing composite objectives of three functions. Several algorithms for optimizing composite objectives of two functions are special cases of Condat–Vũ, including proximal gradient descent (PGD). It is well known that PGD is suboptimal: a simple adjustment accelerates its convergence rate from $\mathcal{O}(1/T)$ to $\mathcal{O}(1/T^2)$ on convex objectives, and this accelerated rate is optimal. In this work, we show that a simple adjustment to the Condat–Vũ algorithm allows it to recover accelerated PGD (APGD) as a special case, instead of PGD. We prove that this accelerated Condat–Vũ algorithm achieves optimal convergence rates and significantly outperforms the traditional Condat–Vũ algorithm in regimes where Condat–Vũ approximates the dynamics of PGD. We demonstrate the effectiveness of our approach in various applications in machine learning and computational imaging.
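The PGD-to-APGD adjustment the abstract refers to is the classical Nesterov/FISTA momentum step. A minimal sketch on a lasso problem illustrates the two methods; the problem instance, step size $1/L$, and FISTA momentum schedule below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pgd(A, b, lam, T):
    """Proximal gradient descent (ISTA) for min 0.5||Ax-b||^2 + lam*||x||_1.
    Converges at rate O(1/T) on convex objectives."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(T):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

def apgd(A, b, lam, T):
    """Accelerated PGD (FISTA): adding a Nesterov momentum step to PGD
    upgrades the rate from O(1/T) to the optimal O(1/T^2)."""
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(T):
        x_new = soft_threshold(y - (A.T @ (A @ y - b)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The only difference between the two loops is the extrapolation point `y`: PGD steps from the current iterate, while APGD steps from a momentum-adjusted point. The paper's accelerated Condat–Vũ variant applies the analogous adjustment so that APGD, rather than PGD, is recovered as a special case.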