1402. Stochastic Primal Dual Hybrid Gradient Algorithm with Adaptive Step-Sizes
Invited abstract in session MD-32: Algorithms for machine learning and inverse problems: adaptive strategies, stream: Advances in large scale nonlinear optimization.
Monday, 14:30-16:00, Room: 41 (building: 303A)
Authors (first author is the speaker)
1. Claire Delplancke, EDF Lab Paris-Saclay
2. Antonin Chambolle, CEREMADE, CNRS, Université Paris-Dauphine, PSL
3. Matthias J. Ehrhardt, University of Bath
4. Carola-Bibiane Schönlieb, DAMTP, University of Cambridge
5. Junqi Tang, School of Mathematics, University of Birmingham
Abstract
In this work we propose a new primal-dual algorithm with adaptive step-sizes. The stochastic primal-dual hybrid gradient (SPDHG) algorithm with constant step-sizes (Chambolle et al. 2018, Ehrhardt et al. 2019) has become widely applied in large-scale convex optimization across many scientific fields due to its scalability. While the product of the primal and dual step-sizes is subject to an upper bound in order to ensure convergence, the selection of the ratio of the step-sizes is critical in applications. Up to now, there has been no systematic and successful way of selecting the primal and dual step-sizes for SPDHG. In this work, we propose a general class of adaptive SPDHG (A-SPDHG) algorithms and prove their convergence under weak assumptions. We also propose concrete parameter-updating strategies which satisfy the assumptions of our theory and thereby lead to convergent algorithms. Numerical examples on computed tomography demonstrate the effectiveness of the proposed schemes.
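To make the setting concrete, the following is a minimal sketch of the SPDHG iteration on a toy LASSO-type problem with uniform block sampling. The adaptive step-size update near the end of the loop is a hypothetical placeholder that merely shows where A-SPDHG would vary tau and sigma while preserving the product bound; it is not the parameter-updating strategy proposed in the paper, and all problem data and constants are illustrative.

```python
# Sketch of SPDHG for min_x 0.5*||A x - b||^2 + lam*||x||_1,
# with the rows of A split into blocks (one dual variable per block).
# The "adaptive" update at the end of the loop is a hypothetical
# illustration, NOT the A-SPDHG rule from the paper; it only nudges the
# primal/dual ratio while keeping sigma_i * tau * ||A_i||^2 < p_i.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: m x n least-squares problem with an l1 penalty.
m, n, n_blocks, lam = 200, 50, 10, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
blocks = np.array_split(np.arange(m), n_blocks)
p = np.full(n_blocks, 1.0 / n_blocks)          # uniform sampling probabilities
norms = np.array([np.linalg.norm(A[rows], 2) for rows in blocks])

# Initial step-sizes satisfying sigma_i * tau * ||A_i||^2 = gamma * p_i < p_i.
gamma = 0.99
tau = gamma / norms.max()
sigma = gamma * p / (tau * norms**2)

x = np.zeros(n)
y = np.zeros(m)
z = A.T @ y
zbar = z.copy()

def prox_l1(v, t):
    # Soft-thresholding: prox of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for k in range(5000):
    # Primal step: prox of lam*||.||_1 at an extrapolated dual point.
    x = prox_l1(x - tau * zbar, tau * lam)

    # Dual step on one sampled block i: prox of f_i^*, where
    # f_i(u) = 0.5*||u - b_i||^2, so prox_{s f_i^*}(v) = (v - s*b_i)/(1 + s).
    i = rng.choice(n_blocks, p=p)
    rows = blocks[i]
    v = y[rows] + sigma[i] * (A[rows] @ x)
    y_new = (v - sigma[i] * b[rows]) / (1.0 + sigma[i])
    dz = A[rows].T @ (y_new - y[rows])
    y[rows] = y_new

    # Update the running sum z = A^T y and its extrapolation.
    z = z + dz
    zbar = z + dz / p[i]

    # Hypothetical adaptive update (placeholder): perturb tau by an
    # asymptotically vanishing factor and rescale sigma so that
    # sigma_i * tau * ||A_i||^2 = gamma * p_i continues to hold.
    tau *= 1.0 + 1.0 / (k + 2) ** 2
    sigma = gamma * p / (tau * norms**2)

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print(f"objective after {k + 1} iterations: {obj:.4f}")
```

The sketch keeps the step-size product fixed at the usual SPDHG bound and only rescales the primal/dual ratio by a summable perturbation, so the step-sizes stay bounded; the paper's A-SPDHG framework instead specifies which classes of update rules provably preserve convergence.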
Keywords
- Algorithms
- Convex Optimization
- Large Scale Optimization
Status: accepted