Topic: Nonlinear Convergence Acceleration for Computational Science and Data Science
Date of Presentation: Wednesday, November 24, 2021
Location: Online
Abstract:
Many numerical methods for problems in computational science, optimization, and data science can be viewed as nonlinear fixed-point methods. When the problems are ill-conditioned, the convergence of commonly used fixed-point methods can be exceedingly slow. We will discuss nonlinear convergence acceleration methods that can be used as an outer iteration to speed up the convergence of the fixed-point method. These outer-loop acceleration methods can be based on nonlinear versions of well-known algorithms such as conjugate gradients, GMRES, L-BFGS, multigrid, and Nesterov acceleration. We will pay special attention to the so-called Anderson acceleration method, which is widely used but whose convergence properties are still poorly understood. We reveal interesting convergence patterns and provide some theoretical results towards understanding them. Example applications to be discussed include the acceleration of tensor decomposition algorithms and of the ADMM optimization method.
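For readers unfamiliar with the outer-loop idea, the following is a minimal sketch (not taken from the talk) of windowed Anderson acceleration wrapped around a generic fixed-point map x = g(x). The names anderson_accelerate, g, and the window size m are illustrative assumptions; the talk's own algorithms and applications may differ.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=100):
    """Sketch of Anderson acceleration (window m) for the fixed-point iteration x = g(x).

    g maps a 1-D numpy array to a 1-D numpy array; x0 is the initial guess.
    """
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                                 # residual of the fixed-point map
    G_hist, F_hist = [gx], [f]                 # recent map values and residuals
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        mk = min(m, len(F_hist) - 1)
        if mk == 0:
            x = gx                             # plain fixed-point step on the first iteration
        else:
            # Differences of residuals / map values against older iterates in the window
            dF = np.column_stack([F_hist[-1] - F_hist[-2 - j] for j in range(mk)])
            dG = np.column_stack([G_hist[-1] - G_hist[-2 - j] for j in range(mk)])
            # Least-squares mixing coefficients minimizing the combined residual
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma                # Anderson-mixed next iterate
        gx = g(x)
        f = gx - x
        G_hist.append(gx); F_hist.append(f)
        G_hist, F_hist = G_hist[-(m + 1):], F_hist[-(m + 1):]
    return x

# Example usage on a simple contractive map (purely illustrative):
# g = lambda x: 0.5 * np.cos(x)
# x_star = anderson_accelerate(g, np.zeros(3))
```

The inner fixed-point solver is untouched; the acceleration only recombines its recent iterates, which is what makes this usable as an outer loop around existing codes.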