When: Oct 06 2022 @ 1:30 PM

Title: Accelerated Gradient Descent in the PDE Framework

Abstract:

Following the seminal work of Nesterov, accelerated optimization methods (sometimes referred to as momentum methods) have been used to significantly boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with an attraction basin large enough to accommodate the initial overshoot. This behavior has made accelerated search methods particularly popular within the machine learning community, where stochastic variants have been proposed as well. Until recently, however, accelerated optimization methods have been applied only to searches over finite-dimensional parameter spaces. We show how a variational setting for these finite-dimensional methods (published by Wibisono, Wilson, and Jordan in 2016) can be extended to the infinite-dimensional setting, both in linear functional spaces as well as to the more complicated manifolds of 2D curves and 3D surfaces. Moreover, we also show how extremely simple explicit discretization schemes can be used to efficiently solve the resulting class of high-dimensional optimization problems. We will illustrate applications of this strategy to problems in image restoration, image segmentation, and 3D reconstruction.
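To make the finite-dimensional starting point concrete, below is a minimal illustrative sketch (in Python, not the speaker's method or code) of Nesterov-style accelerated gradient descent on a toy quadratic objective; the function name, step size, and momentum coefficient are illustrative assumptions chosen only to show the overshoot-and-settle behavior described above.

    import numpy as np

    def nesterov_gd(grad, x0, step=0.04, momentum=0.7, iters=300):
        # Accelerated (momentum) gradient descent on a finite-dimensional objective.
        # `grad` maps a point to the gradient of the objective at that point.
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)                 # velocity (momentum) term
        for _ in range(iters):
            lookahead = x + momentum * v     # evaluate the gradient at the look-ahead point
            v = momentum * v - step * grad(lookahead)
            x = x + v                        # update overshoots, then oscillates back
        return x

    # Toy ill-conditioned quadratic f(x) = 0.5 * x^T A x, minimized at the origin,
    # where the momentum term visibly outpaces plain gradient descent.
    A = np.diag([1.0, 25.0])
    grad = lambda x: A @ x
    print(nesterov_gd(grad, x0=[5.0, 5.0]))  # approaches the minimizer at the origin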

Join via Zoom:

https://wse.zoom.us/j/95738965246