Approaching Quartic Convergence Rates for Quasi-Stochastic Approximation with Application to Gradient-Free Optimization

Dec 6, 2022

Speakers

About

Stochastic approximation is a foundation for many algorithms found in machine learning and optimization. It is in general slow to converge: the mean-square error vanishes as O(n^{-1}). A deterministic counterpart known as quasi-stochastic approximation (QSA) is a viable alternative in many applications, including gradient-free optimization and reinforcement learning. It was assumed in recent prior research that the optimal achievable convergence rate is O(n^{-2}). This paper shows that, through design, it is possible to obtain far faster convergence, of order O(n^{-4+δ}) with δ>0 arbitrary. Two acceleration techniques are introduced for the first time to achieve this rate of convergence. The theory is also specialized to the context of gradient-free optimization and tested on standard benchmarks. The main results are based on a combination of recent results from number theory and techniques adapted from stochastic approximation theory.
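
To make the setting concrete, here is a minimal Python sketch of a QSA-style gradient-free scheme in the spirit of the abstract: the random perturbations of classical stochastic approximation are replaced by a deterministic sinusoidal probing signal, and the iterate follows a two-sided finite-difference gradient estimate along that signal. The function name, step-size schedule, probing gain eps, and frequencies below are illustrative assumptions, not the paper's tuned design or its two acceleration techniques.

import numpy as np

def qsa_gradient_free(f, theta0, n_iters=50_000, eps=0.1, a0=1.0, rho=0.85, freqs=None):
    """Sketch of quasi-stochastic approximation for gradient-free minimization of f.

    A deterministic sinusoidal probing signal stands in for random
    perturbations; all gains and schedules here are illustrative,
    not the accelerated design analyzed in the paper.
    """
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    if freqs is None:
        # One probing frequency per coordinate, chosen to avoid simple rational relations.
        freqs = np.pi * (1.0 + np.arange(d)) / np.e
    t = 0.0
    for n in range(1, n_iters + 1):
        a_n = a0 / n ** rho                     # vanishing step size
        t += a_n                                # step size also serves as a time increment
        xi = np.sqrt(2.0) * np.cos(freqs * t)   # deterministic probing signal
        # Two-sided probe: finite-difference estimate of the gradient along xi.
        g = xi * (f(theta + eps * xi) - f(theta - eps * xi)) / (2.0 * eps)
        theta -= a_n * g
    return theta

if __name__ == "__main__":
    # Toy benchmark: a quadratic with minimizer at the all-ones vector.
    f = lambda x: float(np.sum((x - 1.0) ** 2))
    print(qsa_gradient_free(f, np.zeros(3)))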

Organizer

NeurIPS 2022

