Nov 28, 2022
We develop a new algorithm for non-convex stochastic optimization that finds an ϵ-critical point in the optimal O(ϵ^-3) stochastic gradient and Hessian-vector product computations. Our algorithm uses Hessian-vector products to "correct" a bias term in the momentum of SGD with momentum. This leads to better gradient estimates, in a manner analogous to variance reduction methods. In contrast to prior work, we do not require excessively large batch sizes, and we provide an adaptive algorithm whose convergence rate automatically improves with decreasing variance in the gradient estimates. We validate our results on a variety of large-scale deep learning architectures and benchmark tasks.
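The bias correction described in the abstract can be sketched in a few lines of PyTorch. The update below follows a STORM-style form, m_t = β(m_{t-1} + H_t(x_t − x_{t−1})) + (1 − β)g_t, where g_t and H_t are the stochastic gradient and Hessian evaluated on the same minibatch; the function name hvp_corrected_step, the hyperparameters, and the toy quadratic loss are illustrative assumptions, not the authors' implementation.

import torch

def hvp_corrected_step(params, prev_params, momentum, loss_fn,
                       beta=0.9, lr=0.01):
    """One step of SGD with Hessian-corrected momentum (a sketch).

    The Hessian-vector product H_t @ (x_t - x_{t-1}) corrects the bias
    the stale momentum accumulates as the iterates move.
    """
    loss = loss_fn(params)
    # Gradient with create_graph=True so it can be differentiated again.
    (grad,) = torch.autograd.grad(loss, params, create_graph=True)
    # Hessian-vector product via double backprop, on the same minibatch.
    (hv,) = torch.autograd.grad(
        grad, params, grad_outputs=params.detach() - prev_params)
    momentum = beta * (momentum + hv) + (1 - beta) * grad.detach()
    new_params = (params.detach() - lr * momentum).requires_grad_(True)
    return new_params, params.detach(), momentum

# Toy usage on a noisy quadratic (a hypothetical problem, not from the paper).
torch.manual_seed(0)
x = torch.randn(5, requires_grad=True)
prev_x, m = x.detach().clone(), torch.zeros(5)
noisy_loss = lambda p: 0.5 * (p ** 2).sum() + 0.01 * (torch.randn(5) * p).sum()
for _ in range(200):
    x, prev_x, m = hvp_corrected_step(x, prev_x, m, noisy_loss)
print(x.detach())  # iterates should approach the origin

Because the gradient and the Hessian-vector product come from a single loss evaluation, each step costs only one extra backward pass, which is what allows the method to avoid the large batch sizes required by earlier variance-reduced schemes.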
Presentations on similar topics, categories, or speakers
Yonggan Fu, …
Shunyu Yao, …
Takuya Ito, …