Jul 24, 2023
Particle gradient descent is widely used to optimize functions of probability measures. Mean-field analyses take the number of particles to infinity, thereby providing insights into the density of particles. In the mean-field regime, the particles globally optimize displacement convex functions. We investigate the consequences of this convergence for a finite number of particles. To achieve an ϵ-optimal solution, we prove that O(1/ϵ^2) particles and O(d/ϵ^4) time are sufficient for Lipschitz displacement convex functions. For smooth displacement convex functions, we improve this complexity. Finally, we demonstrate the application of our results to function approximation with specific neural architectures with two-dimensional inputs.
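To make the setting concrete, below is a minimal sketch of particle gradient descent on a displacement convex objective. The potential V(x) = |x|^2/2 and interaction W(x − y) = |x − y|^2/2 are illustrative choices not taken from the talk; both yield a displacement convex functional of the empirical measure, and the finite-particle objective is its restriction to m atoms.

```python
import numpy as np

# Illustrative sketch (not the speakers' code): particle gradient descent on
# F_m(x_1, ..., x_m) = (1/m) sum_i V(x_i) + (1/(2m^2)) sum_{i,j} W(x_i - x_j),
# with V(x) = |x|^2 / 2 and W(z) = |z|^2 / 2 chosen as simple displacement
# convex examples.

def particle_gradient_descent(m=100, d=2, steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(m, d))            # m particles in R^d
    for _ in range(steps):
        grad_V = X                          # gradient of V(x) = |x|^2 / 2
        diffs = X[:, None, :] - X[None, :, :]
        grad_W = diffs.mean(axis=1)         # (1/m) sum_j grad W(x_i - x_j)
        X -= lr * (grad_V + grad_W)         # gradient step on the particle objective
    return X

particles = particle_gradient_descent()
# The empirical measure of `particles` approximates the mean-field minimizer;
# the complexity bounds above quantify how many particles and how much time
# suffice to reach an eps-optimal solution.
```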