Dec 6, 2022
For a d-dimensional log-concave distribution π(θ) ∝ e^{−f(θ)} constrained to a convex body K, the problem of outputting samples from a distribution ν which is ε-close in infinity distance sup_{θ∈K} |log(ν(θ)/π(θ))| to π arises in differentially private optimization. While sampling within total-variation distance ε of π can be done by algorithms whose runtime depends polylogarithmically on 1/ε, prior algorithms for sampling within ε infinity distance have runtime bounds that depend polynomially on 1/ε. We bridge this gap by presenting an algorithm that outputs a point ε-close to π in infinity distance and requires at most poly(log(1/ε), d) calls to a membership oracle for K and an evaluation oracle for f, when f is Lipschitz. Our approach departs from prior works, which construct Markov chains on a 1/ε²-discretization of K to achieve a sample with ε infinity-distance error: we instead present a method to directly convert continuous samples from K with total-variation bounds into samples with infinity-distance bounds. This approach also yields an improved dependence on the dimension d in the running time for sampling from a log-concave distribution on a polytope K to within infinity distance ε, by plugging in TV-distance runtime bounds for the Dikin walk Markov chain.
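As a point of reference for why the infinity-distance guarantee is the harder one (a standard calculation, not stated in the abstract above): ε-closeness in infinity distance is a multiplicative pointwise bound that immediately implies a total-variation bound of roughly ε, while the converse fails. In LaTeX notation:

d_\infty(\nu,\pi) := \sup_{\theta\in K}\left|\log\frac{\nu(\theta)}{\pi(\theta)}\right| \le \varepsilon
\quad\Longrightarrow\quad
e^{-\varepsilon}\,\pi(S) \le \nu(S) \le e^{\varepsilon}\,\pi(S) \ \ \text{for every measurable } S \subseteq K,

and hence

d_{\mathrm{TV}}(\nu,\pi) = \sup_{S\subseteq K} |\nu(S)-\pi(S)| \le e^{\varepsilon}-1 \approx \varepsilon \quad (\varepsilon\to 0).

The multiplicative form e^{±ε} mirrors the guarantee of pure ε-differential privacy (as in the exponential mechanism), which is presumably why this metric, rather than total variation, is the one required in differentially private optimization.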