(Nearly) Optimal Private Linear Regression via Adaptive Clipping

Jul 2, 2022

About

We study the problem of differentially private linear regression, where each data point is sampled from a fixed sub-Gaussian style distribution. We propose and analyze a one-pass mini-batch stochastic gradient descent method (DP-AMBSSGD) where points in each iteration are sampled without replacement. Noise is added for DP, but the noise standard deviation is estimated online. Compared to existing (ϵ, δ)-DP techniques, which have sub-optimal error bounds, DP-AMBSSGD is able to provide nearly optimal error bounds in terms of key parameters like the dimensionality d, the number of points N, and the standard deviation σ of the noise in the observations. For example, when the d-dimensional covariates are sampled i.i.d. from the normal distribution, the excess error of DP-AMBSSGD due to privacy is σ^2 d/N (1 + d/(ϵ^2 N)), i.e., the error is meaningful whenever the number of samples N ≥ d log d, which is the standard operative regime for linear regression. In contrast, the error bound for existing efficient methods in this setting is d^3/(ϵ^2 N^2), even for σ = 0. That is, for constant ϵ, existing techniques require N = d^1.5 to provide a non-trivial result.
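To make the setup concrete, below is a minimal illustrative sketch of the general recipe the abstract builds on: one-pass mini-batch SGD for linear regression with per-example gradient clipping and Gaussian noise. This is not the paper's DP-AMBSSGD; the function name, the fixed clipping norm, and the noise multiplier here are simplifying assumptions, whereas the paper's method estimates the noise scale adaptively online.

```python
# Illustrative sketch (NOT the paper's DP-AMBSSGD): one-pass mini-batch SGD for
# linear regression with per-example gradient clipping and Gaussian noise.
# clip_norm and noise_multiplier are placeholder hyperparameters; the paper
# instead estimates the relevant scale online (adaptive clipping).
import numpy as np

def dp_sgd_linear_regression(X, y, lr=0.1, batch_size=64,
                             clip_norm=1.0, noise_multiplier=1.0, seed=0):
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    perm = rng.permutation(N)  # one pass over the data, without replacement
    for start in range(0, N, batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        # Per-example gradients of the squared loss: (x_i^T w - y_i) x_i
        residuals = Xb @ w - yb
        grads = residuals[:, None] * Xb  # shape (batch, d)
        # Clip each per-example gradient to norm <= clip_norm
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        # Sum, add Gaussian noise calibrated to the clipping norm, then average
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
        g = (grads.sum(axis=0) + noise) / len(idx)
        w -= lr * g
    return w
```

A fixed clipping norm as above either over-clips (adding bias) or over-inflates the noise; adaptively estimating the gradient scale online, as the abstract describes, is what lets the added noise shrink to the nearly optimal σ^2 d/N (1 + d/(ϵ^2 N)) level.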

About COLT

The conference has been held annually since 1988 and has become the leading conference on learning theory by maintaining a highly selective process for submissions. It is committed to publishing high-quality articles on all theoretical aspects of machine learning and related topics.
