Differentially Private Learning with Margin Guarantees

Dec 2, 2022

About

Preserving privacy is a crucial objective for machine learning algorithms. Yet despite the remarkable theoretical and algorithmic progress in differential privacy over the last decade or more, its application to learning still faces several obstacles.

A recent series of publications has shown that differentially private PAC learning of infinite hypothesis sets is not possible, even for common hypothesis sets such as that of linear functions. Another rich body of literature has studied differentially private empirical risk minimization in the constrained optimization setting and shown that the guarantees are necessarily dimension-dependent. In the unconstrained setting, dimension-independent bounds have been given, but they admit a dependence on the norm of a vector that can be extremely large, which makes them uninformative.

These results raise a fundamental question about private learning for common high-dimensional problems: is differentially private learning with favorable, dimension-independent guarantees possible for standard hypothesis sets?

This talk presents a series of new differentially private algorithms for learning linear classifiers, kernel classifiers, and neural-network classifiers with dimension-independent, confidence-margin guarantees.

Joint work with Raef Bassily and Ananda Theertha Suresh.
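
For context, the two notions the abstract combines can be stated in standard terms (these are textbook definitions, not specific to this talk). A randomized algorithm A is (epsilon, delta)-differentially private if, for every pair of datasets S and S' differing in a single example and every measurable set of outputs O,

    \Pr[A(S) \in O] \le e^{\varepsilon} \, \Pr[A(S') \in O] + \delta.

A confidence-margin guarantee of the kind referenced in the title bounds the true error by the empirical margin loss plus a complexity term. A representative non-private bound (not the talk's private guarantee): for a hypothesis set H, margin rho > 0, and a sample of size m, with probability at least 1 - delta (here delta is a confidence parameter, unrelated to the privacy parameter above),

    R(h) \le \widehat{R}_\rho(h) + \frac{2}{\rho} \, \mathfrak{R}_m(H) + \sqrt{\frac{\log(1/\delta)}{2m}},

where \widehat{R}_\rho(h) is the empirical rho-margin loss and \mathfrak{R}_m(H) the Rademacher complexity of H. For linear classifiers with \|w\| \le \Lambda on inputs with \|x\| \le r, one has \mathfrak{R}_m(H) \le \Lambda r / \sqrt{m}, which is independent of the dimension; the talk's contribution is guarantees of this dimension-independent type under differential privacy.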
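
As a concrete illustration of what a differentially private linear classifier can look like, here is a minimal Python sketch of output perturbation, a classic private-ERM baseline in the spirit of Chaudhuri and Monteleoni. It is not the algorithm presented in the talk, and the function names and hyperparameters are illustrative assumptions; it assumes labels in {-1, +1} and feature vectors scaled to norm at most 1.

    import numpy as np

    def train_logreg(X, y, lam, steps=2000, lr=0.5):
        # Plain gradient descent on the lam-strongly-convex objective
        #   F(w) = (1/n) * sum_i log(1 + exp(-y_i <w, x_i>)) + (lam/2) * ||w||^2,
        # with labels y_i in {-1, +1} and rows of X scaled so that ||x_i|| <= 1.
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(steps):
            margins = y * (X @ w)
            # Per-sample gradient of the logistic term: -y_i * x_i / (1 + exp(margin_i)).
            grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
            w -= lr * (grad + lam * w)
        return w

    def private_logreg(X, y, lam, eps, delta, rng=None):
        # Output perturbation: for the 1-Lipschitz logistic loss (given ||x|| <= 1),
        # the exact minimizer of F has L2 sensitivity 2 / (n * lam), so Gaussian
        # noise with sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / eps yields
        # an (eps, delta)-DP release of the weights. (This sketch ignores the
        # small optimization error of gradient descent.)
        if rng is None:
            rng = np.random.default_rng()
        n, d = X.shape
        w = train_logreg(X, y, lam)
        sensitivity = 2.0 / (n * lam)
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return w + rng.normal(0.0, sigma, size=d)

For example, private_logreg(X, y, lam=0.1, eps=1.0, delta=1e-5) releases a noisy weight vector whose noise scale shrinks as 1/(n * lam). Note that although the per-coordinate noise is dimension-independent, its total norm grows as sqrt(d); the margin-based guarantees of the talk aim to avoid exactly that kind of dimension dependence.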
