Dec 10, 2023
In the arena of privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) has outstripped the objective perturbation mechanism in popularity and interest. Though unrivaled in versatility, DP-SGD requires a non-trivial privacy overhead (for privately tuning the model's hyperparameters) and a computational complexity that can be extravagant for simple models such as linear and logistic regression. This paper revamps the objective perturbation mechanism with tighter privacy analyses and new computational tools that boost it to perform competitively with DP-SGD on unconstrained convex generalized linear problems.
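For readers unfamiliar with the mechanism being revived, objective perturbation privatizes a convex learner by adding a random linear term (plus regularization) to the training objective and releasing the exact minimizer. A minimal sketch for logistic regression follows; the noise scale and regularization strength here are illustrative placeholders, not the calibrated values a real (ε, δ)-DP guarantee would require.

```python
# Sketch of objective perturbation for logistic regression.
# `lam` and `noise_scale` are placeholder hyperparameters, not a
# privacy-calibrated setting.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data; rows are clipped to unit norm so per-example sensitivity is bounded.
n, d = 200, 5
X = rng.normal(size=(n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
y = rng.choice([-1.0, 1.0], size=n)

lam = 0.1          # ridge penalty strength (assumed)
noise_scale = 1.0  # stands in for the epsilon-dependent noise magnitude

# The perturbation: one noise vector drawn up front, added as a linear term.
b = rng.normal(scale=noise_scale, size=d)

def perturbed_objective(theta):
    # average logistic loss + ridge penalty + linear noise term
    margins = y * (X @ theta)
    loss = np.mean(np.logaddexp(0.0, -margins))
    return loss + 0.5 * lam * (theta @ theta) + (b @ theta) / n

# The released model is the exact minimizer of the perturbed objective --
# no per-iteration noise or gradient clipping as in DP-SGD.
theta_priv = minimize(perturbed_objective, np.zeros(d), method="L-BFGS-B").x
```

Note the contrast with DP-SGD: noise is injected once into the objective rather than at every gradient step, which is part of why the mechanism is attractive for simple unconstrained convex problems.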