Dec 10, 2023
Differentially private mechanisms restrict the membership inference capabilities of powerful (optimal) adversaries against machine learning models. Such adversaries are rarely encountered in practice. In this work, we examine a common threat model relaxation, where (sub-optimal) adversaries lack access to the exact model training database, but may possess related or partial data. We then derive a full formal characterisation of adversarial membership inference capabilities in this setting in terms of hypothesis testing errors, which we validate experimentally. Our work can help stakeholders to better understand the privacy properties of sensitive data processing systems under realistic threat model relaxations.
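For context, a minimal sketch of the standard hypothesis-testing view the abstract builds on (general differential privacy background, not notation taken from this abstract): against an optimal adversary, an (ε, δ)-differentially private mechanism constrains the membership inference test H₀: the record is absent from the training database versus H₁: it is present, so that any test's type I error α and type II error β satisfy

    α + e^ε · β ≥ 1 − δ   and   e^ε · α + β ≥ 1 − δ.

The talk's contribution is the analogous characterisation when the adversary holds only related or partial data rather than the exact training database, a setting this standard optimal-adversary bound does not cover.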
Presentations on similar topic, category or speaker
Pengxiang Wu, …
Jinyung Hong, …
Brian Moore, …
Jaemin Cho, …