LAMP: Extracting Text from Gradients with Language Model Priors

Nov 28, 2022

Speakers

Mislav Balunovic

Dimitar Dimitrov

Nikola Jovanović

About

Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning. While success was demonstrated primarily on image data, these methods do not directly transfer to other domains such as text. In this work, we propose LAMP, a novel attack tailored to textual data, that successfully reconstructs original text from gradients. Our key insight is to model the prior probability of the text with an auxiliary language model, ut…
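
The abstract is cut off, but the core idea it states — matching observed gradients while using an auxiliary language model as a prior over the reconstructed text — can be illustrated. Below is a minimal sketch, not the authors' LAMP implementation: the choice of GPT-2 as both the victim model and the prior, the cosine gradient-matching objective, the example sentence, and every hyperparameter are placeholder assumptions.

    # Sketch of text reconstruction from leaked gradients with an LM prior.
    # Illustrative only; not the LAMP code. All settings are assumptions.
    import torch
    import torch.nn.functional as F
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    def grad_match_loss(model, params, embeds, labels, target_grads):
        # Cosine distance between the gradients a candidate input would
        # produce and the gradients actually observed from the client update.
        loss = model(inputs_embeds=embeds, labels=labels).loss
        grads = torch.autograd.grad(loss, params, create_graph=True)
        return sum(1 - F.cosine_similarity(g.flatten(), t.flatten(), dim=0)
                   for g, t in zip(grads, target_grads))

    @torch.no_grad()
    def lm_prior_score(lm, ids):
        # Negative log-likelihood under the auxiliary LM; lower = more natural.
        return lm(ids, labels=ids).loss.item()

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    victim = GPT2LMHeadModel.from_pretrained("gpt2")           # stand-in victim
    prior_lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()  # auxiliary prior
    params = [p for p in victim.parameters() if p.requires_grad]

    # Simulate the leak: the attacker observes gradients of one training step
    # on a private sentence (hypothetical example text).
    secret = tok("the patient was diagnosed with diabetes",
                 return_tensors="pt").input_ids
    target_grads = torch.autograd.grad(victim(secret, labels=secret).loss, params)

    # Attack: optimize randomly initialized embeddings to match those gradients.
    emb = victim.get_input_embeddings().weight.detach()
    dummy = emb[torch.randint(0, emb.size(0), secret.shape)].clone().requires_grad_(True)
    opt = torch.optim.Adam([dummy], lr=0.01)  # placeholder optimizer and lr
    for step in range(200):                   # placeholder step count
        # Project continuous embeddings to the nearest vocabulary tokens.
        ids = torch.cdist(dummy.detach()[0], emb).argmin(-1).unsqueeze(0)
        opt.zero_grad()
        grad_match_loss(victim, params, dummy, ids, target_grads).backward()
        opt.step()

    guess = torch.cdist(dummy.detach()[0], emb).argmin(-1).unsqueeze(0)
    print(tok.decode(guess[0]), "| prior NLL:", lm_prior_score(prior_lm, guess))

The truncated abstract suggests the language-model prior steers the reconstruction itself; the sketch above only uses it to rank the final guess, which is a deliberate simplification.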

Organizer

NeurIPS 2022

Recommended Videos

Presentations with a similar topic, category, or speaker

Panel · 27:10
Virginia Smith, …
NeurIPS 2022 · 2 years ago

LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness · 05:05
Xiaojun Xu, …
NeurIPS 2022 · 2 years ago

Part 2: Lifelong Learning Approaches: Overview of currently used strategies · 24:54
Gido van de Ven
NeurIPS 2022 · 2 years ago

A Reproducible and Realistic Evaluation of Partial Domain Adaptation Methods · 05:32
Tiago Salvador, …
NeurIPS 2022 · 2 years ago

When Bayesian Orthodoxy Can Go Wrong: Model Selection and Out-of-Distribution Generalization · 36:20
Andrew Gordon Wilson, …
NeurIPS 2022 · 2 years ago

A code super optimizer through neural monte-carlo tree search · 11:16
Wenda Zhou, …
NeurIPS 2022 · 2 years ago

Interested in talks like this? Follow NeurIPS 2022.