            Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models

            Jul 24, 2023

            Speakers

Phillip Rust

Anders Søgaard

            About

            Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential…
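The abstract refers to making multilingual language models differentially private. Differential privacy in model training is typically achieved with DP-SGD: each example's gradient is clipped to a fixed L2 norm and Gaussian noise is added before the parameter update. The sketch below is an illustration only, not the authors' implementation; the model, data, and hyperparameters are placeholders.

```python
# Minimal DP-SGD sketch (illustrative; not the paper's training code).
import torch
import torch.nn as nn

CLIP_NORM = 1.0         # bound on each example's gradient L2 norm
NOISE_MULTIPLIER = 1.1  # Gaussian noise scale, relative to CLIP_NORM
LR = 0.1

# Placeholder model and data: a linear classifier on random features.
model = nn.Linear(16, 4)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
x = torch.randn(32, 16)
y = torch.randint(0, 4, (32,))

accum = [torch.zeros_like(p) for p in model.parameters()]

# Per-example gradients via microbatching (batch size 1), each clipped.
for xi, yi in zip(x, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(CLIP_NORM / (total_norm + 1e-6), max=1.0)
    for a, g in zip(accum, grads):
        a.add_(g * scale)

# Add Gaussian noise calibrated to the clipping bound, average, and step.
for p, a in zip(model.parameters(), accum):
    noise = torch.randn_like(a) * NOISE_MULTIPLIER * CLIP_NORM
    p.grad = (a + noise) / x.shape[0]
optimizer.step()
```

Libraries such as Opacus vectorize the per-example gradient computation; the explicit loop above only keeps the clip-then-noise mechanism visible.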

            Organizer

ICML 2023

            Recommended Videos

Presentations on a similar topic, category, or speaker

Theory on Forgetting and Generalization of Continual Learning
04:49 · Sen Lin, … · ICML 2023

Discover-Then-Rank Unlabeled Support Vectors in the Dual Space for Multi-Class Active Learning
05:00 · Dayou Yu, … · ICML 2023

Speed-Oblivious Online Scheduling: Knowing (Precise) Speeds is not Necessary
05:03 · Alexander Lindermayr, … · ICML 2023

Towards a Better Understanding of Representation Dynamics under TD-learning
03:56 · Yunhao Tang, … · ICML 2023

Grounding Language Models to Images for Multimodal Inputs and Outputs
05:11 · Jing Yu Koh, … · ICML 2023

Meta-Learning Reliable Priors for Interactive Learning
39:19 · Jonas Rothfuss · ICML 2023
