            Preservation of the Global Knowledge by Not-True Distillation in Federated Learning

            Nov 28, 2022

Speakers

Gihun Lee
Minchan Jeong
Yongjin Shin

            About

            In federated learning, a strong global model is collaboratively learned by aggregating clients' locally trained models. Although this precludes the need to access clients' data directly, the global model's convergence often suffers from data heterogeneity. This study starts from an analogy to continual learning and suggests that forgetting could be the bottleneck of federated learning. We observe that the global model forgets the knowledge from previous rounds, and the local training induces for…
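
Reading the title and the abstract together, the "not-true distillation" of the title appears to distill the global model's predictions only over the non-ground-truth ("not-true") classes during each client's local training, so the local model retains global knowledge that plain cross-entropy on local data would otherwise overwrite. Below is a minimal PyTorch sketch under that assumption; the function name `not_true_distillation_loss`, the temperature `tau`, and the class-masking scheme are illustrative guesses, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def not_true_distillation_loss(local_logits, global_logits, targets, tau=1.0):
    """KL distillation from the global model, restricted to not-true classes.

    Hypothetical sketch: mask out each sample's ground-truth class in both
    logit tensors, then match the local model's distribution over the
    remaining classes to the global model's.
    """
    num_classes = local_logits.size(1)

    # Boolean mask that is False at each sample's true class, True elsewhere.
    not_true = torch.ones_like(local_logits, dtype=torch.bool)
    not_true.scatter_(1, targets.unsqueeze(1), False)

    # Keep only the not-true logits: shape (batch, num_classes - 1).
    local_nt = local_logits[not_true].view(-1, num_classes - 1)
    global_nt = global_logits[not_true].view(-1, num_classes - 1)

    # Temperature-scaled KL divergence, as in standard knowledge distillation.
    return F.kl_div(
        F.log_softmax(local_nt / tau, dim=1),
        F.softmax(global_nt / tau, dim=1),
        reduction="batchmean",
    ) * tau**2
```

In local training this would presumably be combined with the usual supervised loss, e.g. `F.cross_entropy(local_logits, targets) + beta * not_true_distillation_loss(local_logits, global_logits.detach(), targets)`, where `beta` is a hypothetical trade-off weight and the global model's logits are frozen for the round.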

Organizer

NeurIPS 2022


            Recommended Videos

Presentations on similar topics, categories, or speakers

Simulations for Open Science Token Communities: Designing the Knowledge Commons
02:46
Jakub Smékal, …
NeurIPS 2022 · 2 years ago

Distributed Inverse Constrained Reinforcement Learning (D-ICRL) for Multi-agent Systems (MASs)
04:57
Shicheng Liu, …
NeurIPS 2022 · 2 years ago

Best paper announcement
00:52
NeurIPS 2022 · 2 years ago

Model-based Trajectory Stitching for Improved Off-line Reinforcement Learning
06:39
Charles A. Hepburn, …
NeurIPS 2022 · 2 years ago

SAGDA: Achieving 𝒪(ϵ^-2) Communication Complexity in Federated Min-Max Learning
05:04
Haibo Yang, …
NeurIPS 2022 · 2 years ago

Efficient Planning in a Compact Latent Action Space
09:18
Zhengyao Jiang, …
NeurIPS 2022 · 2 years ago
