            Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time

            Dec 6, 2021

Speakers

Anshul Nasery

Soumyadeep Thakur

Vihari Piratla

            About

In several real-world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually over time, leading to a drift between the train and test distributions. Such models are often re-trained on new data periodically, and hence need to generalize to data not too far into the future. In this context, there is much prior work on enhancing temporal generalization, e.g. continuous transportation of past data, kernel smoothed time-sensitive par…
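The abstract describes extrapolating a time-conditioned model a small step into the future using its gradient along time. Below is a minimal NumPy sketch of a gradient-interpolation-style objective in that spirit; the linear time-conditioned model, the weights, and all helper names (`f`, `df_dt`, `gi_loss`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical time-conditioned linear model: f(x, t) = x . (w0 + t * w1).
# Weights and loss form are illustrative only.
w0 = np.array([1.0, -0.5])
w1 = np.array([0.2, 0.1])

def f(x, t):
    """Prediction of the time-conditioned model at time t."""
    return x @ (w0 + t * w1)

def df_dt(x, t):
    """Analytic time-gradient of the prediction (exact for this linear model)."""
    return x @ w1

def gi_loss(x, y, t, delta):
    """Gradient-interpolation-style loss (sketch): supervise a first-order
    Taylor extrapolation of the prediction from time t to time t + delta."""
    pred_future = f(x, t) + delta * df_dt(x, t)
    return np.mean((pred_future - y) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 2))
t, delta = 1.0, 0.5
y_future = f(x, t + delta)  # labels drawn from the drifted (future) distribution
print(gi_loss(x, y_future, t, delta))       # extrapolated loss, near zero here
print(np.mean((f(x, t) - y_future) ** 2))   # naive loss that ignores the drift
```

Because the toy model is linear in t, the first-order extrapolation is exact and the gradient-interpolated loss vanishes, while the naive loss that ignores the drift does not; with a nonlinear model the extrapolation would be approximate and the gradient along t would come from automatic differentiation.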

            Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


            Recommended Videos

Presentations on a similar topic, category, or speaker

Dangers of Bayesian Model Averaging under Covariate Shift · 15:57
Pavel Izmailov, … · NeurIPS 2021

Challenges and Solutions to build a Data Pipeline to Identify Anomalies in Enterprise System Performance · 02:02
Xiaobo Huang, … · NeurIPS 2021

Deformable Butterfly: A Highly Structured and Sparse Linear Transform · 14:08
Rui Lin, … · NeurIPS 2021

DNN-based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel · 09:39
Benjamin Dupuis, … · NeurIPS 2021

An Improved Analysis of Gradient Tracking for Decentralized Machine Learning · 07:22
Anastasia Koloskova, … · NeurIPS 2021

Culturing PERLS · 35:57
Mark Nitzberg · NeurIPS 2021
