            Understanding and Mitigating Exploding Inverses in Invertible Neural Networks

            Apr 14, 2021

            Speakers

Jens Behrmann
Speaker · 0 followers

Paul Vicol
Speaker · 0 followers

Kuan-Chieh Wang
Speaker · 0 followers

            About

            Invertible neural networks (INNs) have been used to design generative models, implement memory-saving gradient computation, and solve inverse problems. In this work, we show that commonly-used INN architectures suffer from exploding inverses and are thus prone to becoming numerically non-invertible. Across a wide range of INN use-cases, we reveal failures including the non-applicability of the change-of-variables formula on in- and out-of-distribution (OOD) data, incorrect gradients for memory-s…
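To make the failure mode concrete, here is a minimal illustrative sketch (not the authors' code) of why an affine coupling layer, a common INN building block, can become numerically non-invertible. The scale `s` and shift `t` are fixed scalars here for simplicity, whereas in a real INN they would be outputs of learned subnetworks; the specific values are assumptions chosen to expose the effect. When `s` is very negative, the forward map contracts part of the input below float32 precision, and the analytic inverse amplifies the rounding error by `exp(-s)`.

```python
import numpy as np

def coupling_forward(x1, x2, s, t):
    # Affine coupling forward pass: y1 = x1, y2 = x2 * exp(s) + t
    return x1, x2 * np.exp(s) + t

def coupling_inverse(y1, y2, s, t):
    # Analytic inverse: x1 = y1, x2 = (y2 - t) * exp(-s)
    return y1, (y2 - t) * np.exp(-s)

rng = np.random.default_rng(0)
x1 = rng.normal(size=4).astype(np.float32)
x2 = rng.normal(size=4).astype(np.float32)
t = np.float32(3.0)

errors = {}
for s in (np.float32(-1.0), np.float32(-15.0)):
    y1, y2 = coupling_forward(x1, x2, s, t)
    _, x2_rec = coupling_inverse(y1, y2, s, t)
    # Reconstruction error of the round trip forward -> inverse.
    # For s = -15, x2 * exp(s) is smaller than the float32 ulp at t = 3.0,
    # so it is mostly lost in the addition; the inverse then multiplies the
    # rounding error by exp(15) ~ 3.3e6, giving an O(1) error.
    errors[float(s)] = float(np.max(np.abs(x2 - x2_rec)))
    print(f"s = {float(s):6.1f}: max reconstruction error = {errors[float(s)]:.3e}")
```

For the mildly contractive setting (`s = -1`) the round trip is accurate to a few float32 ulps, while the strongly contractive setting (`s = -15`) loses the input entirely, even though the layer is invertible in exact arithmetic.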

            Organizer

AISTATS 2021
Account · 63 followers

            Categories

AI and Data Science
Category · 10.8k presentations

            About AISTATS 2021

            The 24th International Conference on Artificial Intelligence and Statistics was held virtually from Tuesday, 13 April 2021 to Thursday, 15 April 2021.

            Recommended Videos

            Presentations on similar topic, category or speaker

Sample Elicitation · 03:16
Jiaheng Wei, …
AISTATS 2021 · 4 years ago

Stability and Risk Bounds of Iterative Hard Thresholding · 03:08
Xiao-Tong Yuan, …
AISTATS 2021 · 4 years ago

Fast and Smooth Interpolation on Wasserstein Space · 02:56
Sinho Chewi, …
AISTATS 2021 · 4 years ago

Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors · 02:58
Nikhil Mehta, …
AISTATS 2021 · 4 years ago

Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations · 02:59
Simone Rossi, …
AISTATS 2021 · 4 years ago

Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances · 03:09
Aravind Reddy, …
AISTATS 2021 · 4 years ago

Interested in talks like this? Follow AISTATS 2021