Hyperparameter Optimization Is Deceiving Us, and How to Stop It

Dec 6, 2021

Speakers

A. Feder Cooper

Speaker · 0 followers

Yucheng Lu

Speaker · 0 followers

Jessica Zosa Forde

Speaker · 0 followers

About

Recent empirical work shows that inconsistent results, based on choice of hyperparameter optimization (HPO) configuration, are a widespread problem in ML research. When comparing two algorithms J and K, searching one subspace can yield the conclusion that J outperforms K, whereas searching another can entail the opposite. In short, the way we choose hyperparameters can deceive us. We provide a theoretical complement to this prior work, arguing that, to avoid such deception, the process of drawin…
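To make the phenomenon concrete, here is a minimal sketch (not taken from the paper: the two validation-score curves for hypothetical algorithms J and K, and the helper functions val_score_J, val_score_K and best_over, are invented for illustration). It shows how a grid search over one learning-rate subspace concludes J is better, while the same search over a different subspace concludes the opposite.

    import numpy as np

    # Hypothetical, deterministic validation-score curves for two algorithms J and K,
    # each a function of a single hyperparameter (learning rate). The curves are made
    # up purely for illustration; only the mechanism matters.
    def val_score_J(lr):
        return 0.90 - 0.02 * (np.log10(lr) + 3) ** 2   # J peaks near lr = 1e-3

    def val_score_K(lr):
        return 0.89 - 0.02 * (np.log10(lr) + 1) ** 2   # K peaks near lr = 1e-1

    def best_over(subspace, score_fn):
        # "HPO" here is just a grid search over the given subspace.
        return max(score_fn(lr) for lr in subspace)

    subspace_A = np.logspace(-4, -2, 5)   # covers J's sweet spot
    subspace_B = np.logspace(-2, 0, 5)    # covers K's sweet spot

    for name, sub in [("A", subspace_A), ("B", subspace_B)]:
        j, k = best_over(sub, val_score_J), best_over(sub, val_score_K)
        print(f"subspace {name}: best(J)={j:.3f}  best(K)={k:.3f}  ->  "
              f"{'J' if j > k else 'K'} looks better")

Run as-is, subspace A suggests J outperforms K, while subspace B suggests the reverse: the same pair of algorithms, two contradictory conclusions, depending only on which hyperparameter subspace was searched.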

Organizer

NeurIPS 2021

Account · 1.9k followers

About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

Like this format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming – worldwide.

Recommended videos

Presentations with a similar topic, category or speaker

The Art of Gaussian Processes: Multioutput GPs
17:29
César Lincoln C. Mattos, …
NeurIPS 2021 · 3 years ago

On the Out-of-distribution Generalization of Probabilistic Image Modelling
10:06
Mingtian Zhang, …
NeurIPS 2021 · 3 years ago

Panel Discussion 3
30:05
Jon Kleinberg, …
NeurIPS 2021 · 3 years ago

Implicitly Regularized RL with Implicit Q-values
04:23
Nino Vieillard, …
NeurIPS 2021 · 3 years ago

Identifiability in Inverse Reinforcement Learning
15:07
Haoyang Cao, …
NeurIPS 2021 · 3 years ago

Online Learning in Periodic Zero-Sum Games
12:44
Tanner Fiez, …
NeurIPS 2021 · 3 years ago

Interested in talks like this? Follow NeurIPS 2021