            DICE: Diversity In Deep Ensembles Via Conditional Redundancy Adversarial Estimation

            May 3, 2021

            Speakers

            Alexandre Ramé

            Speaker · 0 followers

            Matthieu Cord

            Speaker · 0 followers

            About

            Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional…
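
            The abstract above describes the general deep-ensembles setup and the trade-off between member diversity and individual accuracy. As a rough, hypothetical illustration of that setup (not the DICE conditional-redundancy objective, which the truncated abstract does not fully specify), the PyTorch sketch below trains a few independent members, averages their predictive distributions at test time, and adds a toy prediction-level diversity penalty of the kind the abstract says recent approaches use; the member architecture, the penalty, and the 0.1 weight are all illustrative assumptions.

# Minimal, generic deep-ensemble sketch (hypothetical toy setup, not the DICE method).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_member(in_dim: int = 32, n_classes: int = 10) -> nn.Module:
    # Small MLP standing in for each ensemble member (architecture is arbitrary).
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

def ensemble_predict(members, x):
    # Deep-ensemble inference: average the members' predictive distributions.
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in members], dim=0)
    return probs.mean(dim=0)

def diversity_penalty(logits_a, logits_b):
    # A simple prediction-level diversity proxy: pairwise agreement of the two
    # members' predictive distributions (higher agreement -> larger penalty).
    # DICE instead penalizes an adversarially estimated conditional redundancy.
    pa, pb = F.softmax(logits_a, dim=-1), F.softmax(logits_b, dim=-1)
    return (pa * pb).sum(dim=-1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(128, 32)               # toy inputs
    y = torch.randint(0, 10, (128,))       # toy labels
    members = [make_member() for _ in range(3)]
    opt = torch.optim.Adam([p for m in members for p in m.parameters()], lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        logits = [m(x) for m in members]
        # Individual accuracy term: each member fits the labels on its own.
        loss = sum(F.cross_entropy(l, y) for l in logits)
        # Diversity term: small weight, since too much diversity pressure
        # degrades individual members (the trade-off discussed in the abstract).
        loss = loss + 0.1 * sum(
            diversity_penalty(logits[i], logits[j])
            for i in range(len(logits)) for j in range(i + 1, len(logits))
        )
        loss.backward()
        opt.step()
    print(ensemble_predict(members, x[:4]))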

            Organizer

            ICLR 2021

            Account · 909 followers

            About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.


            Recommended Videos

            Presentations on similar topic, category or speaker

            On Position Embeddings in BERT · 06:28 · Benyou Wang, … · ICLR 2021

            A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning · 04:28 · Samuel Horváth, … · ICLR 2021

            Combining Label Propagation and Simple Models Out-Performs Graph Neural Networks · 05:15 · Qian Huang, … · ICLR 2021

            Efficient Click-Through Rate Prediction for Developing Countries via Tabular Learning · 10:36 · Joonyoung Yi, … · ICLR 2021

            Boosting Classification Accuracy of Fertile Sperm Cell Images leveraging cDCGAN · 04:09 · Dipam Paul, … · ICLR 2021

            Persistent Reinforcement Learning via Subgoal Curricula · 10:16 · Archit Sharma, … · ICLR 2021
