            Learning the Pareto Front with Hypernetworks

            May 3, 2021

Speakers

Aviv Navon
Aviv Shamsian
Ethan Fetaya

            About

Multi-objective optimization (MOO) problems are prevalent in machine learning. These problems have a set of optimal solutions, called the Pareto front, where each point on the front represents a different trade-off between possibly conflicting objectives. Recent MOO methods can target a specific desired ray in loss space; however, most approaches still face two grave limitations: (i) a separate model has to be trained for each point on the front; and (ii) the exact trade-off must be known prior…
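The setup the abstract describes can be sketched on a toy problem: a single hypernetwork maps a preference ray r = (r1, r2) to the parameters of a target model, and is trained on the linearly scalarized loss r1·L1 + r2·L2 over randomly sampled preferences. The two quadratic objectives, the linear hypernetwork, and the hand-written gradient step below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy objectives whose Pareto front is the segment w in [-1, 1].
def loss1(w): return (w - 1.0) ** 2
def loss2(w): return (w + 1.0) ** 2

# Hypothetical linear "hypernetwork": maps a preference ray r = (r1, r2),
# r1 + r2 = 1, to the single parameter w of the target model.
theta = np.zeros(2)
lr = 0.1

for _ in range(2000):
    r = rng.dirichlet(np.ones(2))   # sample a trade-off direction
    w = theta @ r                   # hypernetwork output: target weights
    # Gradient of the scalarized loss r1*loss1 + r2*loss2 w.r.t. w ...
    dL_dw = 2 * r[0] * (w - 1.0) + 2 * r[1] * (w + 1.0)
    theta -= lr * dL_dw * r         # ... chained through dw/dtheta = r

# The scalarized optimum here is w* = r1 - r2, so theta approaches (1, -1)
# and one trained hypernetwork covers the whole front.
w_balanced = theta @ np.array([0.5, 0.5])  # trade-off at the midpoint
```

At inference any preference ray can be served from the same trained hypernetwork, which addresses limitation (i): no separate model is trained per point on the front.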

Organizer

ICLR 2021

Categories

AI & Data Science

            About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.


Recommended Videos

Presentations on similar topic, category or speaker:

On the Critical Role of Conventions in Adaptive Human-AI Collaboration (05:10) · Andy Shih, … · ICLR 2021
Visualizing the PHATE of Neural Networks (28:24) · Gal Mishne, … · ICLR 2021
Deep learning model compression using neural network design space exploration (25:59) · Ehsan Saboori · ICLR 2021
A Temporal Kernel Approach for Deep Learning with Continuous-time Information (04:20) · Da Xu, … · ICLR 2021
What's wrong with SotA in Conversational AI: Data, Models and Metrics (54:53) · Verena Rieser, … · ICLR 2021
Representation learning in the brain (30:26) · Yael Niv · ICLR 2021
