
A Sample Complexity Separation between Non-Convex and Convex Meta-Learning

Jul 12, 2020

Speakers

Nikunj Saunshi · Speaker
Yi Zhang · Speaker
Misha Khodak · Speaker

About

One popular trend in meta-learning is to learn from many training tasks a common initialization for a gradient-based method that can be used to solve a new task with few samples. The theory of meta-learning is still in its early stages, with several recent learning-theoretic analyses of methods such as Reptile applying only to convex models. This work shows that convex-case analysis might be insufficient to understand the success of meta-learning, and that even for non-convex models it is…
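To make the setup concrete, here is a minimal Reptile-style sketch of meta-learning an initialization (illustrative only, not the authors' code): each task is a hypothetical few-sample linear-regression problem, and the shared initialization is nudged toward the weights obtained after a few gradient steps on that task. Dimensions, sample counts, and step sizes are arbitrary choices for the sketch.

```python
# Minimal Reptile-style sketch: meta-learn a shared initialization w.
# All task and hyperparameter choices below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dim, inner_steps, inner_lr, meta_lr = 5, 10, 0.1, 0.5

def sample_task():
    """Draw a toy linear-regression task with only a few samples."""
    w_true = rng.normal(size=dim)
    X = rng.normal(size=(4, dim))          # 4 samples per task ("few-shot")
    y = X @ w_true
    return X, y

def adapt(w, X, y):
    """Run a few gradient-descent steps on the task loss, starting from w."""
    for _ in range(inner_steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean squared error
        w = w - inner_lr * grad
    return w

w = np.zeros(dim)                          # the meta-learned initialization
for _ in range(1000):                      # meta-training over many tasks
    X, y = sample_task()
    w_task = adapt(w, X, y)
    w = w + meta_lr * (w_task - w)         # Reptile update: move toward adapted weights
```

A new task can then be solved by running the same few adaptation steps starting from the learned w rather than from a random initialization.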

Organizer

ICML 2020

Categories

AI & Data Science · 10.8k presentations

About ICML 2020

The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


Recommended Videos

Presentations on similar topic, category or speaker:

• AR-DAE: Towards Unbiased Neural Entropy Gradient Estimation · 13:36 · Jae Hyun Lim, … · ICML 2020
• Profiling immunoglobulin repertoires across multiple human tissues using RNA-Seq · 05:11 · Serghei Mangul · ICML 2020
• Interference and Generalization in Temporal Difference Learning · 10:58 · Emmanuel Bengio, … · ICML 2020
• SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks · 06:44 · Fabian Fuchs, … · ICML 2020
• Error Estimation for Sketched SVD via the Bootstrap · 15:28 · Miles E. Lopes, … · ICML 2020
• Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation · 15:45 · Marc Abeille, … · ICML 2020