            Meta-SAGE: Scale Meta-Learning Scheduled Adaptation with Guided Exploration for Mitigating Scale Shift on Combinatorial Optimization

            Jul 24, 2023

            Speakers

Jiwoo Son

Minsu Kim

Hyeonah Kim

            About

This paper proposes Meta-SAGE, a novel approach for improving the scalability of deep reinforcement learning models for combinatorial optimization (CO) tasks. Our method adapts pre-trained models to larger-scale problems at test time by suggesting two components: a scale meta-learner (SML) and scheduled adaptation with guided exploration (SAGE). First, SML transforms the context embedding for subsequent adaptation of SAGE based on scale information. Then, SAGE adjusts the model parameters dedica…
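
The description above implies a two-stage, test-time procedure: the scale meta-learner first conditions the pre-trained context embedding on the target instance size, and SAGE then refines only that embedding on the test instance while the pre-trained policy stays frozen. The sketch below (PyTorch) illustrates that structure under stated assumptions; ScaleMetaLearner, sage_adapt, rollout_fn, the scale feature, and the linearly annealed entropy bonus are illustrative stand-ins, not the authors' implementation.

import torch
import torch.nn as nn


class ScaleMetaLearner(nn.Module):
    """Assumed interface: maps a pre-trained context embedding plus the target
    problem scale to an embedding better suited to that scale."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim + 1, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, emb_dim),
        )

    def forward(self, context_emb: torch.Tensor, scale: int) -> torch.Tensor:
        # Append the (roughly normalized) scale as an extra feature per embedding row.
        s = torch.full((context_emb.size(0), 1), float(scale) / 1000.0,
                       device=context_emb.device)
        return self.net(torch.cat([context_emb, s], dim=-1))


def sage_adapt(context_emb: torch.Tensor, rollout_fn, steps: int = 50, lr: float = 1e-3):
    """Test-time refinement of the context embedding only; the pre-trained policy
    weights stay frozen inside rollout_fn.

    rollout_fn(emb) is assumed to sample solutions with the frozen policy and
    return per-sample (log_prob, reward) tensors. The linearly annealed entropy
    surrogate below is a placeholder for the paper's scheduled guided exploration.
    """
    emb = context_emb.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([emb], lr=lr)
    for t in range(steps):
        log_prob, reward = rollout_fn(emb)
        advantage = reward - reward.mean()      # simple REINFORCE baseline
        explore_w = 1.0 - t / steps             # anneal exploration toward exploitation
        entropy_est = -log_prob.mean()          # crude Monte Carlo entropy surrogate
        loss = -(advantage * log_prob).mean() - explore_w * entropy_est
        opt.zero_grad()
        loss.backward()
        opt.step()
    return emb.detach()


# Minimal usage with a dummy rollout that stands in for the frozen CO policy:
if __name__ == "__main__":
    emb_dim, n_nodes = 128, 200
    sml = ScaleMetaLearner(emb_dim)
    pretrained_emb = torch.randn(n_nodes, emb_dim)   # stand-in for the pre-trained embedding
    emb0 = sml(pretrained_emb, scale=n_nodes)

    def dummy_rollout(emb):
        logits = emb @ emb.mean(dim=0)               # toy per-node scores
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample((16,))                  # 16 sampled "solutions"
        return dist.log_prob(action), -logits[action].detach()

    adapted = sage_adapt(emb0, dummy_rollout, steps=10)

In this sketch the only trainable object at test time is the embedding tensor, which mirrors the idea of adapting a small, scale-aware part of the model instead of fine-tuning the whole pre-trained policy.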

            Organizer

ICML 2023


            Recommended Videos

Presentations on a similar topic, category, or speaker

Motion Question Answering via Modular Motion Programs
05:17
Mark Endo, …
ICML 2023 · 2 years ago

Accuracy on the Curve: On the nonlinear correlation of ML performance between data subpopulations
05:12
Weixin Liang, …
ICML 2023 · 2 years ago

When Personalization Harms Performance: Reconsidering the Use of Group Attributes in Prediction
09:04
Vinith M. Suriyakumar, …
ICML 2023 · 2 years ago

Banker Online Mirror Descent: A Universal Approach for Delayed Online Bandit Learning
05:42
Jiatai Huang, …
ICML 2023 · 2 years ago

Randomized Gaussian Process Upper Confidence Bound with Tight Bayesian Regret Bounds
05:05
Shion Takeno, …
ICML 2023 · 2 years ago

Continual Learners are Incremental Model Generalizers
04:51
Jaehong Yoon, …
ICML 2023 · 2 years ago
