
            Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds

            Mar 9, 2021

            Speakers

Ehsan Emamjomeh-Zadeh · Speaker · 0 followers

Chen-Yu Wei · Speaker · 0 followers

Haipeng Luo · Speaker · 1 follower

            About

            We revisit the problem of online learning with sleeping experts/bandits: in each time step, only a subset of the actions are available for the algorithm to choose from (and learn about). The work of Kleinberg et al. (2010) showed that there exist no-regret algorithms which perform no worse than the best ranking of actions asymptotically. Unfortunately, achieving this regret bound appears computationally hard: Kanade and Steinke (2014) showed that achieving this no-regret performance is at least…
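The abstract describes the sleeping-experts setting but does not spell out an algorithm. As an illustration only, here is a minimal multiplicative-weights sketch restricted to the awake actions each round; the function name, the learning rate `eta`, the 0.7 wake probability, and the random losses are all our own assumptions for the demo, not details from the talk:

```python
import math
import random

def sleeping_mw(T=1000, K=5, eta=0.5, seed=0):
    """Toy sleeping-experts loop: each round only a random subset of the
    K actions is 'awake'; play the exponential-weights distribution
    renormalized over the awake set, then update awake weights."""
    rng = random.Random(seed)
    weights = [1.0] * K
    total_loss = 0.0
    per_action_loss = [0.0] * K  # cumulative loss of each action while awake
    for _ in range(T):
        # Random availability; ensure at least one action is awake.
        awake = [i for i in range(K) if rng.random() < 0.7] or [rng.randrange(K)]
        z = sum(weights[i] for i in awake)
        probs = {i: weights[i] / z for i in awake}
        # Stand-in losses in [0, 1]; an adversary would choose these.
        losses = {i: rng.random() for i in awake}
        total_loss += sum(probs[i] * losses[i] for i in awake)
        for i in awake:
            weights[i] *= math.exp(-eta * losses[i])
            per_action_loss[i] += losses[i]
    return total_loss, per_action_loss
```

This sketch only targets the best awake action in hindsight per round; the ranking benchmark discussed in the abstract (comparing to the best ordering of actions) is a strictly harder objective, which is the point of the computational-hardness discussion above.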

            Organizer

ALT 2021 · Account · 1 follower

            Categories

Mathematics · Category · 2.4k presentations

AI & Data Science · Category · 10.8k presentations

            About ALT 2021

            The 32nd International Conference on Algorithmic Learning Theory


            Recommended Videos

Presentations on a similar topic, category, or speaker:

  • Non-uniform Consistency of Online Learning with Random Sampling (12:01) · Changlong Wu, … · ALT 2021
  • On the Sample Complexity of Privately Learning Unbounded Gaussians (12:16) · Hassan Ashtiani, … · ALT 2021
  • Stochastic Top-K Subset Bandits with Linear Space and Non-Linear Feedback (10:28) · Mridul Agarwal, … · ALT 2021
  • A case where a spindly two-layer linear network decisively outperforms any neural network with a fully connected input layer (11:52) · Manfred K. Warmuth, … · ALT 2021
  • Intervention Efficient Algorithms for Approximate Learning of Causal Graphs (12:24) · Raghavendra Addanki, … · ALT 2021
  • Learning With Comparison Feedback: Online Estimation of Sample Statistics (11:41) · Michela Meister, … · ALT 2021
