            An Efficient Algorithm For Generalized Linear Bandit: Online Stochastic Gradient Descent and Thompson Sampling

            Apr 14, 2021

            Speakers

            Qin Ding

            Cho-Jui Hsieh

            James Sharpnack

            About

            We consider the contextual bandit problem, where a player sequentially makes decisions based on past observations to maximize the cumulative reward. Although many algorithms have been proposed for the contextual bandit problem, most of them rely on finding the maximum likelihood estimator at each iteration, which requires $O(t)$ time at the $t$-th iteration and is memory inefficient. A natural way to resolve this problem is to apply online stochastic gradient descent (SGD) so that the per-step time and mem…
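
            The abstract's core idea, replacing a full maximum-likelihood refit each round with a single online SGD step and exploring via Thompson sampling around the SGD iterate, can be illustrated with a minimal sketch. Everything below is an assumption for illustration (a logistic link with Bernoulli rewards, the step-size schedule, the exploration scale, and all variable names), not the authors' exact algorithm.

```python
import numpy as np

# Illustrative sketch only: a logistic (generalized linear) bandit where the
# parameter estimate is updated by one online SGD step per round instead of a
# full MLE refit, and the arm is chosen by Thompson sampling around the iterate.
# The link function, step size, and exploration scale are assumptions.

rng = np.random.default_rng(0)
d, K, T = 5, 10, 2000                          # feature dim, arms per round, horizon
theta_star = rng.normal(size=d) / np.sqrt(d)   # unknown true parameter

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

theta_hat = np.zeros(d)   # SGD iterate: O(d) memory, independent of t
eta = 0.5                 # base SGD step size (assumed)
nu = 0.1                  # Thompson-sampling exploration scale (assumed)

cum_reward = 0.0
for t in range(1, T + 1):
    X = rng.normal(size=(K, d)) / np.sqrt(d)   # contexts observed this round

    # Thompson sampling: perturb the current estimate, then act greedily on it.
    theta_tilde = theta_hat + nu * rng.normal(size=d)
    a = int(np.argmax(X @ theta_tilde))

    # Observe a Bernoulli reward through the logistic link.
    p = sigmoid(X[a] @ theta_star)
    r = float(rng.random() < p)
    cum_reward += r

    # One online SGD step on the logistic loss of the pulled arm:
    # O(d) time and memory per round, versus O(t) for refitting the MLE.
    grad = (sigmoid(X[a] @ theta_hat) - r) * X[a]
    theta_hat -= (eta / np.sqrt(t)) * grad

print(f"cumulative reward over {T} rounds: {cum_reward:.0f}")
```

            The point of the sketch is the per-round cost: each update touches only the current context and the $d$-dimensional iterate, so time and memory per step stay constant in $t$.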

            Organizer

            AISTATS 2021

            Categories

            AI & Data Science

            About AISTATS 2021

            The 24th International Conference on Artificial Intelligence and Statistics was held virtually from Tuesday, 13 April 2021 to Thursday, 15 April 2021.

            Recommended Videos

            Presentations on similar topic, category or speaker

            Latent variable modeling with random features (03:15)
            Gregory Gundersen, … · AISTATS 2021

            Couplings for Multinomial Hamiltonian Monte Carlo (02:55)
            Kai Xu, … · AISTATS 2021

            SDF-Bayes: Cautious Optimism in Safe Dose-Finding Clinical Trials with Drug Combinations and Heterogeneous Patient Groups (03:05)
            Hyun-Suk Lee, … · AISTATS 2021

            Faster Kernel Interpolation for Gaussian Processes (03:03)
            Mohit Yadav, … · AISTATS 2021

            On Riemannian Stochastic Approximation Schemes with Fixed Step-Size (03:30)
            Alain Durmus, … · AISTATS 2021

            Differentiable Divergences Between Time Series (03:09)
            Mathieu Blondel, … · AISTATS 2021
