
Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization

Nov 28, 2022

Speakers

Benjamin Dubois-Taine
Speaker · 0 followers

Francis Bach
Speaker · 3 followers

Quentin Berthet
Speaker · 0 followers

About

We consider the problem of minimizing the sum of two convex functions. One of those functions has Lipschitz-continuous gradients, and can be accessed via stochastic oracles, whereas the other is “simple”. We provide a Bregman-type algorithm with accelerated convergence in function values to a ball containing the minimum. The radius of this ball depends on problem-dependent constants, including the variance of the stochastic oracle. We further show that this algorithmic setup naturally leads to a…
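
The paper's Bregman-type method and its parallelized Frank-Wolfe variant are not given on this page, so the sketch below only illustrates the problem setup described in the abstract: minimize F(x) = f(x) + h(x), where f is smooth with Lipschitz-continuous gradients and is accessed only through a stochastic (minibatch) oracle, while h is "simple" in the sense that its proximal operator is cheap. The code uses a generic accelerated proximal-gradient (FISTA-style) update with stochastic gradients as a stand-in; the least-squares loss, l1 penalty, step size, and batch size are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of the composite setup, NOT the paper's Bregman-type method:
#   minimize F(x) = f(x) + h(x),
# where f is smooth and only stochastic gradients of f are available,
# and h is "simple" (its proximal operator is cheap).
# Assumptions for illustration: f is a least-squares loss accessed through
# minibatch gradients, h is an l1 penalty, and the update is a standard
# accelerated proximal-gradient (FISTA-style) step.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 50
A = rng.standard_normal((n, d))
x_true = np.where(rng.random(d) < 0.1, rng.standard_normal(d), 0.0)
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.1                            # l1 weight (illustrative)
L = np.linalg.norm(A, 2) ** 2 / n    # Lipschitz constant of grad f
step = 1.0 / L

def stochastic_grad(x, batch=32):
    """Unbiased minibatch estimate of grad f(x), with f(x) = (1/2n)||Ax - b||^2."""
    idx = rng.integers(0, n, size=batch)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

def prox_l1(v, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x = np.zeros(d)
y = x.copy()
t_prev = 1.0
for _ in range(500):
    x_new = prox_l1(y - step * stochastic_grad(y), step)    # proximal gradient step
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0  # momentum schedule
    y = x_new + (t_prev - 1.0) / t_new * (x_new - x)        # extrapolation
    x, t_prev = x_new, t_new

print("nonzeros in estimate:", np.count_nonzero(np.abs(x) > 1e-3))
```

Consistent with the abstract, such stochastic schemes do not converge to the exact minimizer but to a neighborhood of it; the radius of that ball depends on problem constants such as the variance of the stochastic oracle, which is why the iterates above stop improving once gradient noise dominates.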

Organizer


NeurIPS 2022
Account · 960 followers


Recommended Videos

Presentations similar in topic, category, or speaker

Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization
05:22
Minsu Kim, …
NeurIPS 2022 · 2 years ago

Infinite Recommendation Networks: A Data-Centric Approach
04:32
Noveen Sachdeva, …
NeurIPS 2022 · 2 years ago

WT-MVSNet: Window-based Transformers for Multi-view Stereo
04:41
Jinli Liao, …
NeurIPS 2022 · 2 years ago

Learning General Visual Representations
36:01
Lucas Beyer
NeurIPS 2022 · 2 years ago

Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers
05:04
Ran Liu, …
NeurIPS 2022 · 2 years ago

Gradient Knowledge Distillation for Pre-trained Language Models
06:33
Lean Wang, …
NeurIPS 2022 · 2 years ago

Interested in talks like this? Follow NeurIPS 2022