            Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

            Jul 24, 2023

            Speakers

            Mohammed Nowaz Rabbani Chowdhury

            Speaker · 0 followers

            Shuai Zhang

            Speaker · 1 follower

            Meng Wang

            Speaker · 0 followers

            About

            In deep learning, mixture-of-experts (MoE) activates one or a few experts (sub-networks) on a per-sample or per-token basis, resulting in a significant reduction in computation. The recently proposed patch-level routing in MoE (pMoE) divides each input into n patches (or tokens) and sends l patches (l ≪ n) to each expert through prioritized routing. pMoE has demonstrated great empirical success in reducing training and inference costs while maintaining test accuracy. However, the theoretical explanation…
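The routing mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function and parameter names (`pmoe_route`, `gate_w`) are illustrative, and the gating here is a plain linear score followed by a per-expert top-l selection, which is one simple way to realize "each expert receives its l highest-priority patches".

```python
import numpy as np

def pmoe_route(patches, gate_w, l):
    """Sketch of patch-level prioritized routing.

    patches: (n, d) array of patch embeddings
    gate_w:  (d, k) gating weights, one column per expert
    l:       number of patches sent to each expert, l << n
    Returns a dict mapping expert index -> indices of its top-l patches.
    """
    scores = patches @ gate_w                     # (n, k) patch-expert affinities
    routing = {}
    for e in range(gate_w.shape[1]):
        # Prioritized routing: expert e keeps only its l highest-scoring patches.
        top = np.argsort(scores[:, e])[::-1][:l]
        routing[e] = np.sort(top)
    return routing
```

Since each expert then processes only l of the n patches, the per-expert compute shrinks by roughly a factor of l/n, which is the cost reduction the abstract refers to.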

            Organizer

            ICML 2023

            Account · 657 followers


            Recommended videos

            Presentations with a similar topic, category, or speaker

            Graph Mixup with Soft Alignments · 05:19

            Hongyi Ling, …

            ICML 2023 · 2 years ago

            Settling the Reward Hypothesis · 05:07

            Michael Bowling, …

            ICML 2023 · 2 years ago

            Explaining Reinforcement Learning with Shapley Values · 04:15

            Daniel Beechey, …

            ICML 2023 · 2 years ago

            When is Realizability Sufficient for Off-Policy Reinforcement Learning? · 04:36

            Andrea Zanette

            ICML 2023 · 2 years ago

            Towards Reliable Neural Specifications · 07:13

            Chuqin Geng, …

            ICML 2023 · 2 years ago

            FAENet: Frame Averaging Equivariant GNNs for Materials Modeling · 05:06

            Alexandre Duval, …

            ICML 2023 · 2 years ago
