
            Representation Costs of Linear Neural Networks: Analysis and Design

            Dec 6, 2021

            Speakers

Zhen Dai

Mina Karzand

Nathan Srebro

            About

            For different parameterizations (mappings from parameters to predictors), we study the regularization cost in predictor space induced by l_2 regularization on the parameters (weights). We focus on linear neural networks as parameterizations of linear predictors. We identify the representation cost of certain sparse linear ConvNets and residual networks. In order to get a better understanding of how the architecture and parameterization affect the representation cost, we also study the reverse pr…
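As a minimal illustration of the idea in the abstract (a sketch, not the paper's construction): for a depth-2 "diagonal" linear network f(x) = (u * v) . x, minimizing the squared l_2 norm of the weights over all factorizations u * v = beta gives 2‖beta‖₁, by per-coordinate AM-GM. So l_2 regularization on the parameters induces an l_1 penalty on the linear predictor. The function names below are illustrative, not from the talk.

```python
import numpy as np

# Hypothetical sketch: representation cost of a depth-2 diagonal linear
# network f(x) = (u * v) . x under squared l2 regularization on (u, v):
#   min ||u||^2 + ||v||^2  s.t.  u * v = beta   equals   2 * ||beta||_1.

def representation_cost_diag(beta):
    """Closed-form representation cost for the diagonal depth-2 network."""
    return 2.0 * np.abs(beta).sum()

def balanced_factorization(beta):
    """An optimal factorization: |u_i| = |v_i| = sqrt(|beta_i|)."""
    u = np.sqrt(np.abs(beta))
    v = np.sign(beta) * u
    return u, v

beta = np.array([3.0, -1.0, 0.0, 0.25])
u, v = balanced_factorization(beta)
assert np.allclose(u * v, beta)          # (u, v) really parameterizes beta
cost = np.sum(u**2) + np.sum(v**2)       # achieves the minimum, 2*||beta||_1
```

The balanced factorization is optimal because, coordinate-wise, u_i² + v_i² ≥ 2|u_i v_i| = 2|beta_i|, with equality when |u_i| = |v_i|.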

            Organizer


            NeurIPS 2021


            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


            Recommended Videos

Presentations on a similar topic, category, or speaker

Online Lazy Subgradient is Universal on Strongly Convex Domains (14:59)

Daron Andreson, … · NeurIPS 2021

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification (15:55)

Shiqi Wang, … · NeurIPS 2021

Undivided Attention: Are Intermediate Layers Necessary for BERT? (04:21)

Sharath Nittur Sridhar, … · NeurIPS 2021

Appendix: Proofs and Derivations (19:22)

Wee Sun Lee · NeurIPS 2021

Mixture of Basis for Interpretable Continual Learning with Distribution Shifts (05:02)

Mengda Xu, … · NeurIPS 2021

A Regression Approach to Learning-Augmented Online Algorithms (15:05)

Keerti Anand, … · NeurIPS 2021
