Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks

Dec 6, 2021

Speakers

Pascal M. Esser
Leena C. Vankadara
Debarghya Ghoshdastidar

About

In recent years, several results in the supervised learning setting have suggested that classical statistical learning-theoretic measures, such as VC dimension, do not adequately explain the performance of deep learning models, which prompted a slew of work in the infinite-width and iteration regimes. However, there is little theoretical explanation for the success of neural networks beyond the supervised setting. In this paper we argue that, under some distributional assumptions, classical learning-t…

Organizer

NeurIPS 2021

About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


Recommended Videos

Presentations on a similar topic, category, or speaker

Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation
11:20 · Jungbeom Lee, … · NeurIPS 2021

Combating Noise: Semi-supervised Learning by Region Uncertainty Quantification
07:12 · Zhenyu Wang, … · NeurIPS 2021

Exploring Conceptual Soundness with TruLens
15:54 · Anupam Datta, … · NeurIPS 2021

SynthBio: A Case Study in Human-AI Collaborative Curation of Text Datasets
05:46 · Ann Yuan, … · NeurIPS 2021

Neural NID Rules
02:41 · Luca Viano, … · NeurIPS 2021

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
13:58 · Hanxuan Huang, … · NeurIPS 2021
