            Adversarial Graph Augmentation to Improve Graph Contrastive Learning

            Dec 6, 2021

Speakers

Susheel Suresh
Speaker · 0 followers

Pan Li
Speaker · 0 followers

Cong Hao
Speaker · 0 followers

            About

            Self-supervised learning of graph neural networks (GNN) is in great need because of the widespread label scarcity issue in real-world graph/network data. Graph contrastive learning (GCL), by training GNNs to maximize the correspondence between the representations of the same graph in its different augmented forms, may yield robust and transferable GNNs even without using labels. However, GNNs trained by traditional GCL often risk capturing redundant graph features and thus may be brittle and pr…
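The abstract above describes GCL as training a GNN to maximize the correspondence between representations of the same graph under different augmentations. As a rough illustration only, the sketch below shows a generic NT-Xent-style contrastive loss over two augmented views, assuming PyTorch. It is not the paper's AD-GCL adversarial-augmentation method, and the function name, shapes, and placeholder embeddings are assumptions for illustration.

# Minimal sketch of a generic graph contrastive learning (GCL) objective,
# assuming PyTorch. It is NOT the paper's AD-GCL method (which learns the
# augmentation adversarially); it only illustrates "maximize agreement
# between two augmented views of the same graph" via an NT-Xent loss.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss for a batch of paired graph-level embeddings.

    z1, z2: (batch, dim) embeddings of two augmented views of the same graphs,
    e.g. produced by a GNN encoder plus readout (placeholders here).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2B, dim)
    sim = z @ z.t() / temperature               # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))           # never treat a view as its own positive
    n = z1.size(0)
    # The positive for row i (view 1) is row i + n (view 2), and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors stand in for the encoder applied to two augmentations.
if __name__ == "__main__":
    z_view1 = torch.randn(8, 32)
    z_view2 = torch.randn(8, 32)
    print(f"contrastive loss: {nt_xent_loss(z_view1, z_view2).item():.4f}")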

            Organizer

NeurIPS 2021

Account · 1.9k followers

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


            Recommended Videos

Presentations on a similar topic, category, or speaker

Gradient Starvation: A Learning Proclivity in Neural Networks
10:52
Mohammad Pezeshki, …
NeurIPS 2021 · 3 years ago

Structural Assumptions for Better Generalization in Reinforcement Learning
36:03
Amy Zhang
NeurIPS 2021 · 3 years ago

Hybrid Regret Bounds for Combinatorial Semi-Bandits and Adversarial Linear Bandits
10:49
Shinji Ito
NeurIPS 2021 · 3 years ago

Collective Intelligence of Army Ants and the Robots They Inspire
1:21:28
Radhika Nagpal
NeurIPS 2021 · 3 years ago

Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
15:03
Xinyi Xu, …
NeurIPS 2021 · 3 years ago

Heavy Ball Neural Ordinary Differential Equations
04:08
Hedi Xia, …
NeurIPS 2021 · 3 years ago
