
            Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules

            Dec 10, 2023

Speakers

Zhiyuan Liu
Speaker · 0 followers

Yaorui Shi
Speaker · 0 followers

An Zhang
Speaker · 1 follower

            About

Masked graph modeling excels in the self-supervised representation learning of molecular graphs. Scrutinizing previous studies, we reveal a common scheme consisting of three key components: (1) graph tokenizer, which breaks a molecular graph into smaller fragments (subgraphs) and converts them into tokens; (2) graph masking, which corrupts the graph with masks; (3) graph autoencoder, which first applies an encoder on the masked graph to generate the representations, and then employs a decod…
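The three-component scheme described above can be illustrated with a minimal, dependency-free sketch. All names here (`tokenize`, `mask_graph`, `reconstruct`, the dict-based graph format) are hypothetical stand-ins for illustration, not the authors' actual implementation; the "decoder" step simply exposes the training targets rather than learning them.

```python
import random

# Sentinel used to corrupt atom labels during graph masking.
MASK = "[MASK]"

def tokenize(graph):
    """(1) Graph tokenizer: map each atom label to a discrete token id,
    building the vocabulary on the fly."""
    vocab = {}
    tokens = [vocab.setdefault(atom, len(vocab)) for atom in graph["atoms"]]
    return tokens, vocab

def mask_graph(graph, ratio=0.3, seed=0):
    """(2) Graph masking: replace a fraction of atom labels with MASK."""
    rng = random.Random(seed)
    atoms = list(graph["atoms"])
    masked_idx = rng.sample(range(len(atoms)), max(1, int(ratio * len(atoms))))
    for i in masked_idx:
        atoms[i] = MASK
    return {"atoms": atoms, "bonds": graph["bonds"]}, masked_idx

def reconstruct(original, masked_idx):
    """(3) Stand-in for the encoder/decoder pair: the training objective is
    to predict the original token at each masked position."""
    return {i: original["atoms"][i] for i in masked_idx}

# Toy molecular graph for ethanol (heavy-atom bonds only, for brevity).
ethanol = {"atoms": ["C", "C", "O", "H", "H", "H", "H", "H", "H"],
           "bonds": [(0, 1), (1, 2)]}

tokens, vocab = tokenize(ethanol)
corrupted, idx = mask_graph(ethanol)
targets = reconstruct(ethanol, idx)
```

In a real pipeline the reconstruction step would be a learned GNN encoder plus a decoder trained to recover `targets` from `corrupted`; the sketch only fixes the data flow between the three components.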

Organizer

NeurIPS 2023
Account · 646 followers


            Recommended Videos

            Presentations on similar topic, category or speaker

Reusing Pretrained Models by Multi-linear Operators for Efficient Training
03:01
Yu Pan, …
NeurIPS 2023 · 16 months ago

The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-training Performance
05:04
Dario Paccagnan, …
NeurIPS 2023 · 16 months ago

Exploring Loss Functions for Time-based Training Strategy in Spiking Neural Networks
05:12
Yaoyu Zhu, …
NeurIPS 2023 · 16 months ago

Is Learning in Games Good for the Learners?
05:00
William Brown, …
NeurIPS 2023 · 16 months ago

Gigastep - One Billion Steps per Second Multi-agent Reinforcement Learning
04:59
Mathias Lechner, …
NeurIPS 2023 · 16 months ago

The Cambridge Law Corpus: A Corpus for Legal AI Research
03:46
Andreas Östling, …
NeurIPS 2023 · 16 months ago
