
SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery

Nov 28, 2022

Speakers

Yezhen Cong
Speaker · 0 followers

Samar Khanna
Speaker · 0 followers

Chenlin Meng
Speaker · 0 followers

About

Unsupervised pre-training methods for large vision models have been shown to enhance performance on downstream supervised tasks. Developing similar techniques for satellite imagery presents significant opportunities, as unlabelled data is plentiful and the inherent temporal and multi-spectral structure provides avenues to further improve existing pre-training strategies. In this paper, we present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoenco…
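
The abstract describes SatMAE as a Masked Autoencoder (MAE)-style pre-training framework: a random subset of image patch tokens is hidden from the encoder and later reconstructed by a decoder. Below is a minimal, hypothetical PyTorch sketch of the generic MAE random-masking step for patch tokens; the function name, tensor shapes, and mask ratio are illustrative assumptions, not the SatMAE implementation, which per the abstract additionally exploits the temporal and multi-spectral structure of satellite imagery.

# Minimal sketch (assumed, not the authors' code): MAE-style random masking
# over patch tokens of shape (batch, num_patches, dim).
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens; return the kept tokens, a binary
    mask, and indices to restore the original patch order for the decoder."""
    batch, num_patches, dim = tokens.shape
    num_keep = int(num_patches * (1.0 - mask_ratio))

    noise = torch.rand(batch, num_patches, device=tokens.device)  # one score per patch
    ids_shuffle = torch.argsort(noise, dim=1)                     # low score -> kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    ids_keep = ids_shuffle[:, :num_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))

    # 0 = visible to the encoder, 1 = masked and reconstructed by the decoder
    mask = torch.ones(batch, num_patches, device=tokens.device)
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)
    return kept, mask, ids_restore

# Example: 196 patches per image at 75% masking leaves 49 visible tokens for the encoder.
tokens = torch.randn(8, 196, 768)
kept, mask, ids_restore = random_masking(tokens)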

Organizer

NeurIPS 2022

Account · 961 followers


Recommended videos

Presentations on a similar topic, in a similar category, or by a similar speaker

A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits
04:00
Ilija Bogunovic, …
NeurIPS 2022 · 2 years ago

Maximum Common Subgraph Guided Graph Retrieval: Late and Early Interaction Networks
04:43
Indradyumna Roy, …
NeurIPS 2022 · 2 years ago

When to Update Your Model: Constrained Model-based Reinforcement Learning
01:02
Tianying Ji, …
NeurIPS 2022 · 2 years ago

GOOD: A Graph Out-of-Distribution Benchmark
04:39
Shurui Gui, …
NeurIPS 2022 · 2 years ago

Geodesic Graph Neural Network for Efficient Graph Representation Learning
04:47
Lecheng Kong, …
NeurIPS 2022 · 2 years ago

Distributional deep Q-learning with CVaR regression
05:44
Mastane Achab, …
NeurIPS 2022 · 2 years ago

Interested in presentations like this? Follow NeurIPS 2022