            Positional Encodings for Light Curve Transformers: Playing with Positions and Attention

            Jul 28, 2023

            Speakers

Daniel Moreno-Cartagena

Guillermo Cabrera-Vives

Pavlos Protopapas

            About

We conducted empirical experiments to assess the transferability of a light curve transformer to datasets with different cadences and flux distributions using various positional encodings (PEs). We proposed a new approach that incorporates the temporal information directly into the output of the last attention layer. Our results indicated that using trainable PEs leads to significant improvements in transformer performance and training times. Our proposed PE on attention can be trained faster tha…
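The abstract describes injecting the observation times at the output of the last attention layer rather than into the input embeddings. As a rough illustration only, not the authors' implementation, a trainable time-based PE added to the attention output might look like the sketch below; the module name, the Fourier-style parameterization, and all hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed, not the paper's code): a trainable positional
# encoding of irregular observation times, added to the output of the last
# attention layer. Names (TimePEOnAttention, n_freqs) are illustrative.

class TimePEOnAttention(nn.Module):
    def __init__(self, d_model: int, n_freqs: int = 16):
        super().__init__()
        # Learned frequencies and projection make the PE trainable.
        self.freqs = nn.Parameter(torch.randn(n_freqs))
        self.proj = nn.Linear(2 * n_freqs, d_model)

    def forward(self, attn_out: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        # attn_out: (batch, seq_len, d_model), output of the last attention layer
        # times:    (batch, seq_len), observation times of each light curve
        phase = times.unsqueeze(-1) * self.freqs                  # (B, L, n_freqs)
        pe = self.proj(torch.cat([phase.sin(), phase.cos()], dim=-1))
        return attn_out + pe                                      # time injected here


# Toy usage: 8 light curves, 50 irregularly spaced observations each.
attn_out = torch.randn(8, 50, 64)
times = torch.sort(torch.rand(8, 50) * 100.0, dim=1).values
out = TimePEOnAttention(d_model=64)(attn_out, times)
print(out.shape)  # torch.Size([8, 50, 64])
```

Because the encoding is applied after attention, the attention weights themselves are computed order-free, and the temporal structure only reshapes the final representations; under this (assumed) formulation the PE parameters can be retrained cheaply when moving to a dataset with a different cadence.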

            Organizer


            ICML 2023



            Recommended Videos

            Presentations on similar topic, category or speaker

DMLR: Journal of Data-centric Machine Learning Research · 06:06
Ce Zhang · ICML 2023

Towards Stable and Efficient Adversarial Training against l_1 Bounded Adversarial Attacks · 05:10
Yulun Jiang, … · ICML 2023

Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits · 05:28
Ronshee Chawla, … · ICML 2023

Complementary Attention for Multi-Agent Reinforcement Learning · 05:10
Jianzhun Shao, … · ICML 2023

Provably and Practically Efficient Neural Contextual Bandits · 05:22
Sudeep Salgia, … · ICML 2023

Great Models Think Alike: Improving Model Reliability via Inter-Model Latent Agreement · 04:26
Ailin Deng, … · ICML 2023
