
            Exploring Loss Functions for Time-based Training Strategy in Spiking Neural Networks

Dec 10, 2023

Speakers

Yaoyu Zhu

Speaker · 0 followers

Wei Fang

Speaker · 0 followers

Xiaodong Xie

Speaker · 0 followers

About

Spiking Neural Networks (SNNs) are considered promising brain-inspired energy-efficient models due to their event-driven computing paradigm. The spatiotemporal spike patterns used to convey information in SNNs consist of both rate coding and temporal coding, where the temporal coding is crucial to biologically plausible learning rules such as spike-timing-dependent plasticity. The time-based training strategy is proposed to better utilize the temporal information in SNNs and learn in an asynchrono…
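The abstract contrasts rate coding with temporal coding and motivates a time-based training strategy, where the loss is defined on spike *times* rather than spike counts. As an illustrative sketch only (not the authors' method), one common temporal loss is a cross-entropy over a time-to-first-spike readout, where the output neuron that fires earliest should correspond to the target class; the function name `ttfs_cross_entropy` and the `tau` time-scaling are assumptions made for this example.

```python
import numpy as np

def ttfs_cross_entropy(first_spike_times, label, tau=1.0):
    """Cross-entropy loss over a time-to-first-spike (TTFS) readout.

    Earlier spikes map to larger logits via -t / tau, so minimizing the
    loss pushes the labeled neuron to fire before the others.
    """
    logits = -np.asarray(first_spike_times, dtype=float) / tau
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum() # softmax over neurons
    return -np.log(probs[label])

# Hypothetical first-spike times (ms) of four output neurons:
times = [5.0, 3.0, 1.0, 4.0]
# Neuron 2 fires earliest, so labeling it as correct yields a small loss,
# while labeling the latest-firing neuron 0 yields a large one.
loss_correct = ttfs_cross_entropy(times, label=2)
loss_wrong = ttfs_cross_entropy(times, label=0)
```

Gradients of such a loss flow through the spike times themselves, which is what lets time-based training operate on individual spike events instead of averaged firing rates.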

Organizer


            NeurIPS 2023

Account · 648 followers


Recommended Videos

Presentations similar in topic, category, or speaker

AI-for-climate: A call for impact-guided innovation
33:54
David Rolnick
NeurIPS 2023

Norm-guided latent space exploration for text-to-image generation
04:27
Dvir Samuel, …
NeurIPS 2023

What Truly Matters in Trajectory Prediction for Autonomous Driving?
05:22
Phong Tran, …
NeurIPS 2023

An (unhelpful) guide to selecting the best ASR architecture for your under-resourced language
16:49
Robbie Jimerson
NeurIPS 2023

DropCompute: simple and more robust distributed synchronous training via compute variance reduction
05:14
Niv Giladi, …
NeurIPS 2023

DURENDAL: Graph deep learning framework for temporal heterogeneous networks
05:15
Manuel Dileo, …
NeurIPS 2023

Interested in talks like this? Follow NeurIPS 2023