
            Learning rule influences recurrent network representations but not attractor structure in decision-making tasks

            Dec 6, 2021

Speakers

Brandon J. McMahan

Michael Kleinman

Jonathan C. Kao

            About

Recurrent neural networks (RNNs) are popular tools for studying computational dynamics in neurobiological circuits. However, given the dizzying array of design choices, it is unclear whether the computational dynamics unearthed from RNNs support reliable neurobiological inferences. Addressing this question is valuable in two ways. First, invariant properties that persist in RNNs across a wide range of design choices are more likely to be candidate neurobiological mechanisms. Second,…
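
As a rough illustration of the setup the abstract describes, the sketch below trains a vanilla RNN with backprop-through-time on a toy two-alternative evidence-integration task and keeps the hidden trajectories for later analysis. This is not the authors' code; the task design, network size, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a vanilla RNN trained with
# backprop-through-time on a toy two-alternative evidence-integration task.
# Task parameters, network size, and training details are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

T, batch, hidden = 50, 64, 128  # trial length, batch size, hidden units

class DecisionRNN(nn.Module):
    def __init__(self, n_in=1, n_hidden=hidden, n_out=1):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)          # hidden trajectories: (batch, T, hidden)
        return self.readout(h), h   # decision variable at every time step

def make_trials(batch, T, coherence=0.2, noise=1.0):
    """Noisy evidence stream whose mean sign defines the correct choice."""
    sign = torch.randint(0, 2, (batch, 1, 1)).float() * 2 - 1  # +1 or -1
    x = sign * coherence + noise * torch.randn(batch, T, 1)
    target = sign.expand(batch, T, 1)                          # ±1 choice
    return x, target

model = DecisionRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    x, target = make_trials(batch, T)
    out, h = model(x)
    # Train the readout to report the correct choice late in the trial.
    loss = ((out[:, -10:] - target[:, -10:]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The hidden trajectories `h` are what one would analyze for fixed points /
# attractor structure and compare across learning rules.
```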

Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

            Recommended Videos

Presentations on a similar topic, category, or speaker

Spectral embedding for dynamic networks with stability guarantees (14:13)
Ian Gallagher, … · NeurIPS 2021

Learnability of Linear Thresholds from Label Proportions (12:16)
Rishi Saket · NeurIPS 2021

Distributed Zero-Order Optimization under Adversarial Noise (08:12)
Arya Akhavan, … · NeurIPS 2021

Cooperative Multi-Agent Reinforcement Learning for High-Dimensional Nonequilibrium Control (04:59)
Shriram Chennakesavalu, … · NeurIPS 2021

Learning to Generate Visual Questions with Noisy Supervision (14:54)
Kai Shen, … · NeurIPS 2021

Opening Remarks to Session 1 (02:52)
Sebastian Stich · NeurIPS 2021
