Learning rule influences recurrent network representations but not attractor structure in decision-making tasks

            Dec 6, 2021

Speakers

Brandon J. McMahan
Michael Kleinman
Jonathan C. Kao

            About

Recurrent neural networks (RNNs) are popular tools for studying computational dynamics in neurobiological circuits. However, due to the dizzying array of design choices, it is unclear whether the computational dynamics unearthed from RNNs support reliable neurobiological inferences. Addressing this question is valuable in two ways. First, invariant properties that persist in RNNs across a wide range of design choices are more likely to be candidate neurobiological mechanisms. Second,…
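The kind of setup the abstract describes can be sketched as follows. This is an illustrative toy example only: the network size, task parameters, and leaky-tanh dynamics are assumptions, not taken from the paper, and the weights are left untrained (a learning rule such as backpropagation through time, or a biologically plausible alternative, would shape them).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64   # recurrent units (hypothetical size)
T = 100  # time steps per trial

def make_trial(coherence, rng):
    """Noisy evidence stream; the sign of its mean is the correct choice."""
    return coherence + 0.5 * rng.standard_normal(T)

# Random (untrained) weights, scaled by 1/sqrt(N).
W_in = rng.standard_normal(N) / np.sqrt(N)
W_rec = rng.standard_normal((N, N)) / np.sqrt(N)
w_out = rng.standard_normal(N) / np.sqrt(N)

def run_rnn(u):
    """Leaky-tanh RNN; returns the hidden-state trajectory over the trial."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = 0.9 * x + 0.1 * np.tanh(W_rec @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

u = make_trial(coherence=0.5, rng=rng)
H = run_rnn(u)                     # (T, N) hidden-state trajectory
decision = np.sign(w_out @ H[-1])  # binary choice read out from the final state
print(H.shape)  # (100, 64)
```

Analyses of the sort mentioned in the abstract would then examine trajectories like `H` — for example, locating fixed points of the dynamics or comparing representations across networks trained with different learning rules.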

Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


            Recommended Videos

Presentations on a similar topic, in the same category, or by the same speakers

Searching Parameterized AP Loss for Object Detection · 06:13 · Chenxin Tao, … · NeurIPS 2021

Learning Large Neighborhood Search Policy for Integer Programming · 08:54 · Yaoxin Wu, … · NeurIPS 2021

Panel Discussion 2 · 59:12 · Susan L. Epstein, … · NeurIPS 2021

Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions · 15:30 · Alejandro Carderera, … · NeurIPS 2021

Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data · 14:32 · Feng Liu, … · NeurIPS 2021

Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection · 13:32 · Matteo Papini, … · NeurIPS 2021
