Dec 6, 2021
In recent years, Recurrent Neural Networks (RNNs) have been successfully used to model the way neural activity drives task-related behavior in animals, operating under the implicit assumption that the obtained solutions are universal. Observations in both neuroscience and machine learning challenge this assumption. Animals can approach a given task with a variety of strategies, and training machine learning algorithms introduces the phenomenon of underspecification. These observations imply that every task is associated with a space of solutions. To date, the structure of this space is not understood, limiting the approach of comparing RNNs with neural data.

Here, we characterize the space of solutions associated with a given task and show how hyperparameters bias networks within this space. We first study a simple two-neuron network on a task that leads to multiple solutions. We trace the nature of the final solution back to the network's initial connectivity and identify discrete dynamical regimes that underlie this diversity. We then examine the neuroscience-inspired Ready-Set-Go timing task. We find a rich set of solutions, even under identical hyperparameters. We uncover this variety by testing the trained networks' ability to extrapolate, as a perturbation to a system often reveals hidden structure. Furthermore, we relate extrapolation patterns to specific dynamical objects and to effective algorithms found by the networks. Using a set of features, we define a space of solutions. We find that most networks stay within the vicinity of the solution defined by their initial connectivity. Taken together, our results shed light on the concept of the space of solutions and its uses in both machine learning and neuroscience.
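To make the setup concrete, below is a minimal, illustrative sketch (not the authors' code) of the kind of experiment the abstract describes: a small RNN trained on a Ready-Set-Go-style timing task, then probed with intervals outside the training range to expose differences between solutions. The task encoding (pulse inputs, a ramp-to-Go target), network size, and all hyperparameters here are assumptions chosen for brevity.

```python
# Illustrative sketch only: train a small RNN on a Ready-Set-Go-like task,
# then test extrapolation beyond the training interval range.
import torch
import torch.nn as nn

def make_batch(intervals, T=150, pulse=2):
    """Inputs: a 'Ready' pulse at t=10 and a 'Set' pulse ts steps later.
    Target: a linear ramp after 'Set' that reaches 1 at the 'Go' time,
    i.e. ts steps after 'Set'. (Assumed encoding, not from the paper.)"""
    B = len(intervals)
    x = torch.zeros(B, T, 2)
    y = torch.zeros(B, T, 1)
    for i, ts in enumerate(intervals):
        x[i, 10:10 + pulse, 0] = 1.0                    # Ready channel
        x[i, 10 + ts:10 + ts + pulse, 1] = 1.0          # Set channel
        ramp = torch.clamp(torch.arange(T) - (10 + ts), min=0) / ts
        y[i, :, 0] = torch.clamp(ramp, max=1.0)         # ramp to Go
    return x, y

class TimingRNN(nn.Module):
    def __init__(self, n=64):
        super().__init__()
        self.rnn = nn.RNN(2, n, batch_first=True, nonlinearity='tanh')
        self.out = nn.Linear(n, 1)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

torch.manual_seed(0)
net = TimingRNN()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
train_ts = list(range(15, 31))                          # training intervals
for step in range(2000):
    ts = [train_ts[i] for i in torch.randint(len(train_ts), (32,))]
    x, y = make_batch(ts)
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Extrapolation probe: intervals well outside the training range.
# Networks with identical hyperparameters can diverge sharply here,
# revealing the different solutions they converged to.
with torch.no_grad():
    x, y = make_batch([5, 45, 60])
    print(((net(x) - y) ** 2).mean(dim=(1, 2)))         # per-interval error
```

Repeating this probe across many seeds (i.e., many initial connectivities) and collecting features of the extrapolation behavior is one way to chart a space of solutions of the kind the abstract describes.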
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
Presentations on a similar topic, category, or speaker
Yue Wang, …
Guang Zhao, …
Ze Wang, …
Mengzhe Li, …