Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-winner Network

Dec 15, 2023

Many autoassociative memory models rely on a localist framework, assigning a dedicated neuron or slot to each memory. However, neuroscience research suggests that memories depend on sparse, distributed representations over neurons with sparse connectivity. Accordingly, we extend a canonical localist memory model, the modern Hopfield network (MHN), to a distributed variant called the K-winner modern Hopfield network, equating the number of synaptic parameters (weights) in the localist and K-winner variants. We study both models' retrieval capabilities after exposure to a long sequence of random as well as structured patterns, updating the parameters of the best-matching memory neurons as each new pattern is presented. We find that K-winner MHNs that compromise slightly on retrieval accuracy for the most recent memories exhibit superior retention of older memories.
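The sequential learning and K-winner retrieval scheme described in the abstract might be sketched as follows. This is a minimal illustration, not the paper's implementation: the update rule, learning rate, inverse temperature `beta`, and all dimensions are assumed values chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, K = 64, 32, 4          # pattern dim, memory neurons, winners (illustrative)
beta = 8.0                   # inverse temperature for retrieval (assumed)
W = rng.normal(0.0, 0.1, (N, D))  # synaptic weight matrix: one row per memory neuron

def learn(pattern, lr=0.5):
    """Move only the K best-matching memory neurons toward the new pattern."""
    sims = W @ pattern
    winners = np.argsort(sims)[-K:]          # indices of the top-K matches
    W[winners] += lr * (pattern - W[winners])

def retrieve(query, steps=3):
    """K-winner retrieval: softmax attention restricted to the top-K neurons."""
    x = query.copy()
    for _ in range(steps):
        sims = W @ x
        winners = np.argsort(sims)[-K:]
        a = np.exp(beta * (sims[winners] - sims[winners].max()))
        a /= a.sum()                          # attention over the K winners only
        x = a @ W[winners]                    # readout as a convex combination
    return x

# Present a long sequence of random unit-norm patterns, one at a time.
patterns = [p / np.linalg.norm(p) for p in rng.normal(size=(200, D))]
for p in patterns:
    learn(p)

# Probe with a noisy cue for the most recent pattern.
cue = patterns[-1] + 0.1 * rng.normal(size=D)
out = retrieve(cue / np.linalg.norm(cue))
```

Setting `K = 1` recovers localist-style updating, where each pattern overwrites a single slot; larger `K` spreads each memory across several neurons, which is the distributed regime the abstract compares against.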

Presented at NeurIPS 2023.