Neural Network Models of the Hippocampus

For the past few days I've been busy making this presentation on neural network models of memory (here's a PDF version). Specifically, the presentation addresses the question of how the hippocampus interacts with the neocortex, beginning with a review of the role of the hippocampus based on single-cell recording and neuropsychological evidence, continuing into the phenomenon of catastrophic interference, and ending with some fundamental computational tradeoffs in how memory is represented.

One of the most interesting insights about the function of the hippocampus is that it appears to be specialized for quickly learning about specific features, in specific combinations, in specific contexts. This information is stored in a "compressed" format, with very sparse patterns of activation. The cortex cannot absorb this knowledge so quickly because its representations are far more overlapping, so a single new learning experience would interfere unduly with older stored knowledge.
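The sparseness argument can be illustrated with a tiny simulation (a sketch of my own, not a model from the presentation): random binary patterns share active units, and hence weights, roughly in proportion to how dense the code is, so a weight change made for one sparse pattern barely touches the others.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # units per pattern

def random_pattern(active_fraction):
    """Binary pattern with a given fraction of active units."""
    p = np.zeros(N)
    p[rng.choice(N, int(active_fraction * N), replace=False)] = 1.0
    return p

def mean_overlap(active_fraction, n_patterns=50):
    """Average pairwise overlap: shared active units / active units."""
    pats = [random_pattern(active_fraction) for _ in range(n_patterns)]
    k = int(active_fraction * N)
    overlaps = [np.dot(pats[i], pats[j]) / k
                for i in range(n_patterns) for j in range(i + 1, n_patterns)]
    return float(np.mean(overlaps))

# Sparse, hippocampus-like codes barely overlap; dense, cortex-like codes
# share many units, so learning one pattern perturbs the others.
print(f"sparse (5% active):  mean overlap = {mean_overlap(0.05):.3f}")
print(f"dense (40% active):  mean overlap = {mean_overlap(0.40):.3f}")
```

For random patterns the expected overlap equals the active fraction itself, which is why the hippocampus's very sparse codes keep new memories from colliding with old ones.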

Instead, the hippocampus serves as a temporary "waystation" for these memories while they undergo memory consolidation, a process in which the hippocampus interleaves the replay of experiences - during sleep, or as recently demonstrated in rats, even while awake - so that the slower neocortical learning system can gradually extract the meaningful structure from those experiences.
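A quick way to see why interleaving matters (again a minimal sketch of my own, using a linear network trained with the delta rule rather than any model from the presentation): teach a network one association, then either overwrite it with focused training on a second association, or replay the old one interleaved with the new.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50  # number of units

# Two arbitrary "experiences": random input -> output associations.
old = (rng.standard_normal(N), rng.standard_normal(N))
new = (rng.standard_normal(N), rng.standard_normal(N))

def train(schedule, W=None, lr=0.01, epochs=500):
    """Delta-rule training of a linear map on a repeating schedule."""
    W = np.zeros((N, N)) if W is None else W.copy()
    for _ in range(epochs):
        for x, y in schedule:
            W += lr * np.outer(y - W @ x, x)
    return W

def error(W, item):
    x, y = item
    return float(np.linalg.norm(y - W @ x))

W0 = train([old])                # learn the old association first
W_seq = train([new], W=W0)       # focused new learning: old is overwritten
W_int = train([old, new], W=W0)  # interleaved replay: both are retained

print(f"old-item error, focused:     {error(W_seq, old):.3f}")
print(f"old-item error, interleaved: {error(W_int, old):.3f}")
```

With focused training the old association is degraded in proportion to how much the two inputs overlap (catastrophic interference in miniature); with interleaved replay, both associations settle to near-zero error.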

Full citations are provided in the presentation, along with a basic animation of spreading activation and Hebbian learning.
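For readers who can't run the animation, here is roughly what it depicts, as a toy sketch of my own (not the actual network in the slides): Hebbian learning stores a pattern in the weights, and spreading activation from a partial cue then reactivates the whole pattern.

```python
import numpy as np

# One binary memory pattern over eight units.
pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)

# Hebbian learning, "units that fire together wire together":
# an outer-product weight matrix, with self-connections removed.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

# A partial cue: only two of the pattern's four active units.
cue = np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=float)

# One step of spreading activation through W, then a threshold:
# the cued units excite the rest of the stored pattern.
recalled = (W @ cue > 0.5).astype(float)
print(recalled)  # -> [1. 0. 1. 1. 0. 0. 1. 0.], the full pattern
```

This is pattern completion in its simplest form, and it is one reason sparse hippocampal codes are so useful: a fragment of an experience can cue recall of the whole episode.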


Anonymous said...

Fascinating, but could you please post it in a format one can view without PowerPoint? pdf perhaps?

4/11/2006 04:16:00 PM  
Blogger Chris Chatham said...

I'll update the post with a pdf link, but in order to view the animations you can grab PPT Viewer, a free and lightweight viewer app.

4/12/2006 05:46:00 PM  
Blogger Chris Chatham said...

PS, just in case people with PowerPoint are tempted to download the smaller PDF: the animations are crucial for watching activation spread through the network, and for that you will need the PPT version. Beyond that, they are simply illustrative (such as on the slide showing progressive differentiation of categories).

4/12/2006 09:19:00 PM  
Blogger Dan Dright said...

I am very excited to take a look at your presentation, Chris. I have a feeling that what you are suggesting, via the waystation premise and feature thingie (pardon the technical lingo) may tie in to something that is addling me right now in regards to what I call the "grammar of object identification." It's kind of related to contextual priming.

Anyway, enough babble, I am off to read your work. I'll come back and comment in a while.

4/16/2006 06:57:00 PM  
