Blogging on the Brain: April 22 - 29th 2006

The week in brain blogging reviewed:

Thanks to Neurodudes for pointing out a new book on the 23 problems facing systems neuroscience. I actually think we already have the answers to a few of these problems, but interesting nonetheless!

An interesting assessment of the current state of science blogging over at Science & Politics. Will we start seeing original hypotheses and unpublished data in the natural science blogosphere any time soon? An interesting question.

A really nice week over at Myomancy, with my favorite being this post about whether ADHD might be treated with ... coffee! There's also a nice update on the latest research in imaging the brains of those with ADHD.

Thanks to Intelligence Testing for alerting me to a popular article this week, "the intelligence gene." Like Kevin, I am a novice at behavioral genetics, but I am nonetheless highly distrustful of any press release claiming to have found "the gene" for homemaking, good bridge playing, or fondness for chocolate, let alone for a concept as complicated, heterogeneous, and underconstrained as "intelligence."

A nice post & very well informed discussion over at Neurofuture on the orbitofrontal cortex, and a recent Nature paper.

Finally, nice coverage of recent developments in "brain training" for autism at the wonderful Neurocritic blog.

Have a nice weekend!


Inhibitory Oscillations and Retrieval Induced Forgetting

The connectionist perspective on cognitive psychology is appealing because it naturally explains many complicated phenomena with the relatively simple concept that activation spreads throughout a network of interconnected nodes, and that these interconnections are modified on the basis of repeated use. Furthermore, the theory is not merely verbal; connectionists have implemented these ideas in artificial neural network models of cognition, allowing for more strenuous, quantitative tests of hypotheses than afforded by models that are not mathematically implemented.

However, one challenge for this perspective would seem to be the concept of retrieval induced forgetting, in which retrieving some practiced word-pair (such as fruit-apple) impairs the later recall of another studied - but unpracticed - word pair (such as fruit-pear, even when given the cue "fruit-pe__"). Given the connectionist concept of spreading activation, one might expect the opposite result - that the interconnected networks representing fruits would enjoy a higher activation, and that therefore all the words paired with fruit would enjoy a facilitation in processing.
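In fact, the spreading-activation prediction can be sketched in a few lines of Python (the nodes, weights, and parameters here are entirely made up for illustration):

```python
# Minimal sketch of spreading activation with hypothetical associative
# weights. Activation flows from a cue node to its associates, predicting
# facilitation for ALL words paired with "fruit" -- the opposite of
# retrieval-induced forgetting.

# weights[source][target] = associative strength (made-up values)
weights = {
    "fruit": {"apple": 0.6, "pear": 0.4},
    "apple": {"fruit": 0.6},
    "pear": {"fruit": 0.4},
}

def spread(cue, steps=2, decay=0.5):
    """Propagate activation outward from a cue through weighted links."""
    activation = {cue: 1.0}
    for _ in range(steps):
        new = dict(activation)
        for src, act in activation.items():
            for tgt, w in weights.get(src, {}).items():
                new[tgt] = new.get(tgt, 0.0) + decay * act * w
        activation = new
    return activation

act = spread("fruit")
# Both associates receive activation; neither is suppressed.
assert act["apple"] > 0 and act["pear"] > 0
```

On this simple account, practicing fruit-apple should only increase the activation reaching fruit-pear, never decrease it.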

However, this is the opposite of what is actually observed: these closely related word pairs seem to be suppressed relative to baseline retrieval rates. According to some, this reflects an active inhibitory process (a concept distasteful to many connectionist modelers because such a scheme would be both practically difficult and metabolically inefficient to implement neurally). One alternative theory is that the practiced word pair becomes selectively strengthened relative to all competing but unretrieved word pairs. To continue our example above, this theory suggests that the neural connections representing features unique to the practiced word pair (fruit-apple) are selectively strengthened, while the neural connections representing features shared by both word pairs (fruit-apple and fruit-pear) are selectively weakened.

Although verbally elegant, it's not clear exactly how this process might be implemented neurally. However, a new paper in-press at Neural Computation by Norman, Newman, Detre, & Polyn provides one possible explanation for how this selective strengthening/weakening might be accomplished in connectionist terms. In their paper, "How Inhibitory Oscillations Can Train Neural Networks and Punish Competitors," the authors describe how biologically plausible oscillations between excitatory and inhibitory neural activity may serve to "tune" representations in the short term, such that "competitors" (fruit-pear) become less active while "targets" (fruit-apple) become more active.

Theoretical extensions to this framework invoke a characteristic response function of many cortical neurons to stimulus onset, in which excitatory activity quickly rises, but is then tamed by inhibitory activity. This inhibitory activity may overcompensate, and then back off, leading to an increase in excitatory activity again, before inhibitory activity clamps down once again. This process repeats, leading to a characteristic neural response function that clearly illustrates how a balance in activation emerges from competing oscillations of inhibitory and excitatory activity. This response function could theoretically account for the computation proposed by Norman et al. because of the way calcium is known to modify synaptic efficacy: at low but above baseline levels of calcium, synaptic efficacy is decreased, whereas at high levels synaptic efficacy is increased. According to this system, then, competitors would effectively be "inhibited" as a result of getting active only when the excitatory oscillation in the neural response function is overcompensating, because they would receive some calcium, but not enough to increase synaptic efficacy. Conversely, neural connections representing the features of target pairs would receive more calcium, leading ultimately to an increase in their synaptic efficacy. However, this theoretical extension has not yet been successfully implemented in connectionist models.
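As a rough sketch of that U-shaped rule (with hypothetical activity thresholds standing in for the calcium levels described above):

```python
# Sketch of the U-shaped plasticity rule described above; the thresholds
# are hypothetical stand-ins for calcium levels, not empirical values.

LTD_THRESHOLD = 0.2   # below this, calcium stays near baseline: no change
LTP_THRESHOLD = 0.6   # above this, calcium is high enough to strengthen

def weight_change(activity, rate=0.1):
    """Synaptic weight update as a function of unit activity."""
    if activity < LTD_THRESHOLD:
        return 0.0               # baseline calcium: no learning
    elif activity < LTP_THRESHOLD:
        return -rate * activity  # moderate calcium: weaken the connection
    else:
        return +rate * activity  # high calcium: strengthen the connection

# A competitor that becomes only moderately active during the
# overcompensating inhibitory phase is weakened, while a strongly
# active target is strengthened.
assert weight_change(0.4) < 0 < weight_change(0.9)
```

Norman et al. implement something considerably more sophisticated; this only illustrates how the same calcium signal can push synaptic efficacy in opposite directions depending on activity level.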

Related Posts:
Entangled Oscillations
Nature's Engineering
Models of Active Maintenance as Oscillation
Profile: Mark Tilden


An Informal Three System Model of Memory

Based on the variety of evidence presented in previous posts, how many memory systems are required to comprehensively explain the existing data on human memory? The idea that there is a prefrontal short-term working memory system is uncontroversial; for example, this system is unimpaired in the famous case of Clive Wearing, who reports, roughly every 10 minutes, that he has just become conscious for the first time. This patient is capable of carrying on a conversation and clearly manifests the ability to maintain goals and other signs of intelligence.

Converging evidence comes from fMRI studies of dorso-lateral prefrontal cortex (dlPFC), in which dlPFC shows sustained activity throughout continuous performance tasks that appears not to be due to general mental effort or concentration (Cohen, et al., 1997). Instead, active firing in this region may serve to maintain activity in more posterior regions, which represent specific information relevant to the current task. Therefore, working memory qualifies as a system, according to the definition established earlier, because it has unique computational requirements (to maintain active firing) as well as unique functional characteristics, in that it is responsible for maintaining current goals and information relevant to those goals.

It is also clear that a second memory system exists, one that is subserved by structures in the medial temporal lobe and is specifically involved in the longer-term storage of information. Specifically, the entorhinal, perirhinal, and parahippocampal cortices, as well as the hippocampus itself, make up this second system, which is required for the relatively quick learning of new associations. The specific type of association to be learned determines which of these structures is most critical, with context-rich (episodic) memories requiring the entire complex, and relatively context-free (semantic) memories relying more on the surrounding non-hippocampal MTL structures.

As described above, it is not necessary to propose distinct systems for familiarity and recollection within this long-term memory system, because a single-process “signal detection” model can account for both the neuropsychological and imaging results. Likewise, it is not necessary to propose distinct systems for semantic and episodic memory within this long-term memory system, simply because these memory types rely differentially on these MTL structures. According to this view, the hippocampal complex is a unique memory system because it has unique computational requirements (the capacity to represent information sparsely; McClelland et al., 2002) and also has unique functional characteristics, in that destruction of this region results in profound failures of long-term explicit memory (as in amnesia).

Finally, I propose a third general-purpose memory system, on which both of the previous systems rely. This system is actually a conglomerate of many brain regions, each of which is responsible for the processing of information relevant to specific modalities (vision, hearing, speech, etc). This is the memory system ultimately responsible for the long-term storage of semantic and episodic memories, after they have undergone a process of memory consolidation. During this consolidation process, the hippocampus and related structures slowly “farm out” the information they represent to the regions of neocortex that are relevant to each characteristic of the to-be-stored memory, through a process analogous to interleaved training in artificial neural network models (McClelland et al., 2002). In this way, the long-term memory system relies on the general-purpose system during consolidation.
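The benefit of interleaved training can be demonstrated with a toy delta-rule associator (the patterns, learning rate, and epoch counts are invented for illustration):

```python
# Toy demonstration of why consolidation must be interleaved: training a
# simple delta-rule associator on a new pattern alone degrades an old one,
# while interleaving old and new preserves both. All values are made up.

def train(weights, pairs, epochs=200, rate=0.1):
    """Delta-rule (LMS) training of a one-layer linear associator."""
    for _ in range(epochs):
        for x, target in pairs:
            y = sum(w * xi for w, xi in zip(weights, x))
            err = target - y
            for i, xi in enumerate(x):
                weights[i] += rate * err * xi
    return weights

def output(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

old = ([1.0, 0.0, 1.0], 1.0)   # previously consolidated association
new = ([1.0, 1.0, 0.0], -1.0)  # new association, overlapping with the old

# Focused training on the new item alone disrupts the old association:
w_focused = train(train([0.0] * 3, [old]), [new])
# Interleaved training on both items preserves both associations:
w_mixed = train([0.0] * 3, [old, new])

assert abs(output(w_mixed, old[0]) - 1.0) < 0.1
assert abs(output(w_mixed, new[0]) + 1.0) < 0.1
assert abs(output(w_focused, old[0]) - 1.0) > 0.5  # old memory damaged
```

Training the new association alone overwrites the shared weights; interleaving old and new lets the network settle on weights satisfying both, which is the computational rationale for slow, hippocampally guided consolidation.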

Likewise, the working memory system also relies on this general-purpose memory system; for example, in a delayed match to sample task, correct matching behavior relies both on the maintenance of the target item and the representation of the current item (Miller et al., 1996). Because each different item is represented by a different pattern of inferotemporal (IT) activity, and because identical items are represented by the same patterns of IT activity, the degree of match between prefrontal representations maintained and each item’s IT representation can index whether an item is a match or nonmatch. Evidence for this process comes from the phenomenon of “match enhancement” observed in the firing rates of PFC and IT cells in the case of a match.
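This matching computation can be sketched as a simple overlap between hypothetical IT activity patterns:

```python
# Sketch of the match computation described above, with made-up vectors:
# if identical items evoke identical inferotemporal (IT) patterns, the
# overlap between the maintained target pattern and the current item's
# pattern indexes match vs. nonmatch.

def overlap(a, b):
    """Dot product as a crude index of representational match."""
    return sum(x * y for x, y in zip(a, b))

target_it  = [1, 0, 1, 0]   # hypothetical IT pattern for the sample item
same_item  = [1, 0, 1, 0]   # identical item -> identical pattern
other_item = [0, 1, 0, 1]   # different item -> different pattern

MATCH_THRESHOLD = 1.5        # arbitrary decision criterion
assert overlap(target_it, same_item) > MATCH_THRESHOLD   # "match"
assert overlap(target_it, other_item) < MATCH_THRESHOLD  # "nonmatch"
```

“Match enhancement” would then correspond to the extra drive that a matching item receives from the maintained prefrontal representation.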

This three-system framework for memory provides a much clearer view of human memory than the more traditional distinctions examined above. First, the general-purpose neocortical system is responsible for perceptual priming effects and other aspects of implicit memory, whereas the other two systems are crucial for the formation of explicit memory. Second, the hippocampal formation subserves recollection, familiarity, and episodic memory. Third, the medial and dorsolateral regions of prefrontal cortex are involved in the accessing and online manipulation of information from either of the two other systems.

This three-system view also has the advantage of explaining additional phenomena that do not clearly correspond to the distinctions examined above. For example, evidence from early studies of memory seemed to indicate that better retention occurs when items were processed at a more semantic as opposed to perceptual level. A revised view of this phenomenon, known as transfer-appropriate processing, suggests that strength of memory is more influenced by the degree of match between study and test (Morris et al., 1977) than some inherent superiority of semantic processing to more perceptually-based processing. According to this three-system view, results supporting transfer-appropriate processing emerge from an interaction between all three systems, in which the prefrontal system is actively maintaining information in task-relevant parts of the neocortex, while the hippocampal formation is essentially taking “snapshots” of this neocortical activity on a global scale. At test, retrieval cues that more closely elicit the pattern of activity present in neocortex during study will be more effective at eliciting the relevant memory trace from hippocampus, thus resulting in a benefit for test conditions that are compatible with study conditions.

Finally, the three-system view also provides a parsimonious explanation of various consolidation phenomena. Again, only the working memory system and the long-term memory system are sufficient for rapid encoding of arbitrary information; the neocortical memory system requires much slower learning. This provides a natural role for sleep as a means for memory consolidation, during which prefrontal activity is minimized, and the hippocampus is involved in slowly interleaving new memories with preexisting representations in more posterior neocortical areas. Indeed, this pattern of activity is fairly characteristic of sleep.

In summary, many of the traditional distinctions made between memory systems can be clarified by using a computational and functional definition of “system,” instead of using operational methods. This allows for several memory phenomena to be understood in terms of three memory systems: a hippocampal system responsible for the rapid encoding of experiences and associations into a consolidated format in neocortex, a prefrontal system responsible for the active maintenance of goals and goal-relevant information by interacting with neocortex, and a general-purpose neocortical system which supports both perceptual processing and implicit memory, such as priming. Together, these three systems can also account for phenomena that were not clearly addressed by previously defined memory systems, such as consolidation and transfer appropriate processing.


Cohen, J. D., Perlstein, W. M., Braver, T. S., Nystrom, L. E., Noll, D. C., Jonides, J., et al. (1997). Temporal dynamics of brain activation during a working memory task. Nature, 386, 604-608.

McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (2002). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. In T. A. Polk & C. M. Seifert (Eds.), Cognitive modeling (pp. 499-534). Cambridge, MA: MIT Press.

Miller, E. K., Erickson, C. A., & Desimone, R. (1996). Neural mechanisms of visual working memory in prefrontal cortex of the macaque. J Neurosci, 16, 5154-5167.

Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer-appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16, 519-533.


Semantic vs Episodic Memory

A traditional distinction made in the memory literature, similar to those reviewed in this and last week's posts, is that between the semantic and episodic forms of explicit memory. According to this framework, episodic memories contain sequences of events that pertain to specific times and locations, while semantic memories are more purely factual, and have abstracted away the more idiosyncratic elements of episodic memories. What evidence supports this distinction?

The “consolidation” theory of hippocampal function posits that the hippocampus is necessary for the acquisition of both semantic and episodic memories, but that while the hippocampus is permanently necessary for the retrieval of episodic memories, semantic memories can become independent of the hippocampus for retrieval over time (Stickgold & Walker, 2005). This view is more compatible with evidence from amnesia than the alternative “Multiple Memory Trace” model, in which the hippocampus is always necessary for the retrieval of semantic memories, because some amnesics can still show normal performance on semantic memory tests, such as the vocabulary, information, and comprehension subtests of the Verbal IQ scale (Vargha-Khadem, et al., 1997).

A more refined version of this hypothesis is that subhippocampal MTL structures are sufficient for semantic but not episodic memory acquisition, because the greater degree of context specificity associated with episodic memories requires the hippocampus. In summary, some forms of explicit memory are independent of the hippocampus itself and rely on surrounding MTL structures, while episodic memories specifically require the hippocampus.

Tomorrow's post will synthesize the evidence reviewed previously about the distinctions between memory systems into a comprehensive three-system model of human memory.


Vargha-Khadem, F., Gadian, D. G., Watkins, K. E., Connelly, A., Van Paesschen, W., & Mishkin, M. (1997). Differential effects of early hippocampal pathology on episodic and semantic memory. Science, 277, 376-380.


Familiarity vs. Recollection

A common procedure for assessing the relative impairments of familiarity and recollection in amnesics is the remember/know procedure, in which subjects must indicate whether they recognize items based on familiarity (“I know that I saw it”) or on the basis of recollection (“I specifically remember seeing it”). Results from several studies with amnesics suggest that recollection is impaired to a greater degree than familiarity, which indicates to some that familiarity and recollection belong to distinct memory subsystems. Further, these data are taken to show that familiarity and implicit memory may belong to distinct memory subsystems (Rugg & Yonelinas, 2003).

An alternate view of this latter conclusion comes from single process models of familiarity and recollection, in which the degree of match between studied items and a target item is calculated in the form of two overlapping probability distributions. One distribution reflects the degree of match between any target stimulus and the vectors stored in memory (the noise distribution) and a second distribution reflects the degree of match between the target item and that item’s stored vector (the signal distribution). According to this signal detection interpretation of recognition memory, familiarity judgments are made more freely than recollection judgments, which require a greater degree of match (Wixted & Stretch, 2004).

Based on a signal detection model of recollection and familiarity, one would predict consistently higher confidence ratings of “remember” than of “know” responses, regardless of whether these judgments were correct (hits) or incorrect (false alarms). This prediction has been verified by empirical data. Furthermore, one might expect that disruptions to the integrity of stored memory vectors (as in cases of hippocampal damage) would result in impaired “remember” responses before resulting in impaired “know” responses, because “remember” responses require a higher degree of match. This prediction, though not explicitly made by signal detection theorists of recognition memory, is indeed demonstrated in amnesic patients with subtotal damage to the medial temporal lobe.
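A toy simulation of this single-process account (the distributions and criteria below are hypothetical) reproduces that prediction: weakening the memory traces impairs “remember” responses well before “know” responses:

```python
import random
random.seed(1)

# Single-process signal detection sketch of remember/know. All parameters
# (criteria, distribution means, variances) are hypothetical: memory
# strength for studied items is drawn from one "signal" distribution, and
# "remember" simply requires a stricter criterion than "know".

KNOW_C, REMEMBER_C = 0.5, 1.5   # "know" and stricter "remember" criteria

def simulate(signal_mean, n=20000):
    """Proportion of studied items judged 'remember' vs. 'know'."""
    strengths = [random.gauss(signal_mean, 1.0) for _ in range(n)]
    remember = sum(s > REMEMBER_C for s in strengths) / n
    know = sum(KNOW_C < s <= REMEMBER_C for s in strengths) / n
    return remember, know

r_intact, k_intact = simulate(signal_mean=1.5)
r_damaged, k_damaged = simulate(signal_mean=0.8)  # weakened traces

# Damage costs "remember" responses far more than "know" responses:
assert (r_intact - r_damaged) > (k_intact - k_damaged)
```

Because “remember” responses live in the upper tail of the strength distribution, any downward shift of that distribution removes them first, with no need for two separate systems.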

With more complete damage to the system, even familiarity judgments should be impaired; indeed, just such a pattern is observed in patients with total damage to the medial temporal lobe (Wixted & Stretch, 2004). Finally, no patient has ever demonstrated preserved recollection with impaired familiarity, a result that would be expected if the two processes arise from completely distinct systems. The single-process account thus enjoys both parsimony and compatibility with empirical data.

However, recent fMRI evidence suggests that distinct anatomical regions might subserve familiarity and recollection: greater activation is elicited in left lateral parietal regions in response to “remembered” items relative to “known” items, and less MTL activity is seen in response to old items than to new items (Rugg & Yonelinas, 2003). In contrast to the explanation offered by those authors, I argue that successful “remember” judgments may involve the retrieval of spatial information from the hippocampus into parietal regions, whereas no such specific spatial information is available for “know” judgments. Parietal activity can therefore be interpreted as an effect of remembering, rather than a cause. According to this view, both “remember” and “know” judgments rely on MTL regions, whereas remember judgments rely specifically on the hippocampus.

Note: This post is part 3 of a series of posts, in which the traditional distinctions between memory systems are reviewed. The final post in this series will propose a three-system model of memory, which I argue is the minimum number of distinct systems required to explain current behavioral, neuropsychological, and neuroimaging evidence on the nature of human memory.


Rugg, M. D., & Yonelinas, A. P. (2003). Human recognition memory: a cognitive neuroscience perspective. Trends in Cognitive Sciences, 7, 313-319.

Wixted, J. T., & Stretch, V. (2004). In defense of the signal detection interpretation of remember/know judgments. Psychon Bull Rev, 11, 616-641.


Blogging on the Brain: April 8-22, 2006

Some highlights from the week in brain blogging:

The Genius covers theta oscillations and sequence compression in hippocampus.

Biology of Perspective-Taking: A fascinating summary of what happens in our brains during "perspective-taking," when we exercise our theory of mind abilities and put ourselves in someone else's shoes.

When does the brain develop math? Mind Hacks covers an open-access article on the development of math ... in four-year-olds. I haven't read the article yet, but it involves fMRI, and I can't imagine that this experiment was well controlled. Draw conclusions with caution...

A primer on correlation: A couple of under-appreciated issues with the interpretation of correlation coefficients.

Forest for the trees? Neurodudes points out an interesting recent article in Science, one that confirms a small piece of folk theory: "Contrary to conventional wisdom, it is not always advantageous to engage in thorough conscious deliberation before choosing..."

Re-Inventing the Wheel? Not a Bad Idea: Finally, a nice bit of mechanical engineering from Al Fin, because everybody needs a break from cognitive science every now and then.

Have a nice weekend!


Implicit vs Explicit Memory: Two Distinct Systems?

The distinction between implicit and explicit memory is one of the most fundamental in the memory literature. Explicit memory is any persistent effect of experience which can be attested to, and is usually assessed through procedures such as free or cued recall, or recognition (such as in a remember/know procedure). Implicit memory, however, is frequently tested through priming procedures such as word stem completion, sentence completion, and measures of reaction time. To what extent does the data from these procedures support the idea of distinct underlying memory systems, according to the definition proposed yesterday?

Perhaps the most compelling evidence for this distinction comes from amnesia patients, who show intact implicit memory despite a profound lack of explicit memory. For example, amnesic patients with selective damage to the hippocampus and related structures in the medial temporal lobe frequently show intact priming on both conceptual and perceptual repetition priming tasks (Levy et al., 2004).

However, the case of new-associate priming is more equivocal, in which amnesics show impaired priming in word-stem completion, but not speeded reading or lexical decision making, of unrelated word-pairs. In this case it appears that the hippocampus (and other MTL structures) may be necessary for normal conceptual priming, but not perceptual priming, of arbitrary information in particular (Keane & Verfaellie, 2006). Thus, the critical distinction between what amnesics can and can’t do appears to relate to the arbitrariness of the information to be remembered: if the task depends on the linking of unrelated items, amnesics will likely be impaired.

Nonetheless, the idea that a single exposure is enough to facilitate the processing of a related word pair in amnesics and healthy subjects alike has a profound implication for the type of systems that may support these functions. Because the facilitation resulting from priming is accompanied by inhibition for closely related items (Ratcliff & McKoon, 2000), it seems unnecessary to propose a memory system for priming that is separate from the one supporting normal cognition.

Instead, one need only propose that the bias in processing that results from priming procedures is frequently below the threshold of awareness. Neuroimaging evidence also supports this idea (Schacter et al., 2004), in that perceptual priming tasks frequently activate areas in the “cortical-perceptual representation system” as opposed to hippocampal or other regions implicated only in explicit memory tasks. What about declarative memory?

Some evidence suggests that successful conceptual new-associate priming is usually accompanied by conscious recollection of having seen those pairs before (McClelland et al., 2002), suggesting that amnesics are impaired in conceptual new-associate priming to the extent that their explicit memory is impaired. More to the point, extensive damage to the hippocampus and surrounding structures results in chance performance on some tests of recognition memory (Levy, Stark, & Squire, 2004), but does not produce equally severe impairments in recall. This distinction – between familiarity and recollection – is one that has engendered much debate, and will be reviewed in the next post.

Note: This post is part 2 of a series of posts, in which the traditional distinctions between memory systems are reviewed. The final post in this series will propose a three-system model of memory, which I argue is the minimum number of distinct systems required to explain current behavioral, neuropsychological, and neuroimaging evidence on the nature of human memory.


Keane, M. M., & Verfaellie, M. (2006). Amnesia II: Cognitive issues. In M. J. Farah & T. E. Feinberg (Eds.), Patient-based approaches to cognitive neuroscience (pp. 303-314). Cambridge, MA: MIT Press.

Levy, D. A., Stark, C. E., & Squire, L. R. (2004). Intact conceptual priming in the absence of declarative memory. Psychol Sci, 15, 680-686.

Ratcliff, R., & McKoon, G. (2000). Memory models. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 571-581). New York: Oxford University Press.

Schacter, D. L., Dobbins, I. G., & Schnyer, D. M. (2004). Specificity of priming: a cognitive neuroscience perspective. Nat Rev Neurosci, 5, 853-862.


Related Posts:

How Many Human Memory Systems?
A Primer on Priming


How Many Human Memory Systems?

Appearing in over 125,000 peer-reviewed journal articles[1], the word “memory” may be one of the most frequently used terms in the psychology literature. Despite the enormous effort spent researching the concept of memory, a unified picture of the number, type, and defining traits of human memory systems has not yet emerged.

One source of difficulty is that memory can be defined along a variety of dimensions, such as content (e.g., episodic vs. semantic vs. procedural), stage of processing (e.g., consolidation vs. retrieval), accessibility to consciousness (e.g., implicit vs. explicit), and time (e.g., short-term versus long-term) (Brand & Markowitsch, 2006).

A second difficulty is that many of these distinctions are operational, in which a concept is defined in terms of the methods used to demonstrate it. At their extreme, operational definitions can run the risk of describing differences that reflect the operations themselves more than real underlying differences in the systems they measure. So, which of these dimensions capture inherent differences between memory systems, and which capture only the differences between the diverse methods used in psychology?

The answer to this question rests on the definition of “system.” In the next few posts, I will be examining the traditional distinctions made between memory systems according to a new definition, one that I hope will provide much needed clarity to the study of memory as a whole.

I have defined a memory system as any persistent effect of experience which has both
  1. unique computational requirements or characteristics, and is therefore frequently anatomically distinct from other brain regions, and
  2. unique functional characteristics, such that damage to the system results in a complete deficit (or pattern of deficits) that would not be caused by other types of damage.

With this framework in mind, this week's posts will review the evidence for each of the traditional divisions between memory systems, and then account for all of this evidence with a simple three-system theory of human memory.

[1] This statistic comes from a PsycInfo search performed on 4/15/2006, indexing most psychology journals from the last 40 years.


Brand, M., & Markowitsch, H. J. (2006). Amnesia I: Clinical and anatomical issues. In M. J. Farah & T. E. Feinberg (Eds.), Patient-based approaches to cognitive neuroscience (pp. 289-301). Cambridge, MA: MIT Press.


Rethinking Multiple Causality

I recently posted on a fascinating paper about neural network modeling of multiple causality in developmental disorders, and how one might begin to use variability analyses to tease apart homogeneously- from heterogeneously-disordered groups on the basis of behavioral measures.

However, after spending many more hours analyzing this paper than I care to admit, I've concluded that while the neural network modeling is probably sound, the behavioral data used to support its conclusions are completely inadequate.

The data that supposedly "confirm" the model prediction come from Williams Syndrome (WS) and Word-Finding Difficulties (WFD), and actually provide only partial support. (For those who didn't read my earlier summary, the prediction is that "deficits originating from multiple underlying causes show less variability in the disorder’s diagnostic measure than on other behavioral measures, while networks manifesting the same deficit as a result of a single underlying cause tend to show equal or greater variability on nondiagnostic measures relative to the measure that defined the disorder".)

Specifically, the models make these predictions in the context of disorders where heterogeneous and homogeneous groups cannot be dissociated on the basis of means on a behavioral measure. However, the comparisons that can be made between these two *actual* disorders on the basis of behavioral measures show that means probably *would* be sufficient to tell them apart! (Although in the case of the criterial measure, this might be due to the fact that the groups weren't even age-matched - for WS, adults were included, whereas for WFD only children were analyzed). In summary, the necessity of analyzing variability is not clear.

Secondly, the criterial measure used to compare WS with WFD is "words produced in time period," but this measure actually comes from different tasks in each disordered group: naming accuracy in response to semantic cues for the WFD subjects, and number of words produced from a given category (starting with a certain sound, rhyming with a target word, or from a specific semantic category) for the WS subjects. The fact that the criterial measure is different for both groups just underscores a major tautological problem; the skeptic might ask, does this insight from neural nets help us tease apart disorders that we don't *already* know are different?!?

Thirdly, the "non-criterial" measures used to assess whether there are different patterns of cross-measure variability in WS vs. WFD are not sufficiently different from the criterial measure. For example, the WS criterial measure requires subjects to generate words from a specific category, whereas the WFD criterial measure is the TWF (which requires picture naming of nouns, verbs, and categories, as well as sentence completion). However, the "additional" non-criterial measures are the effect of different semantic categories on the accuracy and latency of picture naming. I fail to see what makes this an "additional" task, since it appears to be tapping the same thing. (To the author's credit, it is surprising that WFD showed greater variability on the additional metric than on the TWF, since they seem almost identical.)

Fourth, the author actually doesn't explicitly do the analysis of real behavioral data that was motivated by the models. This is such a big mistake that I didn't believe it at first, but it's true: the models show that cross-measure variability (and its change over time) differs within a homogeneous group versus a heterogeneous group, and yet the comparison he makes using real data is that WFD shows larger variance on non-criterial metrics than WS. To test the model predictions, he should have examined whether a) WFD shows larger variance on non-criterial metrics than on the criterial metric, and b) WS shows equal variance on non-criterial metrics as on the criterial metric.
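To make concrete what that missing analysis would look like, here's a minimal sketch in Python using simulated, entirely hypothetical data (not the paper's), in which the heterogeneous group is modeled as a mixture of two subtypes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores (not the paper's data): rows = subjects,
# columns = [criterial measure, non-criterial measure].
# The heterogeneous group (WFD-like) is a mixture of two subtypes, so its
# non-criterial scores spread out more than its criterial scores.
wfd = np.column_stack([
    rng.normal(50, 5, 40),                      # criterial: shared deficit
    np.concatenate([rng.normal(40, 5, 20),      # non-criterial: subtype A
                    rng.normal(60, 5, 20)]),    # non-criterial: subtype B
])
# The homogeneous group (WS-like) has one underlying cause, so spread is
# comparable on both measures.
ws = np.column_stack([
    rng.normal(50, 5, 40),                      # criterial
    rng.normal(50, 5, 40),                      # non-criterial
])

def var_ratio(group):
    """Variance on the non-criterial measure relative to the criterial one."""
    return group[:, 1].var(ddof=1) / group[:, 0].var(ddof=1)

# Prediction (a): heterogeneous group shows inflated non-criterial variance.
# Prediction (b): homogeneous group shows roughly equal variance on both.
print(f"WFD variance ratio: {var_ratio(wfd):.2f}")  # substantially above 1
print(f"WS  variance ratio: {var_ratio(ws):.2f}")   # near 1
```

The point is simply that each prediction is a *within-group* comparison across measures, not a between-group comparison on one measure.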

Finally, the author doesn't provide any longitudinal data from WS/WFD to support the conclusion that variability among non-criterial metrics would decrease over time in homogeneous groups. That conclusion is just "left hanging."

Again, while I still feel that the predictions motivated by the networks are accurate, they have not been adequately tested. This is a great opportunity for anyone sitting on a large dataset of autism or ADHD data to send it my way ;)


Neural Network Models of the Hippocampus

For the past few days I've been busy making this presentation on neural network models of memory (here's a PDF version). Specifically, the presentation addresses the question of how the hippocampus interacts with the neocortex, beginning with a review of the role of the hippocampus from single cell recording and neuropsychological evidence, continuing into the phenomenon of catastrophic interference, and ending with a conclusion about some fundamental computational tradeoffs in how memory is represented.

One of the most interesting insights about the function of the hippocampus is that it appears to be specialized for quickly learning about specific features, in specific combinations, in specific contexts. This information is stored in a "compressed" format, with very sparse patterns of activation. The cortex cannot absorb this knowledge so quickly because representations there are far more overlapping, which would cause undue interference from a new learning experience on all older facts.
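A toy numerical sketch (not any published model) illustrates why sparseness matters here: two random sparse patterns share far fewer active units, and hence far fewer synapses that new learning would overwrite, than two dense, overlapping ones:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 1000

def random_pattern(active_fraction):
    """Binary activity pattern with a given fraction of active units."""
    pattern = np.zeros(n_units)
    on = rng.choice(n_units, size=int(active_fraction * n_units), replace=False)
    pattern[on] = 1.0
    return pattern

def mean_overlap(active_fraction, n_pairs=200):
    """Average fraction of a pattern's active units shared with another
    random pattern at the same sparseness level."""
    overlaps = []
    for _ in range(n_pairs):
        a, b = random_pattern(active_fraction), random_pattern(active_fraction)
        overlaps.append((a * b).sum() / a.sum())
    return np.mean(overlaps)

# Hippocampus-like sparse code vs. cortex-like distributed code:
print(f"sparse (2% active): {mean_overlap(0.02):.3f}")   # ~2% of units shared
print(f"dense (50% active): {mean_overlap(0.50):.3f}")   # ~50% of units shared
```

With the dense code, learning a new pattern modifies weights on roughly half the units of every stored memory; with the sparse code, almost none, which is why the hippocampus can learn quickly without trampling old traces.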

Instead, the hippocampus serves as a temporary "waystation" for these memories, while they undergo a process of memory consolidation, in which the hippocampus is actually interleaving experiences - during sleep, or as recently demonstrated in rats, even while awake - so that the slower neocortical learning systems can slowly extract the meaningful information from these experiences.

Full citations are provided in the presentation, as well as a basic animation of the phenomena of spreading activation and Hebbian learning.


Blogging on the Brain: April 2-8, 2006

Some highlights from the week in brain blogging:

Mind Hacks covers a particularly tragic story: the man who took 40,000 ecstasy pills in nine years. Fascinating discussion with informed comments from several readers.

Neuron Simulators Galore at Neuronerd (to which I would add PDP++ and Neural Viewer)

The Sad Cingulate at Neurocritic has a wonderful breakdown of the details surrounding deep brain stimulation therapy for depression, one of the week's popular press news items

Neurodudes discusses a new technique that might be used to cure blindness, using light-activated ion channels

Singing Cavemen and Amusia: Myomancy reviews some of the multimodal deficits found in children with learning disorders

Thanks to Positive Technology Journal for pointing out a new special issue of Cortex on Synaesthesia

And finally, a very interesting post on Blind Chess and Working Memory over at Zero Brane

Have a nice weekend!


Nature's Engineering

One criticism of many synchrony- or polychrony-based hypotheses of neural computation is that nature's process of invention is too haphazard to utilize such "engineered" or "engineering"-like solutions. The other major criticism is that the brain is too noisy, and neurons are too sloppy, to achieve such temporal precision (but see this post for evidence that by increasing the diversity of forces acting on an interconnected system of oscillators, the likelihood that synchrony will be observed also increases). Both of these criticisms require any synchrony- or polychrony-based hypothesis to demonstrate how such phenomena might naturally emerge from a biologically plausible and parsimonious architecture. And yet, discoveries like this one reported in a 2003 issue of the Proceedings of the National Academy of Sciences show just how advanced nature's "engineering" can be.

In this article, Rizzuto et al. present evidence that during a simple recognition memory task (in which participants had to determine whether a probe stimulus matched one of four previously presented stimuli) theta-clocked oscillations may actually reset upon the presentation of the probe. In other words, before probe presentation, oscillations are out of phase with respect to one another, and therefore the total average power in that frequency band is low. After item presentation, however, all of the recorded oscillations appear to spontaneously synchronize by resetting their phases at that point (or alternatively, by undergoing rapid phase precession).

Further tests showed that prior to item presentation, the distribution of phases was statistically no different from that expected in a uniform distribution, and that after item presentation phases were not uniformly distributed (p<.0001). The authors also explore the possibility that the appearance of phase locking is actually an artifact of simultaneous burst firing, or "transient increases in power," by investigating the total power in each band both before and after item presentation. They found that band power actually decreased significantly after probe presentation in the 7-16 Hz band, which is inconsistent with the idea that transient power increases are giving rise to the appearance of phase-locking. (Aside: this result is reminiscent of findings described previously, in which gamma-band power was seen to be strongest during periods of asynchrony)
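The statistic underlying that kind of uniformity test is the mean resultant vector length across trials, which is near zero for uniformly scattered phases and near one for aligned phases. Here's a minimal simulation of the logic (my own sketch, not the authors' analysis code):

```python
import numpy as np

rng = np.random.default_rng(2)

def resultant_length(phases):
    """Mean resultant vector length: ~0 for uniformly distributed phases,
    1.0 for perfectly aligned phases. This is the quantity behind circular
    uniformity tests such as Rayleigh's test."""
    return np.abs(np.mean(np.exp(1j * phases)))

n_trials = 200

# Before the probe: each trial's theta oscillation sits at an arbitrary
# phase, so the trial-averaged signal in the band is weak.
pre_probe = rng.uniform(0, 2 * np.pi, n_trials)

# After the probe: a phase reset aligns every trial to roughly the same
# phase (small jitter), with no increase in single-trial power required.
post_probe = rng.normal(0.0, 0.3, n_trials) % (2 * np.pi)

print(f"pre-probe concentration:  {resultant_length(pre_probe):.2f}")   # near 0
print(f"post-probe concentration: {resultant_length(post_probe):.2f}")  # near 1
```

Note that the post-probe phases become concentrated without any change in each trial's amplitude, which is exactly why the authors' power analysis can dissociate phase resetting from transient power increases.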

Electrodes that showed phase reset to probes were placed in inferior temporal, bilateral occipital, and right parietal regions. Other electrodes showed phase reset to other stimuli, such as the list items (right posterior temporal lobe), and the orienting stimulus (right mesial subtemporal regions). Frontal, prefrontal, and suborbital frontal regions did not show the resetting phenomenon.

As the authors point out, phase resetting may be important for "setting the stage" for other forms of synchrony to emerge, such as gamma- and beta-band oscillations, which have been shown to correlate with successful memory encoding and visual item maintenance, respectively (Fell et al., 2001; Tallon-Baudry, 2001). Jensen and Lisman (1998) have even implemented a computational model in which phase reset (4-12 Hz) initiates serial scanning, which is consistent with Rizzuto et al.'s finding of phase reset on probe presentation.

These results - in which some neural mechanism appears capable of abruptly resetting the phase of multi-band oscillations during item presentation - share striking similarities with phenomena covered in previous posts. For example, results from Nakatani et al.'s 2005 Journal of Cognitive Neuroscience paper (summarized here) can be explained in terms of a slow oscillation (2-3 Hz) operating within a much faster rhythm (38-43 Hz), suggesting that the slower rhythm might be responsible for triggering neural oscillations in response to events in the external world. In other words, slow oscillations may become phase locked with external stimuli, which allows peaks in attentional dynamics (such as capacity or switching) to coincide with the time course of environmental changes.


Fell, J., Klaver, P., Lehnertz, K., Grunwald, T., Schaller, C., Elger, C. E. & Fernandez, G. (2001) Human memory formation is accompanied by rhinal-hippocampal coupling and decoupling. Nat. Neurosci. 4, 1259–1264.

Jensen, O. & Lisman, J. E. (1998) An oscillatory short-term memory buffer model can account for data on the Sternberg Task. J. Neurosci. 18, 10688–10699.

Tallon-Baudry, C., Bertrand, O. & Fischer, C. (2001) Hebbian reverberating oscillatory synchrony during memory rehearsal in the human brain. J. Neurosci. 21, RC177.

Related Posts:
Entangled Oscillations
Models of Active Maintenance as Oscillation
Chaos, Order, and Coupled Oscillators
Binding through Synchrony: Proof from Developmental Robotics
Sequential Order in Precise Phase Timing
Synchrony and "Perception's Shadow"


Sequential Order in Precise Phase Timing

While many might accept that synchrony is an important part of neural computation, several others would be hesitant to do so. Many of these naysayers are wary of a "strong" form of the synchrony hypothesis, sometimes referred to as "polychrony," which suggests that information can be encoded by precise temporal relationships among multiple cell assemblies. Neurons are too simple, and the brain is too noisy (or so the logic goes) for such precision to be manageable. I tacitly agreed with this perspective until reading two fascinating articles in today's issue of Neuron.

Dragoi & Buzsaki recorded from 256 different neurons in rat hippocampus as the rats wandered around a track in search of a food reward. They recorded from two regions in particular: the CA1, and the CA3 regions of hippocampus. These regions are important because they are critically involved in the pattern separation (CA3) and pattern completion (CA1) processes that allow for effective memory storage and retrieval, respectively.

Before moving on, it's important to know that neurons in CA3 fire only in extremely specific physical locations. For example, a given CA3 neuron may respond to a specific range of movement, in a particular direction in a particular place, and for nothing else. CA1 neurons are slightly less specific, but still far more so than neurons elsewhere in the hippocampal formation. The location at which a CA3 or CA1 neuron fires maximally is known as its "place field peak."

As shown previously by other research, the authors found that "the distance between adjacent place field peaks is represented by the precise temporal relations of spikes at a compressed or 'theta' time scale on the order of milliseconds." In other words, the firing patterns of specific neurons would lag from the dominant local frequency (think of it as the overall "pulse" at this portion of the network) by an amount that is directly related to the physical distance between the rat and the place field peak for that neuron. Even more surprising is that by examining the phases of all pairs of the 256 neurons relative to the dominant hippocampal theta frequency, one can derive a kind of "temporal map" which expresses the distance between physical locations in terms of phase lag within a single theta cycle.

Interestingly, the phases of CA3 & CA1 cell assemblies relative to the dominant theta frequency seem to relate to physical location such that previously visited locations are encoded by neurons on the descending side of the "theta trough," locations to which the rat appears to be heading are encoded by neurons on the ascending side of the theta trough, and in the very center of the theta phase are the neurons that encode the rat's current location. A process of phase precession serves to coax the firing rhythms from the ascending side into the trough of the theta rhythm, as the rat moves through its environment.
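The core idea of this "temporal map" can be caricatured in a few lines. This is a toy linear model of my own (the parameters are made up, and it is not Dragoi & Buzsaki's analysis), mapping a cell's signed distance from the rat onto a firing phase within one theta cycle:

```python
import numpy as np

theta_period_ms = 125.0   # one cycle of an ~8 Hz theta rhythm

def firing_phase(distance_cm, cm_per_cycle=50.0):
    """Phase (radians) at which a place cell fires within one theta cycle,
    as a linear function of the rat's signed distance from the cell's place
    field peak (negative = already passed, positive = upcoming). The trough
    is taken to be at pi radians, encoding the current location."""
    return np.pi + np.pi * np.clip(distance_cm / cm_per_cycle, -1, 1)

# Cells whose peaks lie behind, at, and ahead of the rat's position fire on
# the descending side, at the trough, and on the ascending side, respectively:
for d in (-25.0, 0.0, 25.0):
    phase = firing_phase(d)
    print(f"peak at {d:+.0f} cm -> phase {phase:.2f} rad "
          f"({phase / (2 * np.pi) * theta_period_ms:.0f} ms into the cycle)")
```

Reading off the phase lags of a population of such cells within a single ~125 ms cycle thus yields a compressed, time-ordered sweep through past, present, and upcoming locations.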

This research is important because it clarifies 1) how theta-clocked cycles of interaction between CA1 and CA3 regions might serve to consolidate temporal sequences of information into single "episodes"; 2) the mechanism for storage and retrieval of sequential order information that may underlie both spatial navigation and episodic memory, the two tasks most strongly associated with the hippocampus; 3) the mechanism that might be capable of generating the specific spike train patterns known to drive spike-timing-dependent plasticity.



A Primer on Priming

Priming refers to facilitated processing for a particular item based on a previous event. For example, if subjects are asked to rate the likeableness of the words "strawberry," "policeman," and "penguin," and are asked 10 minutes later to name a particular type of fruit, they are more likely to say "strawberry" than they would be otherwise. Or, if the task is to identify whether strawberry is a fruit, they are reliably faster than controls who weren't exposed to that word. Priming effects also occur when subjects are asked to identify briefly flashed stimuli, to complete word stem tasks, or to identify words or objects.

A couple of things differentiate priming effects from other memory phenomena. First, subjects are often unaware of the fact that they were primed. Second, a single presentation of an item is enough to facilitate processing of it and related items for an extremely long amount of time. Third, priming is related to decreased activation of relevant brain regions, as opposed to the increased activation typically seen in recall or recognition tasks. Finally, and most mysteriously, priming appears to be intact in amnesiacs - including patients with hippocampal and even more generalized medial temporal lobe damage.

What mechanism causes priming to occur? One old view is that abstract, long-term memory representations are activated after the presentation of an item. However, some research has shown that the magnitude of the priming effect is affected by changing specific features of the item, such as its font. To what degree is priming driven by the semantic properties of an item as opposed to its specific perceptual components?

This question is the topic of a November 2004 Nature Reviews Neuroscience article by Schacter, Dobbins and Schnyer. According to their analysis, when the presented items are in the same modality as the response (aka 'within-modality priming'), occipitotemporal activity is significantly reduced. This reduction in activity suggests that within-modality priming may be a specifically perceptual effect.

In contrast, cross-modal priming results in increased anterior prefrontal cortex (BA10) activity. This area is involved in explicit retrieval tasks, indicating that there may be an explicit component to cross-modal priming effects, and providing some insight on the rather mixed evidence as to whether cross-modal priming is intact in amnesiacs.

One possible explanation of these conflicting results is that there may be two routes to cross-modal priming: the use of phonological areas (left temporoparietal) and/or the use of explicit retrieval (anterior prefrontal) can both support priming effects.


Chaos, Order, and Coupled Oscillators

A recent paper in Physical Review Letters describes work on a network of coupled oscillators - in this case, interconnected pendulums - and how synchrony selectively emerges only under random external influence. From the Washington University in St. Louis press release:

"The researchers noticed that when driven by ordered forces the various pendulums behaved chaotically and swung out of sync like a group of intoxicated synchronized swimmers. This was unexpected — shouldn't synchronized forces yield synchronized pendulums?"

"But then came the real surprise: When they introduced disorder — forces were applied at random to each oscillator — the system became ordered and synchronized."
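The pendulum array in the paper is more elaborate than anything I can reproduce here, but the standard toy for studying synchrony in coupled oscillators is the Kuramoto model. Here's a minimal sketch (my own, loosely inspired by this literature, and not the paper's setup) showing how an "order parameter" distinguishes incoherent from synchronized behavior as coupling strength changes:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_kuramoto(n=50, coupling=2.0, steps=4000, dt=0.01):
    """Euler-integrate the Kuramoto model and return the final order
    parameter r in [0, 1]: r ~ 0 for incoherent phases, r ~ 1 for synchrony."""
    omega = rng.normal(0.0, 0.5, n)          # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
    for _ in range(steps):
        # each oscillator is pulled toward the phase of every other one
        pairwise = np.sin(theta[None, :] - theta[:, None])
        theta += dt * (omega + (coupling / n) * pairwise.sum(axis=1))
    return np.abs(np.mean(np.exp(1j * theta)))

print(f"weak coupling:   r = {simulate_kuramoto(coupling=0.1):.2f}")  # stays low
print(f"strong coupling: r = {simulate_kuramoto(coupling=2.0):.2f}")  # near 1
```

The counterintuitive result in the pendulum study is precisely that this kind of order can be promoted, rather than destroyed, by disordered external forcing.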


Word Learning in Feature Space

What developmental mechanisms support the cognitive transition from baby to curious and inquisitive toddler? One period of rapid cognitive development is known as the "vocabulary spurt" at 18-22 months, during which children go from learning 2-3 words per week to learning around 8-9 words per week. Some have proposed that a new information processing mechanism may come online during this time, one that supports "referential" or symbolic processing as opposed to more "basic" associative processing.

In a 2005 article in Cognitive Science, Regier proposes that one need not assume such a mechanistic shift takes place, and that the stage-like progression seen during this age can instead be a result of purely associative learning. He argues that the stage can be characterized on the basis of four qualities:

1) Ease of learning: associative learning becomes easier at this stage, as reflected by evidence that 13-15 month-olds can acquire object-word association in 9-12 training trials, while 2-3 year olds can learn within 1-3 trials.

2) Increased sensitivity to communicatively relevant sounds: whereas 14 month-olds can create unique sound-object associations only with fairly dissimilar sounds, 18-23 month-olds can associate even similar-sounding names with different objects. This increased sensitivity to communicatively relevant sounds is accompanied by decreased sensitivity to communicatively irrelevant sounds: 13-month-olds but not 20-month-olds are capable of learning an association between a sound made by a noisemaker and a target object. This evidence indicates that phonological processing in older children may have become more selectively "tuned," at the expense of flexibility.

3) Increased sensitivity to communicatively relevant semantics: after learning a new word-object association, 13-month-olds won't generalize the use of these words to new objects with similar shape but different color. However, older children will ignore color and size differences by generalizing words to new objects on the basis of shape alone.

4) Increased ability to cope with second labels: 16-month-olds have trouble learning a new word for an already-named object, whereas 24-month-olds can more easily acquire second labels for familiar objects.

Looking at these qualitative shifts in behavior, it's not hard to see why some researchers believe this age shows a mechanistic shift to referential processing. Regier, however, implemented a neural network model showing that similar phenomena can be observed without any such "symbolic" transition.

In the LEX (lexicon as exemplar) model that Regier proposes, word learning is based on the interaction of two aspects of language: word form, and word meaning. Each of these has its own layer, in which objects are presented to the network both phonologically and semantically, as is often the case in childhood ("see the car? that's a car!"). Two hidden layers connect these two outer layers, and through gradient descent, these layers extract the relationships between specific characteristic features of the input. The training corpus consisted of 50 words.

The innovative part of the network is the use of two sets of weights: first, a standard set of "associative" weights between nodes, and a second set of "attentional" weights between nodes. While the associative weights merely pick out correlations among input patterns, the attentional weights are responsible for the further biasing of particular types of features. This is important because some "feature dimensions" assume more importance than others for communication - in the case of #3 above, shape would be more highly biased than size or color.
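The effect of such attentional weights can be sketched in a few lines, in the spirit of attention-weighted exemplar models like ALCOVE. This is a toy illustration of the idea, not Regier's implementation, and the feature coding and weight values are invented:

```python
import numpy as np

def similarity(x, y, attention):
    """Exponential similarity over a feature space, with per-dimension
    attention weights stretching communicatively relevant dimensions
    (a toy analogue of LEX's attentional weights)."""
    distance = np.sum(attention * np.abs(np.asarray(x) - np.asarray(y)))
    return np.exp(-distance)

# Features: [shape, color, size], coded numerically purely for illustration.
trained_object      = [1.0, 0.2, 0.5]
same_shape_new_color = [1.0, 0.9, 0.5]
new_shape_same_color = [0.1, 0.2, 0.5]

# Attention has been learned to emphasize shape over color and size:
attention = np.array([3.0, 0.3, 0.3])

print(similarity(trained_object, same_shape_new_color, attention))  # high
print(similarity(trained_object, new_shape_same_color, attention))  # low
```

Because the shape dimension is stretched and the color dimension compressed, a newly learned word generalizes to same-shaped objects of a different color but not to differently shaped objects of the same color, reproducing the shape bias in quality #3 above.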

After some experience with the training input, LEX is also able to show increased ease of learning (#1). This occurs because training allows the attentional weights to be changed in such a way that they begin to define the "feature space" by emphasizing important differences between items, and collapsing across irrelevant differences.

After some initial training, LEX also shows a stage-like capacity for second label learning (#4). Just as the interference between the representations of different objects is minimized by the distortion of feature space through learning of attentional weights, the interference between multiple representations for the same objects is minimized as well. The model suggests that the hardest words to learn are those which overlap maximally with either previously experienced word forms, previously experienced word meanings, or both.


Blogging on the Brain: The Week in Review

Following Mixing Memory's lead, I thought I'd make a habit of linking to choice reading from recent brain-blogging. So here's the first "digest" in what I hope will be a series of weekly posts.

Neurochip followup: Neurofuture does some investigative blogging on one of the biggest "news" items this week, a brain-machine interface which is capable of recording from over 16,000 neurons at once.

Boolean Gene Expression Map of the Brain: Logic of Expression: Al Fin takes a look at how dynamic and complex the underlying logic of gene expression actually is.

Maybe they'll find the OFF switch for my 2-year old: Brainscan Blog points out a nice article by Max Sutherland on transcranial magnetic stimulation, one of the newest technologies in the cognitive neuroscientists' toolbox, and one that promises to deliver causal as opposed to merely correlational explanatory power to brain-based psychology.

The double deficit dyslexia hypothesis revisited: IQ's Corner points out a nice article from the Journal of Learning Disabilities on the 'double deficit' hypothesis of dyslexia.

Brain Imaging Techniques or Technocolor Phrenology? GNIF Brain Blogger takes a skeptical view towards the contribution that fMRI and other imaging technologies have provided to our understanding of human psychology. I'd love to see the folks at Eide Neurolearning respond to some of these claims.

More on Monday; until then, have a nice weekend...