Serial Oscillations and the Frequency Following Response

Several previous posts cover the role of specific frequencies of neural oscillations, in everything from anticipation to face processing. I have also mentioned a neural network model of short-term memory in which multiplexed gamma and theta oscillations give rise to memory capacity limits. A fascinating paper by Burle and Bonnet in Cognitive Brain Research delves into the implications of this "serial oscillation" account by using the Sternberg task in conjunction with the frequency following response.

In the Sternberg task, subjects study a short list of items and are later presented with probe items, which they must judge as belonging or not belonging to the studied list. The characteristic result is that reaction time increases linearly with the length of the studied list - by roughly 40 ms per item. This linear slope has been interpreted as the "memory scanning time" of a serial processor - but interestingly, the slope is the same for both positive and negative trials, suggesting that the memory search process is exhaustive rather than self-terminating.
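As a toy illustration of why equal slopes imply exhaustive scanning, here is a sketch of the two competing search models (the 400 ms intercept and the Python framing are my own assumptions; only the ~40 ms per-item slope comes from the Sternberg result):

```python
# Illustrative sketch: serial-exhaustive vs. self-terminating memory search.
# The 400 ms intercept is an assumed encoding/response constant; the 40 ms
# per-item scan time is the classic Sternberg slope.

def exhaustive_rt(set_size, intercept=400, scan=40):
    """Every item is scanned regardless of where (or whether) the probe matches,
    so positive and negative trials show the same slope."""
    return intercept + scan * set_size

def self_terminating_rt(set_size, intercept=400, scan=40):
    """Scanning stops at a match; on average half the list is scanned, so
    positive trials would show half the slope of negative trials."""
    return intercept + scan * (set_size + 1) / 2

for n in (2, 4, 6):
    print(n, exhaustive_rt(n), self_terminating_rt(n))
```

Because observed positive- and negative-trial slopes match, the data favor the exhaustive model.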

A recent neural network model by Lisman et al provides an interesting perspective on this data. According to their model, short term memory capacity may arise as an interaction between theta and gamma oscillations, such that each item stored in short-term memory is "refreshed" at a rate of 40 Hz (gamma) once every 100-200 ms (theta). This model can account for many aspects of the RT distributions seen in the Sternberg task, and importantly provides a much-needed connection between prefrontal neural activity and behavioral results.
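The capacity limit falls out of the arithmetic directly - a back-of-envelope sketch (the specific frequency values are assumptions drawn from the ranges quoted above):

```python
# Back-of-envelope capacity estimate from a Lisman-style multiplexing model:
# the number of items held in short-term memory is roughly the number of
# gamma cycles nested inside one theta cycle. Frequencies are assumptions
# from the ranges above (gamma ~40 Hz; theta period ~100-200 ms).

def capacity(theta_hz, gamma_hz=40.0):
    """Items per theta cycle = gamma cycles per theta cycle."""
    return int(gamma_hz / theta_hz)

print(capacity(10))  # fast theta (100 ms period) -> ~4 items
print(capacity(5))   # slow theta (200 ms period) -> ~8 items
```

Reassuringly, this range brackets the classic "magical number" of short-term memory capacity.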

Burle and Bonnet realized that if these oscillations really are responsible for memory capacity limits, we should be able to manipulate the pace of the oscillations and see an effect on behavior. Consequently, they hypothesized that by playing a repetitive stimulus with a frequency close to that of the neural oscillations, they should be able to shift the neural oscillations slightly in the direction of the external stimulus (this phenomenon is known more generally as the "frequency following response," and is at the heart of the excellent bwgen software).

Therefore, they conducted an experiment in which subjects completed the Sternberg task while auditory 'clicks' were played at frequencies near 40 Hz (in fact, they used half that frequency, around 20 Hz, because subjects have difficulty temporally resolving 40 clicks per second). The results showed that these "click trains" slowed reaction times when presented at 21-21.5 Hz, but speeded reaction times at 22 Hz. This is precisely the pattern one would expect if slightly slower click trains slow the neural oscillations underlying memory scanning, while slightly faster click trains speed them up.

These results suggest that the pacemaker frequency hypothetically involved in memory scanning has a harmonic somewhere between 21.5 and 22 Hz, and they also provide support for the Lisman et al. serial oscillatory model of short-term memory.

Related Posts:
Entangled Oscillations
Lost Keys: Memory Search Failure
Sequential Order in Precise Phase Timing


Blogging on the Brain: May 20-27, 2006

Cognitive Daily covers a fascinating experiment showing that many children may be synaesthetes.

Mind Hacks covers the recent discovery that Ambien - a common sleeping medication - may actually be capable of waking patients from vegetative comas. Al Fin recently covered this as well.

Speaking of Al Fin, here's a post on how two receptor types may interact via a protein called PSD95, and how this may relate to Alzheimer's.

A fascinating comparison between cocaine and ritalin at Myomancy.

An interesting post at OmniBrain about the "directional challenges" that face people like my wife daily.

The Genius covers noise in the brain and gene expression.

And, on a humorous note, Neuronerd covers the two biggest neuroscience discoveries of the century: "The Brain Holds the Key to Unlock Lost Memories", and "the Poverty Gene".

Have a nice weekend!


Conceptual Thinking: Just Out of Touch

Virtually anyone who reads the mainstream science press will have heard of mirror neurons - the nerve cells located in your frontal cortex that fire both when you perform a particular action and when someone else performs that same action. The relatively recent discovery of these neurons led to an explosion of theorizing about everything from the origins of theory of mind to autism. Interestingly, the same region of frontal cortex (premotor and inferior ventrolateral PFC) which contains mirror neurons has also been implicated in the encoding of abstract rules.

A recent review paper by Adele Diamond suggests that research on mirror neurons and research on the developmental time course of abstract thinking may be "missing the forest for the trees." In other words, the important feature of premotor and iVLPFC is not that they contain mirror neurons, or that fMRI shows them to be active during generalization tasks. Instead, the important feature of this region is that it is fundamentally responsible for associating things that are not physically connected.

This hypothesis is particularly tantalizing because we know that representations become progressively more abstract as you go from posterior to anterior frontal cortex. Intuitively, it makes sense that the area just anterior to motor cortex would contain mirror neurons.

A variety of behavioral results suggest that rudimentary forms of abstract thinking can be "bootstrapped" by physically connecting the relevant stimuli. For example, in the Delayed Non-Match to Sample (DNMS) task, children are presented with two stimuli and must pick the novel object in order to receive a small reward. Children normally are not able to succeed in this task until they are 21 months old, unless a surprisingly minor change is made: if the reward is physically attached to the underside of the novel object, children can learn this task as early as 9 months of age. To quote Diamond, "this result falsifies the previously held notion that the ability to deduce abstract rules (such as 'Choose the item that does not match the sample') is beyond the ability of infants less than one year old."

Another example: it takes adult monkeys hundreds of trials to learn that the reward is always underneath the red cover, when the monkeys are presented with red and blue covers in varying locations. However, if the reward is physically attached to the red cover, they learn in just a single trial.

Diamond goes on to present many more examples of how the presence of a physical connection facilitates the learning of abstract relationships. She further argues that the critical region of frontal cortex is not premotor or even inferior ventrolateral PFC, but rather the inferior frontal junction (periarcuate in monkeys). A series of lesion studies indicates that monkeys with a damaged periarcuate fail the DNMS in the same way that human infants do when the task does not involve physical connection between stimuli and reward.

In summary, it appears that this region is involved in abstract rule learning, the learning of conditional associations, imitation, and even empathy (neural activity here is increased during observation of emotions) precisely because it is the first region of cortex where the representations are completely dissociated from physical connections in the real world. This research has profound implications for education - and perhaps even therapy for autistics - because it suggests that activity in this region can be facilitated by making connections between abstract ideas physically explicit.


Tunnels, Funnels and Spirals

Neurofuture found an excellent (but very esoteric) lecture by mathematician Jack Cowan on the origins of various patterns of visual hallucinations as a property of noise in a particular type of neural lattice. The lecture is available here (in 64 and 256 kBit mp4 streams). Jack Cowan has dual appointments in both mathematics and neurology at the University of Chicago, and his approach to computational modeling of neural dynamics clearly falls under the umbrella of the "dynamic systems" viewpoint.

Although much of the lecture is geared towards advanced physicists and mathematicians, and I cannot claim to have understood all of it, the essential points of the lecture seem to be:

1) The retinotopic maps of V1 are actually transformed from retinal coordinates via a complex logarithm. The end result of this transformation is that concentric circles in the visual field would be represented by vertical lines in the cortex, whereas radial lines would become horizontal lines of activity in cortex.

2) The development of orientation and spatial frequency maps in V1 can be simulated with some fairly "simple" (maybe simple to Jack Cowan, but not to me!) self-organizing functions, such that "orientation and spatial frequency are the zeroth and first order spherical harmonics" and "the coefficient of the first order representation is just the dot product of the vector of feature preferences with the vector of stimulus features, where the vectors have components given by the first order spherical harmonics. Similar representation can be found for directional motion, binocular disparity, and color."

3) This theory predicts several phenomena which have been verified experimentally, such as aftereffect interference patterns when a subject views a display of white noise, after fixating a display of a single spatial frequency. This theory also predicts that, at rest, neural activity manifests a 1/f pattern of noise, which can be observed in RT data from a variety of behavioral experiments.
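The coordinate transform in point 1 is easy to verify numerically - a minimal sketch of the complex-logarithm map (the particular coordinate conventions are my own assumptions):

```python
import cmath

def cortical(z):
    """Complex-log map from a retinal point (as a complex number) to a model
    cortical coordinate: x encodes eccentricity, y encodes polar angle."""
    w = cmath.log(z)
    return w.real, w.imag

# Points on a concentric circle (constant |z|) share one cortical x-coordinate:
circle = [cmath.rect(2.0, a) for a in (0.3, 1.0, 2.0)]
xs = {round(cortical(z)[0], 6) for z in circle}
print(xs)  # a single value -> a vertical line in cortex

# Points on a radial line (constant angle) share one cortical y-coordinate:
ray = [cmath.rect(r, 0.7) for r in (0.5, 1.0, 3.0)]
ys = {round(cortical(z)[1], 6) for z in ray}
print(ys)  # a single value -> a horizontal line in cortex
```

This is exactly the circles-to-vertical-lines, rays-to-horizontal-lines behavior described in point 1.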

The bottom line of Cowan's simulations is that the functional organization of cortex is quasi-periodic, and thus shares many of the characteristics of quasi-crystals (pictured at the start of this post, and which may or may not look similar to the orientation tuning of hypercolumns in V1, pictured here). Therefore, many of the mathematical techniques used in crystal physics can apply to understanding neural noise.


The Truman Show - For Real?

The BBC is reporting on an MIT professor who has decided to record every minute of his child's preverbal experiences. The technology used to support the recording of nearly 400,000 hours of information includes 14 microphones, 11 omni-directional cameras, and some serious hard drive space: over 350GB of compressed data are created every day.
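For the curious, a rough back-of-envelope check of those figures (the recording schedule and project length are my assumptions; only the device counts and the 350GB/day figure come from the report):

```python
# Sanity-checking the quoted numbers. Hours recorded per day and project
# length are assumed; device counts and GB/day come from the BBC report.

devices = 14 + 11            # microphones + cameras
hours_per_day = 14           # assumed waking-hours recording schedule
years = 3                    # assumed length of the preverbal period studied

device_hours = devices * hours_per_day * 365 * years
total_storage_tb = 350 * 365 * years / 1000   # GB/day -> TB over the project

print(device_hours)             # ~383,000 device-hours, near the quoted 400,000
print(round(total_storage_tb))  # a few hundred TB of compressed data
```

Under these assumptions the 400,000-hour figure is plausible, and the total storage runs to hundreds of terabytes - "serious hard drive space" indeed, especially by 2006 standards.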

Once recording is complete, automated data mining software will attempt to uncover the experiences that may have lead to his child's first words, or characteristic patterns of speaking. Professor Deb Roy has referred to this as the "Human Speechome Project" - covered in more detail in this CogSci conference proceedings paper.

The final step in this project is to develop a computational model of language acquisition that will match the child's performance as closely as possible, given the exact same input.

via Seed Magazine.

Reichardt Detectors and Illusory Motion Reversal

If you've ever watched a movie and noticed that the wheels on a vehicle in the movie appear to be moving backwards, you have experienced illusory motion reversal (also known as the wagon wheel illusion; try it here). This illusion arises from the frame rate of the movie, or the refresh rate of a monitor, making it appear as though objects with a periodic spatial frequency (such as wheels, or repeating lines) are moving in the opposite direction of their true motion.
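The frame-rate explanation amounts to temporal aliasing, which can be sketched in a few lines (the spoke count, rotation speed, and 24 fps film rate below are illustrative assumptions):

```python
# Temporal aliasing sketch: a wheel with `spokes`-fold symmetry rotating at
# `rev_per_s`, sampled at `fps` frames per second. Between frames a spoke's
# position is ambiguous up to the spoke spacing, so the perceived motion is
# the per-frame displacement wrapped into [-0.5, 0.5) spoke-spacings.

def apparent_step(rev_per_s, spokes, fps):
    step = rev_per_s * spokes / fps     # true spoke-spacings per frame
    return (step + 0.5) % 1.0 - 0.5     # wrapped; negative => illusory reversal

print(apparent_step(4.0, 6, 24))   # exactly one spacing/frame: wheel looks still
print(apparent_step(4.2, 6, 24))   # slightly faster: slow forward drift
print(apparent_step(3.8, 6, 24))   # slightly slower: appears to rotate backwards
```

Near the critical speed, a slightly slower wheel appears to rotate backwards - the wagon wheel illusion.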

Interestingly, recent reports that illusory motion reversal happens under direct sunlight have revived claims that the human visual system essentially takes perceptual "snapshots" of the visual field, and hence that the human visual system has some sort of framerate or cycle speed. This explanation fits well with theories of neural function that posit an important role for neural oscillations in attention and memory processes.

In contrast, other authors have argued that a particular type of motion detector - a Reichardt Detector - can explain these effects, without invoking some kind of neural frame rate in the visual system. According to this argument, a neural circuit may serve as a Reichardt detector subunit if it receives input from two receptive fields, where one is temporally delayed from the other. Essentially, such a circuit would detect a particular velocity (rate of motion + direction of motion). In a full Reichardt detector, two of these subunits are compared with one another through subtraction. (If you can't visualize this, try this interactive demonstration.)
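A minimal simulation of such a detector, under invented parameters (two receptive fields, a delay line, and a drifting sinusoidal grating), shows how the subtraction of mirror-image subunits yields direction selectivity:

```python
import math

def reichardt_output(direction, f=2.0, tau=0.05, d=0.1, k=2.0, dt=0.001, T=2.0):
    """Mean output of a toy full Reichardt detector viewing a drifting grating.

    direction=+1 drifts the grating from receptive field A toward B,
    direction=-1 the opposite way. Positive mean output signals A->B motion.
    All parameter values are invented for illustration.
    """
    def luminance(x, t):
        # Sinusoidal grating of temporal frequency f and spatial frequency k.
        return math.sin(2 * math.pi * (f * t - direction * k * x))

    n = int(T / dt)
    total = 0.0
    for i in range(n):
        t = i * dt
        a_now, b_now = luminance(0.0, t), luminance(d, t)
        a_del, b_del = luminance(0.0, t - tau), luminance(d, t - tau)
        # Two mirror-image subunits, each correlating one channel's delayed
        # input with the other channel's current input; their subtraction
        # gives an output whose sign encodes direction.
        total += a_del * b_now - b_del * a_now
    return total / n

print(reichardt_output(+1) > 0)  # detector prefers A->B motion
print(reichardt_output(-1) < 0)  # opposite drift gives negative output
```

Note that, as the reply discussed below emphasizes, the full detector's mean output depends on the grating's temporal frequency rather than its velocity per se.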

According to the motion-reversal-via-Reichardt-detectors argument, if the spokes of the wagon wheel are rotating at a specific frequency, some Reichardt detectors will become active for the opposite direction, because they will essentially "mistake" another spoke as the new location of the first spoke. This motion reversal only enters conscious awareness when the Reichardt detectors tuned to the correct direction of motion are fatigued.

This argument has at least two shortcomings which I have pointed out previously. But as it turns out, a recent paper by Rojas et al. has made the same points. Even better, the authors have replied in another published piece. Here's the essence of their reply:

1) Full Reichardt detectors, as opposed to Reichardt subunits, are not velocity-tuned but rather temporal frequency-tuned. As a result, it is possible to explain the increased probability of illusory motion reversal around 10 Hz with full Reichardt detectors.

2) The Reichardt-based explanation does not posit temporal sampling at all, despite first appearances, because Reichardt detectors are not temporally discrete. Instead, they are spatially discrete, and thus sensitive to spatial aliasing if not preceded by the appropriate pre-filters.

3) Although the arguments presented so far do not argue against temporal sampling that is not uniform across the visual field, a new visual illusion (pictured at the top) provides preliminary evidence against this hypothesis. Specifically, when subjects view a stimulus with moving lines superimposed over rotating fan blades, illusory motion reversal tends to happen for one pattern but not the other (i.e., fan blades but not lines, or vice versa). A spatial perceptual sampling hypothesis would suggest that both items should reverse simultaneously.

However, this final point does not argue against object-based temporal sampling (as the authors themselves note). It is also possible that temporal sampling occurs at multiple levels at once - i.e., both space-based and object-based - just as in other visual attention experiments. In fact, Reichardt detectors would have to be at least partially object sensitive, otherwise we would have the sensation that objects are moving when another object has merely appeared.

Although I think these replies are interesting, I am also not sure that this argument can actually be resolved. Despite several previously mentioned problems (foremost among them being that Reichardt detectors have never actually been found in the mammalian visual system), it seems like these points of view may not be mutually exclusive. For example, it is still possible that the asynchronous activity of Reichardt detectors is temporally sampled, thus giving rise to rivalry between them.

Related Posts:
Attention: The Selection Problem
Illusory Motion Reversal: Rivalry or Perceptual Sampling?
Perceptual Sampling: The Wagon Wheel Illusion


Scientific Paradises

Astronomers have the Paranal observatory in Chile; physicists have Fermilab and CERN; complexity theorists have the Santa Fe Institute; neuroscientists have NSI. But what is there for behavioral researchers? As it turns out, a beautiful island in the Caribbean.

Cayo Santiago is a tiny island off the coast of Puerto Rico which contains 950 free-ranging rhesus macaque monkeys. Tourists are never allowed; in fact, to visit, you must apply to the directors of the island, who ensure that the rhesus population stays healthy. If you're allowed to visit the island, you can simply set up your experiment somewhere in the jungle, and interact directly with the roaming macaques.

The colony was founded in 1938 with 409 monkeys imported from Calcutta. Since then, the monkeys have freely reproduced and now live in several naturally-formed social groups (with the exception of a few loner males). There is a database of information on each individual in the population, as well as the lineage of each individual as identified through DNA fingerprinting techniques. Researchers have also collected extensive information on social group membership, and maintain that birth/death records are accurate to within two weeks.

Current research on the island is being conducted by researchers from Harvard, Yale, Duke, NIH, University of Chicago and other institutions. Check out a video of the free-ranging monkey population here (sorry, Spanish only).


The Origins of Memory Distortion

In a series of posts last week, I reconceptualized the “seven sins of memory” as a group of memory inaccuracies arising from just three types of system failure. To recap briefly:
  1. failures related to maintenance can explain rapid forgetting, prospective memory failures, and absent-mindedness (claim supported here).
  2. memory search failures can result in interference, levels-of-processing phenomena, infantile amnesia (claim supported here), as well as directed forgetting, tip-of-the-tongue phenomena, retrieval-induced forgetting, and consolidation-related forgetting (claim supported here).
  3. monitoring failures can explain false memory effects, consistency biases, source misattribution errors, and incorrect recall from the DRM paradigm (supported here).

I have described how each of these failures can cleanly account for much of the evidence used to support a classification scheme with seven categories (although it should be noted that I have not covered Schacter's sin of "persistence," which in truth is not really a memory failure anyway). However, the current classification scheme has a few additional benefits, as described below.

First, this framework is capable of accounting for memory phenomena not considered in Schacter’s original proposal. For example, I have described how infantile amnesia and interference in the AB-AC task can both result from failures of cue specification in the process of memory search.

Second, this framework can be extended to explain yet other memory inaccuracies as resulting from multiple failures. For example, déjà vu might be considered a failure of both maintenance and search, in which sufficiently precise cues are unavailable (which would otherwise elicit the original experience giving rise to the sensation of familiarity). However, déjà vu is also characterized by a simultaneous failure to maintain the item or characteristic of the current experience that originally triggered the sensation of familiarity.

Third, this framework is more parsimonious (by virtue of positing fewer constructs), and is much more compatible with the functional anatomy of memory than Schacter’s “seven sins.” For example, some evidence suggests that maintenance, cue specification, and monitoring may be implemented by distinct neural regions in prefrontal cortex (Dobbins et al., 2002). In contrast, it is almost unthinkable that concepts as abstract as bias or transience could be localized to a specific brain region.

In conclusion, Schacter’s “seven sins” did an admirable job of classifying memory inaccuracies in an intuitively appealing way. Further examination of these phenomena, however, has afforded the re-classification proposed above. This proposal is just as complete, while both more parsimonious and more compatible with an emerging view of the neuroanatomy supporting memory functions.


Dobbins, I. G., Foley, H., Schacter, D. L., & Wagner, A. D. (2002). Executive control during episodic retrieval: Multiple prefrontal processes subserve source memory. Neuron, 35, 989-996.


Blogging on the Brain: May 13-20, 2006

Some recent brain blogging in review:

Chocolate is Cold Comfort: An interesting post from Mind Hacks about how chocolate may actually depress mood rather than elevate it. Should be interesting to see how this plays with the public, given there's a pretty entrenched popular opinion to the contrary...

The Genius covers an excellent talk by David Eagleman that we both saw this week. You're definitely familiar with visual illusions, but what about time illusions? If not, check out this summary...

The Microeconomics of Anticipation: Neurocritic reviews a popular news item this week, the "neurobiological substrates of dread."

What is "g"? Some thoughts: A very interesting post (and discussion!) about the nature of general intelligence, and to what extent we can think of it as being a cross-species construct as well.

Cognitive Flexibility: Eide Neurolearning covers a brand new and very interesting paper by Badre and Wagner in PNAS, called "computational and neurobiological mechanisms underlying cognitive flexibility."

Chances Are: A brief review of what looks like an excellent new book on the role of probability and statistics in our everyday lives... I know that the mere mention of the word "statistics" probably turns off many potential readers, but until you've read detailed information about statistics from a source _other_ than a textbook, you don't know what you're missing.

A really cool visual illusion found by the Neurophile.

A nice post from Thinking Meat on how dolphins actually create names for one another! As far as I know, this kind of behavior has not been previously observed in the wild.

Neuroweapons: A-Bomb of the Future? (Brain Waves)

For the more biologically inclined, a great review of the role of calcium in LTP & LTD over at Retrospectacle.


Memory's Gates: Failures of Monitoring

So far in the current series of posts, I have covered how Schacter's "seven sins of memory" can be more parsimoniously explained as arising from three types of system failure. The first, failures of maintenance, roughly map on to his sin of transience. The second, failures of search, explain both suppression related phenomena as well as interference and transfer appropriate processing. But what about the third type of system failure, what I have termed "monitoring failure"?

A particular memory inaccuracy is likely to involve monitoring failure if a memory search has returned results that are taken to be valid matches for the search cues when in fact they are not. Schacter categorizes this type of failure in three different ways: suggestibility, misattribution, and bias. It is not clear that these are fundamentally different. In fact, as discussed below, all of the evidence Schacter used to exemplify these “sins” can be more parsimoniously interpreted as failure to correctly evaluate the validity of results from a memory search.

For example, source misattribution errors occur when people insist that they saw someone in one context when in fact they saw them in another, or may incorrectly attribute some piece of trivia as coming from a newspaper, when in fact it was provided by the experimenter. These results can clearly be taken as instances of faulty monitoring of memory search results. Similarly, false recall or recognition in the Deese-Roediger-McDermott paradigm can be seen as a kind of monitoring failure, in which the feelings of familiarity are falsely monitored as indicating that the target word was present, when in fact it was not. The “false fame” effect is yet another example, where previously studied names are considered more likely to be famous than new names.

Although Schacter classifies these misattribution inaccuracies as distinct from suggestibility, this distinction may not be necessary. In a strict sense, errors resulting from suggestibility (for example, the subtle influences of question phrasing on the retelling of traffic accidents) can be viewed as misattribution errors, where the emphasis that originates from the question is instead falsely monitored as originating from the content of the memory itself.

Schacter also categorizes memory biases as distinct from both of these, although they too can be considered instances of monitoring failure, as explained below.

For example, humans show a consistency bias when asked how similar their previous attitudes are to their current attitudes, and tend to overestimate the similarity between current and previous attitudes. This might easily result from a memory search returning current as well as past attitudes towards an issue, despite the proper cues being provided to memory. In this case, faulty monitoring of memory search results might allow some current attitudes to “taint” the perception of previous attitudes, and result in the perception of increased similarity.

On the other hand, when people have reason to believe that their skills or opinions have changed substantially, retrospective biases tend to move in the opposite direction, such that people tend to overestimate the amount they may have changed. Interestingly, faulty monitoring may also be the culprit in this case: despite accurately probing memory, the memory search may also return related current attitudes (presumably those that strongly represent the idea that change has occurred). These results may influence the magnitude of perceived difference between current and previous time points, such that failure to exclude the current attitude (with a strong representation of “change”) may exaggerate the differences between current and previous attitudes.

The aforementioned grouping of bias, suggestibility, and misattribution under the umbrella of “monitoring failure” has additional advantages. For example, “faulty monitoring” might also be at work in other phenomena, such as confabulation. This refers to the phenomenon where some patients with frontal damage will unintentionally fabricate stories about their past. This is not a failure of memory search, because it is clear that the patients know the kind of information they should be looking for – autobiographical facts. However, the results of this search are mistakenly regarded as accurate, as opposed to being correctly monitored (and discarded) as inaccurate.

In the next (and final) post in this series, I will give additional evidence on how these categories of memory failure are superior to the "seven sins" (despite not being as catchy!). Furthermore, this framework makes some interesting predictions about how more complicated memory failures - such as déjà vu - may arise as an interaction between multiple systems.

Note: This post is part 5 (part 1, part 2, part 3, part 4) in a series, in which I'll review and revise Schacter's "seven sins of memory" according to a new framework of memory failure, one that is both closer to neuroanatomy and wider in scope.


Lost in the Network: Failures of Memory Architecture

In the last few posts, I have reviewed how interference in the classic AB-AC task, transfer-appropriate processing, infantile amnesia, rapid forgetting, and prospective memory failure, can all be explained with only two concepts: memory maintenance failure, and memory search failure. I have also argued that all memory search failure results from two distinct causes: information is either inaccessible because the proper “search parameters” are not being used as cues for the memory system, or the search mechanism itself is suppressing certain “search results.”

So far, we have only discussed the first type of search failure, which I referred to as failures of "cue specification." What about this mysterious second type of failure, involving the computational architecture of memory, and sometimes even the suppression of memory search results?

Contrary to popular belief, "suppression" is not merely a Freudian folk tale, but may have a real neurological basis. Memory suppression - also known as "directed forgetting" - seems to be a capacity that we all may have to some extent, modulated by individual differences.

How might something like this occur in the brain? Consider the following: even if correct and precise cues are used to probe memory content, closely related items may be mistakenly retrieved first. These incorrect search results may be discarded, and memory will be probed again, perhaps with updated search cues. If incorrect items are retrieved again, this process will repeat; eventually, the target item still residing in memory may actually become suppressed because so many related items have been previously identified as incorrect.
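This iterative account can be caricatured in a few lines of toy code (every number here is invented; the point is only that inhibition spreading from rejected competitors can push an intact target below the retrieval threshold):

```python
# Toy sketch of iterative memory search with suppression. Rejecting a
# retrieved competitor inhibits it, but because competitors share features
# with the target, a fraction of that inhibition spreads to the target
# itself. After enough rejections the target can fall below the retrieval
# threshold even though it is still "in memory". All parameters are invented.

def search(activations, target, similarity=0.8, inhibition=0.5,
           threshold=0.2, max_tries=10):
    acts = dict(activations)
    for attempt in range(1, max_tries + 1):
        best = max(acts, key=acts.get)
        if acts[best] < threshold:
            return f"tip-of-the-tongue after {attempt} tries"
        if best == target:
            return f"retrieved {target} on try {attempt}"
        acts[best] -= inhibition                  # reject the wrong competitor
        acts[target] -= inhibition * similarity   # collateral suppression
    return "gave up"

# The target starts slightly weaker than two closely related competitors:
print(search({"target": 0.9, "lure1": 1.0, "lure2": 0.95}, "target"))
```

With weaker competitor-target overlap the same search succeeds, which is consistent with the observation that tip-of-the-tongue states arise most often for items with many close neighbors.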

The behavioral consequence of this iterative memory failure is what is known as the Tip-of-the-Tongue phenomenon, in which many fairly precise memory search cues are available (such as the number of syllables of the target item, the first or ending letters, and several semantically related concepts) but the target item itself is inaccessible (Fletcher & Henson, 2004). This is a perfect example of suppression-related memory search failure.

A related symptom of suppression-related memory search failure is known as retrieval induced forgetting. This robust effect has been succinctly characterized as when “remembering makes subjects forget related memories” (Levy & Anderson, 2002), and has been demonstrated with a variety of stimuli, including visual objects, word pairs, mock crime scenes, and personality traits. In this case, retrieval practice on related items impairs the retrieval of other previously learned items. For example, a witness who is asked to describe certain features of a crime scene will be significantly more likely to forget those features of the scene that were not asked about. This suppression-related failure of memory search roughly maps on to Schacter’s sin of blocking.

Both of these examples - retrieval-induced forgetting and Tip-of-the-Tongue - may result from a little-discussed feature of neural processing, the dark twin of the popular concept "spreading activation." This dark twin is known as priming-related reduction in neural activity.

This describes a situation in which subsequent presentations of a primed item will show the usual priming-induced facilitation of neural processing (in terms of reaction time, and other measures), but paradoxically a reduction in neural activity. This may happen because the neural circuits representing these concepts essentially become "tuned" to produce this memory trace, as a result of small weight changes after the prime. This results both in faster reaction times and less overall activity.

How does this explain retrieval-induced forgetting and Tip-of-the-Tongue? Consider this: if an item has been repeatedly and associatively primed through the neural processing of related information (as in tip-of-the-tongue or retrieval-induced forgetting), then production of those memory traces may be facilitated while the overall amount of neural activity required to produce them decreases. Ultimately, this reduced activity makes it highly unlikely that related items will become active, resulting in a short-term suppression effect.

Another computational characteristic of the memory search process that can sometimes result in memory search failure comes from the complementary learning systems perspective. This view holds that some brain regions are specialized for quick, one-shot learning of associations, while others are specialized for slow, context-invariant learning of underlying statistical regularities.

According to this framework, memories undergo a slow transition from one neural memory system to another, in which the initial learning of associations is accomplished by the hippocampus, and information from those memories is subsequently transitioned into neocortex over a longer period of time (McClelland et al., 2002; see my presentation for additional background). But if the hippocampus is damaged, as in amnesia, some recent memories will not have been successfully transitioned into neocortex. No matter how specific and precise the memory search cues are, the relevant information cannot be retrieved – simply because it is no longer there! This results in the well-known pattern of temporally graded retrograde amnesia in patients with medial temporal lobe damage. Furthermore, this consolidation process can be interrupted by ongoing neural activity (Wixted, 2004).
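A minimal sketch of this two-system idea (the learning rates and replay counts are invented): a fast trace is acquired in one shot and slowly replayed into a cortical trace, and interrupting replay early leaves recent memories weakly consolidated, reproducing the temporal gradient:

```python
# Toy complementary-learning-systems sketch. A fast "hippocampal" trace is
# acquired in one shot, then repeatedly replayed into a slow "cortical"
# trace. A lesion that cuts replay short leaves recent memories weak in
# cortex - temporally graded retrograde amnesia. All rates are invented.

def consolidate(replay_events, cortical_rate=0.05):
    hippocampus, cortex = 1.0, 0.0      # one-shot hippocampal encoding
    for _ in range(replay_events):
        cortex += cortical_rate * (hippocampus - cortex)  # slow interleaved copy
    return cortex

old_memory = consolidate(replay_events=100)  # well consolidated before lesion
new_memory = consolidate(replay_events=3)    # lesion interrupts consolidation
print(round(old_memory, 2), round(new_memory, 2))
```

After a simulated hippocampal lesion, only the cortical trace remains, so the old memory survives while the recent one is largely lost.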

The final kind of memory failure is "monitoring failure," and will be covered in tomorrow's post. As we shall see, this simple concept can explain some of the most infamous failures of memory, including suggestibility, bias, source misattribution, and even confabulation.

Note: This post is part 4 (part 1, part 2, part 3) in a series of posts, in which I'll review and revise Schacter's "seven sins of memory" according to a new framework of memory failure, one that is both closer to neuroanatomy and wider in scope.


Fletcher, P., & Henson, R. N. A. (2004). Prefrontal cortex and long-term memory retrieval. In R. S. J. Frackowiak, K. J. Friston, C. D. Frith, R. J. Dolan, C. J. Price, S. Zeki, J. Ashburner, & W. Penny (Eds.), Human Brain Function (2nd ed., pp. 499-514). London: Elsevier.

Levy, B. J., & Anderson, M. C. (2002). Inhibitory processes and the control of memory retrieval. Trends in Cognitive Sciences, 6, 299-305.

McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (2002). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. In T. A. Polk & C. M. Seifert (Eds.), Cognitive modeling (pp. 499-534). Cambridge, MA: MIT Press.

Wixted, J. T., & Stretch, V. (2004). In defense of the signal detection interpretation of remember/know judgments. Psychonomic Bulletin & Review, 11, 616-641.

Related Posts:
The Seven Sins of Memory (Part 1)
The Transience of Memory (Part 2)
Lost Keys: Memory Search Failure (Part 3)


Lost keys: Memory Search Failures

The "seven sins of memory" can be more productively viewed as three types of memory failure. Although maintenance failure was discussed yesterday, the second type of memory failure, and by far the most well-known, is the topic of today's post: failures of memory search. This category includes long-term forgetting, in which information that was initially remembered is now inaccessible, either temporarily or permanently.

Originally, these phenomena had been seen by Schacter (author of the "seven sins of memory") as an instance of the sin of transience, but there are subtle problems with this account. Under normal circumstances, information is probably not ever actually “lost” from storage. Instead, I argue that all memory search failure results from two distinct causes: information is either inaccessible because the proper “search parameters” are not being used as cues for the memory system, or the search mechanism itself is suppressing certain “search results.”

In the first type of memory search failure – failures of cue specification – memories appear to be lost only because attempts at retrieval either provide the wrong retrieval cues or provide cues that are insufficiently precise. This conceptualization of forgetting provides a natural explanation for one of the most common task paradigms in the memory literature, the AB-AC task, in which recall of one list of word pairs is impaired by the subsequent studying of a second list of word pairs when both lists share a cue. Here, the “A” cue is ambiguous in a memory search, and thus results in the appearance of forgetting for the first list, in addition to slowed learning for the second list of words.

Other evidence for the idea that cue specification failure underlies some types of forgetting is that successful recall can be predicted on the basis of how well the current state of neural activity matches the neural activity at the time of encoding (Polyn, et al., 2005). The striking neuroimaging used to confirm this hypothesis provides a window onto exactly the kind of “cue specification” process that is frequently at fault in forgetting phenomena.

Cue specification failure also explains phenomena such as transfer-appropriate processing, in which memory is improved in tasks where the cues provided to a memory search more closely match the content of neural activity at the time of encoding (Morris et al., 1977). Furthermore, a “deeper” level of processing may provide a memory advantage because it results in more elaborate processing, such that any given cue is more likely to activate a related memory. Therefore, one interpretation of the levels-of-processing advantage is that “width,” as opposed to “depth,” of processing is the critical feature that results in improved memory search.

Finally, cue specification failure may underlie some aspects of infantile amnesia. Although there is reason to think that encoding failures are also involved (Bauer, 2004), one interpretation of infantile amnesia is that waves of neural pruning during childhood make it essentially impossible to provide memory search cues that are sufficiently similar to the state of neural activity at the time of encoding. The hippocampus is already well developed by 1 year of age (Bauer, 2004), and although prefrontal cortex undergoes development well into the teens, most people would probably report fairly good memory from well before that age. Therefore, I argue that infantile amnesia results primarily from a failure to reinstantiate the neural context that was present during encoding in infancy – in other words, a failure to provide sufficiently precise memory cues.

The second cause of memory search failure involves computational characteristics of neural mechanisms that accomplish memory search and memory representation. One example of this is the phenomenon of memory suppression - a topic to which we will return tomorrow, when I more fully explore this second aspect to memory search failure.

Note: This post is part 3 in a series of posts, in which I'll review and revise Schacter's "seven sins of memory" according to a new framework of memory failure, one that is both closer to neuroanatomy and wider in scope. Here is part 1, and here's part 2.


Bauer, P. J. (2004). Getting explicit memory off the ground: Steps toward construction of a neuro-developmental account of changes in the first two years of life. Developmental Review, 347-373.

Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer-appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16, 519-533.

Polyn, S. M., Natu, V. S., Cohen, J. D., & Norman, K. A. (2005). Category-specific cortical activity precedes retrieval during memory search. Science, 310, 1963-1966.

Related Posts:
The Seven Sins of Memory
The Transience of Memory
Overgrowth, Pruning and Infantile Amnesia
Neural Network Models of the Hippocampus


The Transience of Memory

The first of Schacter’s “sins” of memory is transience, which covers both rapid and long-term forgetting, as well as problems at the time of encoding that may contribute to transience. There are several problems with this account, not least of which is the fact that the next sin, absent-mindedness, would seem to be a cause of transience. Which, then, is the “original” sin?

A second problem is that the mechanisms underlying short- and long-term transience may be quite different. Although Schacter tries to argue that information can simply be “lost” from long-term storage, it seems more likely that this loss results from a failure to specify the correct search cues. Short-term or rapid forgetting, on the other hand, often clearly results from the complete loss of information. For example, on tasks such as digit span (where all possible task-relevant cues are available to memory), some subjects can show almost instantaneous forgetting and yet intact long-term memory performance. This dissociation underscores how different short- and long-term forgetting actually are.

For these reasons, I have combined short-term forgetting and absent-mindedness into a single type of memory system failure, failures of maintenance. Failures of this memory system frequently result from lapses of attention (such as may occur in divided attention tasks). Other times, information may seem to “disappear” from consciousness; notably, these moments are often characterized by a lack of concentration. In both cases, information is no longer maintained online to guide behavior.

Another symptom of maintenance failure is impaired prospective memory. Prospective memory refers to our ability to "remember to remember." For example, early in the morning you might notice that you are nearly out of milk, and you decide to stop at the store on your way home from work. However, when the time comes and you're driving home from work, you completely neglect to stop and buy more. This is a classic failure of prospective memory. In what way might this arise from maintenance failure?

One obvious possibility is that you failed to maintain your plans for the duration of the day, and so this information was not immediately available to guide behavior when you drove past the store. Your plan to stop at the store may have even occurred to you while driving, but that information succumbed to the sin of transience, or in our terms here, failed to be maintained. However, it seems unlikely that we are constantly maintaining all our future plans in working memory; although prospective memory failures would likely be less common if this were the case, it seems like another aspect of memory may be at fault here.

The second possibility is that you failed to search your memory for "driving home," or some other search cue that might have prompted you to remember your plan to stop for milk. This, however, is not a symptom of maintenance failure. Instead, this is a failure of what I call "memory search," in which the relevant information is available in memory, but is not successfully retrieved. As it turns out, this type of memory problem is by far the most common. Look out for tomorrow's post, which will explain how two simple types of memory search failure can explain the majority of human memory inaccuracy.

Note: This post is part 2 in a series of posts, in which I'll review and revise Schacter's "seven sins of memory" according to a new framework of memory failure, one that is both closer to neuroanatomy and wider in scope. Here is part 1.

Related Posts:
Active Maintenance and The Visual Refresh Rate
Models of Active Maintenance as Oscillation
Anticipation and Synchronization
Task Switching in Prefrontal Cortex


The Seven Sins of Memory

Psychologist Daniel Schacter has argued that memory's trespasses can be divided into seven types: transience, absent-mindedness, blocking, misattribution, suggestibility, bias, and persistence.

In the original paper (which was hugely popular and is still highly recommended), the first sin ("transience") refers to the gradual loss of information from both short- and long-term memory. Although Schacter is unclear as to whether the primary cause of this sin is decay, interference, or "overwriting," he does argue that information can be merely lost from memory without any cause other than the passage of time.

Absent-mindedness refers to a failure of attention, either at retrieval or encoding. Neuroimaging evidence suggests that memory success is largely a function of the kinds of processing that take place during encoding, so this "sin" seems fairly well-established in the literature.

Blocking is a failure to retrieve information that has been correctly encoded. One famous example of this is the "tip-of-the-tongue" phenomenon, in which people know that they know something, and may even be able to describe several features of the known item, but are unable to successfully retrieve the name of this item from memory. Typically, TOT happens with the names of people and places.

Misattribution includes all failure of source memory, both when the incorrect source is identified and when it is not. The Deese-Roediger-McDermott paradigm offers a wonderful example of misattribution errors.

Suggestibility refers to the powerful influence that subtle things like question phrasing can exert on memory. Elizabeth Loftus has done a lot of work on this area of memory inaccuracy, and has found several surprising facts: people can be easily made to believe that certain things happened to them which never did (Were you ever lost in the mall as a child? Did you kick over the punch bowl at a family wedding? Loftus can make a significant portion of you believe so.)

The sixth sin is bias. One example of bias is the well-known consistency bias, in which people tend to overestimate the similarity between their current attitudes and previous attitudes.

The final sin, persistence, refers to the fact that we can't forget some of the memories we would most like to. This infamous trait of memory is at the root of problems like post-traumatic stress disorder.

In contrast to Schacter’s “seven sins of memory” (1999), I argue that all types of memory inaccuracy arise from three distinct types of memory system failure: those of maintenance, of search, and of monitoring. Failures of maintenance include problems involving prospective memory (“forgetting to remember”), rapid forgetting, and absent-mindedness. Failures of search include retrieval-induced forgetting, tip-of-the-tongue phenomena, and amnesia. Failures of monitoring include source misattribution, memory biases, and suggestibility. Finally, other memory inaccuracies may actually result from interactions among multiple sources of failure.

In this week's upcoming posts, I will review each of these categories of memory failure in turn, and describe how they can account for all types of memory inaccuracy when taken together.

PS: Here's the next in this series of posts.

Related Posts:
Redeeming Freud: Memory Suppression
How Many Human Memory Systems?
Prefrontal Cortex and Long-Term Memory


Secret Agents: Robotic Cockroaches

Scientists have recently created tiny robotic cockroaches so lifelike that they are actually capable of fooling other cockroaches. Perhaps cockroaches aren't the most difficult creatures to trick, but this is impressive nonetheless. From the press release:

"When dropped into a small experimental area with a maze of curved walls, the robots move, turn and stop. They can navigate their way safely by avoiding the walls, obstacles or each other, follow the walls, congregate around a lamp beam or even line up. When placed in the same area with cockroaches, the robots quickly adapt their behaviour by mimicking the animals’ movements. Coated with pheromones taken from roaches, the infiltrator robots even fool the insects into thinking they are real creatures.

Not only did the insbots act like and interact with the insects, they even succeeded in changing the roaches’ behaviour. For example, the darkness-loving insects followed their artificial cousins towards bright beams of light and congregated there. This process took up to two hours, but it showed how humans might soon be able to manipulate the behaviour of a whole colony of insects. A trick that would delight pest-controllers the world over!

Firstly, by changing the way animals behave or inducing collective behaviour, scientists can learn much about animal communications and information processing. Secondly, the ability to create ‘mixed systems’, where artificial agents interact with natural ones, is a long-held dream for many in the scientific community – including those working on nanotechnology. Moreover, these systems are in keeping with emerging European research such as collective robotics and FET-funded projects such as Swarmbots."

See the full press release here.

Related posts:
Giving the Ghost a Machine
Imitation vs Self-awareness: The Mirror Test
Profile: Mark Tilden


Blogging on the Brain: May 1-6, 2006

Some highlights from the week in brain blogging:

LTP in CA1 from low-frequency stimulation: Typically, low-frequency stimulation leads to synaptic depression, but Neuronerd describes a case in hippocampal cells in which it leads to long-term potentiation! Very interesting, and it should tell us more about the way this fascinating brain structure works.

Memory juggling: Eide Neurolearning briefly covers a fascinating paper from PNAS on developmental changes in working memory.

A social contructivist look at ADHD, and whether it's all in our heads, at Myomancy blog.

Social isolation delays the positive effects of running on adult neurogenesis (Neurodudes)

A neuroscience podcast series (including interviews with Eric Kandel) from Neurofuture

You can’t hide your lying eyes from Thinking Meat

Wanted: Psychometricians over at IQ's Corner

And finally, some nice neurotransmitter jewelry, discovered by Mind Hacks.

Have a nice weekend!


Prefrontal Cortex and Long-Term Memory

Typically, medial temporal lobe regions are thought to be the major players in long-term memory functions. Prefrontal cortex's role in these functions has only been appreciated more recently. In addition to the rehearsal processes traditionally thought to facilitate the transition of information from short-term to long-term memory, the prefrontal cortex is intimately involved in the encoding and retrieval processes that are critical to effective long-term memory functioning.

A recent presentation (PPT; PDF) that I gave reviews recent evidence about the PFC's role in these functions, with special attention to the functional anatomy of both ventral and dorsolateral frontal regions.

Sorry for the reduction in posts lately; it's finals week, and I'm taking an extra class (bad idea). Expect a more regular pattern of posting to begin in the next few weeks.


Age of Acquisition

Do you learn best what you learned first? Evidence supporting the Age of Acquisition effect (Zevin & Seidenberg, 2002) would suggest so: in some cases, you show better reading performance on words you learned early in life, even controlling for the effect of cumulative exposure frequency.

Computational models were used to investigate this phenomenon because they allow for precise control of properties that are normally confounded in behavioral work, such as word frequency and word length. An age-of-acquisition effect was not seen among words that are typical of English orthography and phonology, but it did appear in a test in which the chosen words differed substantially in their orthography-to-phonology mappings. AoA effects would thus be expected in any situation where learning the name for one pattern does not help you learn the name for another pattern, such as in the arbitrary mappings from semantics to phonology. In the case of orthography-to-phonology mappings, however, this is frequently not the case: learning one word's pronunciation usually does help with similar words.

Connectionist principles suggest that there should be a differential effect of age of acquisition: initially there is a large advantage for early-learned words, because regularities across pronunciations have not yet been fully extracted from experience. However, as the weights become entrenched in representing these regularities over the course of experience, the disadvantage conveyed by learning words late decreases. Accordingly, there are two factors to consider: cumulative frequency and frequency trajectory. Controlling for cumulative frequency, this account suggests that there should be an advantage for words trained at high frequency early in development, relative to words trained at high frequency only later.
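The two-factor design can be sketched numerically. This is a hypothetical illustration (the epoch count and sampling probabilities are my own assumptions, not values from Zevin & Seidenberg): two word sets receive mirror-image frequency trajectories whose totals are matched, so any end-of-training difference reflects trajectory alone.

```python
import numpy as np

epochs = 10

# "Early-high" words: sampled often at first, rarely later on.
early_high = np.linspace(0.9, 0.1, epochs)

# "Late-high" words: the mirror-image trajectory.
late_high = early_high[::-1]

# Cumulative (total) frequency is matched across the two sets...
assert np.isclose(early_high.sum(), late_high.sum())

# ...so any difference in final performance between the sets can be
# attributed to the frequency *trajectory*, i.e., age of acquisition.
print(early_high.sum(), late_high.sum())
```

The key design choice is that only the shape of exposure over time differs; total exposure is held constant, which is what lets the trajectory factor be isolated.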

To test these predictions, authors Zevin and Seidenberg used a four-layer network with 100 orthographic input units, 20 hidden units, and 250 phonological output units. In addition, a second hidden layer was bidirectionally connected only with the output units; it serves essentially to "clean up" phonological production in a way that is not possible through the orthographic-to-phonological mapping alone.
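A rough sketch of that architecture follows. The layer sizes come from the description above, but the cleanup-layer size, the weight initialization, and the settling procedure are my own illustrative assumptions, not details of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_orth, n_hid, n_phon = 100, 20, 250  # sizes given in the text
n_clean = 50                          # cleanup-layer size: assumed

# Feedforward path: orthography -> hidden -> phonology
W_oh = rng.normal(0, 0.1, (n_orth, n_hid))
W_hp = rng.normal(0, 0.1, (n_hid, n_phon))

# Cleanup layer, bidirectionally connected to the phonological outputs only
W_pc = rng.normal(0, 0.1, (n_phon, n_clean))
W_cp = rng.normal(0, 0.1, (n_clean, n_phon))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(orth, settle_steps=5):
    """Map an orthographic pattern to phonology, letting the
    phonology/cleanup loop settle for a few iterations."""
    hid = sigmoid(orth @ W_oh)
    phon = sigmoid(hid @ W_hp)
    for _ in range(settle_steps):
        clean = sigmoid(phon @ W_pc)
        phon = sigmoid(hid @ W_hp + clean @ W_cp)
    return phon

# One random binary "word" through the untrained network
out = forward(rng.integers(0, 2, n_orth).astype(float))
```

Because the cleanup units receive input only from (and send output only to) the phonological layer, they can settle toward learned phonological attractors that a single orthography-to-phonology pass cannot reach.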

The results show the expected trend only when the words to be learned differ in their orthography-to-phonology mappings. This is relatively unlikely to occur in English, which has a fairly consistent orthography-to-phonology mapping, unlike, say, Kanji. However, age-of-acquisition effects are more likely to be seen in tasks involving semantic-to-phonological mapping. To my knowledge, this prediction has not yet been confirmed behaviorally, nor has the distinction been verified in simulations, despite following naturally from the work presented here. Batter up!