7/31/2006

Working Memory Capacity: 7 +/- 2, around 4, or ... only 1?

Ever since Miller's estimation of short-term memory capacity, confusion has grown over both whether there is a capacity limit in memory per se (i.e., the appearance of memory limitations may instead reflect limitations in the deployment of attention, or interference from other memorized items), and how to measure the memory limit if one exists (e.g., in terms of digits, non-words, chunks, objects, features, etc.). Cowan's 2001 article "The magical number 4 in short-term memory" revised the field's best estimate of short-term memory capacity from "7 plus or minus 2" to around 4 items, based on a painstaking and comprehensive review of the literature. Yet a new article in the Proceedings of the National Academy of Sciences suggests that even 4 may be an overestimate.

In their article, Olsson & Poom write that many studies' use of easily identifiable stimuli (for example, red- and blue-colored squares) may have led to an overestimate by contaminating the working memory capacity measurement with contributions from long-term memory (where your representations of red, blue, and squares are presumably activated). In order to isolate the contributions of working memory, one needs to use stimuli for which subjects do not yet have long-term memory representations.

Olsson & Poom proceeded to design an experiment in which a sample display (13 x 13 degrees) containing one to four randomly arranged objects was presented for 500 ms; then, after 1 second, a "test display" appeared, containing a single object in the center of the screen. Subjects had to decide whether that object had been present in the sample display, and this was repeated 50 times for each set size. Each subject was placed into one of three conditions: an "easily categorized" condition, where the stimuli had discrete shapes & colors; a "not easily categorized" condition, where the stimuli had continuous shapes (varieties of ovals) & colors (selected from a two-dimensional color space with red/green and blue/yellow axes); and finally, a "difficult to categorize" condition, in which the stimuli were the same ovals as in the continuous condition but differed from each other only in terms of the relative size of a second oval placed inside each stimulus. The authors confirmed with piloting that all stimuli used were easily perceptually discriminable, and also confirmed that the "easily categorized" condition did indeed contain the most easily categorizable stimuli and the "difficult to categorize" condition the most difficult, with the "not easily categorized" condition in between. Cowan's K formula was used to estimate capacity for each set size (again, one through four).
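
To make the capacity estimate concrete: Cowan's K converts accuracy on a single-probe change-detection task into an estimated number of items held in memory, using K = N x (hit rate + correct rejection rate - 1), the same formula quoted in the July 17 post below. Here is a minimal sketch, with made-up hit and correct-rejection rates purely for illustration:

```python
def cowans_k(set_size, hit_rate, correct_rejection_rate):
    """Cowan's K: estimated number of items held in memory,
    for a single-probe change-detection task."""
    return set_size * (hit_rate + correct_rejection_rate - 1)

# Purely hypothetical per-set-size accuracy, for illustration only
rates = {1: (0.95, 0.93), 2: (0.90, 0.88), 3: (0.82, 0.80), 4: (0.75, 0.72)}
for n, (hits, correct_rejections) in rates.items():
    print(f"set size {n}: K = {cowans_k(n, hits, correct_rejections):.2f}")
```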

In the "easily categorized" condition, capacity was estimated at slightly below 3 objects (averaged across set sizes), which is lower than the standard estimate of 4 for stimuli of this type. The authors suggest that their estimate may be lower because they isolated capacity from the influence of "spatially relational" encoding by always presenting the test object centrally in a test display; in this case, relational encoding - in which objects are stored in relation to their spatial neighbors - would be uninformative.

In the "not easily categorized" and "difficult to categorize" condition, the maximum estimate of capacity for any set size was always below 2 objects. For the "difficult to categorize" condition, the estimate of capacity was actually lower for set sizes 3 and 4 than of 1 and 2, suggesting that interference between items was lowering capacity for set sizes greater than 2, and strongly suggesting that working memory capacity is around 1 item in the absence of long-term memory support.

This study has several strong implications, both theoretical and methodological:
  • easily identifiable stimuli should not be used in estimates of capacity; but if traditional stimuli are used, individual differences in long-term memory must be used as a covariate to isolate WM capacity from the influence of long-term memory
  • WM capacity analyses should examine the role of practice effects during testing, because long-term memory representations may form during the process of an experiment
  • relational encoding may also contaminate capacity estimates, and so location information should be made uninformative (through single probe tests rather than change-detection paradigms) during measurements of WM capacity per se

One large caveat: it is possible that strategies like "relational encoding," and even the strategic use of long-term memory representations to support working memory, may be of more theoretical interest to individual-differences researchers than a purer measure of working memory. It is also almost certainly true that the traditional measurement, although probably contaminated by long-term memory, has more real-world validity than a more isolated or controlled measure of WM capacity.

Finally, it may be that strategy differences are what actually underlie individual differences, if WM capacity itself is truly limited to just a single object.

Related Posts:
Memory Bandwidth and Interference
Multiple Capacity Limitations for Visual Working Memory

7/30/2006

Blogging on the Brain: 7/23-7/30

Highlights from the week in brain blogging:

MindHacks: If you're not reading Mind Hacks on a daily basis, you are probably not interested in cognitive science at all. But on the off chance you missed a few posts this week, check out The Flynn Effect is Reversing (also at American Scientist), a recap of the mirror neuron hubbub, and an fMRI mind-reading competition (with support from Nature Neurosci!).

MindBlog covers articles in J Neurosci and Nature Neurosci on language and the role of anterior cingulate in remembering past rewards, respectively.

Brain Ethics covers a recent slew of articles about how electromagnetic cell phone radiation may affect cognition.

Over 10 years of lectures from the Irvine Health Foundation on autism, memory retrieval, language acquisition, dreams, and memory architecture.

All in the Mind is an excellent weekly show on Australian national radio, focusing on advances in cognitive sciences.

Al Fin has a fascinating post on military robots (also see my previous post on the same topic).

7/28/2006

Video Game Violence and Desensitization

What are the cognitive effects of violence in video games? A new study by Carnagey, Anderson & Bushman shows that as little as 20 minutes of playing a violent video game reduces the physiological stress responses usually evoked by violence. In other words, violent video games appear to have a desensitizing effect. However, there are a few caveats to this story.

First, the methodology: the authors measured baseline heart rate (HR) and galvanic skin response (GSR) in 257 college students over the course of 5 minutes, and then had each play either a violent (Carmageddon, Duke Nukem, Mortal Kombat, or Future Cop) or non-violent (Glider Pro, 3D Pinball, 3D Munch Man, Tetra Madness) video game for another 20 minutes. They then took a second 5-minute HR and GSR measurement, and finally showed each subject a 10-minute film of actual violence (including beatings, stabbings, and shootings) while continuing to measure HR and GSR. At the conclusion of the experiment, every subject rated the video game they had played on a variety of traits (e.g., frustrating, boring, fun, etc).

The data relevant to violence desensitization showed relatively small, yet statistically significant, differences between violent and non-violent video game players (hereafter: VGPs). First, there were no significant differences between the violent VGPs and nonviolent VGPs in terms of heart rate or GSR, neither directly after playing the video game, nor as a function of change from baseline to immediately post-game. However, while watching the filmed violence after playing the video game, the following two things happened (a minimal analysis sketch follows the list):
  • Violent VGPs showed no change in HR - and a decrease in GSR;
  • Non-violent VGPs showed a large increase in HR - and no change in GSR.
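
To make the logic of that comparison concrete, here is a minimal sketch of a change-from-baseline analysis of the sort described above. All numbers are simulated, and the simple t-tests merely stand in for the paper's actual analyses (which used ANOVAs with covariates); nothing below is the authors' code or data.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 20  # participants per group (illustrative, not the study's sample size)

# Simulated baseline and during-film physiological measures for each group
df = pd.DataFrame({
    "group": ["violent"] * n + ["nonviolent"] * n,
    "hr_baseline": rng.normal(72, 8, 2 * n),
    "gsr_baseline": rng.normal(5.0, 1.0, 2 * n),
})
# Build in the reported pattern: non-violent players' HR rises during the film,
# violent players' GSR drifts down.
df["hr_film"] = df["hr_baseline"] + np.where(df["group"] == "nonviolent",
                                             rng.normal(6, 3, 2 * n),
                                             rng.normal(0, 3, 2 * n))
df["gsr_film"] = df["gsr_baseline"] + np.where(df["group"] == "violent",
                                               rng.normal(-0.5, 0.5, 2 * n),
                                               rng.normal(0.0, 0.5, 2 * n))

# Change-from-baseline scores, compared between groups
for measure in ["hr", "gsr"]:
    change = df[f"{measure}_film"] - df[f"{measure}_baseline"]
    violent = change[df["group"] == "violent"]
    nonviolent = change[df["group"] == "nonviolent"]
    t, p = stats.ttest_ind(violent, nonviolent)
    print(f"{measure} change from baseline: t = {t:.2f}, p = {p:.3f}")
```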

These results are important, and while I believe that violent video games probably do have a desensitizing effect, there are a couple of alternative explanations for these data.

1) UHH, CAN I JUST PLAY THE VIDEO GAME SOME MORE?

If you spend 20 minutes committing (simulated) violence, another 10 minutes of merely watching violent events may be relatively unexciting (even if the violence is "real"). Therefore, maybe the violent VGPs were already at a near-maximal state of arousal after playing the video games, and thus were more likely to show no change (or even a decrease) in arousal while watching filmed violence. Likewise, perhaps the non-violent VGPs were relatively uninterested in their games, and thus more likely to show a change in arousal when viewing something unusual, like actual violence.

In fact, there are some reasons to believe this alternative explanation might be partly true. Violent games were rated as being WAY more action-packed (F(1,252)=53.48, p<.0001) and more frustrating. Although the authors say that violent & nonviolent video games did not significantly differ in several other traits (including "arousing," "exciting," and "stimulating"), they use a surprisingly strict requirement for statistical significance here (they mysteriously set alpha at .08). There are also trends in the data suggesting that non-violent video game players had a higher baseline GSR.

2) JUST SHOW ME SOMETHING DIFFERENT!

Humans can get used to just about anything. From this perspective, these results are not surprising: if I show someone violence, and then show them more violence, they don't react as much as if I had first shown them a game of pinball. It would be far more compelling if violent video games were more "real-violence-desensitizing" than tennis video games are "real-tennis-desensitizing."

A simple control for this explanation would have been a delayed testing group (in which violent VGPs waited another 1, 6 or 12 hours before watching more violence), or even a distractor task in between the video game and the violent film. But because we don't have either type of control group, the effects here could simply be due to boredom rather than desensitization per se.

3) IF A TREE DOESN'T FALL IN THE FOREST, AND THERE'S NO ONE THERE TO HEAR IT...

As it turns out, the non-violent group showed no significant GSR change between playing the video game and watching the violent film. In fact, there was no significant change in GSR in either group throughout the whole experiment (only a specific contrast turned up significant for this, and even that was a pretty small effect). In other words, one might argue that there is no clear GSR change evoked by their "violent" stimulus. So if there is no effect to begin with, how can one group possibly be desensitized?

SUMMARY

Above, I reviewed three reasons for doubting that this study has demonstrated a desensitizing effect of violent video games. The question addressed by this research - do violent video games desensitize people to violence - is an important one, but this study alone does not unequivocally answer this question.

It's also important to consider the practical aspects of these studies. For example, if the data had gone exactly the opposite direction - in other words, if the violent film increased the HR and GSR of violent VGPs relative to non-violent VGPs - would the authors have suggested that video games are good for you? No. In fact, the study is framed in such a way that any difference from "normal" (i.e., the group playing non-violent video games) will likely be viewed as deviant.

And then there are yet other important questions that haven't even been asked. Is desensitization fundamentally different from any other kind of habituation? Is it any more dangerous or long lasting than the effects of viewing violence in Hollywood movies, or on the nightly news? And, most critically, does desensitization have a causal relationship with aggressive behavior?

Related Posts:
Review of Carnagey, Anderson, and Bushman (Terra Nova blog)
Video Games - Mental Exercise or Merely Brain Candy?
Mind Games: Humans, Dolphins and Computers
Intelligent Adaptive Toys
Review: Everything Bad Is Good For You

7/27/2006

Multiple Capacity Limitations for Visual Working Memory

A fundamental debate concerns whether memory capacity limitations may be due to some central bottleneck, or whether there is an array of lower-level limitations that give rise to the simple "span" measures, or to the magic number "seven plus or minus two," frequently observed in behavioral experiments.

One aspect of this debate concerns whether a central bottleneck might be attention - that is to say, according to this perspective, memory span is limited by how many things you can pay attention to. This debate has largely been resolved, thanks in part to a paper discussed last week, showing that both types of limitations have a role to play, such that task-specific bandwidth limitations reduce memory capacity above and beyond the (substantial) role played by attentional limitations.

[In retrospect, it would have been incredibly surprising if memory capacity were determined solely by a single bottleneck. Even if the utility of memory capacity for ensuring survival plateaus at around 4 items, thus rendering natural selection incapable of selecting for a higher capacity limit, it seems unlikely that the cause of any limit would be centralized, simply because of the brain's massively parallel architecture. Instead, it seems much more likely that capacity limitations arise as an emergent property of the type of information being processed and the architecture of the dynamic system - between the "energy" and the "medium," one might say - as in the picture accompanying this article, where the topology of the water funnel depends both on the viscosity of the water and the amount of rotational energy.]

If we agree that memory bandwidth is not limited by a single cause, the next step is to map and define the architectural features of the brain that give rise to each capacity limitation. In the March issue of Nature, Xu & Chun describe their work to dissociate the contributions of inferior and superior intraparietal sulcus to visual short-term memory, based on a visual short-term memory task in which both set size and object complexity were manipulated. In the process, they discover independent capacity limits for each region.

However, some research has reported that capacity limits are lower if the complexity of the items is increased (although there is also evidence to the contrary). To further investigate whether VSTM capacity limits are due to item complexity, number of items, or both, Xu and Chun had subjects attempt to detect a change in the shape of 1 item from a display of 1, 2, 3, 4, or 6 "simple" or "complex" objects, while inside an fMRI magnet. Based on Cowan's K, the authors were able to estimate the visual span of each subject as a function of both their "hit rate" and "correct rejection" rate in detecting shape changes, and correlate these behavioral results with neural activity.

The authors found that VSTM capacity was around 4 for the simple shapes, and around 2 for the complex shapes. Interestingly, superior intraparietal sulcus (as well as lateral occipital) activity closely tracked behavioral performance for the simple task alone, such that neural activity in these regions increased with set size for simple shape features, but not for complex shape features. In contrast, inferior intraparietal sulcus activity tracked set size only from 1 to 2 objects, regardless of complexity. A subsequent experiment with different visual stimuli replicated the essential trends of this result, which is not surprising, given that these findings are basically the "fMRI version" of a study by Vogel and Machizawa with EEG.

Based on a final experiment, in which the authors controlled the location of visual stimuli during the display, the authors argue that inferior IPS representations are primarily spatial, while those in superior IPS combine spatial representations with the object identity information that appears to be maintained in the lateral occipital complex. Thus, based on Xu and Chun's work, it appears that there are independent capacity limits for stimulus complexity and for stimulus location, which become subsequently bound together in more superior (& anterior, presumably) regions.

And yet, a new study by Olsson and Poom in the Proceedings of the National Academy of Sciences suggests that memory span is as low as one (yes, 1) in the absence of long-term memory support. Could long term memory be yet another source of capacity limitation in working memory, paradoxical though it may seem? Tomorrow's post will further explore this issue and the implications that this surprising finding has for our understanding of memory architecture.

Related Posts:
Memory Bandwidth and Interference
Visualizing Working Memory
Neuroindices of Memory Capacity
Functional Anatomy of Visual Short Term Memory

7/26/2006

Neural Noise and Information Theory

Neural activity in a given population of neurons is never the same twice, even when recording from exactly the same neurons, after exposure to exactly the same stimulus. Reassuringly, there is thought to be some static underlying "tuning curve" to which the neural population optimally responds, and any deviations from this ideal curve represent merely the noisy nature of neural processing - at least, many neuroscientists feel safe in assuming so.

Unfortunately, the deviations of neural population activity from this ideal "tuning curve" are correlated - in other words, the so-called neural "noise" actually appears to convey some kind of information. Consider the picture at the start of this post, in which the ideal tuning curve is represented as a solid line, with uncorrelated and correlated noise contrasted side by side. It's easy to see why this kind of stochastic behavior would cause a problem: sure, over time, neural activity averages out to this ideal curve, but on any given day the actual measured activity will deviate from the norm in some kind of systematic fashion. Furthermore, this noise is also temporally correlated at a smaller scale, so that any two measurements taken closer in time will tend to manifest similar deviations from the norm.

The focus of Averbeck, Latham, and Pouget's recent NRN review article is on the implications this fact has for neuroscience at large. They identify several important points:

First, one must know the correlations among individual neurons within a given population in order to determine how much and what kind of information is represented by that population; depending on the degree and direction of correlation, the information represented by a neural population can increase or decrease.

These correlations can also be stimulus-modulated, such that two neurons may show positive correlations in their firing rate for one stimulus but negative correlations for another.

Information theoretic analyses of how correlated noise affects the information communicated by neural populations consisting of only two neurons show that, on average, the effects are small, and can be positive or negative; whether this also holds for large populations is largely unknown. However, based on a simple simulation of the effect of both positively and negatively correlated noise within larger neural populations, the authors (not surprisingly) discover that the effect of noise on Shannon information is a nonlinear function of population size. Therefore, studies that shuffle the temporal order of trials may in some cases be grossly misstating the amount and kind of information contained in measured neural activity.

Finally, a titillating conclusion: if the noise correlation is positive, it's possible that the information capacity of large neural networks "saturates" or levels off relatively quickly. Does it saturate far below the information capacity of, say, the retina, or the cochlea? Unfortunately, in the absence of a technology capable of recording individual spike trains from thousands of neurons simultaneously, this question is likely unanswerable.
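
To see why positively correlated noise can make pooled information "saturate" with population size, consider the following toy simulation (my own illustration, not code from the review): two stimuli are discriminated from the summed response of a population whose trial-to-trial noise shares a common component. With independent noise, discriminability keeps growing as neurons are added; with even a modest positive correlation, it levels off.

```python
import numpy as np

rng = np.random.default_rng(1)

def pooled_dprime(n_neurons, rho, delta=0.5, sigma=1.0, n_trials=20000):
    """d' for telling two stimuli apart from the summed population response,
    with pairwise noise correlation rho (modeled as shared + private noise)."""
    def summed_noise():
        shared = np.sqrt(rho) * sigma * rng.standard_normal((n_trials, 1))
        private = np.sqrt(1 - rho) * sigma * rng.standard_normal((n_trials, n_neurons))
        return (shared + private).sum(axis=1)

    resp_a = 0.0 * n_neurons + summed_noise()    # stimulus A: mean response 0 per neuron
    resp_b = delta * n_neurons + summed_noise()  # stimulus B: mean response delta per neuron
    pooled_sd = np.sqrt(0.5 * (resp_a.var() + resp_b.var()))
    return (resp_b.mean() - resp_a.mean()) / pooled_sd

for n in (2, 10, 50, 200):
    print(f"N={n:3d}   independent: d'={pooled_dprime(n, 0.0):5.2f}   "
          f"correlated (rho=0.2): d'={pooled_dprime(n, 0.2):5.2f}")
```

This is, of course, only the simplest case - identical tuning and a simple pooled readout; as the review emphasizes, depending on the structure and sign of the correlations, the information carried by a population can also be preserved or even increased.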

7/25/2006

Listening to Yourself: Inner Speech Across the Lifespan

Kray, Eber and Lindenberger recently investigated the role of inner speech (i.e., talking to yourself) on executive functioning, the "higher-level processes that organize lower-level processes in order to regulate and verify behavioral activity." These functions are frequently impaired in children and the elderly alike, as measured by tasks of inhibition (how quickly you can stop executing a certain task) as well as by task-switching (how quickly you can go from one task to another). These functions are thought to rely on a critical prefrontal neural network that is among the last to mature in the developing brain, as well as the first to degrade in old age.

To what extent can these executive control functions be influenced by inner speech? Both children and the elderly can be observed to talk to themselves aloud, which according to Kray et al., "can be seen as an attempt to use language as a tool to plan, guide, and monitor goal-directed activity." In Baddeley's model of working memory, the articulatory rehearsal process of the phonological loop represents exactly this pathway, in which information can be actively maintained in a "verbal buffer" of sorts.

One previous study found that relevant inner speech was helpful relative to irrelevant inner speech in a task switching paradigm, but only when the interval between cue (which specifies which task should be performed) and target (when a response should be made) was very long (Goschke, 2000 cited by Kray et al.). Another previous study (Emerson & Miyake, 2003, cited by Kray et al.) found that inner speech most strongly affected global switch costs, i.e. set-selection costs, when external task cues were not present - or in other words, when subjects had to internally maintain the order of tasks, and which task should be performed next.

[Incidentally, the same authors also verified that this global switch cost increase was not simply a result of the dual-tasking, by replacing the articulatory suppression with a tapping task; then the same increase in global switch cost was not observed.]

To further examine the role of inner speech in executive functioning across developmental time, the authors used a task-switching paradigm in which 48 subjects (16 young adults, 16 older adults, and 16 children) had to identify whether the target picture was a fruit or an animal, OR they had to identify whether the target picture was colored or gray. The specific task that a subject had to perform on any given trial was indicated by a cue, presented 1400 ms before the target picture. During this cue-target interval (CTI), subjects had to read aloud either a task-compatible word (i.e., if the current task was to judge gray vs. color, the word might be "colored"), a task-irrelevant word (i.e., the word might be "round"), or a task-incompatible word (i.e., the word might be "animal" or "fruit"). Note that in the language used here (German) all of these were four-letter, one-syllable words. Two control conditions were also included, to ensure that any difference in switch costs seen in any of the three above conditions was due to disruption of inner speech, and not simply to the extra cognitive demands of performing a task during the CTI. These control conditions were either to do nothing, or to perform a simple motor task.
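
The results below are reported in terms of global (set-selection) and local switch costs. As a reminder of how these are typically computed from trialwise reaction times - these are the standard definitions, not code from the paper - here is a minimal sketch with made-up RTs:

```python
import pandas as pd

# Hypothetical trial-level data: block type ("single" or "mixed"), and for mixed
# blocks whether the task repeated or switched relative to the previous trial.
trials = pd.DataFrame({
    "block": ["single"] * 4 + ["mixed"] * 6,
    "type":  ["repeat"] * 4 + ["repeat", "switch"] * 3,
    "rt_ms": [620, 640, 610, 650, 720, 840, 700, 860, 730, 820],
})

single_rt       = trials.loc[trials.block == "single", "rt_ms"].mean()
mixed_repeat_rt = trials.query("block == 'mixed' and type == 'repeat'")["rt_ms"].mean()
mixed_switch_rt = trials.query("block == 'mixed' and type == 'switch'")["rt_ms"].mean()

# Global (set-selection / mixing) cost: holding two task sets in mind vs. one
global_cost = mixed_repeat_rt - single_rt
# Local switch cost: switching tasks vs. repeating, within mixed blocks
local_cost = mixed_switch_rt - mixed_repeat_rt
print(f"global cost: {global_cost:.0f} ms, local cost: {local_cost:.0f} ms")
```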

Leaving aside further methodological and analytical details, the major results of the study are as follows:

  • Children and older adults show greater set-selection costs (aka global switch costs) on average than young adults, but there are no age differences in terms of local switch costs.
  • It is possible to both positively and negatively prime task-switching with verbalization, even when the response modality is non-verbal - in other words, task-incompatible verbalization hurts task switching, while task-compatible verbalization helps task switching; Furthermore, children "more strongly profit" from task-compatible verbalizations, while the elderly 'suffer' more strongly from task-incompatible verbalizations, than young adults;
  • By mature adulthood, the use of inner speech is so well automatized that it does not result in a significant dual-task RT cost - in other words, the older adults were not significantly slower when task-switching while verbalizing as compared to task-switching alone;
  • Dual tasking during a task-switching paradigm has its strongest effects on both children and the elderly when the dual task involves a response in the same modality as the primary tasks; however, for young adults the pattern is reversed, such that a secondary verbal task is more disruptive to tasks requiring motor outputs than a secondary motor task is.

With regard to the last point, it's also possible that young adults rely on inner speech more strongly for task-set selection than children and older adults do. It's also possible that, as the authors and many others have argued, children and the elderly have increased difficulty in resolving interference (or inhibiting alternative responses) at output stages.

7/24/2006

Gerald Edelman and The Remembered Present

This video of Gerald Edelman (Nobel prizewinner, director of NSI, and promoter of the Theory of Neuronal Group Selection, or "Neural Darwinism") giving a presentation to an audience of IBM engineers contains a description of his newest work with the Darwin robots. He and his research team have recently implemented a hippocampal model in Darwin X (including layers for entorhinal cortex, dentate gyrus, CA1 & CA3), and trained it on a robot-adapted version of the water maze task. In this version of the task, the robot is turned on inside a room with 4 walls, each of which has a unique visual stimulus. At around 42 minutes into the video, you can see the robot train on the water maze task, and ultimately learn to find the "hidden platform," which in this version of the task is a wall with red pieces of paper on it.

Unfortunately, the next high point in the lecture actually doesn't come until the Q&A session, during which the following rather uncomfortable exchange takes place between Edelman and a member of the audience:

Q: "[I'm] confused at your distinction between consciousness and science, or programs, or whatever. Because, I may be a reductionist at heart - I'm an electrical engineer - but, it seems to me, there is an underlying machine that executes some set of actions. You can write programs that are stochastic in nature. You can have randomness as part of that. What is the distinction?"

Edelman: "Let me ask you a question. Do you think that evolution is a Turing machine?"

[awkward silence.]

Q: "Do I think that evolution is a Turing machine?"

Edelman: "Yes. Think on that a while. In fact, [...] there are no programs - there is a set of variations. There is a set of unpredicted events which then make selections, which then themselves change their repertoire..."

Q: "But as many people who build genetic algorithms will say, I can write a computer simulation that will simulate many of the effects that underlie evolution."

Edelman: "Yes you can, but what you can't do, is do it ab initio without you involving yourself. That is to say, that is what happens during evolution, and I believe that is what happens in your brain. But yes there is a discussable point here: after the event has been described, you can always write a program..."

Q: "Let me take it a different way. You say the brain is an adaptable thing, but there has to be some underlying process ... a process over which there are some remarkable similarities between the behaviors of all the resulting systems."

Edelman: "Sure, and that's because we have value systems acting as constraints, and phenotypes acting as constraints. If I were a violinist, and someone offered to replace my right arm - my bow arm - with the most remarkable tentacle of an octopus, that's even more flexible than my arm, I'd reject it. Because, in fact, the joints are responsible for doing staccato.

So ... all of what you're saying is true; after you know what's going on you can write a program. The challenge is: can you write a program - and I don't take it that genetic algorithms are a program for evolution - to explain the evolutionary features of the system. Because I don't believe that evolution is a Turing machine. For example, if it is, why don't you write a program and tell me what we'll look like in a million years.

There are these uncertainties in these systems, implicit in the variation. From a biological point of view, that's not the challenge. That's what Darwin did to add to physics. Physicists in general have done magnificent things, but they haven't really dealt with this point of Darwin: namely, the variation is the substrate... What is amazing is that from the bottom up [evolution] actually takes variation, and from variation [evolution] makes consistency towards species. That is the essential idea, and I don't think we have in fact made a program that does anything except reflect known properties after the event.

There is an interesting problem here... if there is dice tossing, in the event, can the thing remain a Turing machine? [...] I believe, if there is dice tossing at the fundamental level, you can't have what we call a Turing machine. Turing was a genius, and he started to talk about other kinds of machines - Oracle machines, and so on - but didn't spell them out. Alas."

And a few minutes later:

Edelman: "Suppose we did understand everything about how your brain works ... So, do you think [it] would not work by beliefs, desires, and intentions? [...] Do you believe that your illusion of time, namely of your movement from the past, to the present, to the future, is actually a correct descriptor, when in fact the past and future are concepts, and only the remembered present is the one you are experiencing right now?"

Related Posts:
Binding through Synchrony: Proof from Developmental Robotics

7/23/2006

Distributed Processing: Is cognitive enhancement overhyped?

In an attempt to liven things up around here, I'm starting a series of weekly posts, called "Distributed Processing." Each of these posts will pose a question to my readers - weigh in and let's see where the conversation takes us!

The question(s) for this week:

Are "cognitive enhancement" technologies overhyped?

If there is real potential for cognitive enhancement, what are the implications?

Would you use a mind-amplifying technology yourself, or if there were "critical periods" for their use, on your children?

And a few conversation starters: The Atlantic Monthly features an article on the "Baby Genius Edutainment Complex," and asks whether the products marketed by companies like Baby Einstein might actually be harmful, given that so little research actually supports the claims that they are beneficial. "Smart drugs" like piracetam and modafinil are claimed to increase "mental agility," and are by some reports already in use by the US military, but have yet to see widespread impact. Then there are new genetic and molecular technologies still in the works - as discussed in this fascinating video (PDF here) by Nick Bostrom on cognitive enhancement. Even Eric Kandel - a Nobel Prize winner - has gotten in on the hype, and started his own neuropharmaceutical company.

7/22/2006

Blogging on the Brain: 7/15 - 7/22

Recent highlights from the week in brain blogging:

Videos from the 6th International Conference on Complex Systems

A Thirst for Knowledge, or Thirst for Opioids? MindHacks discusses an article by Irv Biederman (of geon fame), in which he argues that understanding is accompanied by opioid release.

Localizing Self-Consciousness: The excellent Neurocritic covers recent work on the "default network" of neural activity, which repeatedly implicates the precuneus in representing the self - or what some people might call consciousness. Could this be a neural correlate of consciousness?

Neuromarketing to Thinking Meat: TM blog mentions a recent study in which participants preferred a brand more strongly after solving an anagram - perhaps reflecting a source memory error for the feeling of recognition induced by solving the puzzle.

Paralinguistic human communication: Howard Nusbaum at the University of Chicago finds that people use a higher tone of voice to say "it's going up," relative to "it's going down," and that they speak these words at a speed related to the motion of the stimuli they're describing. In fact, unbiased observers can pick up on these paralinguistic cues - what these researchers call "spoken gestures."

Building a Better Brain: ScienceBlog found an interesting press release on the impact that positive early life experiences have on society at large.

Training Attention: MIT Tech Review has a short piece on the possibility of using real-time fMRI biofeedback to treat ADHD, in the same way this technology has previously been used to treat chronic pain.

Is Consciousness Like God?: Jaron Lanier (an early pioneer of virtual reality technology) asks whether it's possible to ever answer fundamental questions about consciousness.

And the eye-candy link of the week: visual field diffeomorphism (via Metafilter)

7/21/2006

Anthropology, Psychology, and John Hawks

Cognitive sciences and anthropology share many of the same topics of interest, but often differ vastly in terminology, methodology, and perspective. In fact, with the exception of blogs like Mixing Memory, which successfully integrate research from cognitive psychology and philosophy, cognitive psychology is generally quite isolated from the other social sciences.

One exception to the rule is a blog maintained by John Hawks, an anthropologist at the University of Wisconsin-Madison, whose posts frequently integrate ideas from anthropology and cognitive psychology. For example, in the past few weeks he has covered topics as diverse as the origins of altruism, lateralization of linguistic functions in the brain, and the temporal organization of goal-directed behavior - many of which would not be out of place on this blog.

John recently wrote about teaching behaviors observed in non-human animals (meerkats, specifically); John floats the idea that some teaching behaviors may require internal simulations of another's mental state ... in other words, theory of mind. I took the opportunity to ask for more detailed information on his perspective on theory of mind in the context of teaching.

In your recent post about meerkats, you conclude that primates are "successful at using the limited communications to model other individuals." But you also quoted the following section: "viewed from a functional perspective, teaching can be based on simple mechanisms without the need for intentionality and the attribution of mental states." In other words, natural selection would confer an advantage on animals that teach their young, and so would have allowed for the emergence of a "teaching curriculum" of sorts in which no theory of mind is necessary; it could be entirely instinctive.

John Hawks: "It clearly is entirely instinctive for the ants. My thinking is that most people have vastly overestimated how complicated "theory of mind" has to be. What the ants have is an ability to respond appropriately to signs that the learner emits, no intentionality on the part of either. I don't think we need to assume that the meerkats do any more than this, nor do I think they have access to any more information about the learner's mental state than the ants do. They just need to be able to respond appropriately to signs that the learner is emitting."

This is related to one of my "hobby horse" topics: comparing our romanticized views of human cognition with our almost derogatory interpretations of startlingly human-like behaviors in animals (e.g., teaching, language use, etc).

John Hawks: "I basically agree. Humans do differ in one respect: we have much greater communication bandwidth. So human teachers have access to more signs emitted by learners, and can shape their teaching more accordingly. But I don't think there is necessarily anything special about human cognition in this regard; to my mind "theory of mind" is a kind of catchall category that invites "ghost in the machine" interpretations.

So why do I think primates are modeling based on limited information?

It's because primate communication isn't very different (at least in its bandwidth) from any other mammals, or even the ants. To the extent that primates do interesting things with communication (like coupling vocal and visual, or gestural-procedural), it mostly looks like error-correction (like sending the same message through multiple channels). So if you're going to evolve a better means of modeling other individuals, you pretty much have to do it all within the brain; there just isn't going to be much more to go on as far as signs emitted by other individuals."

7/20/2006

Reversing Time: Temporal Illusions

Everyone's familiar with a variety of visual illusions, but what about temporal illusions? Consider the following example, from a recent poster by Stetson et al.: you are wandering through the forest and you hear a twig crack ... but did it crack when you set your foot down, or just before? For much of our evolutionary history, questions such as these were of dire importance; a tiny temporal offset can be the difference between life and death by a large predator. Unfortunately, these survival-based mechanisms can sometimes lead us astray.

To see how minimizing temporal offset can lead to a temporal illusion, first remember that every sense has its own processing latency - for example, as Stetson et al. note, visual processing is actually slowed in low light conditions. Other times there is variable latency within a single sense, such as touch (because it takes longer for a signal to reach the brain from your toe than from your arm). Given these small temporal discrepancies in each of our senses, how do humans reliably perceive simultaneous multi-sensory events as actually being simultaneous?

The answer is surprisingly simple: we literally live in the past. In order to correctly perceive the temporal order of events in the world, our brain is constantly recalibrating the temporal relationship between the motor system and our perceptual systems. It does this by implementing a variable delay in the perceived onset of our own motor actions, so that we are able to dynamically adapt to changing environmental and sensory conditions.

In other words, if the twigs consistently cracked some number of milliseconds after you put your foot down on that walk through the forest, by the end of the day you would perceive them as cracking in complete synchrony with your footsteps. Thus, our brains appear to have a relatively simple - and usually safe - assumption built in: minuscule and yet highly consistent temporal offsets between motor output and sensory response are more likely due to delays in sensory processing than to some strange doppelganger, capable of copying your every move with 50 ms precision. Therefore, this recalibration mechanism simply adds some "delay compensation" to the perception of motor outputs, and thereby synchronizes the motor and sensory systems.

If such a "recalibration mechanism" does exist, Stetson et al. conjectured, then using modern technology, we should be able to force subjects to recalibrate to an artificially long delay. This would then cause them to perceive their own motor actions as happening later in time than they actually did. Then, if we abruptly expose subjects to the real world, in which delays are not artificially lengthened, it might seem to them that the world reacts to things they have not yet done! (Imagine hearing your own footsteps before you think you've put your foot down.)

By repeatedly pairing a keypress with a flash of light, in which the light flash lagged behind the keypress by 100 ms 75% of the time, and then recording subjects' perception of whether the light flashed before or after their keypress, Stetson et al. were able to cause a reversal of perceived temporal order - at the end of the experiment, the light flashed immediately after the keypress, and subjects reliably thought the light flashed before they pressed the key!
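
One way to picture the proposed mechanism is as a running estimate of the typical action-to-sensation lag, with temporal order judged relative to that estimate. The sketch below is purely a toy illustration of that idea (not Stetson et al.'s actual model): after adapting to a 100 ms injected lag, a flash that actually follows the keypress almost immediately is judged to have come first.

```python
# Toy model of motor-sensory recalibration: the brain tracks the typical delay
# between an action and its sensory consequence, and judges temporal order
# relative to that running estimate. Purely illustrative.
def recalibrate(lags_ms, learning_rate=0.15, baseline_ms=0.0):
    """Update the expected action-to-sensation lag after each exposure."""
    expected = baseline_ms
    for lag in lags_ms:
        expected += learning_rate * (lag - expected)
    return expected

# Adaptation: repeated keypress->flash pairings with a 100 ms injected delay
expected_lag = recalibrate([100] * 20)
print(f"expected lag after adaptation: {expected_lag:.0f} ms")

# Test trial: the flash now follows the keypress almost immediately (10 ms).
actual_lag = 10
perceived_offset = actual_lag - expected_lag  # negative => flash seems to precede the press
print("flash perceived as", "BEFORE" if perceived_offset < 0 else "after", "the keypress")
```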

In fact, their analysis showed that this recalibration process can occur in as little as 20 trials - just 20 exposures to an artificially lengthened delay is enough to kick the recalibration mechanisms into action. Comparison of the neural activity belonging to the flash-lag group with a baseline group (in which the experimenters did not insert an artificial lag between flash and keypress) showed selective activity in the anterior cingulate - often considered the "error detector" or "conflict monitor" of the cortex. The same activation patterns are seen when comparing the "illusory simultaneity" and "illusory temporal reversal" trials within the flash-lag group; this is also compatible with a view of anterior cingulate as involved in error or conflict detection. However, this does not suggest that the anterior cingulate is part of the recalibration mechanism itself - these comparisons show only that it is activated during illusory reversals of temporal order.

Currently, Stetson et al. have an article in press at Neuron, titled "Motor-sensory recalibration leads to an illusory reversal of action and sensation," which I'm guessing will cover many of the same topics as this poster. Other experiments from the Eagleman lab look equally fascinating - such as this experiment, in which subjects free-fall from an 80-foot tower, in order to test possible time distortion effects. The same lab has also weighed in heavily on the illusory motion reversal debate, covered here previously.

7/19/2006

Symmetry in Visual Search

Can you identify which item doesn't belong in picture A, to the right? How about in picture B? As you may have noticed, the second picture is merely a rotated version of the first; nonetheless, people are on average much faster at visually searching the first picture than the second. As it turns out, simple rotations often profoundly alter the efficiency of visual search, but up until an article in the June issue of Psych Science, the reasons for this were unclear.

Authors van Zoest, Giesbrecht, Enns, and Kingstone first ask whether the difference in search efficiency could be due to the fact that it's easier to search through tall objects than wide objects. The reaction times of 45 participants, each searching 360 displays, showed that the differences in reaction time could not be attributed to target & distractor width. However, the authors did find that black-top targets were easier to find than white-top targets; this is consistent with the idea that viewers expect objects to show coloration consistent with "lighting from above," thus making the objects with black tops appear odd (incidentally, it is this same expectation that forms the basis for this visual illusion).

Extending this rationale, the same authors next asked whether viewers' lighting expectations could account for the search efficiency differences between pictures a and b above. To test this hypothesis, the authors designed stimuli that did not have a simple "lighting" interpretation, similar to the stimuli in picture C, above. Although these new stimuli removed the difference seen between different distractor/target pairs, they didn't remove the difference between upright and rotated displays.

In a third experiment, the authors considered that perhaps the upright/rotated difference could be caused by "internal symmetry," in that in picture A the items are internally symmetric about the vertical axis, while in picture B the items are internally symmetric about the horizontal axis. To test this hypothesis, the authors used the stimuli in picture C. Unfortunately, this also did not remove the advantage for "upright" vs. rotated displays.

If none of these things can explain the difference in reaction time, then what on earth could? As it turns out, some research has shown that items that are symmetric across the vertical axis (such as the letters "b" and "d") are perceived as more similar to one another than items that are symmetric across a horizontal axis (such as the letters "b" and "p"). What if search is easier when the items are seen as more different from one another, and thus when the targets and distractors are vertically symmetric rather than horizontally symmetric?

To answer this question, the authors used the stimuli in picture D, above. Unlike the stimuli in C, D's targets & distractors differ in that they are not reflections of one another, but instead 180-degree rotations of one another. So, if the direction of interitem symmetry is what causes the differences in search efficiency between upright and rotated displays, then this set of stimuli should show no RT difference between upright and rotated displays.

Indeed, this is exactly the result these authors found. This discovery can be viewed as a refinement of the predictions motivated by Feature Integration Theory (FIT), in that search efficiency is a function of target/distractor difference. But in contrast to the typical interpretation of FIT, these authors have shown that the ways in which targets and distractors differ are not necessarily straightforward - the relevant differences involve not only lighting expectations but also interitem symmetry, and the target/distractor difference need not be "spatially local" in order to affect search efficiency.

But why should interitem symmetry make a difference? As yet, that question remains unanswered.

Related Posts:
Selection Efficiency and Inhibition
The Attentional Zoom Effect
The Attentional Spotlight

7/18/2006

Deprogramming Through Meditation and Hypnosis

In a fascinating review of the cognitive neuroscience of attention, authors Raz and Buhle note that most research on attention focuses on defining situations in which it is no longer required to perform a task - in other words, the automatization of thought and behavior. Yet relatively few studies focus on whether thought and behavior can be de-automatized - or as I will call it, deprogrammed.

What would count as deprogramming? For example, consider the Stroop task, where subjects must name the ink color of each word in a list of color words (e.g., "red" might be written in blue ink, and the task is to say "blue" while suppressing the urge to automatically read the word "red"). Reaction time is reliably increased when subjects name the ink color of incongruent words ("red" written in blue ink) relative to congruent words ("red" written in red ink), presumably because the subjects need to inhibit their prepotent tendency to read the words. But is it possible to regain control over our automatized processes - in this case, reading - and hence name the ink color of incongruent words as quickly as we would name the ink color of congruent or even non-words?
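
Concretely, the Stroop interference at issue is just the difference in color-naming reaction time between incongruent and congruent trials; "deprogramming" would show up as that difference shrinking toward zero. A minimal sketch with made-up reaction times:

```python
import numpy as np

# Hypothetical color-naming reaction times (ms), purely for illustration
rt = {
    "congruent":   np.array([610, 640, 605, 630, 620]),  # "red" written in red ink
    "incongruent": np.array([730, 760, 745, 720, 755]),  # "red" written in blue ink
    "neutral":     np.array([640, 655, 635, 650, 645]),  # a non-color word
}

interference = rt["incongruent"].mean() - rt["congruent"].mean()
facilitation = rt["neutral"].mean() - rt["congruent"].mean()
print(f"Stroop interference (incongruent - congruent): {interference:.0f} ms")
print(f"Facilitation (neutral - congruent): {facilitation:.0f} ms")
# "Deprogramming" (via meditation or hypnotic suggestion) would show up as the
# interference cost shrinking toward zero.
```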

The Role of Meditation in "Deprogramming"

Some meditative practices purport to reverse automatization of thought and behavior, such as transcendental or mindfulness meditation, and indeed there is some evidence that these techniques can reduce interference on the Stroop task.

For example, in a study by Alexander, Langer, Newman, Chandler, and Davies from the Journal of Personality and Social Psychology, 73 elderly participants were randomly assigned to either no treatment, a transcendental meditation program, mindfulness training, or relaxation training. Note that transcendental and mindfulness techniques are frequently described as inducing a state of "pure consciousness" during which the mind is "silent," and yet not empty: in this state, meditators claim to be intensely aware only of awareness itself. Less cryptically, this state is also referred to as "restful alertness." Subjective reports aside, this state is also accompanied by increased interhemispheric phase coherence in frontal alpha EEG (Alexander, 1982, cited by Alexander et al., 1989), the amount of which is highly correlated with subsequent measures of fluid intelligence (Dillbeck & Vesley, 1986, cited by Alexander et al., 1989).

Those subjects who underwent training met with instructors for 30 minutes each week, and were instructed to practice for 20 minutes twice daily for 2 months. Transcendental meditation (TM) required the use of a mantra, and other specific techniques, as described in Maharishi (1969, cited by Alexander et al., 1989). Mindfulness training (MF) involved a structured word-generation exercise, in which subjects must think of a word, then think of another word beginning with the last letter of the previous word, and then repeat this process throughout training without ever repeating a word. Afterwards, subjects were simply asked to generate words belonging to specific categories, and then undergo a fairly generic "creative thinking" exercise (think of novel uses for various objects, but don't daydream). Mental relaxation (MR) simply involved focusing on a pleasant or relaxing thought.

Various statistical procedures were also used to equate instructor effectiveness, subjects' expectancy of benefits, and regularity of practice; the study was double-blind, in that the instructors and the subjects were unaware of the hypotheses being tested. After training, subjects were tested on a variety of cognitive and personality tests, including associate learning, word fluency, depression, anxiety, locus of control, and of course Stroop. Results showed that the TM and MF groups together scored significantly higher on associate learning and word fluency than the no-training and relaxation-training groups. Perhaps most surprisingly, over a 36-month period, the survival rate for the TM and MF groups was significantly higher than for the relaxation and no-training groups (p<.00025). But more to the point, both TM and MF scored higher than MR and no-training on the Stroop task (p<.1; one-tailed test).

The Role of Hypnotism in "Deprogramming"

According to Raz & Buhle, the studies showing effects of hypnotism on reducing automaticity in the Stroop task are even more compelling than those that use meditation. Several of these studies come from Raz himself, such as a fascinating article by Raz, Fan & Posner from a 2005 issue of the prestigious Proceedings of the National Academy of Sciences.

In this study, the authors used fMRI and scalp EEG to record the neural correlates of Stroop performance. Eight (4 male, 4 female) of the sixteen participants were assigned to the experimental group, and had been previously selected from a pool of 95 potential participants for being "highly hypnotizable" (as determined through administration of the Harvard Group Scale and the Stanford Hypnotic Susceptibility Scale), whereas the control subjects all scored very low on these tests.

After a "standard hypnotic induction" (described in full on page 6, here), subjects were told the following:

"Very soon you will be playing the computer game. Every time you will hear my voice talking to you over the intercom system, you will immediately realize that meaningless symbols are going to appear in the middle of the screen. They will feel like characters of a foreign language that you do not know, and you will not attempt to attribute any meaning to them. This gibberish will be printed in one of four ink colors: red, blue, green, or yellow. Although you will only be able to attend to the symbols' ink color, you will look straight at the scrambled signs and crisply see all of them. Your job is to quickly and accurately depress the key that corresponds to the ink color shown. You will find that you can play this game easily and effortlessly. As soon as the scanning noise stops, you will relax back to your regular reading self."

Incredibly, behavioral data showed that the standard Stroop effect (again, a cost in reaction time when naming the ink color of incongruent words relative to congruent words) was completely eliminated in terms of both reaction time and accuracy for both the experimental and control groups. [ERP analyses revealed decreased visual activity under suggestion, including suppression of the early visual components commonly known as the P100 and N100, while fMRI showed reductions in a variety of regions, including anterior cingulate.] The bottom line, then, is that even strong suggestion is enough to accomplish some amount of deprogramming, as measured through the Stroop task.

Related Posts:
"Unbinding" Imagery Via Attention
A New Mode of Visual Perception?
Imaging Lapses of Attention
The Attentional Zoom Effect

7/17/2006

Memory Bandwidth and Interference

Does memory's capacity limit (7 +/- 2 items according to Miller, but now considered to be closer to 4) result simply from limitations of attention, or do more (modality-, feature-, or stage-) specific limitations play a role as well? In the new issue of Psychological Science, Fougnie & Marois present data from 7 experiments on this very topic.

The authors begin by describing how both visual working memory (VWM) and attention (as measured by multiple-object tracking) seem to have capacity limits of around 4 items, and how this has led many to hypothesize that most capacity limitations might be traced to a central attentional bottleneck.

To investigate whether attention does cause this limitation, the authors used a dual-task design in which subjects must remember the location & color of three circles (the VWM task) during the time they are performing a multiple-object tracking (MOT) task. According to their logic, if the capacity limitation is purely due to attentional constraints, then the MOT task should interfere with the VWM task just as much as a second, concurrent VWM task would, assuming that the MOT and VWM tasks were equated for their attentional demands. On the other hand, if the capacity limitation results even in part from content-specific processes, as opposed to solely resulting from an amodal and content-general pool of attentional resources, then the MOT task should interfere with the VWM task less than a second concurrent VWM task would.

The details of the methodology and analysis of results are all in italics, as follows:

40 subjects participated in this task, in which their VWM capacity was calculated via Cowan's K (K = N * [hit rate + correct rejection rate - 1]), based on their performance in remembering the color & location of displays containing three circles each. During the retention interval between display and test, subjects had to track either 1 (low load) or 3 (high load) white circles as they moved randomly throughout a display containing many identical white circles. To prevent participants from using their phonological system to store information, participants performed articulatory suppression, in which they repeated the word "the" 2 times per second throughout each trial.

MOT and VWM tasks showed mutual interference, in that performance on both tasks was lower than on either task independently. Furthermore, the amount of interference increased between the low- and high-load MOT tasks, indicating that there's not simply a constant level of "performance cost" incurred by the dual tasking - instead, the additional demands of the high tracking load cause additional interference between the tasks.
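
In terms of Cowan's K, this pattern amounts to the estimated capacity dropping as tracking load is added. The sketch below just illustrates that arithmetic, reusing the K formula above with made-up hit and correct-rejection rates (these are not the paper's numbers):

```python
def cowans_k(n_items, hit_rate, correct_rejection_rate):
    """Cowan's K for an n_items memory display."""
    return n_items * (hit_rate + correct_rejection_rate - 1)

# Hypothetical hit / correct-rejection rates for the 3-item VWM task,
# alone and under concurrent object tracking (illustrative only).
conditions = {
    "VWM alone":             (0.92, 0.90),
    "VWM + MOT (1 target)":  (0.85, 0.83),
    "VWM + MOT (3 targets)": (0.78, 0.75),
}

k_alone = cowans_k(3, *conditions["VWM alone"])
for name, (hits, correct_rejections) in conditions.items():
    k = cowans_k(3, hits, correct_rejections)
    print(f"{name:24s} K = {k:.2f}   capacity lost = {k_alone - k:.2f}")
```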

Subsequent experiments showed two VWM tasks interfere with each other even more than VWM and MOT mutually interfere. The authors found a level of interference between a simple verbal task and the VWM that was equivalent to that found between VWM and MOT, but less than between two VWM tasks. This shows that the VWM-MOT interference is likely due to a central attentional bottleneck, and not due to a more specific shared process.

The final three experiments indicated that this "central source" of interference was not related to the similarities of the features used between the two tasks (such as color or location), nor was it related to the degree to which each task was spatial in nature, nor was it affected by the use of a rapid serial visual presentation task instead of multiple object tracking. In other words, no dual task paradigm was found that could create the same level of interference as a VWM-VWM dual task, whereas a variety of other tasks showed the same level of interference as the original VWM-MOT dual task.

These experiments strongly support the idea that attention and visual working memory have distinct capacity limits, and contribute jointly to observed capacity limitations. Although it's possible that equivalent levels of interference can occur for different reasons in different dual-task paradigms, it seems unlikely that the precise amount of interference would be so similar among so many different dual-tasks, unless that interference originates from a single, central source.

Furthermore, these data tend to support the idea that content- or stage-specific subprocesses of visual working memory have their own capacity limitations, distinct from those of the attentional system, based on the fact that two VWM tasks interfere with each other more strongly than a MOT task and a VWM task do. On the other hand, one might argue that this additional interference is merely caused by embedding one task inside another identical task; for example, there may be additional goal- or task-related demands required by this "embedding" that are unrelated to visual working memory capacity limitations. Regrettably, a control condition for this alternative explanation was not included in the current study.

7/14/2006

Blogging on the Brain: 5/27 to 7/14

MIT Tech Review has an interview with the ever-controversial figure Marvin Minsky, who is perhaps best known for his book "Perceptrons" (with Seymour Papert), or maybe for killing neural net research for 20 years ... or maybe for the invention of "Logo," or maybe for thinking of the plot of Jurassic Park, or maybe... just read the interview.

Psychology Press has just unveiled a new cognitive neuroscience site, full of associated web resources.

Mixing Memory looks at a fascinating study of the game "chicken" - and the implications it has for theory of mind.

MindHacks has an interview with Sherry Turkle, author of "Life on the Screen," one of the most influential tracts on how computing technology affects our minds. The same blog also covered a fascinating article in the NYT about persistent déjà vécu - which, in contrast to déjà vu, refers to having an immersive (as opposed to merely visual) feeling of having "lived through" an experience before.

Neurodudes mentions a series of presentations at the IBM Almaden research center on "cognitive computing," after an earlier post on the brain regions responsible for value prediction, in the context of gambling.

John Hawks' excellent blog covers the computational geometry of music and the implications for the neuroscience of musical experience. Hawks also covers recent work from Cambridge on how meerkats teach their young, a behavior not often observed in the wild.

Neurophile covers the neuroscience of various psychedelics (with fascinating comments, too), perhaps in response to another hot topic in the blogosphere this week: psilocybin and mystical experience.

The Neurophilosopher earmarked a new Grossberg paper that purports to explain autism. Should be interesting reading...

Mindblog covers a recent paper in the Journal of Neuroscience that dissociates the representations of future and previous goals in prefrontal cortex.

Finally, BrainTechSci asks whether "mirror neurons" have hogged too much of the limelight.

7/13/2006

Imaging Lapses of Attention

A new Nature Neuroscience article by Weissman, Roberts, Visscher, and Woldorff uses fMRI to identify the neural activity during momentary lapses of attention. The researchers used a version of the local/global task, in which subjects must identify whether one of two letters is an S or an H. On each trial, the subject should either attend to the "global letter," which is made up of many "local letters," or to the local letters themselves. The target letter is changed on some trials, and repeated on others.

The researchers discovered that lapses of attention (identified through slowing of reaction time) can be predicted on the basis of reductions in activity in anterior cingulate and right prefrontal regions prior to stimulus onset. After stimulus onset, activity markedly increases in the right inferior frontal gyrus and right temporo-parietal junction. This activity is hypothesized to reflect stimulus-driven reorienting of attention.

As one would expect, ACC showed greater activation during incongruent (where global & local letters lead to different judgments) than congruent trials, consistent with the emerging view that ACC is responsible for managing conflict and error detection.

7/12/2006

Task-Switching: A Role for Inferior Parietal Cortex

Much recent research addresses "cognitive flexibility," which refers to the ability to flexibly switch between tasks and use mental resources appropriately. Task switching is a useful paradigm in which to measure flexibility, because people usually incur a "switch cost" - a slowing of reaction times - after switching tasks. However, as Badre and Wagner note in a recent issue of PNAS, the source of this switch cost remains controversial.

Some have conceptualized "switch cost" as arising from the need to inhibit the old task set. Others have suggested that it may arise from the need to more powerfully activate the new task set (in the absence of any directed inhibition against the old task set). Yet others believe it to be a mixture of the two. Badre and Wagner also mention what they term "reconfiguration theories" of switch cost, in which slowed reaction times are thought to arise from an intentional process of task set reconfiguration which is largely independent of target stimulus presentation.

As Badre and Wagner point out, all of these theories share the idea that a new task set must be activated. Citing fMRI evidence, the authors note that activity in ventrolateral prefrontal cortex (VLPFC) is frequently interpreted as serving to overcome proactive interference, in both semantic and episodic tasks. A network of regions appears active during the simplest task-switching contrast (between task-repeat and task-switch trials), including VLPFC as well as the supplementary motor area (SMA) and inferior/superior parietal cortices. To Badre and Wagner, the fact that VLPFC is active under conditions emphasized by both reconfiguration and interference accounts supports the idea that VLPFC is involved in overcoming interference.

Badre and Wagner thus sought to test their hypothesis experimentally, using measures of interference from a computational neural network model (named "CAM-TS") as quantitative predictions in an fMRI experiment. In the first experiment, subjects were given a cue as to which task they should perform on the next stimulus (either an odd/even judgment or a vowel/consonant judgment), and the stimulus was then presented (a digit/letter pair). On half of the trials, the judgment repeated from the previous trial, and on the other half, the judgment switched. The authors also manipulated the delay between cue and stimulus (thus changing "preparedness") as well as the type of manual response (on some switch trials the correct response required pressing the same button as on the previous trial, despite the fact that the required judgment had changed).

As usual, task switching resulted in a switch cost, including an effect where the switch cost was greatest on trials where the response remained the same as on the previous trial. In addition, shorter cue-stimulus intervals resulted in larger switch costs. These findings were then used to evaluate the neural network model mentioned earlier: the cue-stimulus interval negatively correlated with switch cost only when the model included a "cognitive control" parameter that selectively up-regulated the gain on a "task" layer, suggesting that an intentional and effortful process of task set selection is a real part of task-switching cognition. The network also successfully simulated the effect in which switch cost is greatest on response-repeat trials, because the irrelevant task set information had always left its strongest impression on the last response pathway that was used - resulting in additional proactive interference that must be overcome during a task switch.
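
As a rough illustration of the behavioral measure at stake (a sketch, not the authors' analysis), the switch cost is simply the difference in mean reaction time between task-switch and task-repeat trials, and the prediction is that it shrinks as the cue-stimulus interval grows; all of the numbers below are made up.

```python
import numpy as np

def switch_cost(switch_rts_ms, repeat_rts_ms):
    """Switch cost = mean RT on task-switch trials minus mean RT on task-repeat trials."""
    return np.mean(switch_rts_ms) - np.mean(repeat_rts_ms)

# Hypothetical RTs: a shorter cue-stimulus interval should produce a larger cost
print(switch_cost([720, 750, 710], [620, 640, 630]))  # short CSI: ~97 ms
print(switch_cost([660, 670, 650], [615, 625, 620]))  # long CSI:  ~40 ms
```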

Using a comparison of the activity in competing units as a measure of conflict (on the output layer as a measure of "response conflict," and on the internal representations layer as a measure of "switch-related conflict"), the authors showed that switch-related conflict declined with increasing cue-stimulus intervals, whereas response conflict increased. This provided a prediction that was subsequently tested in a replication of the first experiment.
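
The precise conflict metric used in CAM-TS isn't described here; a common formalization in this literature (and the one assumed in this sketch) treats conflict within a layer as the summed pairwise product of activity in competing units - near zero when one unit dominates, large when two or more units are simultaneously active.

```python
from itertools import combinations

def layer_conflict(activations):
    """Conflict as the summed pairwise product of unit activity within a layer."""
    return sum(a * b for a, b in combinations(activations, 2))

print(layer_conflict([0.9, 0.1]))   # one response dominates -> low conflict (0.09)
print(layer_conflict([0.6, 0.55]))  # two responses compete  -> high conflict (0.33)
```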

In this second experiment, the only region whose activity was negatively correlated with increasing CSI was mid-VLPFC, suggesting again that VLPFC activity indexes conceptual or switch-related conflict. In contrast, the only region to show a positive correlation with increasing CSI was inferior parietal cortex, suggesting that inferior parietal activity may index response-related conflict.

This finding is consistent with other suggestions that inferior parietal activity may be more related to switching per se, or in other words may be specific to the remappings between stimulus and response. VLPFC, in contrast, is thought to have a more general role in active maintenance processes, which could be useful in overcoming high cognitive or conceptual conflict during a task-switch.

However, Badre and Wagner argue that task switching should be possible even with a damaged VLPFC, given that VLPFC is merely involved in overcoming cognitive conflict, and they cite relevant neuropsychological evidence that supports this claim. This assertion contrasts with many other models, which tend to conceptualize task-switching as highly dependent on VLPFC maintenance to overcome the interference built up by experience with the pre-switch task - particularly those which view age-related perseveration as resulting from a lack of prefrontal development. Badre and Wagner might instead argue that age-related changes in task switching ability (i.e., in remappings between stimulus and response) are more directly related to inferior parietal development, while "learning rate" (i.e., plasticity) and prefrontal development are relevant only insofar as they counteract the effects of interference from previous tasks.

Related Posts:
Task Switching in Prefrontal Cortex
The Rules in the Brain
Models of Active Maintenance as Oscillation
Selection Efficiency and Inhibition
An End to the Tyranny of Inhibition
Attention: The Selection Problem

7/11/2006

Smarter than the Average Primate

In 1948, Alan Turing wrote: "An unwillingness to admit the possibility that mankind can have any rivals in intellectual power occurs as much amongst intellectual people as amongst others: they have more to lose." Accordingly, comprehensive comparisons between the intellectual powers of great apes and humans are rare - perhaps because we feel safe in assuming that the human intellect is in all ways superior to that of other primates. But recent work suggests this assumption may not be entirely sound.

For example, a recent New Scientist article (via NeuroEthics) contains a provocative statement by primatologist Tetsuro Matsuzawa, who argues that chimps may systematically outperform humans in certain tests of short-term memory. Given a task in which subjects must press stimuli in the order in which they appeared, all of the adult chimps tested have been equal or superior in performance to humans. This trend holds when the comparison is between adult chimps and adult humans, as well as between chimpanzee infants and human infants.

Perhaps just as surprisingly, children are frequently better at these games than adults, both in humans and chimpanzees. This finding meshes well with previous indications that, like the great apes, children may in specific cases be intellectually superior to human adults. For example, Elman has shown how limitations in working memory span - characteristic of children - can in some cases result in language learning benefits (although the applicability of this model to the real world has been hotly debated). Other work has shown that human children are in some cases better able to inhibit false memories than human adults are. Finally, Fisher & Sloutsky have shown that children in some cases have better recognition memory than adults.

Mindblog has recently pointed out this new Science article by Pennisi, in which the author argues that the higher cognitive capacities of animals have only recently been appreciated, thanks to the emergence of the view that intelligence arises from the demands of social living. According to this view, higher cognitive skills become an exaptation, finding use not only in social settings but also in more everyday situations (e.g., finding ripe fruit and later remembering its location).

In the same article, Pennisi details how we managed for so long to overlook the startlingly advanced cognitive skills of higher primates. "Apes rarely did well on self-awareness, memory, gaze-following, gesture, spatial learning, and other tests at which even young children excel," writes Pennisi, but "6 years ago, Hare and his colleagues showed that under the right circumstances, chimps could pass some of these tests with flying colors. The secret was that chimps are exquisitely tuned in to their competition, particularly when food is involved, and will do everything they can to get a treat."

The implications for developmental approaches to intelligence are startling. We know that social interaction is a critical part of human development; to what extent might the current lack of "social interaction" among artificial intelligences influence their potential? Of course, there are a few notable exceptions to the idea that current AI is socially isolated, but in large part, this aspect of intelligence development remains relatively unexplored.

Related Posts:
Scientific Paradises
Neurorobotics
Intelligence Tradeoff

7/10/2006

Video Games - Mental Exercise or Merely Brain Candy?

In many circles, video games are still considered to be a waste of time. However, recent work in cognitive neuroscience has shown that certain types of video games can result in a variety of positive changes to visual attention, hand-eye coordination, and other perceptual skills.

As Green and Bavelier showed in an already-classic 2003 Nature article, action video game players (VGPs) have increased visual attention capacity on a flanker distractor task, and were also thought to show improvements in their ability to subitize (the process by which you can tell how many items are in a display without actually counting each item serially). In fact, VGPs could correctly subitize up to 5 items on average, while non-VGPs could subitize only 3 on average.

The authors also showed that action video games result in "enhanced allocation of visual attention" as measured by the 'useful field of view' task, in which subjects must identify which spoke of an 8-spoked wheel on the visual display contains the target stimulus, while the target's location, its eccentricity from fixation, and the number of distractors are varied.

Finally, the authors showed that action video game players also have enhanced task-switching abilities and a decreased attentional blink, as assessed through a variant of the traditional attentional blink task. In this variant, subjects must identify stimulus 1 and then switch to detecting a target (stimulus 2), while the lag between stimuli is varied; VGPs were able to correctly detect stimulus 2 at shorter lags than nVGPs.

Control tests confirmed a causal effect of action video games by training non-video game players on an action video game ("Medal of Honor," to be precise; interestingly, control subjects were trained on "Tetris"!). However, it is unclear from these data whether the increase in attentional/visual ability is a result of improved target detection, faster processing overall, increased stabilization of information in memory, or a total increase in capacity.

An article by the same authors released in the current issue of Cognition elaborates on the findings above, and revises them in important ways. First, it appears that the data previously interpreted as supporting an increase in subitizing may actually reflect the deployment of a serial counting strategy by the VGPs.

VGPs were also found capable of tracking two more objects on average than nVGPs in a multiple-object tracking task. As with the findings discussed above, the differences only became apparent at higher levels of load, such that nVGPs showed larger performance decrements than VGPs.

As for the precise mechanism that is enhanced in VGPs, and that produces the benefits in the tasks reported above, the authors suggest two possibilities. First, it is possible that VGPs simply have more durable memory traces. A second possibility is that VGPs have an increased "cycle speed" (their words, not mine) with which they refresh existing representations in working memory, thus translating into increased memory capacity.

Finally, the authors also note that their results do not show a connection between subitizing and multiple-object tracking abilities, as had previously been hypothesized in the literature. Instead, MOT performance appears to increase with serial enumeration ability. On my interpretation, the most parsimonious explanation of the data is that VGPs have an increased "cycle speed," which conveys benefits both to serial counting processes and to serial working memory "refresh" processes. This interpretation, it should be noted, is very compatible with the Jensen & Lisman model of working memory capacity, mentioned on this blog several times previously.
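
To make the "cycle speed" idea concrete: in the Jensen & Lisman framework, the number of items maintained is roughly the number of fast (gamma) refresh cycles that fit within one slow (theta) cycle, so a faster refresh rate buys more capacity. A minimal sketch with illustrative frequencies (not values from either paper):

```python
def capacity_estimate(theta_hz, gamma_hz):
    """Items maintained ~= number of gamma refresh cycles nested in one theta cycle."""
    return gamma_hz / theta_hz

print(capacity_estimate(theta_hz=6, gamma_hz=40))  # ~6.7 items
print(capacity_estimate(theta_hz=6, gamma_hz=55))  # faster refresh -> ~9.2 items
```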

Related Posts:
Cognitive Daily: Video games can improve performance in vision tasks (with an excellent set of comments following the post)
Mind Games: Humans, Dolphins and Computers
Active Maintenance and The Visual Refresh Rate
Visualizing Working Memory

7/09/2006

What is the Value of fMRI?

Starting with an excellent short article at Seed magazine, a discussion across the blogosphere has focused on the contribution of fMRI technology to our understanding of the brain and cognition. While fascinating, these discussions were wide-ranging both in the topics covered and the blogs on which they were posted, making the thread rather difficult to follow. Here's what I hope is an accurate and chronological summary of the discussion so far:

The original article (Seed Magazine, June 27)

Yale professor Paul Bloom suggests that fMRI has been overhyped, both in the media and within funding agencies, mostly because it produces pretty pictures. According to Bloom's analysis, more mundane techniques, such as reaction time measurements, have contributed far more to the field than fMRI. Bloom argues that this has occurred because of an implicit bias that any theory mentioning the brain is more satisfying than theories which do not, regardless of the underlying validity of the theories. A second cause of this bias is the fact that we are "natural dualists," as Bloom puts it, and are therefore more impressed by any demonstration of how our abstract and conceptual mental experience is connected to anatomy.

A Lot of People in White Coats (Mixing Memory, June 27)

On Chris's excellent (but regrettably anti-cog-neuro) blog, he suggests that in most cases, cognitive neuroscience 'tells us little more than that cognition happens in the brain.'

Commenter Hyperion counters that the distinction between behavioral and imaging work may be a false dichotomy, "with the implicit assumption that they're somehow incompatible."

The Allure of fMRI (Cognitive Daily, June 27)

Here, author Dave Munger cites an experiment mentioned in Bloom's article, in which even completely irrelevant neuroscience terminology was perceived as making scientific explanations more satisfactory, among novices and experts alike. Commenter Matt Weber then poses the interesting question of why EEG, TMS, and other quantitative cog neuro techniques don't "attract the same attention" as fMRI. Finally, commenter jbark opines that the quality of imaging studies will always be more dependent on the quality of the behavioral task than the quality of the scanning technologies.

A dissenting view (Small Gray Matters, June 27)

Small & Gray (at the excellent, and new to me Small & Gray Matters blog) attempts a "spirited defense" of fMRI technology. This is where the discussion gets heated; the primary points are:
  • Even if fMRI is overhyped, it may not be detracting from coverage on non-fMRI cognitive science
  • Cognitive neuroscience and cognitive psychology occupy different levels of analysis, so comparing the "theoretical sophistication" of the two is comparing apples and oranges
  • Imaging is a complementary approach to more traditional cognitive approaches
  • The focus on media hype is misplaced; we'd do better to think about how these techniques contribute to our understanding of cognition, rather than the impression they make on lay people
  • Bloom's comparison between RT and fMRI is stacked to favor RT studies, simply because RT measures have been around far longer
  • Preference for some techniques, or levels of analysis, above others is merely a question of personal preference

Commenter VTW notes that the lack of sophistication of early imaging studies is an essential part of how science progresses, and that current studies should not only build upon the early work, but replicate it in order to address the power issues inherent to those early studies.

The Dirty Secrets of fMRI (Frontal Cortex, June 28)

Author Jonah Lehrer suggests that a common perception of fMRI research ("pick a sexy question, stuff some people in a magnetic tube, and get technicolor pictures on the cover of Nature") comes from the fact that fMRI has not yet earned its reductionistic credentials. In other words, methodological limitations of the technology itself call into question fMRI's validity, above and beyond the more general question of whether it is overhyped due to the public's biases.

For example, Lehrer cites papers showing that increases in blood flow do not always correspond to increases in neural firing, and that even chemically silenced neurons can still produce an apparent BOLD response. Lehrer also cites a paper showing that BOLD signals emanate only from brain regions with dense vascular networks, which apparently has "little, if any, relationship to our neural activity."

Commenter Steve_HT notes that one of the works cited by Lehrer actually suggests that fMRI "provides a reasonable measure of the net neural input" into a particular brain region, because it is indexing local field potential (LFP), not the spiking rate of individual neurons.

A dissenting response (Small Gray Matters, June 28)

Small & Gray mounts a second defense of fMRI, by responding to Lehrer's post described above. The essentials of the argument are as follows:
  • The timecourse of the BOLD signal directly reflects monotonic changes in neural activity, though at the level of LFP rather than spiking rate
  • The correlation between LFP and BOLD is strong (the post cites a figure of roughly 67%), reflecting substantial shared variance
  • It is fortunate that BOLD correlates with LFP, and not with spiking rate, since individual neurons rapidly habituate to stimuli, and since BOLD has a well known temporal lag; if this were not the case, fMRI would have insurmountable power issues
  • Because of this, the chemical silencer mentioned by Lehrer affects only individual neurons, and not LFP, thus explaining why the BOLD signal continues after chemical silencing
  • Vascular density likely develops based on the historical neural activity of a brain region; so even if fMRI does detect regions with greater vascular density more easily than others, at least it is indexing more active regions! Furthermore, the study showing these effects is questionable in terms of general applicability, partly because it used chinchillas, a rather unusual choice for this kind of study
  • The temporal lag of fMRI is well known and well understood, and can be computationally corrected for

fMRI Redux (Frontal Cortex, June 29)

Lehrer responds to Small & Gray's critique by reiterating his point that the connection between a BOLD signal and underlying neural activity is highly complex, and may not be commonly appreciated. Lehrer also argues that fMRI researchers would do well to earn their "reductionistic credentials" by focusing on anomalies such as the lack of correlation between spiking rate and LFP, and aptly concludes with the phrase, "the devil is always in the details."

fMRI studies overrated? (Neuromarketing, July 3)

NeuroGuy notes that similar problems exist in the burgeoning neuromarketing literature, in which the "flashy, full-color pictures sometimes obscure the fact that the actual marketing conclusions that can be drawn from the work are tenuous at best." NeuroGuy also argues that fMRI may have become so popular because it offers a real-time view of the actual anatomy that connects "input" to "output." NeuroGuy ends with a word to the wise: "today’s marketers should avoid being seduced by brain images that suggest it’s now possible to read the minds of customers."

Flickering Lights: One-Shot Wonders versus the Network Model (Smooth Pebbles, July 4)

Author David Dobbs notes that Bloom's article covers much of the same ground as his excellent article in Sci Am Mind, and expresses his agreement that skepticism of fMRI work is justified in the current climate of extreme overhyping by the media.

Dobbs argues that one feature of fMRI research that has been heretofore overlooked is the distinction between two types of imaging study: relatively simple correlational work, versus network analysis of brain regions involved in specific tasks.

As he points out, these more sophisticated inquiries involve a systems-level understanding of how inter-regional brain dynamics contribute to complex phenomena such as depression, attention, and cognitive control.

[My apologies if I left out contributions from blogs not listed here; I did my best to compile the major points into this post. If you feel that your contribution to the discussion was overlooked or mischaracterized, please let me know!]