12/28/2006

Moved to ScienceBlogs.com!

This blog has now moved to scienceblogs.com!

Click here to visit the new site.

Or subscribe to the new RSS feed! I'm not planning to add any new posts at this URL, so update your bookmarks.




12/20/2006

fMRI of the Stop Signal Task: What computations support "stopping"?

Right inferior frontal cortex (rIFC) is thought by some to implement "inhibition" of motor responses when they must be abruptly stopped (as in the Stop Signal paradigm). It is still unclear how rIFC might actually accomplish this, and indeed whether it does so in the first place. Nonetheless, this paper from the Journal of Neuroscience makes great strides in establishing the neural mechanisms of response inhibition.

Authors Aron & Poldrack argue that rIFC may target a region of the basal ganglia known as the subthalamic nucleus (STN). The STN is a special region of the basal ganglia in that it is thought to excite the internal/medial globus pallidus as well as the substantia nigra pars reticulata, both of which then inhibit the thalamus. Thus, if rIFC does indeed target the STN, the end result of rIFC activation may be a general "clamping down" on thalamic output - this is known as the "hyperdirect pathway."

It is not controversial that the STN accomplishes such inhibition both neurally and behaviorally. Aron & Poldrack review evidence showing that STN stimulation improves (i.e., shortens) stop signal reaction time, which is thought to measure how much time subjects need to "cancel" a motor action. Likewise, STN damage impairs stopping, lengthening stop signal reaction time.

What is controversial, however, is the existence of the "hyperdirect pathway," whereby rIFC directly activates the STN and thus suppresses thalamic (and ultimately motor cortical) output. The more commonly accepted mechanism is known as the "indirect pathway," in which cortical areas activate striatal pathways that disinhibit the STN by inhibiting the external globus pallidus (you can probably see where it gets its name!).

To identify which of these two possible neural networks are active in motor inhibition, the authors put 18 healthy, right-handed adults into an fMRI scanner, where they were presented with a screen containing either a left- or right-pointing arrow. Each subject was told to press a corresponding left or right key, unless a sound was played after the presentation of the arrow, in which case subjects had to refrain from pressing the button (although this sound was only played on 25% of trials). The delay between the display of the arrow and the onset of the sound was dynamically calibrated to each subject while in the scanner, such that each subject was getting around 50% of the "stop trials" correct and 50% incorrect. The value of this delay is known as the stop signal delay, or SSD. This is then subtracted from the median reaction time on go trials to arrive at the stop signal reaction time, or SSRT.

The classic interpretation of SSRT is that it indexes the finishing time of a "stopping process," which races against a "go process" indexed by the reaction time on normal go trials. This is known as Logan's race model. The behavioral results were largely in line with this conceptual model: SSRT and RT on go trials were uncorrelated, supporting the race model's assumption that the processes involved in "stop" and "go" trials are distinct. Secondly, the median reaction time on incorrect stop trials (i.e., where subjects incorrectly made a response) was faster than the median reaction time on correct go trials, which is again consistent with the race model. SSRTs were within the normal range of around 120 ms.
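For concreteness, here is a minimal sketch of how the race model and the SSRT estimate described above can be simulated; the staircase step size, RT distributions, and all parameter values are arbitrary choices for illustration, not details taken from Aron & Poldrack's methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stop_signal(n_trials=4000, p_stop=0.25, ssrt_true=0.20):
    """Toy horse-race simulation of the stop signal task.

    Go and stop finishing times are drawn independently (the race model's
    core assumption); SSD is adjusted by a 50 ms staircase so that
    stopping succeeds on roughly half of stop trials."""
    ssd = 0.25                        # initial stop signal delay (s)
    go_rts, failed_stop_rts, ssds = [], [], []
    for _ in range(n_trials):
        go_finish = rng.lognormal(mean=np.log(0.45), sigma=0.15)
        if rng.random() < p_stop:     # stop trial
            stop_finish = ssd + ssrt_true + rng.normal(0, 0.02)
            ssds.append(ssd)
            if go_finish < stop_finish:     # go wins: failed stop
                failed_stop_rts.append(go_finish)
                ssd = max(ssd - 0.05, 0.0)  # make stopping easier next time
            else:                           # stop wins: successful stop
                ssd += 0.05                 # make stopping harder next time
        else:                         # go trial
            go_rts.append(go_finish)
    return np.array(go_rts), np.array(failed_stop_rts), np.array(ssds)

go_rts, failed_stop_rts, ssds = simulate_stop_signal()

# SSRT estimated exactly as described above: median go RT minus SSD
ssrt_est = np.median(go_rts) - ssds.mean()
print(f"estimated SSRT: {ssrt_est * 1000:.0f} ms")

# Race-model check: failed-stop RTs should be faster than correct go RTs
print(np.median(failed_stop_rts) < np.median(go_rts))  # expected: True
```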

The fMRI results showed that all "stop" trials were associated with activity in right inferior frontal cortex and STN, consistent with the hyperdirect account. However, failed "stop" trials demonstrated strongly decreased motor cortex activity (M1) towards the end of each trial relative to both Go trials and correct "stop" trials. This was interpreted to indicate that inhibition was "triggered at the neural level, even if it was ineffective at the behavioral one." Conversely, correctly inhibited "stop" trials were associated with increased putamen activity relative to incorrect stop trials, which Aron & Poldrack persuasively argue to reflect higher conflict, resulting from a slower "go" process on those trials.

In support of the "hyperdirect" pathway, activity in rIFC was correlated with activity in STN, and activity in both regions was stronger for subjects with faster SSRTs. Other analyses suggest that while the degree to which rIFC is recruited does not depend on SSD, activity in STN, pSMA and globus pallidus does, such that those regions are more active the longer the SSD (and thus the closer the response is to execution).

There are several complications in interpreting these results. First, the decreased M1 activity on failed stop trials may have to do with the length of those trials as compared to the length of correct Go trials, which showed a very similar but protracted temporal profile. Furthermore, if signal strength reductions are to indicate inhibition, it is strange that "go" trials would show this pattern - because clearly inhibition is neither needed nor performed on correct "go" trials! This suggests that inhibition may not be an accurate term for what is observed in this case.

Secondly, there are at least three reasons that STN, pSMA and globus pallidus might be more active on trials in which the stop signal occurs later relative to trials where it occurs earlier. The possibility endorsed by Aron & Poldrack is that in late-signal trials motoric inhibition must be performed (since the movement may already be initiated), whereas in early-signal trials more "cognitive" inhibition must be performed (since only movement planning will have begun). However, it's also possible that this is an artifact of the longer "go" process which is necessarily present on these trials, as was argued to be the case for putamen activity. Or it could be that this activity is akin to the "lateralized readiness potential" detected by EEG studies of stop signal tasks, in which there is increasingly negative frontal activity as a stop signal trial wears on. In any case, the conclusion that this reflects greater motoric inhibition is simply premature.

Third, if rIFC actually executes inhibition, it is strange that activity in that region is insensitive to the type of inhibition that would have to be performed. For example, one would expect different patterns of activity in rIFC for trials requiring motoric inhibition relative to those requiring "cognitive inhibition," and yet this was not observed. Instead, the activity in this region is more compatible with an account where rIFC is actively monitoring task cues/performance and orchestrating subsequent control or selection processes, rather than accomplishing inhibition per se.

This perspective is also compatible with a wide variety of findings on the functional role of rIFC. For example, right inferior frontal activity has been associated with both deviance and novelty detection in an oddball paradigm. Detection of unusual stimuli would be important for a region involved in "task monitoring." Likewise, rIFC is more activated by negative than positive feedback. Finally, rIFC is also most active during conditions of high WM load, also suggesting a role for that region in selection or monitoring processes, but not necessarily inhibition.

Related Posts:
Noninhibitory Accounts of Negative Priming
Inhibition and rTMS
Backward Inhibition: Evidence and Possible Mechanisms
Inhibition from Excitation: Reconciling Directed Inhibition with Cortical Structure
The Tyranny of Inhibition

12/19/2006

Is "Executive Function" A Valid Construct?

The term "executive function" is frequently used but infrequently defined. In attempting to experimentally define executive functions in terms of their relationship to age, reasoning and perceptual speed, Timothy Salthouse reviewed the variety of verbal definitions given to construct of "executive function." Although these differ in terminology and emphasis, they are clearly addressing a similar concept:
“Executive functions cover a variety of skills that allow one to organize behavior in a purposeful, coordinated manner, and to reflect on or analyze the success of the strategies employed.” (from this book)

"Executive functions are those involved in complex cognitions, such as solving novel problems, modifying behavior in the light of new information, generating strategies or sequencing complex actions” (Elliott, 2003).

“Executive functions include processes such as goal selection, planning, monitoring, sequencing, and other supervisory processes which permit the individual to impose organization and structure upon his or her environment” (Foster, Black, Buck, & Bronskill, 1997, p. 117).

"The executive functions consist of those capacities that enable a person to engage successfully in independent, purposive, selfserving behavior” (Lezak, 1995)
Given such a wide variety of definitions, Salthouse notes that it is not surprising to see a correlation between executive function (EF) and intelligence (g). But as with any measure, its correlations depend on how it is measured - and executive function, due in part to its overly broad definitions, is measured in many different ways. In fact, a measure is often considered to index executive function simply if it has subjective "face validity."

Salthouse argues that psychometric techniques for establishing validity - i.e., a detailed investigation of EF's correlations with other measures - could help this sad state of affairs. If there are in fact distinct sources of variance underlying performance on complex tasks that are not accounted for by variation in age and "non-executive" processes (such as visual skill, speed of processing, etc), then executive function may be a valid construct.

Specifically, if the tasks thought to measure EF have unique predictive value for a participant's age, above and beyond the predictive value conveyed by other non-EF measures, then EF appears to have good construct validity. Likewise, if the EF measures do not share variation with non-EF measures, then EF also appears to have good construct validity. In Salthouse's own words:
"The rationale was that if the target variables represent something different from the cognitive abilities included in the model, then the variables not only should have relatively weak relations to those abilities but also should have significant unique (direct) relations with an individual-difference variable such as age if they are reliably influenced by another construct, such as executive functioning, that is related to age."
In pursuit of this goal, Salthouse analyzes data from over 7,000 adults on a variety of tasks. The most important findings from the study are reported next, with the methodological details of this study included at the end of the post in italicized text.

The results showed that many putative measures of "Executive Function" are strongly related to reasoning ability (as measured through Raven's Progressive Matrices) and processing speed (as measured through extremely simple tasks involving replacing number words with digits, etc). The vast majority of putative executive function measures did not share variance with age that was not also present in the simpler tasks. What does Salthouse conclude from these results?

Salthouse's First Conclusion: These findings are "inconsistent with the interpretation that [Executive Function] represents a distinct construct" from the other non-executive measures.

This conclusion is problematic for several reasons. First of all, it is arguable that every task involves some amount of executive function, whether it is coordination, planning, strategizing, inhibition, or any of the variety of processes mentioned in the definitions of executive function reviewed at the beginning of this post. Therefore, it is unreasonable to expect to find a task in which there is no relationship with executive function (except, perhaps, simple reaction time measures, which were not included here).

Second, performance on any given task includes variance that is incidental to the construct thought to be measured by that task. Salthouse clearly appreciates this fact in the case of the nonexecutive measures (reasoning, processing speed, etc) which is why latent variables are constructed from these measures. The same thing holds for measures of executive function, and yet no latent variables were constructed for these measures. This results in a decrement in statistical power to detect unique age-related variance in executive function measures.

Salthouse's Second Conclusion: EF measures may be of little use for the measurement of individual differences, since many nonexecutive tasks seem to measure the same things and have superior reliability/sensitivity.

In contrast to the conclusion above, this conclusion may indeed be accurate insofar as executive measures are often difficult to administer and have relatively low retest reliability. However, the issue of sensitivity - how well EF measures can detect things like brain damage, functional outcome, age, or other individual differences - is not clearly addressed by this paper (although this paper would suggest that EF measures may be more sensitive than many traditional psychometric tests). It is true that lower reliability may result in lower sensitivity, but this is not necessarily the case.

Tests of executive function may also have lower specificity than other tests - i.e., low performance on EF tests may reflect poor executive functioning or impairments in the processes on which executive function acts. Although this is generally a disadvantage, the fact that a single test might detect deficits in a variety of processes may be advantageous for situations in which cognitive function needs to be rapidly assessed (i.e., at the scene of an accident).

Salthouse's second conclusion is reminiscent of Arthur Jensen's claims in "Clocking the Mind" about the high correlations of simple and choice reaction time measures with IQ. Reaction time measures have several advantages compared to EF measures, in particular their relative simplicity and the fact that they do not rely on task novelty. However, it remains to be seen whether executive functions mediate the relationship of simple reaction time to IQ, or whether these represent distinct contributions to intelligence.

Related Posts:

Intelligence and Executive Function
Under The Rug: Executive Functioning
The Rules in the Brain
Localizing Executive Functions in Prefrontal Cortex
Clinical Neuropsychology and Executive Function
Factor Analyses of Executive Function Impairment Due to Brain Injury
Theory of Mind, Working Memory and Inhibition



Below are the construct variables used in Salthouse's structural equation modeling analysis on a group of 300 adults, along with the tasks used to measure them:
  • Measures of Executive Functioning
    • Wisconsin Card Sorting Test
    • Letter, Category and Alternating Fluency Tasks
    • Connections Test (a variant of Trail Making)
  • Measures of Vocabulary
    • Synonym Vocab
    • Antonym Vocab
    • Wechsler Adult Intelligence Vocab Subscale
    • Woodcock–Johnson Psycho-Educational Battery—Revised Picture Vocab
  • Measures of Reasoning Ability
    • Raven's Progressive Matrices Set II
    • Shipley Institute of Living Scale - Abstraction Subscale
    • Letter Sets
  • Measures of Spatial Processing
    • Spatial Relations
    • Paper Folding
    • Form Boards
  • Measures of Memory Performance
    • Free Recall
    • Paired Associates
    • Logical Memory
  • Measures of Processing Speed
    • Letter Comparison
    • Pattern Comparison
    • Digit Symbol
In the structural equation models, each of the non-executive construct variables was permitted to correlate with each other, as well as with age, while none of the underlying measures themselves was permitted to correlate with anything except the construct it was purported to measure. Each of the executive variables was then examined to see whether a) it shared unique variance with age beyond the nonexecutive constructs, and b) whether it loaded significantly on the non-executive constructs. For nearly every putative EF measure, there was little or no unique age-related variance, and most loaded strongly on the reasoning or perceptual speed constructs.

This analysis was then repeated with a variety of different measures collected from over 7,000 adults. The factor loadings from this much larger sample were very similar to those in the smaller sample, reported above. Leaving aside for the moment the particular patterns of correlations discovered, the general finding was that no putative measure of executive functioning showed unique variation with age that could not be predicted by variation in "nonexecutive" tasks (with the sole exception of "Anti Cue," a type of anti-saccade task). The executive measures included in this larger analysis were Ruff Figural Fluency, Tower of Hanoi, Sort Recognition and Proverb Interpretation from the Delis-Kaplan Executive Function System, Trail Making, Stroop Color-Word, switch costs from a task involving either "odd/even" or "greater/smaller than 5" judgments, RT and accuracy from the "Reading with Distraction" task, Anti Cue, computation span, listening span, N-back, the Keeping Track task, Matrix Monitoring, and the Running Memory task.

12/18/2006

Review: Clocking the Mind

Arthur Jensen is a controversial figure in psychology, due in large part to his claims about racial differences in intelligence. In his newest book, "Clocking the Mind," Jensen turns his attention to a more focused topic: how is it that extraordinarily simple measures of reaction time can correlate so highly with intelligence?

To understand the importance of this question, consider the following. First, as Jensen notes, almost all reliable measures of cognitive performance are correlated. Across a large number of such tests, a single number - termed g, for "general intelligence" - can account for a large portion of individual differences on each task. Because no single test is "process pure," the correlations between g and scores on any given test are typically rather small; high correlations emerge from these measures only when they are considered in aggregate, with the following exception.

Despite the fact that g is commonly assessed with tests of vocabulary, memory for associations, reasoning ability on the Raven's Progressive Matrices (where subjects must discover a visual pattern within a matrix of stimuli, and select what the next pattern in the sequence would look like), and a wide variety of other very abstract and untimed tests, it appears that the variance they share can be reliably and accurately indexed by reaction time on a task where subjects must merely press a lighted button. The correlation between such simple tasks and g is around .62, which is higher than the correlation between many subscales of IQ tests and the g factor to which they contribute.

If you are skeptical of these results, you are not alone. Jensen notes a deep-seated bias against the idea that such simple measures could reveal important traits of the cognitive system, and reviews several historical reasons for this bias. However, in just over 200 pages, Jensen creates a persuasive argument for the RT-IQ correlation based on dozens of factor analyses, and both developmental and genetic work. In the process, he covers issues related to statistical methodology, procedural variations on simple RT tasks, and correlations between simple RT and Sternberg memory scanning, working memory, short-term memory, long term memory, and a variety of other cognitive constructs.

In the end, it appears that simple RT and g may be very closely related, if not indexing the same thing. Jensen advocates the "bottom-up" interpretation of the RT-IQ correlation, suggesting that individual differences in processing speed allow some individuals to think faster, accumulate more information per unit time, and enjoy other advantages that subsequently translate into g. Jensen notes that the "top-down" interpretation - for example, that increased IQ leads to better strategy use, which in turn results in lower RTs on simple tasks - is plausible but relatively uninteresting for those interested in mechanistic rather than merely descriptive accounts of intelligence. Whether or not you agree with Jensen's "neural oscillation" hypothesis of the RT-IQ correlation, these facts beg for a mechanistic explanation.

Jensen's writing is clear and concise, and every chapter is densely packed with information. The historical treatment of chronometry is perhaps most enjoyable, filled with personal anecdotes and unique insight into the politics of 20th century psychology and psychometrics. My only complaint is that the index seems sparse for a book so rich in detail.

"Clocking the Mind" is not a popular science book; it is a scholarly work directed towards professionals and graduate students. Yet, anyone with a scientific interest in individual differences, intelligence, or executive functions will find much to consider here. After all, if Jensen is right, relatively simple and extremely reliable measures of reaction time might be a good replacement for the "fancy tasks" cognitive scientists have spent decades refining.

12/15/2006

Blogging on the Brain: 12/15

Some highlights from recent brain blogging:

Psychological Operations: MindHacks discovers an archive of information warfare propaganda.

Hemisphericity: Handedness may not be the proper way to control for left or right-brain dominance, according to a recent article reviewed by BPS Research Digest.

The Neural Prediction Challenge: Can you predict a subject's responses to new stimuli given "recordings from visual and auditory neurons during naturalistic stimulation"?

Psychedelic Treatments for OCD? It appears that psilocybin mushrooms may temporarily alleviate obsessive-compulsive symptoms. [Though I have to ask, wouldn't large doses of any profoundly mind-altering drug be likely to change the profile of OCD behavior?]

IFG in mitigating interference: Aron points to the inferior frontal gyrus as the location of cognitive inhibition. This post at Cognitive Daily describes transcranial magnetic stimulation of IFG and its possible role in episodic interference.

Quantum Mechanics in the Brain? In contrast to theories of consciousness that invoke neural quantum mechanics, the Neurophilosopher reviews a viable theory of how quantum mechanics may be involved in the sense of smell.

Enhancing Memory During Sleep: SCLin's blog covers a fascinating recent study showing that minute electrical stimulation during sleep can enhance memory consolidation.

Localizing Intuition in the Brain: MindBlog reviews a recent fMRI study of intuitive judgments and their connection with orbito-frontal cortex.

Is the hand faster than the eye? Another BPS post reviews research suggesting that your eyes may not be fooled by magic tricks, even if "you" were! This is reviewed in the context of dissociable action/perception systems, but is also compatible with graded representation accounts of knowledge, where weaker representations suffice to guide eye movements but stronger representations are required for explicit knowledge (this has been demonstrated in A-Not-B tasks where infants fail to reach to the correct location of a hidden object, yet nonetheless gaze towards the correct location).

Two new blogs from Nature Publishing Group: Nautilus and Peer-to-Peer. And don't forget Action Potential!

Have a nice weekend!

12/14/2006

Non-Inhibitory Accounts of Negative Priming

The concept of "inhibition" is central to cognitive sciences, and yet some have raised doubts about the usefulness of this psychological construct. In their excellent 2003 chapter "In Opposition to Inhibition," MacLeod, Dodd, Sheard, Wilson and Bibi review the history of the construct (dating back at least to William James in 1890) and describe the experiments in which "inhibition" is used to explain the results. Below, I summarize their complaints about the use of this construct in the context of negative priming, as well as their proposed non-inhibitory accounts for this phenomenon.

Negative Priming

In the Stroop task, subjects must name the ink color of color words like "RED" while ignoring the word itself. Subjects are much slower to name the red ink of the word GREEN when the preceding trial contained the word RED (whose ink they named while ignoring the word) than when it contained an unrelated word like YELLOW. This has been interpreted as reflecting inhibition of the concept "red" so that the ink color could be named on that first trial, which translates into slowed reaction time on a subsequent trial where the concept "red" must become activated. This slowed reaction time is called negative priming.

MacLeod et al review an alternative account of this effect which seems much more in line with current evidence. The "feature mismatch" account states that slowed reaction time reflects conflict arising from the stimulus itself rather than the response that is required. According to this view, it takes longer for the subject to process the stimulus if it shares overlapping yet mismatching features with the stimulus on the previous trial. So, in this case it takes longer to name the red ink of GREEN after seeing the word RED because the two stimuli overlap in features - both involve the concept "red" - but that overlap is mismatching in terms of how it relates to the first and second stimuli: "red" was the word name of the first stimulus, but the ink color of the second.

Additional support for the idea that negative priming results from feature mismatch comes from several demonstrations that negative priming disappears altogether if the stimulus features do not mismatch between the previous trial and the current trial - regardless of whether the response requires attending to previously ignored information. To be perfectly clear, MacDonald & Joordens showed that "stimulus features" can include a visible feature of the stimulus itself or the features/semantic associates used to selectively attend to them.

For example, in a verbal task where subjects are presented with two words (MOUSE & AMERICA) and must pick the larger object, large negative priming (over 5 times the standard effect) is observed if the subsequent trial contains MOUSE & FLEA. In this case, the selection feature of "MOUSE" is [smaller] on the first trial and [larger] on the second trial, thereby creating a mismatch.

If the subsequent trial involved picking the smaller of WORLD & AMERICA, again there is a selection feature mismatch: the selection feature for AMERICA was [larger] on the first trial, and [smaller] on the second trial. Again, here you see large negative priming (which is impressive given that subjects usually show positive priming when making the same response twice in a row).

In contrast, no negative priming is observed if the first trial involves picking the larger of MOUSE & AMERICA and the subsequent trial involves picking the smaller of MOUSE & AMERICA. In this case, the selection feature of MOUSE is [smaller] both on the first and second trials, and thus there is no negative priming. In contrast, inhibition accounts would predict slowed reaction time on any trial where the subject is required to attend or respond to previously ignored information.

This demonstrates that mismatching selection features are the source of reaction time slowing, not difficulty in disinhibiting the previously ignored concept, or difficulty at the response level. Note that this effect has been replicated with a variety of types of stimuli mismatch, including physical size, numerical magnitude, and word color.

Conclusions

As argued by MacLeod et al. and as demonstrated by MacDonald & Joordens, negative priming can be explained as a result of stimulus feature mismatch between previous and current items. Whether those features were "ignored" to give a correct response seems immaterial; the important determinant in reaction time slowing is whether the features associated with a stimulus are congruent or incongruent with the features/associations activated on a previous trial.

It is interesting to note that while current computational models of the Stroop task do not explicitly model trial sequence effects of the kind that give rise to negative priming in Stroop (nor, for that matter, do they address increased errors/reaction time on incongruent trials following congruent trials, a phenomenon sometimes attributed to goal neglect), relatively minor modifications may allow them to successfully simulate these sequence effects.

Consider a sequence like the one presented at the start of this post, where a neural network processes two trials in succession: say, the word GREEN printed in red ink (correct output "red") followed by the word RED printed in blue ink (correct output "blue"). If activation is not "zeroed out" between events, lingering activity in the "red" output unit may cause slowed reaction time (aka more cycles to settle) on the second trial, where the correct output is "blue." This would be a successful simulation of negative priming without recourse to directed inhibition, in line with the proposal of MacLeod et al. However, this might be argued to reflect mismatch at the output level, whereas MacDonald & Joordens showed that mismatch effects occur at the level of stimulus processing. It is not clear that current network models can account for this effect.

Consider another case, where the network processes a congruent trial (say, the word GREEN printed in green ink) followed by an incongruent trial (the word RED printed in blue ink). Here, the first trial could allow a "word reading" task unit to become more strongly active, and this activity could linger into the second trial. That lingering activity could either result in an outright error ("red") or slow the number of cycles needed to settle on the correct output ("blue"), because the "color naming" task unit would experience more competition. This would be a successful simulation of goal neglect.
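As a rough illustration of the first scenario, here is a minimal sketch using leaky competing accumulators over color output units; this is not any published Stroop model, and all parameters (gains, leak, inhibition, threshold) are arbitrary.

```python
import numpy as np

COLORS = ["red", "green", "blue"]

def run_trial(ink, word, start_act, word_gain=0.4, dt=0.1,
              leak=0.2, inhib=0.4, thresh=1.0, max_cycles=500):
    """Leaky competing accumulators over color output units.
    Returns (winning color, cycles to settle, final activations)."""
    act = start_act.copy()
    ink_in = np.array([c == ink for c in COLORS], float)
    word_in = np.array([c == word for c in COLORS], float)
    drive = ink_in + word_gain * word_in  # color naming dominates word reading
    for cycle in range(1, max_cycles + 1):
        others = act.sum() - act          # lateral inhibition from other units
        act += dt * (drive - leak * act - inhib * others)
        act = np.clip(act, 0, None)
        if act.max() >= thresh:
            return COLORS[int(act.argmax())], cycle, act
    return None, max_cycles, act

# Trial 1: the word GREEN in red ink -> correct response "red"
_, _, act1 = run_trial(ink="red", word="green", start_act=np.zeros(3))

# Trial 2: the word RED in blue ink -> correct response "blue"
# (a) activation carries over (decayed, not zeroed) from trial 1
resp_a, rt_a, _ = run_trial(ink="blue", word="red", start_act=0.5 * act1)
# (b) activation fully reset between trials
resp_b, rt_b, _ = run_trial(ink="blue", word="red", start_act=np.zeros(3))

print(resp_a, rt_a)  # expected: "blue", but after more cycles...
print(resp_b, rt_b)  # ...than when activity is zeroed out between trials
```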

12/13/2006

IQ & Skew, or Why Not to Log-Transform RTs

In his excellent book "Clocking the Mind," Arthur Jensen examines the striking correlations between intelligence and simple measures of reaction time. The word "intelligence" often invokes concepts like fluid reasoning ability, depth of spatial processing, and other sometimes ill-defined (but always high-level) constructs. In each chapter, however, Jensen demonstrates how simple reaction time measures - how long it takes you to press a lighted button, for example - are more strongly correlated with measures of general intelligence (g) than anyone might expect.

As it turns out, however, the variability of reaction time in such simple tasks is as predictive of IQ as the mean reaction time, or even more so. Why should this be the case?

Variability in reaction times may index the integrity of dopaminergic thalamocortical connections. For example, in Kane & Engle's color-naming Stroop task, incongruent trials are characterized not only by a positive shift in the distribution relative to neutral trials (thought to represent the time it takes to resolve interference from word-reading processes), but also by a positive skew in the distribution. This positive skew is even more pronounced among those with low working memory spans (who likely have lower general intelligence as well).

Interestingly, the positive skew and positive shift of reaction times are uncorrelated. This suggests they index two distinct cognitive processes. Based on work from computational models of the Stroop task, the positive shift probably indexes the efficiency with which the "ink color" representation can overpower the "word name" representation. However, positive skew has not been reproduced in such simple models, except for this one, which includes a term for the stochastic dopaminergic activity of subcortical areas that may be responsible for keeping information maintained in working memory. In this case, positive skew (termed "goal neglect" by Kane & Engle) could relate to the integrity of this dopaminergic circuit; in low-spans, this thalamocortical projection may have less predictable patterns of firing, leading to spontaneous "neglect" of the current goal.

Positive skew in RT distributions could therefore reflect a very low-level aspect of neural architecture which may show large individual variation, just like IQ.

However, many datasets may fail to detect this correlation, or even the positive skew to begin with. Why? The positive skew of RT distributions is a well-known phenomenon, but it violates a primary assumption of statistical analysis: normality. If a dependent variable's distribution is non-normal, many statistics books will recommend a logarithmic transform, which essentially compresses the high end of the distribution. This returns the distribution to a nice Gaussian curve.

This practice has unfortunate consequences beyond its primary effect (to erase the positive skew in a distribution, which as I've indicated above may be very important). Log-transforms will also mean that your data no longer conforms to an equal interval scale. This leads to problems in the calculation of variance - in other words, your measure of variance no longer corresponds to the variance of the actual data.

While the coefficients derived from ANOVAs of log-transformed data can always be interpreted in terms of equal-interval scales (by an inverse log transform), the p values are not so easily fixed, because they are based on a fundamentally unreal measure of variance.

The moral of the story: don't log-transform your RT data without performing a second analysis on the skew of the un-transformed RT data. You may just come up with something very interesting.
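Here is a minimal sketch of that second analysis, using simulated RTs and scipy's sample skewness estimator; the particular RT distribution is invented for illustration.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)

# Toy RT data (seconds): a normal component plus an exponential tail,
# roughly the long right tail seen on incongruent Stroop trials
rts = rng.normal(0.6, 0.05, 500) + rng.exponential(0.15, 500)

log_rts = np.log(rts)

print(f"skew of raw RTs:             {skew(rts):.2f}")      # strongly positive
print(f"skew of log-transformed RTs: {skew(log_rts):.2f}")  # pushed toward zero

# The moral above, in code form: report the raw-RT skew alongside any
# analysis performed on transformed data, rather than discarding it.
```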

12/12/2006

Cost-Benefit Analysis and Conflict Monitoring: the Anterior Cingulate Cortex

The anterior cingulate cortex (ACC) is often activated during tasks with multiple conflicting responses. One theory is that the ACC itself detects conflict, and then up-regulates attention or "cognitive control" to improve the chances that a correct response is chosen. Another theory is that strong activity within dlPFC signals this heightened conflict, and the ACC "kicks in" as an additional resource to help mitigate competition.

However, recent evidence from Milham & Banich suggests that ACC activation may not be limited to situations involving conflict, but may instead be a functionally diverse area where the more anterior or rostral part is involved in error-related processing and only the posterior portion is activated by conflict.

To demonstrate this, the authors used a version of the Stroop task in which the to-be-named color was either the color in which a word was written or the color of a block surrounding the word (i.e., the word was superimposed on a block of that color). As in the classic Stroop, the words could be congruent color words (e.g., the word RED written in red), incongruent color words (e.g., the word GREEN written in red) or neutral non-color words (e.g., the word LOT written in red); note that these examples illustrate the classic condition, in which the to-be-named color is carried by the word itself, whereas in the "surround" condition all words were written in grey ink on top of a block of color. Using this design, the authors calculated the percent facilitation (i.e., how much faster subjects were to respond to congruent than neutral trials) and percent interference (i.e., how much slower subjects were to respond to incongruent than neutral trials) for both the classic and "surround" conditions. 18 right-handed subjects completed this task inside a 1.5 T fMRI scanner.
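As an aside, a minimal sketch of how such facilitation and interference scores might be computed is given below; the exact normalization (percent change relative to the neutral mean) is an assumption on my part and may differ from Milham & Banich's formula.

```python
import numpy as np

def stroop_effects(congruent_rts, neutral_rts, incongruent_rts):
    """Percent facilitation and interference relative to neutral trials.
    (The normalization by the neutral mean is assumed for illustration.)"""
    neutral = np.mean(neutral_rts)
    facilitation = 100 * (neutral - np.mean(congruent_rts)) / neutral
    interference = 100 * (np.mean(incongruent_rts) - neutral) / neutral
    return facilitation, interference

# Illustrative mean RTs (ms) for a classic-condition subject
fac, intf = stroop_effects([640, 655, 630], [670, 665, 675], [760, 770, 755])
print(f"facilitation: {fac:.1f}%  interference: {intf:.1f}%")
```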

The results showed that RT interference was twice as strong for the classic condition as for the surround condition, suggesting that subjects had a particularly difficult time naming the visible color when it was the ink color in which the word was written. Disjunction analyses showed that the anterior portion of the rostral ACC was specifically sensitive to interference (but not facilitation) whereas the posterior rostral zone of ACC was sensitive to both interference and facilitation. Surprisingly, no regions in frontal or parietal cortex showed differential activation between the classic and surround conditions except for anterior inferior prefrontal cortex, which was more highly activated in the classic than the surround condition (but this was not reported as significant).

The authors conclude that they identified two functionally distinct subregions of ACC: one that is sensitive to response conflict (posterior rostral ACC; prACC) and one that is sensitive to error-related processing (anterior rostral ACC; arACC). They suggest that prACC may be involved in "response selection, facilitating infrequent or novel responses, and inhibiting prepotent responses" whereas arACC may be involved in conflict monitoring proper.

Of course, it is possible to suggest that even prACC is involved in conflict monitoring, but that conflict includes not only response conflict but also conflict related to which aspect of a stimulus to process, or "stimulus evaluation."

In a related review article, Botvinick, Cohen & Carter discuss a variety of interpretations of the function of ACC. They also make the distinction between prACC and arACC in the context of errors: some work shows that prACC responds both during errors and during high-conflict settings, whereas arACC responds preferentially to error trials. Finally, prACC may be the source of the error-related negativity signal detected with ERP in response to errors.

Botvinick et al. conclude their review with a fascinating hypothesis: data from gambling tasks suggest that the conflict monitoring subserved by ACC may be sensitive to rewards and response effort - in other words, performing some sort of cost/benefit analysis. In this view, ACC is an "action-outcome evaluator."

Related Posts (primarily related to ACC):
Imaging Lapses of Attention
Selection and Updating Frequency in the Attentional Blink
Reversing Time: Temporal Illusions
Developmental Change in the Neural Mechanisms of Risk Perception

12/11/2006

Eyes, Window to the Soul - and to Dopamine Levels?

The ancient proverb "the eyes are the window to the soul" may in some ways be validated by cognitive neuroscience. Pupil diameter is gaining currency as an index of mental effort ("cognitive workload") as well as arousal. In the most compelling finding from this literature, pupil diameter has been observed to increase with each successive item maintained in memory, up until each subject's working memory capacity - and then to contract incrementally as each item is reported back to the experimenter. Some recent work suggests that spontaneous eye blink rate - how quickly the eyes blink in normal, everyday situations - may also be an index of prefrontal or executive processes.

Blinking is a behavior that can be triggered voluntarily, reflexively (to protect from foreign objects) or "spontaneously." Spontaneous eye-blink rates can vary substantially between individuals (in adults, children, and nonhuman primates), and can be distinguished from reflexive and voluntary blinks both in terms of their shorter duration and possibly their smaller amplitude. Spontaneous eye blink rate is measured over the course of several minutes (generally ranging from 1 to 5), and analyzed in terms of the mean blink rate or inter-blink interval, the maximum inter-blink interval, and the modal eyeblink frequency. One uninteresting determinant of spontaneous eye blink rate is the relative moisture or "tear film" of the eye.
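For concreteness, here is a minimal sketch of how those summary measures might be computed from a list of blink onset times; the operational definitions (in particular, treating the "modal frequency" as the most common per-minute blink count) are my assumptions rather than details taken from the studies cited below.

```python
import numpy as np

def blink_metrics(blink_times_s, duration_s):
    """Summary statistics for spontaneous eye blinks, given blink onset
    times (in seconds) over a recording of known duration."""
    blink_times_s = np.sort(np.asarray(blink_times_s, float))
    ibis = np.diff(blink_times_s)                 # inter-blink intervals
    rate_per_min = 60.0 * len(blink_times_s) / duration_s
    # modal frequency: most common per-minute blink count across the recording
    minute_counts = np.histogram(blink_times_s,
                                 bins=np.arange(0, duration_s + 60, 60))[0]
    modal_per_min = np.bincount(minute_counts).argmax()
    return {"mean_rate_per_min": rate_per_min,
            "mean_ibi_s": ibis.mean() if len(ibis) else np.nan,
            "max_ibi_s": ibis.max() if len(ibis) else np.nan,
            "modal_blinks_per_min": int(modal_per_min)}

# e.g., a 5-minute recording with 21 blinks at roughly 14 s intervals
times = np.cumsum(np.random.default_rng(3).exponential(14, 21))
print(blink_metrics(times, duration_s=300))
```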

However, a far more interesting determinant of spontaneous eye blink rate appears to be the level of cortical dopamine. If D1 receptor activity is enhanced (with D1 agonists), eye blink rate increases, and if it is blocked, eye blink rate decreases. Likewise, monkeys treated with MPTP (known to cause Parkinsonian-type symptoms; Parkinson's disease is associated with decreased dopaminergic activity) show reductions in eye blink rate, which are remediated by administration of D1 agonists (as are the Parkinsonian symptoms). This relationship is also robust in human populations, where people with schizophrenia show elevated blink rates and those with Parkinson's show the opposite trend. Some research implicates the rostral ventromedial caudate nucleus in the dopaminergic modulation of eye blink rate; however, eye blink rate may be modulated only by D1 and not D2 activity.

One hypothesized function of cortical dopamine is the "gating" of representations into prefrontal cortex. In other words, new items are updated into working memory by phasic increases in dopamine activity, and are then maintained until another phasic increase in dopamine. In this way, dopamine levels are thought to maintain a balance between flexibility (being able to switch attention to new items) and stability (being able to ignore distracting items). This is sometimes called the flexibility/stability dilemma. There is also evidence that this "balance" of dopamine is arrived at through reward conditioning, such that stimuli associated with rewards tend to evoke larger increases in dopamine and thus are more likely to be attended.
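A highly abstracted sketch of this gating idea is given below; it is in the spirit of gating models of prefrontal function rather than any specific published implementation, and the reward values and threshold are invented for illustration.

```python
import random

random.seed(4)

# Learned reward associations (assumed values, for illustration only)
reward_value = {"task_cue": 0.9, "target": 0.8, "distractor": 0.1}

GATE_THRESHOLD = 0.5   # higher threshold -> more stability, less flexibility
working_memory = None

def process(stimulus):
    """Update working memory only if the phasic dopamine burst evoked by
    this stimulus (a noisy function of its reward value) opens the gate."""
    global working_memory
    phasic_da = reward_value[stimulus] + random.gauss(0, 0.1)
    if phasic_da > GATE_THRESHOLD:   # gate opens: update (flexibility)
        working_memory = stimulus
    # otherwise the gate stays closed: maintain (stability)
    return working_memory

for stim in ["task_cue", "distractor", "distractor", "target", "distractor"]:
    print(stim, "->", process(stim))
# Distractors rarely open the gate, so the cue/target is maintained across them.
```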

One study of cognitive flexibility/stability supports the above claims about dopamine function as well as its relationship to eye-blink rate. Subjects with high spontaneous eye blink rates were less affected than those with low blink rates by switching to a task that required attention to a new aspect of the display. However, they were more affected than those with low blink rates by switching to a task that required attention to a previously ignored aspect of the display. In this way, eye-blink rate does appear to be a behaviorally-relevant marker of the relative balance between flexibility and stability in human adults, and by extension of dopaminergic function. (Interestingly, the DRD4 and COMT genetic polymorphisms associated with differences in dopaminergic function were not correlated with these "switch costs," although they did interact with eye blink rate. In this case, blink rate is more sensitive than genetic analysis!)

Similarly, a study of elderly adults with intellectual disabilities shows that blink rate is reduced among those with highly stereotyped behavior (i.e., a lack of flexibility), suggesting that similar mechanisms govern spontaneous blinking across the lifespan. Studies with human children show elevated eye-blink rates among unmedicated children with ADHD, schizophrenia, or epilepsy, as well as among those with autism.

Relatively little work has been done with eye blink rate in infants, but it does not appear sensitive to feeding method, time of day, time since feeding, body weight, changes in heart rate, or body movement. In contrast, it is affected by social interaction and the presence of novel or moving stimuli. Individual variation among infants may be even greater than that among adults, mostly because blink rate among infants is surprisingly low. According to some reports, 2-month-olds blink less than one time per minute, a rate which steadily increases to an average of 14-17 blinks per minute by age 20.

Other studies have shown that blink rate is affected by cognitive workload (speaking, memorizing, and mental arithmetic are associated with blink rate increases, whereas daydreaming and object tracking decrease blink rate) as well as behavioral state (some report a higher blink rate during deception). Finally, blink rate seems stable within individuals across most of the day, only beginning to decrease after 8pm.

Related Posts:
Thinking about Thinking Harder: Pupil Dilation as an Index of Cognitive Workload

References:

Bacher, L. F., & Smotherman, W. P. (2004). Spontaneous eye blinking in human infants: A review. Developmental Psychobiology, 44(2), 95-102.

Dreisbach, G., Muller, J., Goschke, T., Strobel, A., Schulze, K., Lesch, K. P., & Brocke, B. (2005). Dopamine and cognitive control: The influence of spontaneous eyeblink rate and dopamine gene polymorphisms on perseveration and distractibility. Behavioral Neuroscience, 119(2), 483-490.

MacLean, W. E., Jr., Lewis, M. H., Bryson-Brockmann, W. A., Ellis, D. N., Arendt, R. E., & Baumeister, A. A. (1985). Blink rate and stereotyped behavior: Evidence for dopamine involvement? Biological Psychiatry, 20, 1321-1325.

Taylor, J. R., Elsworth, J. D., Lawrence, M. S., Sladek, J. R., Jr., Roth, R. H., & Redmond, D. E., Jr. (1999). Spontaneous blink rates correlate with dopamine levels in the caudate nucleus of MPTP-treated monkeys. Experimental Neurology, 158(1), 214-220.

Wallace, K., Bacher, L. F., Norton, J., Lewis, K., Wynkoop, K., Hubbard, L., & Zielinski, N. (2006, June). Spontaneous eye blinking: Links to temperament and attention. Paper presented at the XVth Biennial International Conference on Infant Studies, Kyoto, Japan.

12/08/2006

Theory of Mind, Working Memory and Inhibition

The ability to understand other minds (aka Theory of Mind, or ToM) seems to develop rather abruptly between the ages of three and four. Some suggest that this reflects a true "conceptual shift" in the mental capacity of children, and point to the fact that several tasks thought to measure ToM show improvement around the same time.

However, other researchers view ToM as resulting from a more domain-general change in cognitive skills - in particular, those related to executive function (EF; e.g., inhibition, updating, working memory, etc). Advocates of this perspective point to variance shared by ToM and a wide variety of EF tasks, which remains even after removing variance due to age and verbal ability. These researchers suggest that ToM is either made possible by improved EF, or that the capacity for ToM is simply easier to express once EF has improved.

Support for the latter idea - that competence on ToM tasks is revealed but not directly caused by improvements in EF - comes from several studies which have found that ToM performance is very sensitive to task demands. For example, performance on false-belief tasks can be manipulated by changing the inhibitory demands of the task (for example, using a "magic pointing stick" in a deceptive pointing task seems to alleviate the inhibitory demands by physically separating children from the item they are acting upon). Others have found that ToM is sensitive to differences in memory demands - for example, having children draw, and later view, a picture of their initial belief improved performance on a task in which they were later asked to recall that earlier (and now inaccurate) belief.

So, is it inhibition or memory (or both) that allows children to demonstrate their competence on Theory of Mind tasks? A 2005 article by Hala, Hug and Henderson explores this question by administering a series of executive function and ToM tasks to three and four year-old children.

In the past, the relationship between working memory and ToM performance has been somewhat unclear, often disappearing altogether after the effects of age and verbal ability (VA) are partialled out. Hala et al. suggest that the few WM measures that still correlate with ToM after removing covariation due to age and VA are characterized by their strong inhibitory demands. For example, backwards but not forwards digit span correlates with ToM after partialling out age and VA, and putatively requires "inhibition" in that one must refrain from performing the more natural forward task. [Although the authors do not mention this, the updating and maintenance requirements of backwards digit span are higher as well, so this does not clearly implicate inhibition alone.]
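For readers unfamiliar with "partialling out," here is a minimal sketch of the logic: residualize both measures on the covariates, then correlate the residuals. The data and effect sizes below are simulated for illustration and are not Hala et al.'s.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200

age_months = rng.uniform(36, 60, n)              # 3- to 5-year-olds
verbal = 0.5 * age_months + rng.normal(0, 5, n)
backward_span = 0.03 * age_months + 0.02 * verbal + rng.normal(0, 1, n)
false_belief = (0.02 * age_months + 0.03 * verbal
                + 0.4 * backward_span + rng.normal(0, 1, n))

def residualize(y, covariates):
    """Return y with the linear contribution of the covariates removed."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

covs = [age_months, verbal]
partial_r = np.corrcoef(residualize(backward_span, covs),
                        residualize(false_belief, covs))[0, 1]
print(f"partial correlation (controlling age & verbal ability): {partial_r:.2f}")
```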

According to Hala et al., the case for inhibition's relationship to ToM performance is more clear-cut, in that tasks with both inhibitory and memory demands are consistently related to ToM, whereas inhibitory tasks alone are not so strongly related. For example, tests of inhibition that involve only delay were not as strongly related to ToM as those that also involved competing or conflicting responses.

In the current investigation, Hala et al. attempted to use a more "pure" measure of memory demands, although they acknowledge that almost all memory tasks may involve some inhibition. Casual readers may wish to skip the methodological details that follow in italics.

Here are the tasks they used:
  • Inhibition tasks: gift delay, in which kids had to not peek at a gift that was being wrapped. Time until peeking (if the kids peeked at all) was the dependent variable.
  • Working memory tasks: the Stroop-control task, in which kids simply had to respond "day" or "night" to two arbitrary visual stimuli, and the six-box scramble task, in which kids had to search for six stickers hidden inside a set of six differently-patterned boxes whose locations were "scrambled" after each search.
  • Combined working memory and inhibition tasks: the day-night Stroop, in which subjects had to respond "day" to a picture of a night-time scene (and vice versa), and Luria's tapping task, in which subjects had to tap twice if the experimenter tapped once (and vice versa).
  • False-Belief tasks (thought to measure ToM)
    • "Uncued tasks"
      • Unexpected location task
      • Deceptive location task
      • Unexpected contents task
    • "Cued tasks"
      • In these tasks, kids were asked to draw a picture of the initial "expected" location or object contents. Kids were also told that this picture would help the protagonist later remember where the object was (which is different wording than typically used in FB tasks). There was a cued version of each of the uncued false-belief tasks listed above.
  • Control measures:
    • PPVT (a test of verbal ability)
The results showed, surprisingly, that performance was not significantly better on the "cued" false-belief tasks than on the "uncued" false belief tasks. If you believe that the cueing procedure eased the memory demands, then this might suggest that memory alone is not important for false-belief performance or ToM.

Interestingly, the false-belief tasks involving locations did not correlate with those involving contents, although each of the location tasks correlated with each other, as did each of the contents tasks. Performance was numerically higher on the content tasks than on the location and deception tasks. This may indicate that content false-belief tasks may pose less memory or inhibitory demands than false-location tasks.

Even after partialling out age and verbal ability, the false-belief contents task was significantly related to the WM EF tasks and to the combined working memory and inhibitory control (WM&IC) EF tasks, whereas the false-belief location tasks were related only to the combined WM&IC tasks. Only the combined WM&IC tasks significantly predicted false-belief performance.

Based on these results, the authors conclude that ToM, as measured by false-belief tasks, is related to a combination of working memory and inhibition, but not to working memory alone. The critical pieces of evidence that support this conclusion are

1) only the tasks involving conflict and memory demands (i.e., day/night Stroop and the tapping task) significantly predicted false-belief performance, and

2) manipulations designed to reduce the WM demands of false-belief tasks did not significantly improve children's performance on these tasks.

However, there are a few reasons to think that this conclusion is premature:

Regarding the first point above, these tasks are extraordinarily process impure: the six-box scramble task is said to measure memory alone, but seems to clearly involve a strong inhibitory component and yet did not correlate with false-belief performance after partialling out the effects of age and verbal ability; likewise, the control Stroop task was very easy for all the kids in the experiment, and is a very limited and unusual test of working memory to begin with.

Regarding the second point above, clearly these manipulations may not have substantially reduced the memory demands of the tasks. Hala et al. themselves note that drawing may have also distracted children from the task at hand. Manipulations involving emphasis of the differences between the expected and unexpected locations or contents may be a stronger test of the hypothesis that false-belief performance is improved by reduction of memory demand. This would explain why false contents tasks are somewhat easier for all age groups.

Related Posts:
What Matters for Theory of Mind?

12/07/2006

Presentation on Clinical Neuropsych and Executive Function

This presentation (PPT; PDF) summarizes the essential findings from three studies on the clinical neuropsychology of executive function. It begins with a preface on the basic differences between cognitive neuroscience and clinical neuropsychology, then delves into the difficulties facing any attempt to use theories of executive dysfunction in clinical neuropsych (this section is based on the Royall et al. paper, covered here).

The second half of the presentation deals with factor analyses of executive dysfunction among patients with traumatic brain injury, and ways in which patients with traumatic brain injury can be assessed and possibly rehabilitated.

12/06/2006

Factor Analyses of Executive Function Impairment Due to Brain Injury

The last few posts on executive dysfunction among clinical neuropsychiatric patients have probably made clear that both frontal damage and behavioral impairments are typically very "messy," with more than one region and more than one function frequently suffering damage. Factor analytic methods hold particular promise for identifying the behavioral impairments of such patients, since these methods can remove "shared method variance" and ultimately identify the true number of underlying impairments.

In this study, Busch, McBride, Curtiss & Vanderploeg used a related technique - principal components analysis - on behavioral measures of executive function from 102 adults with traumatic brain injury.

These adults were assessed 453 days (on average) after sustaining non-penetrating head injuries resulting in traumatic brain injury. A laundry list of tasks was used, including the Wisconsin Card Sort (WCST), Stroop, backwards digit span, semantic fluency, and trail making tests, among others. The results of each subject on each test were entered into a principal components analysis, which derived three factors that together account for 52.7% of the variance of the original measurements.
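For readers unfamiliar with the technique, here is a minimal sketch of this kind of analysis using scikit-learn; the scores below are random placeholders, and a rotation step (often applied to make loadings more interpretable) is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
measures = ["WCST_perseveration", "Stroop", "backward_digit_span",
            "semantic_fluency", "trail_making"]

# Placeholder scores: 102 patients x 5 test scores (random, for illustration)
scores = rng.normal(size=(102, len(measures)))

z = StandardScaler().fit_transform(scores)   # standardize each measure
pca = PCA(n_components=3).fit(z)

print("variance explained by 3 components:",
      round(pca.explained_variance_ratio_.sum() * 100, 1), "%")
for i, comp in enumerate(pca.components_, start=1):
    # loadings indicate which tests define each component
    print(f"component {i}:", dict(zip(measures, comp.round(2))))
```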

The strongest factor received the highest loadings from semantic fluency, WCST perseveration and trail making tasks, among others. The authors intuit that this factor involves both self-generation of behavior and set-shifting. The second strongest factor received high loadings from Stroop, backwards digit span, and WCST set failures. The authors suggest this factor involves the ability to sustain attention under conditions of interference. Finally, the third factor seemed to correspond to inhibition failure, as measured by incorrect designs on the Visual Spatial Learning test and the California Verbal Learning Test.

The authors interpret these various impairments in terms of a three factor model consisting of drive or motivation (which they relate to dmPFC), sequencing (which they relate to premotor and supplementary motor cortex) and control (which they relate to dlPFC). Control is thought to contain at least two components involving social/emotional control and cognitive control.

Busch et al. conclude that the first factor (self-generation of behavior and set-shifting) is related to drive, that the second factor (sustained attention despite interference) is related to Baddeley's central executive, and that the third factor (inhibition) is related to control.

The authors note several reasons for caution in making such conclusions. First, because primarily clinical measures of EF were used, none of these measures could be truly process-pure. Second, because the entire sample was brain damaged, these results may not generalize to other populations.

12/05/2006

Clinical Neuropsychology and Executive Function

Some ridicule the psychological construct of executive function (EF) for its relationship to cognitive science's fabled "problem of the homunculus." EF is commonly considered the set of cognitive processes required to coordinate and direct behavior in a goal-directed way under conditions involving interference or otherwise requiring precise control of responses. The "problem of the homunculus" is that these EF processes may appear to require their own coordination - i.e., another set of executive functions that guide the previous set of executive functions. In other words, to explain the highest workings of the mind, it appears necessary to posit another mind - a homunculus, or "little man" - that then guides the first mind, and so on, into a recursive loop of nested homunculi, somewhat akin to a set of Russian nesting dolls.

Some have argued that this recursive homunculus problem is more apparent than real, and that many so-called executive functions can be explained by recourse to the relatively humble and low-level processes of reward conditioning. Computational models of recurrent connectivity between the prefrontal cortex and subcortical structures involved in reward learning provide tentative support for this hypothesis.

However, a very different and far more empirical way of tackling the problem involves the neuropsychological examination of brain-damaged patients with executive function impairments.

A special report from the American Neuropsychiatric Association in the Journal of Neuropsychiatry and Clinical Neuroscience takes exactly this approach by analyzing "factor analyses of putative executive measures, community-based epidemiological studies [...] and placebo-controlled clinical trials with executive outcome measures."

The authors identify two themes of research into EF: the first associates EF with cognitive functions like will, abstraction and judgment, whereas the second (what they term the "cybernetic" view) associates EF with the "piloting," regulation, or control of other cognitive operations. Other research has related EF impairments to synaptic density within and/or damage to the frontal lobe, the basal ganglia, or the circuits connecting them.

Behavioral assessments of EF impairment involve an enormous variety of tasks: the Stroop task, the trail making task, the conceptualization task of the dementia rating scale, the Wisconsin Card Sort, the Executive Interview (EXIT25), the executive clock drawing task (CLOX), the frontal assessment battery (FAB), subtests of the Neuropsychiatric Inventory (NPI), the Behavioral Assessment of the Dysexecutive Syndrome (BADS), and the Frontal Lobe Personality Scale (FLOPS), among others. The authors note that EF impairments have been observed in "almost every major neuropsychiatric disorder," perhaps as a result of the huge variety of ways in which EF is evaluated, and in some cases the EF impairments seem more related to external aspects of the patients' environment (e.g., level of care) than to the degree of positive psychological symptoms.

The authors review several reasons that EF is most commonly thought to reside in the prefrontal cortex (PFC): PFC is the most highly interconnected brain region; one PFC region of particular interest (Brodmann's area 46, roughly equivalent to dorsolateral prefrontal cortex) is particularly rich in inhibitory interneurons (involved in schizophrenia and neurolepsy); PFC receives input only from regions that themselves process information in a variety of sensori-motor domains; neural activity in PFC is modulated by the motivational value of stimuli; and impairments in EF tasks can be produced by lesioning the PFC or the thalamic/basal ganglia regions that project to it. The authors conclude that "the frontal lobe is the only cortical region capable of integrating motivational, mnemonic, emotional, somatosensory, and external sensory information into unified, goal-directed action."

The authors identify three subregions of PFC that are particularly important for EF:
  1. dorsolateral prefrontal cortex (DLPFC, BA's 8-12, 46 & 47): involved in "hypothesis generation, behavioral control," "goal selection, planning, sequencing, response set formation, set shifting, verbal and spatial working memory, self-monitoring, and self-awareness," and consistently activated by the WCST (though comprehension and visual search skills need to be controlled first)
  2. orbitofrontal cortex (aka vmPFC, BA's 10-15 & 47): involved in "initiation of social and internally driven behaviors and the inhibition of inappropriate behavioral responses," risk assessment, and reward prediction. Lesions to this region produce impairments on go/no-go tasks, "insight, judgment and impulse control" as well as "environmental dependency and utilization behavior [in which subjects impulsively, spontaneously, and uncontrollably use objects they find in familiar ways, without regard to the contextual appropriateness of such behavior]."
  3. anterior cingulate (ACC, BA medial 9-13, 24 & 32): involved in conflict monitoring, "monitoring behavior and error correction," and ACC integrity is well indexed by the EXIT25 assessment. Activation of ACC is reliably induced by the Stroop task, although variance in ACC activity does not completely explain Stroop performance.

Hopefully the above has given you an idea of how complex this "region" (or set of regions) truly is. After reading this, it is perhaps not surprising that neuropsychology and neuropsychiatry have not been able to establish a "gold standard" for EF assessment. The authors suggest this difficulty can be traced to four unresolved problems in current understanding of the neural bases of executive function. They discuss each of these at length; I will summarize each in turn.

Problem #1: Frontal Lobe vs Frontal System

The authors suggest that clinical assessment of EF impairment has been so confused partly because there is no established list of what the primary executive functions actually are, and that this is compounded by difficulty in establishing anatomical boundaries between prefrontal regions (which, as you may have noticed above, do not cleanly fall within the cytoarchitectonic distinctions laid out by Brodmann). Furthermore, PFC lesions are frequently quite messy, involving damage to multiple prefrontal as well as subcortical regions, and follow-up studies of these patients are typically both short and rather superficial.

Problem #2: Structure vs Function

Related to the above problem, damage to various remote regions of cortex can indirectly affect prefrontal processing. EF is highly sensitive to frontal metabolism; for example, increases in PFC metabolism have been related to obsessive-compulsive disorder, whereas decreases in PFC metabolism have been related to Parkinson's disease, major depression, and schizophrenia.

Problem #3: Control vs. Process

The authors suggest that questions about EF impairment often take the form of "how" or "whether" a patient can do something, whereas non-executive cognitive tasks typically investigate "how much" or "how well" a patient can do something. The distinction between these is the integrity of a process (measured by "how well" the patient can copy a drawing, for example) vs. the integrity of control ("whether" a patient can copy a drawing from memory).

Problem #4: Are EF's Unitary or Multiple?

Although terms like "central executive" seem to imply that there is but one executive function, the frontal lobes do seem to follow a few spatially-organized principles, which the authors describe as follows: "left–verbal/right–nonverbal, anterior–cognitive/posterior–motor, ventral–perception/dorsal–action, and medial–internal focus/lateral–external focus." (The authors later note that a different dorsal/ventral distinction seems to apply within medial cortex, such that dorsal areas are more purely cognitive whereas ventral areas are related to more emotional processing). As the authors note, however, this spatial organization could simply reflect PFC performing the same executive function on a variety of different inputs.

On the other hand, the authors note that several studies find multiple dimensions of EF, falling out roughly as follows: rule discovery as tapped by the WCST (with lesion studies suggesting dlPFC involvement), working memory as tapped by digit span and the Tower of London (also tied to dlPFC), attentional control as measured by continuous performance tasks (with lesion studies suggesting mesiofrontal involvement), and response inhibition as measured by Stroop (lesion studies suggest orbitofrontal involvement). Thankfully, the authors admit that no task is process-pure, and that factor analytic studies suggest the involvement of multiple EF functions in each of these tasks.
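As a toy illustration of how such dimensions "fall out" of a task battery, here is a hedged sketch of an exploratory factor analysis in Python with scikit-learn. The task names, the simulated data, and the choice of four varimax-rotated factors are my own assumptions for illustration, not the methods of the studies the authors review.

```python
# Toy sketch: exploratory factor analysis over a (subjects x tasks) battery.
# All names and data are hypothetical placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

tasks = ["WCST", "digit_span_bwd", "tower_of_london", "CPT", "stroop_interf"]
rng = np.random.default_rng(1)
battery = rng.normal(size=(200, len(tasks)))  # placeholder for real scores

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(StandardScaler().fit_transform(battery))

# each row of fa.components_ is a latent factor; a task with sizable loadings
# on more than one row illustrates the sense in which no task is process-pure
print(np.round(fa.components_, 2))
```

In real data, a task like the WCST would typically show non-trivial loadings on more than one factor, which is exactly the point the authors make about the lack of process purity.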

However, if more process-pure measurements could be developed, the field of neuropsychological assessment could clearly benefit. Disability and outcome predictions could be made more reliable across a wide range of disorders; treatment options could be better specified for various disorders; and an enormous number of people would be affected. This latter point is made clear by two recent studies - each involving more than 1,000 subjects - that showed over 25% of elderly adults manifest EF impairment. Aging is associated with decreased adrenergic and dopaminergic activity in PFC, which is in turn correlated with decreased frontal metabolism as well as decreased performance on Stroop and WCST.

Of course, there are even clearer markers of EF impairment in populations already diagnosed with specific disorders. For example, major depression is associated with reduced frontal metabolism and with decreases in behavioral indices of EF, both of which increase after the alleviation of symptoms.

In the case of schizophrenia, vlPFC and vmPFC regions diminish in gray- and white-matter volume, and the former is correlated with negative symptoms like apathy, anhedonia, loss of motivation, etc. dlPFC also shows reduced regional blood flow and metabolic activity. Astonishingly, some estimate that the percentage of schizophrenics with executive impairments is the same as the percentage of ostensibly "healthy" elderly adults, and that the severity is similar as well. Note that this similarity remains after controlling for level of elderly care (e.g., use of assistive devices such as prosthetics), which is also strongly correlated with elderly EF performance.

In structural brain diseases such as Alzheimer's, frontal lobe pathology is a better predictor of dementia than atrophy in other brain regions, and correlates strongly with behavioral EF impairment. Vascular dementia shows a similar pattern. Diabetes mellitus also shows impairments on behavioral indices of EF.

There are pharmacological implications as well. For example, the use of atypical antipsychotics (i.e., not haloperidol) in schizophrenics has demonstrated that some agents seem to selectively improve executive function. Risperidone increases EF performance on the trail making test among schizophrenics, relative to haloperidol. Improvements in EF may ultimately be more important to the functional outcomes of schizophrenia treatment than the reduction of schizophrenia's positive symptoms.

Based on this rather messy picture of the role of EF in clinical and elderly patients, the authors conclude that more research is needed into the components of EF, perhaps by associating latent EF factors with distributed neural networks identified through fMRI. Additionally, measures of EF should be included in routine behavioral assessment, in pharmacological trials, and in studies of genetic and environmental contributions to behavior.

12/04/2006

Review: Mind Wars: Brain Research and National Defense

Basic science has always had military applications, but only relatively recently has the defense industry actively funded and solicited scientists to optimize war. In "Mind Wars," Jonathan Moreno analyzes the military's intense interest in modern neuroscience from historical, scientific, and ethical perspectives.

A famous historical example of military funding of basic science is the British intelligence services' employment of thousands of mathematicians - including artificial intelligence pioneer Alan Turing - to decipher the Enigma encryption system during World War II. Both the simultaneous development of the ENIAC computer and the role of Vannevar Bush (a computing pioneer) as Roosevelt's science advisor helped to solidify the defense industry's interest in advanced mathematics and computer science.

Far less famous is the military's long-standing interest in the behavioral sciences, which Jonathan Moreno carefully traces back to its roots in the psychological analysis of American soldiers in the 1950s, aimed at improving training and recruiting techniques. Moreno estimates that the military - including KUBARK, a codename for the CIA - was the real source of nearly all federal funding for the behavioral sciences in the 1950s. More than a third of American research psychologists were funded through such channels (frequently without their knowledge). This startling conclusion is corroborated by the involvement of several 1950s psychologists in the development of interrogation techniques (involving psychological torture and humiliation), as well as by contemporary psychology's involvement in the Abu Ghraib scandal (and the American Psychological Association's refusal to criticize such practices).

After this historical introduction, "Mind Wars" turns its focus to the potential military applications of neuroscience - a field that represents the convergence of medical, computer and behavioral science, into each of which the military has poured enormous sums for decades. Moreno covers several existing programs, including the Defense Advanced Research Projects Agency's (DARPA) Augmented Cognition (AugCog) and Preventing Sleep Deprivation (PSD) programs, involving the use of "smart drugs" like modafinil and CX717, as well as the development of nonlethal weapons such as hypersonic "high intensity directed acoustics" or microwave-radiating "active denial systems." Moreno also cautiously discusses some of the military's future directions, such as "rapid onset brain-targeted bioweapons," with a careful eye towards what is technically feasible and what is merely hype.

In what is probably the best part of "Mind Wars" (and unexpectedly so, at least for me), Moreno discusses the ethical implications of neuroscience's involvement with the military. Moreno admits that he is no "loose cannon" - indeed, he has given invited testimony to Congress, has served on two presidential ethics commissions, and is an advisor to the Department of Homeland Security. Nonetheless, his analysis is remarkably even-handed, bringing up topics like the philosophy of "dual use" for military science, the history of the practice of informed consent (which actually began in the military decades before it was used in academia), and the privacy implications of new neurotechnology.

The book itself is written in a highly conversational tone, filled with interesting and relevant personal anecdotes (of which Moreno has many; his father was a psychiatrist involved in the military testing of LSD). Moreno's sources are well cited, where possible: many of his government contacts declined to be identified by name.

"Mind Wars" will likely be enjoyed by both neuroscientists, psychologists, and lay people alike, although experts are likely to be familiar with most of the existing technologies and programs that Moreno reviews. On the other hand, the historical and ethical treatment of military neuroscience are the most timeless contributions of "Mind Wars" to this debate, and will be interesting to anyone with an interest in science and its applications.

Related Posts:
Jonathan Moreno on "Mind Wars" (Podcast; thanks Neurophilosopher!)
Review: Rhythms of the Brain
Review: I of the Vortex
Review: Darwin Among the Machines
Review: Everything Bad Is Good For You
Review: The Future of the Brain
Review: The Three Pound Enigma

12/01/2006

Review: Rhythms of the Brain

A good popular science book will provide laypeople with an exciting perspective on the state of the art in a particular field. But this comes at a price: typically such books are written from just a single theoretical perspective, glossing over or altogether ignoring details that might be considered controversial within the academic community. To understand these deeper issues, an interested layperson would have to trudge through academic textbooks, or for the most cutting-edge topics, delve into the often impenetrable peer-reviewed literature.

And then there are the absolute best popular science books. György Buzsáki's "Rhythms of the Brain" is of this latter variety. Not only does it provide a wide-ranging and readable introduction to neural oscillators, but every crucial argument is carefully footnoted with deeper explanations, some qualifications, and suggestions for additional reading.

"Rhythms of the Brain" begins with the premise that "structure defines function," and then outlines how the architectural principles of neural networks can give rise to neural oscillations. In the process, he meticulously covers topics like the complex, small-world, scale-free connectivity of cortex without resorting to complicated equations - the concepts are carefully grounded in real-world analogies and lay terms.

Buzsáki introduces several other topics that are usually found only in mathematically sophisticated academic works on the brain: for example, how "neural noise" can actually enhance processing through stochastic resonance and the 1/f or "pink noise" signature of EEG, mechanisms of "phase precession" and "phase reset" within nested oscillations, and the difference between relaxation and harmonic oscillators.

It is perhaps not surprising that Buzsáki is the author of such a book - holding both an MD and a Neuroscience PhD, Buzsáki has published over 185 peer-reviewed papers, 10 book chapters, and 2 edited volumes over the last 35 years. His lab at Rutgers consists of a veritable army of researchers, including 8 post-docs and 4 grad students.

After reading "Rhythms of the Brain," it's easy to understand why there's so much demand for working in this laboratory. There's potentially an entirely new field of neuroscience lurking in here: Buzsáki discusses distinct oscillations with frequencies spanning 4 orders of magnitude, from the ultra-slow ("slow 4": .02 Hz) to the ultra-fast ("high gamma": 600 Hz) and everything in between.

Although this book is probably not suitable for entry-level laypeople (a good popular science introduction to the brain and its rhythms is "I of the Vortex"), it is virtually guaranteed to please everyone with some previous neuroscience experience, literary or empirical. Beware also that "Rhythms of the Brain" is quite dense (with the copious footnotes constituting almost an entire second volume!) and is therefore more likely to be enjoyed with caffeine than as a relaxing bedside book.

Some may criticize "Rhythms of the Brain" for failing to offer a comprehensive "big picture" summary of how each of these oscillations contributes to cognition (although hints are there, to be sure). For me, this is actually a strength of the book; half-informed conjecture and hasty extrapolation ruin far too many popular "science" books on the brain, and they become prematurely outdated. Besides, such speculation is far more fun to do as a reader - and for this Buzsáki has provided fertile ground.