Under The Rug: Executive Functioning
This field gets its name from Baddeley's proposal of a "central executive" subsystem in working memory, which for years seemed like nothing more than a placeholder. It was a convenient spot in which to hide those things that we couldn't accurately measure or didn't fully understand (such as attention, or visual binding). But recent work by Miyake, Friedman, Emerson, Witzki, and Howerter has begun to tease apart the component features of "executive functioning" and give us a much better idea of what functions may subserve intelligent behavior.
But how does one test intelligent behavior? The authors picked a battery of tasks, each commonly believed to load one of three executive subfunctions: shifting, updating, and inhibition. "Shifting" is the switching of attention back and forth between multiple responses, either in a dual-task paradigm or in a task requiring different responses under different conditions. The "updating" subfunction refers to monitoring incoming information and coding it for relevance, then updating working memory representations with the more relevant information. Finally, the "inhibition" subfunction refers to the deliberate suppression of dominant responses.
Many executive function tasks are plagued by "task impurity" problems: they show low test-retest or within-subject reliability, reflecting both the fact that executive functions rely on non-executive cognitive abilities (they are, after all, "coordinators") and the possibility that subjects' use of multiple strategies confounds the results. To mitigate these problems, the authors ensured that no participant had encountered any of the tasks before (performance is sensitive to repeat encounters), and they adopted a statistical approach known as latent variable analysis, or structural equation modeling. This approach allows one to test whether a small number of hidden variables (in this case, updating, shifting, and inhibition) can account for the variation seen across a number of manifest variables. The correlations between these latent variables allow one to assess whether a three-factor model fits the data significantly better than a model involving only a single latent variable (some unitary "executive function") or one involving just two of the three proposed subfunctions. Further analyses allow one to determine whether the proposed subfunctions have sufficiently distinct explanatory power to be considered truly different constructs.
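To make the model-comparison logic concrete, here is a minimal Python sketch of the nested chi-square difference test that underlies such comparisons. The fit statistics below are hypothetical illustrations, not values from the paper:

```python
# Critical values of the chi-square distribution at alpha = .05
CHI2_CRIT_05 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070}

def chi2_difference_test(chi2_full, df_full, chi2_reduced, df_reduced):
    """Compare two nested latent-variable models.

    The reduced model (e.g., a single 'unitary' executive factor) has
    fewer free parameters, hence MORE degrees of freedom, than the full
    three-factor model. A significant chi-square difference means that
    collapsing the factors significantly worsens model fit.
    Returns (delta_chi2, delta_df, reject_reduced_model).
    """
    delta_chi2 = chi2_reduced - chi2_full
    delta_df = df_reduced - df_full
    reject = delta_chi2 > CHI2_CRIT_05[delta_df]
    return delta_chi2, delta_df, reject

# Hypothetical fits: three-factor model vs. one-factor model
d_chi2, d_df, reject = chi2_difference_test(
    chi2_full=20.3, df_full=24,        # three-factor model
    chi2_reduced=50.1, df_reduced=27,  # one-factor model
)
print(d_chi2, d_df, reject)
```

With these made-up numbers, the one-factor model fits significantly worse, which is the pattern of result the authors report for their real data.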
Before delving into the results, it's important to review what each subfunction task measured. Shifting was measured by three tasks:
- The plus-minus task: participants are given three lists of numbers and asked to add 3 to each number on the first list, subtract 3 from each number on the second list, and alternate between adding and subtracting 3 on the third list. The difference in mean completion time between the alternating list and the single-operation lists yields an index of the "switch cost" associated with shifting.
- The number-letter task: participants respond one way to a number-letter pair presented in the top two quadrants of a computer display, and the opposite way if the pair appears in the bottom two quadrants. In the first two blocks of trials, the pairs are presented entirely in the top or bottom half of the display; in the final block, the pairs alternate between halves. The difference between mean reaction time in the third block and the mean of the first two blocks yields a measure of switch cost.
- The local-global task: participants respond on the basis of either a global shape on the computer screen, or the tiny shapes that are organized to create that larger shape, somewhat like ASCII art. The switch cost here is the difference in RT between trials requiring a shift in response set and trials that repeat the previous set.
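As a concrete illustration, the number-letter switch cost can be computed from hypothetical reaction times like so (the RT values are invented for the example):

```python
from statistics import mean

# Hypothetical reaction times (ms) from the number-letter task:
# blocks 1 and 2 are "pure" (no switching required); block 3 alternates
# between the top and bottom halves of the display.
block1_rts = [620, 640, 610, 655]
block2_rts = [600, 630, 615, 645]
block3_rts = [890, 910, 870, 905]

# Switch cost = mean RT in the mixed block minus the mean of the pure blocks.
pure_rt = mean([mean(block1_rts), mean(block2_rts)])
switch_cost = mean(block3_rts) - pure_rt
print(switch_cost)
```

Larger switch costs indicate greater difficulty disengaging from one response set and engaging another, which is what the shifting factor is meant to capture.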
The three updating tasks were keep-track, tone-monitoring, and letter-memory. In the keep-track task, participants were presented with 15 words, for 1.5 seconds apiece, and had to remember the last word presented in each of six pre-determined categories; the proportion of correct responses was the dependent variable (DV). In tone-monitoring, participants were presented with a series of 25 tones, randomized as high, medium, or low, and their job was to respond on the fourth presentation of each tone type, again with proportion of correct responses as the DV. In the letter-memory task, subjects were required to rehearse letters out loud as they were presented, and then to recall the last 4 letters in a given list, with the proportion correctly recalled as the DV.
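The tone-monitoring scoring can be sketched as follows. This assumes (as one plausible reading of the task, not a detail confirmed by the paper) that the running count for a tone type resets after each required response:

```python
from collections import Counter

def target_positions(tones, nth=4):
    """Return 0-based positions at which a response is required:
    each time a tone type reaches its nth occurrence. The count for
    that type then resets (an assumption made for this sketch)."""
    counts = Counter()
    targets = []
    for i, tone in enumerate(tones):
        counts[tone] += 1
        if counts[tone] == nth:
            targets.append(i)
            counts[tone] = 0  # start counting that tone type again
    return targets

def proportion_correct(responses, tones, nth=4):
    """Proportion of required responses the participant actually made."""
    targets = set(target_positions(tones, nth))
    hits = sum(1 for r in responses if r in targets)
    return hits / len(targets) if targets else 0.0
```

The task loads updating because the participant must continuously maintain and revise three separate running counts, one per tone type.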
Finally, inhibition was measured with the antisaccade task (subjects must inhibit a saccade in the direction of a visual cue in order to successfully detect a briefly presented target), the stop-signal task (subjects had to categorize stimuli, except when a brief tone signaled that the response should be withheld), and the Stroop task (naming the ink color of color words while ignoring the words themselves).
The authors also administered five complex executive tasks, in order to assess how well each of their postulated subfunctions could account for the "messier" results provided by these more traditional measures of executive function: the Wisconsin Card Sorting Test (WCST), Tower of Hanoi, random number generation, operation span, and a dual task. In the WCST, participants must match their cards to a series of reference cards along some dimension (e.g., color, shape, or number); the DV is the number of perseverative errors, in which subjects mistakenly sort by a dimension that is no longer relevant. In Tower of Hanoi, participants must move a series of disks from one peg to a third peg so that the pile looks identical at the end of the task as it did at the beginning; however, subjects are only allowed to move one disk at a time and can never place a larger disk on top of a smaller one. The total number of moves is the DV. Random number generation was measured with a variety of randomness indices. In operation span, participants must read aloud arithmetic equations, each followed by a briefly presented word; after a certain number of equations (2-5), the participants must recall all the previously presented words. Finally, in the dual-task paradigm, participants had to complete as many paper-and-pencil mazes as possible while performing a word generation task out loud.
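Tower of Hanoi has a well-known recursive solution requiring 2^n - 1 moves for n disks, which provides the baseline against which a participant's total move count is compared. A minimal sketch:

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Solve Tower of Hanoi recursively; returns the list of moves as
    (from_peg, to_peg) pairs. The minimum solution length is 2**n - 1."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack on top of it
    return moves

print(len(hanoi(3)))  # minimum solution for 3 disks is 7 moves
```

A participant's DV (total moves) minus this optimum gives a rough index of how much inefficient, unplanned moving occurred.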
In what can only be called a mammoth of a study, 137 subjects were tested, with only two excluded as outliers. The results of the study are reported below:
- Each of the three posited executive functions was a separable, distinct construct, as confirmed by factor analysis: a three-factor model fit the data significantly better than a single-factor model; further, predictions based on the three-factor model did not significantly deviate from the observed data, whereas the single-factor and all two-factor models were significantly worse; finally, none of the three factors was perfectly correlated with any other, reflecting their separability (but some overlap, as would be expected of executive functions that coordinate other functions)
- After determining factor loadings for the basic executive tasks, the factor loadings for the 5 complex executive tasks (WCST, Tower of Hanoi, Random Number Generation, Operation Span and Dual Task) did not significantly differ, suggesting that the empirically derived factor structure was highly reliable even for more complex tasks that involved multiple subfunctions
- WCST loaded shifting functions, not updating, and the contribution from inhibition was non-significant
- Tower of Hanoi was best modeled with a single path from inhibition; this model fit better than the no-path, the other one-path, and the three-factor models
- Two components of the random number generation task, derived from the analysis of specific randomness indices and identified in previous literature as "prepotent associates" and "equality of response usage," tapped inhibition and updating functions, respectively. This result is consistent with research using transcranial magnetic stimulation of the dorsolateral prefrontal cortex, which shows dissociable capacities for these two components ("prepotent associates"/inhibition, and "equality of response usage"/updating)
- Operation span was found to load updating; all other models fit significantly worse
- Finally, dual-task performance was not significantly related to any of the three postulated functions, possibly suggesting that it taps an executive function independent of the three postulated here (though conclusions from null results must be drawn with caution)
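For a flavor of what the random number generation indices measure, here are simplified (not the paper's exact) versions of the two components: an entropy-based "equality of response usage" score and a counting-adjacency "prepotent associates" score:

```python
import math
from collections import Counter

def redundancy(seq, n_alternatives=10):
    """'Equality of response usage': 0 when all digits are used equally
    often (maximal entropy), approaching 1 as usage becomes lopsided.
    A simplified stand-in for the indices used in the literature."""
    counts = Counter(seq)
    total = len(seq)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 1 - h / math.log2(n_alternatives)

def adjacency(seq):
    """'Prepotent associates': proportion of successive pairs that are
    counting steps (+1 or -1) -- the habitual, overlearned response
    that must be inhibited to produce a random-seeming sequence."""
    pairs = list(zip(seq, seq[1:]))
    return sum(1 for a, b in pairs if abs(a - b) == 1) / len(pairs)
```

Keeping response usage equal plausibly demands updating (tracking what has already been said), while avoiding counting runs demands inhibition, matching the dissociation the authors report.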
In summary, the authors remark that their results show both the unity and diversity of executive functions: while tasks can be created that load each individually, intelligent behavior on more complex tasks (and indeed in day-to-day functioning) is likely the result of complex interactions among these subfunctions. Still, several questions remain, such as how each of these constructs may map onto neuroanatomy, whether other factor structures explain the data even better, and how these constructs might relate to more traditional measures of intelligence (e.g., Gc, Gf). Thankfully, there is some preliminary evidence for the answers to two of these questions; stay tuned for reviews of some relevant evidence in this week's upcoming posts.
Related Posts:
Task Switching in Prefrontal Cortex
Active Maintenance and the Visual Refresh Rate
The Tyranny of Inhibition
Selection Efficiency and Inhibition
4 Comments:
Interesting stuff.
I'm impressed you manage to write such a long post about the "central executive" without once using the word 'homunculus'...
lol - so true.
hey tim - got any ideas about how to selectively disrupt binding? Apparently Baddeley et al. have an in-press article at JEP: General showing that a couple of different types of WM load don't impact it. A friend thinks some kind of MOT paradigm might work... But I'm probably not making sense...
I'll look up the article later; I can't think of any way to disrupt just binding off the top of my head, but I'll think on it.
Hi Ellen - I think you're exactly right. Imaging techniques are probably the most well known reason that we're getting better at measuring the "hidden phenomena," but I think that computational modeling is playing a big role too. Neural network models let us understand how the brain works at a mechanistic rather than descriptive or correlational level. Once you have that kind of understanding it becomes a lot easier to "look in the right places," even in the context of vanilla, traditional behavioral tasks.