A New Mode of Perception?
In Rensink's change-blindness flicker paradigm, subjects view two slightly different versions of a visual scene that alternate back and forth, separated by a brief (80 ms) blank screen at each switch. Forty subjects completed 48 trials each, 42 of which actually contained a change from one version to the other; the remaining six trials, in which two identical pictures simply alternated, served as "catch trials."
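For concreteness, here is a minimal Python sketch of that trial structure. Only the 80 ms blank and the 42/6 trial split come from the description above; the 240 ms display duration and all names here are my own assumptions, not details from Rensink's paper.

```python
import random

# Assumed display duration; only the 80 ms blank is given in the text above.
DISPLAY_MS, BLANK_MS = 240, 80

def make_trials(n_change=42, n_catch=6, seed=0):
    """Return a randomized list of trial types (42 change + 6 catch trials)."""
    trials = ["change"] * n_change + ["catch"] * n_catch
    random.Random(seed).shuffle(trials)
    return trials

def flicker_sequence(trial_type, n_cycles=3):
    """Yield (frame, duration_ms) events: A, blank, A', blank, A, blank, ...
    On catch trials the 'modified' frame is identical to the original."""
    modified = "A (modified)" if trial_type == "change" else "A (identical)"
    for _ in range(n_cycles):
        yield ("A", DISPLAY_MS)
        yield ("blank", BLANK_MS)
        yield (modified, DISPLAY_MS)
        yield ("blank", BLANK_MS)

if __name__ == "__main__":
    trials = make_trials()
    print(trials.count("change"), "change trials,", trials.count("catch"), "catch trials")
    for frame, duration in flicker_sequence(trials[0]):
        print(f"{frame:>14s}  {duration:3d} ms")
```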
Subjects gave one response when they first "sensed" a change, and a second response when they were sure enough of the change to verbally identify the changing object and its location. Trials were separated into two types: alpha trials, in which the first response occurred within 1 second of the second response and hence there was effectively no "sensing"; and beta trials, in which the "sensing" response occurred more than 1 second before the "seeing" response. Subjects were then divided into three groups: the only-see group had a very low proportion of beta trials, while the remaining subjects were split between the can-sense group (if they performed above 50% on the catch trials) and the guess group.
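A rough sketch of that classification scheme, in Python. The 1-second and 50% cutoffs are from the description above; the minimum beta-trial proportion and all function names are my own assumptions and may not match the paper's exact criteria.

```python
def classify_trial(sense_rt, see_rt):
    """Alpha: 'sense' and 'see' responses within 1 s of each other (no real sensing).
    Beta: the 'sense' response came more than 1 s before the 'see' response."""
    return "beta" if (see_rt - sense_rt) > 1.0 else "alpha"

def classify_observer(trial_labels, catch_accuracy, min_beta_proportion=0.2):
    """Assign an observer to the only-see / can-sense / guess group.
    min_beta_proportion is an assumed cutoff, not the paper's exact value."""
    beta_proportion = trial_labels.count("beta") / len(trial_labels)
    if beta_proportion < min_beta_proportion:
        return "only-see"
    return "can-sense" if catch_accuracy > 0.5 else "guess"

# Example: an observer who senses early on most trials and does well on catch trials
labels = [classify_trial(s, v) for s, v in [(2.0, 5.5), (3.1, 3.4), (1.0, 4.2)]]
print(labels, classify_observer(labels, catch_accuracy=0.83))  # -> can-sense
```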
The results showed that the can-sense group was able to "sense" a change more than 2 seconds before they were able to identify it, and given the hit rate for "sensing," it's clear that these responses were not merely the result of a guessing process. Further analyses suggest that "sensing" responses are not simply the result of a lower change-detection threshold; rather, the pattern of results more strongly implicates a distinct mechanism of visual perception. A second experiment was conducted to rule out the possible effects of transients in the display. No one knows the specific mechanism by which this non-specific "sensing" might occur, although one possibility proposed in Rensink's paper is the disturbance of some non-attentional, global representation of scene layout.
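The anti-guessing argument is essentially a signal-detection one: if "sensing" reports were pure guesses, the hit rate on change trials and the false-alarm rate on catch trials would be about equal. A tiny sketch of that logic, with made-up numbers rather than Rensink's actual data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A guessing observer (hit rate ~= false-alarm rate) yields d' near 0."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative values only: many 'sensing' hits, few false alarms on catch trials
print(round(d_prime(0.80, 0.10), 2))  # well above 0 -> unlikely to be pure guessing
```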
Nearly 30% of participants were able to sense changes without having actually seen them, a capacity Rensink calls "mindsight." In contrast to the phenomenon of blindsight (in which people with damage to V1 declare themselves totally blind, but can nonetheless perform well above chance in identifying visual information), mindsight involves conscious awareness of information but no visual experience. Rensink concludes by conjecturing that mindsight may underlie the commonly held belief in a "sixth sense," and while there's no need to posit an "extrasensory" modality, it's likely that similar phenomena would occur for the other senses as well.
EDIT: be sure to see the next article in this series, "Mindsight Reconsidered," for a different perspective on these data, offered by Dan Simons et al.
4 Comments:
Rene Marois's paper on the neural fate of ignored stimuli (in the attentional blink) is pretty interesting with regard to this phenomenon of not consciously perceiving a stimulus but nonetheless getting information from it.
The case of blindsight demonstrates that there is a difference between attending to something and being consciously aware of it. The case of mindsight shows that different processes operate at different speeds. Simply being aware of a change in the scene occurs more quickly than being able to formulate a linguistic representation of the change, the former being a prerequisite for the latter.
In the case of optical illusions or other experiences it may take some considerable time after the initial perception for someone to be able to construct a description of what they experienced.
I am indeed surprised that there were no controls to rule out the possibility you raise, that we're seeing an action/perception dissociation. In other words, would participants be able to point to the quadrant in which the change occurred, if pressed?
This would be analogous to some of the studies with hemispatial neglect patients in which, when pressed, they can show some knowledge of the objects in the neglected hemifield. It is also analogous to patients with optic ataxia, who often cannot identify the correct orientation for an object to fit through a certain hole, but when pressed can actually do so.
But to be fair, your description of mindsight doesn't actually accommodate all of the data. For example, why would it sometimes take longer for a linguistic representation to be formed than at other times (from less than 1 second for many observers, to up to 14.6 seconds on catch trials)? The correlation between sensing onset and seeing onset is less than .05, which strongly suggests that a more sophisticated interpretation is necessary. Also, can-sense observers gave more false-sensing reports than false-seeing reports on catch trials, suggesting that the distinction between sensing and seeing is not likely to be purely linguistic; i.e., what additional mechanism would you have to posit in order to explain why they fail to form a linguistic representation of the "sensed" change?
Rensink also addresses your point with a paragraph beginning: "Next, consider the case in which the threshold for sensing is lower than that for seeing; in this case, sensing would be a simple precursor of seeing." Unfortunately I don't understand his argument well enough to paraphrase it, but if you really think this is just an issue of prerequisites, you might see how you feel about his attempted rebuttal.
Though I'm sympathetic to the "skeptic" point of view, particularly for papers that use the phrase 'extrasensory' :), this one is from one of the top journals in the field and is cited by papers in several other high-impact journals.
But... I just found a rebuttal article that makes it look like it's just a change in response criterion... hmm...
Variability in the amount of time needed to construct a linguistic description is probably proportional to how familiar the observed image is. For unfamiliar objects, or familiar objects in an unfamiliar orientation, the brain needs time to do its interpolations, as described in one of your earlier postings. The same mechanisms used to interpolate visual images are probably also used in more abstract realms to interpolate language-related constructs.