1/31/2006

Redeeming Freud: Memory Suppression

Freud suggested that humans can repress unwanted or traumatic memories, and many still think of this idea as simply an unproven Freudian hypothesis. However, the fact is that we can intentionally forget stimuli, as seen in directed-forgetting and think/no-think paradigms. In these studies, subjects learn several paired associations (between two words, or two pictures), and are then repeatedly presented with one member of each pair and asked to either remember or forget its associate. Memory for the items is then tested through explicit (free recall, cued recall, or recognition) or implicit measures (word-fragment completion, repetition priming), and the difference in recall accuracy between to-be-remembered and to-be-forgotten items is assumed to reflect the effects of an intentional "forgetting" process.
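
To make the dependent measure concrete, here is a minimal sketch of how a directed-forgetting effect is typically scored. The word pairs and the recall output are invented for illustration; real studies score many subjects and counterbalance items across conditions.

```python
# Hypothetical illustration of the directed-forgetting measure described above:
# the difference in recall accuracy between to-be-remembered (TBR) and
# to-be-forgotten (TBF) items. All items and responses here are made up.

def recall_accuracy(recalled, studied):
    """Proportion of studied items that were successfully recalled."""
    return len(set(recalled) & set(studied)) / len(studied)

tbr_items = ["ocean-chair", "lamp-river", "glove-piano", "stone-apple"]
tbf_items = ["cloud-fork", "brick-tulip", "mirror-wolf", "candle-drum"]

# Hypothetical cued-recall output from one subject
recalled = ["ocean-chair", "lamp-river", "glove-piano", "cloud-fork"]

effect = recall_accuracy(recalled, tbr_items) - recall_accuracy(recalled, tbf_items)
print(f"Directed-forgetting effect: {effect:.2f}")  # 0.75 - 0.25 = 0.50
```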

There are several different theories of how directed forgetting actually works, all of which have implications for the functions of working memory and executive control. One unlikely hypothesis is that subjects are able to somehow inhibit the stimuli in working memory, without conscious awareness. A second more likely hypothesis is that subjects are able to think of and focus on something else which then displaces the to-be-forgotten item from working memory ("diversionary thought" and "associative interference" hypotheses). A third hypothesis is that subjects are actually able to shut down areas outside of working memory that would otherwise process the items.

In deciding between these hypotheses, it's important to take a closer look at the data. Some have reported that even novel cues fail to elicit recall of the to-be-forgotten items, suggesting that directed forgetting is not simply due to "erasing" the association between items. fMRI has shown a network of brain areas to be more active during suppression than during recall (e.g., dorso- and ventrolateral prefrontal cortex, anterior cingulate cortex), suggesting that this is an active suppression process. However, data from the hippocampus show a mixed response: items that were later remembered showed more hippocampal activation than items that were simply forgotten, whereas items that were actively suppressed showed the highest activation of all.

Research is just beginning to shed light on the functional contributions of each of these brain areas to the task of directed forgetting, and any conclusion about their true function is of course premature. Nonetheless, it appears that there are active suppression processes which involve a network of brain regions coordinated by DLPFC and ACC, and which may show an advantage for suppressing emotional as opposed to neutral stimuli.

Related Posts:
A Role for MicroRNA in Learning and Memory
Tyranny of Inhibition

1/30/2006

Risk Taking and Intelligence

Common wisdom says it's "stupid" to take unnecessary risks, but some surprising results from the Journal of Economic Perspectives suggest that intelligent people might be the most likely to make these "stupid" decisions. In an NYT interview with Professor Shane Frederick of the MIT Sloan School of Management, the author relates how a short math test has unearthed some fascinating insights into individual differences in risk taking. Consider the following problems:

1) A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?

2) If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?

3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half the lake?

If you got all three answers right, you may be good at math, but you're also probably someone who reflects on their answers; each question has an intuitive "foil" which is in fact completely wrong. Frederick found that the score on this test predicted the amount of risk that individuals would take in choosing between various financial payoffs - the higher your score, the more likely you are to either wait for a reward or to take risks in order to get a better reward. In addition, there was an interaction with gender, such that high-scoring women show slightly more willingness to wait for a payoff than high-scoring men, but low-scoring women are even more risk averse than low-scoring men.

These differences are not well explained by current theories of decision making, such as Kahneman & Tversky's "prospect theory," which holds that subjects evaluate gambles according to an asymmetric utility curve in which losses are weighted more heavily than equivalent gains. Frederick suggests that these deviations from prospect theory may actually result from differences in intelligence between the groups, although there are problems with this interpretation.
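
Before turning to those problems, the asymmetry at the heart of prospect theory can be written down in a few lines. The parameter values below are the commonly cited Tversky and Kahneman estimates, and the gamble is hypothetical; this is a sketch of the theory's asymmetry, not of Frederick's analysis.

```python
# A minimal sketch of the prospect-theory value function mentioned above.
# Parameters are the commonly cited Tversky & Kahneman (1992) estimates;
# the gamble below is hypothetical.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Asymmetric value function: losses loom larger than equivalent gains."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# A 50/50 gamble on winning or losing $100 has negative subjective value,
# even though its expected monetary value is zero.
gamble = 0.5 * value(100) + 0.5 * value(-100)
print(f"Subjective value of the coin-flip gamble: {gamble:.1f}")  # about -36
```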

First, many of the scales used by Frederick explicitly rely on introspection and self-report, two processes known to be inaccurate in exactly this context (e.g., the self-report of SAT scores). Second, any scale containing only three questions is likely to show a lot of variability, especially one containing math problems (since "smarter" participants could simply have been exposed to the questions more frequently), but Frederick does not report the statistics that bear on this question. Third, his analysis is full of references to vague terms like "cognitive ability," and the suggestion that this is related to an interaction of working memory, processing speed, and IQ, but with no actual explanation of exactly how those factors interact to produce cognitive ability, nor which of them is particularly relevant to correctly solving these math problems.

Many of his control analyses are also problematic, above and beyond the use of relatively undefined terms. For example, there are no reaction time measures for the math problems: what if good performance is more strongly related to how much time you spend reviewing your answer than to 'cognitive ability'? Or consider one of the results from a question meant to control for individual differences in time preference: those in the low scoring group "[the 'cognitively impulsive'] were willing to pay significantly more for the overnight shipping of a chosen book (item 1) which does seem like an expression of an aspect of pure time preference (the psychological 'pain' of waiting for something desired)." [emphases in original] Even if you assume that terms like "psychological pain" could actually be measured in a way that bears on this question, it's difficult to know whether the low-scorers are actually impatient or if they just happen to be voracious readers - especially since other measures of "time preference" were unrelated to score.

In summary, this is an interesting starting point for the study of individual differences in decision making, but it is plagued by several methodological problems. Nonetheless, the idea that intelligence (or other measures of executive control, such as self-monitoring) may interact with risk aversion is fairly interesting; more elegant designs will be needed to establish it conclusively.

EDIT: Today COGBlog is also running a very nice story on this study's methodological flaws - and finds even more!

The correct answers to the math problems are 5 cents, 5 minutes, and 47 days.
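
For readers who want to check the arithmetic, here is a quick verification of each answer under the problems' stated terms.

```python
# Quick arithmetic check of the three answers.

# 1) Ball costs x, bat costs x + 1.00, together they cost 1.10: 2x + 1.00 = 1.10
x = (1.10 - 1.00) / 2
print(f"Ball: ${x:.2f}")  # $0.05

# 2) Each machine makes one widget in five minutes, no matter how many run in parallel
print("Minutes for 100 machines to make 100 widgets:", 5)

# 3) The patch doubles daily, so it covered half the lake one day before covering all of it
print("Days to cover half the lake:", 48 - 1)
```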

1/29/2006

Leptin and Depression

Science Nerd Depot blog has a nice introduction to the hormone leptin. Leptin is yet another potential mind/body link, this time between depression and obesity. Interesting stuff...

1/27/2006

Neural Oscillations and the Mozart Effect

Nearly 10 years ago, a group of psychologists made the astonishing claim that listening to classical music improves people's mathematical and spatial reasoning skills. The effect is apparently not limited to humans (even rats showed improvements), but it does seem limited to certain types of music and certain situations: everything from Mozart to Philip Glass has been tested for beneficial effects in settings ranging from IQ testing to invasive surgery. The 250th anniversary of Mozart's birth seems like a suitable time to review the evidence on what has been called "the Mozart effect": for whom, and in what situations, does music have a positive effect?

In some ways, the Mozart effect seems like a diverse and robust finding. Six-year-old children who are given keyboard or voice lessons have shown a reliable 2 to 3 point increase in IQ scores compared to control groups who received other types of artistic lessons. Pre-schoolers with two years of music lessons scored better on spatial reasoning tests than those who took computer lessons for the same time. And as little as 10 minutes of exposure to Mozart's Sonata for Two Pianos in D Major resulted in a temporary enhancement of spatial-temporal reasoning on the Stanford-Binet IQ test. These and other studies lend credence to experimental pedagogical methods (like the Kodaly method), and have inspired a flurry of commercial interest, from the sale of "Mozart Effect" CDs to the formation of the Music Intelligence Neural Development (MIND) Institute, which claims to be able to dramatically increase standardized test scores.

However, there's plenty of reason for skepticism. Not everyone who has attempted to replicate these results has been able to do so - perhaps music only has certain effects for specific populations. Some hold that the effect of such music is only to elevate arousal and mood, which then results in improved performance and well-being in a variety of situations. And if true, the Mozart effect is one of the few examples of an extremely rare phenomenon known as 'far transfer,' in which experience with one domain (music) can transfer benefits to a completely distinct domain (spatial reasoning). Still, the variety of reported results remains alluring: Mozart has also been shown to allow some with Alzheimer's disease to function more normally, to reduce the severity of epileptic seizures, and even to lessen the need for sedatives in surgery relative to no music or white noise.

Some of these effects are also evident from computational models of the brain, which often result in neural assemblies forming characteristic firing rhythms. According to Shaw's book, "Keeping Mozart In Mind," he and graduate student Xiaodan Leng transformed their models' activity into music, and noticed patterns closely resembling several different musical styles. This finding motivated the first "Mozart Effect" study, conducted by Shaw and Rauscher: if neural networks fire in patterns that are related to music, perhaps musical experience can help improve their function.

Like most things related to biology, the whole story is not so clear cut. While babies show diverse abilities to recognize novel rhythms, this skill deteriorates markedly within the first few years of life, just as infants' ability to discriminate phonemes deteriorates as they acclimate to their native language's phonology. So if music does affect the spatio-temporal patterns of neural networks, it could well be limited to specific rhythms. And if musical experience is unequivocally good for children, it seems puzzling that autistics show remarkably enhanced tone memory and discrimination relative to their peers.

Related Posts:
Neural Network Visualization
Review: Everything Bad is Good For You
Synchrony vs Polychrony
Tuned and Pruned: Synaesthesia


1/26/2006

The False Promise of View-invariance

Any complete theory of visual object recognition must explain how humans are able to reliably identify specific objects from a near-infinity of different orientations. For example, can you identify the image at the start of this article?

Many theorists, although perhaps most emphatically Jeff Hawkins, have claimed that this 'view invariance' is the hardest problem in computational vision. Accordingly, many explorations of possible object recognition mechanisms have posited the existence of view-invariant geometric primitives, such as geons, somewhere in visual cortex. This theory holds that objects are recognized by extracting these geons from 2-D retinal images, and that the relationships among the geons form the basis for basic-level object categorization.

The major strengths of view-invariant theories (relative to 'template-based' approaches) are their viewpoint-invariance (well, obviously!), their resistance to visual noise, a sufficient combinatorial power to describe the object space of human visual experience, correspondence with some experimental and anecdotal data (complementary-part priming and contour deletion), and a superficial complementarity with what is known about receptive field structure in inferotemporal cortex. Additionally, given that geons are simple combinations of nonaccidental features, it seems to be a tractable way of implementing basic three-dimensionality in object recognition.

But in a seminal paper from Cognition in 1995, Michael Tarr took a step back from the intuitive appeal of invariant object recognition, and forced theorists to face up to the facts: human object recognition is not invariant - we are clearly faster at recognizing objects from specific, characteristic views. Could it be that we actually store a unique representation for every possible view of every possible object?

It sounds implausible, but painstaking experiments show that this is a more parsimonious explanation of human object recognition than view-invariant theories. For example, the cases in which object recognition seems view-invariant can be explained by floor effects in reaction time measures: in these cases, all the stimuli are discriminable on the basis of features arranged along a single dimension. As soon as these features are arranged with an additional degree of freedom, reaction times become consistent with view-dependence. Results from Shepard's classic mental rotation experiments clearly show that recognition is not completely invariant: reaction times increase linearly with degree of rotation, as though we require some 3-D mental rotation in order to match images with their stored representations.
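
To see what "increases linearly with degree of rotation" amounts to, here is a toy simulation of that relationship. The intercept, slope, and noise values are invented for illustration and are not Shepard's estimates.

```python
# Toy illustration of the mental-rotation result: reaction time grows roughly
# linearly with angular disparity between the image and its stored view.
# The intercept and slope below are invented, not Shepard & Metzler's estimates.
import random

def simulated_rt(angle_deg, intercept_ms=1000.0, slope_ms_per_deg=17.0, noise_ms=50.0):
    """Hypothetical RT = baseline time + time per degree of mental rotation + noise."""
    return intercept_ms + slope_ms_per_deg * angle_deg + random.gauss(0, noise_ms)

for angle in (0, 40, 80, 120, 160):
    print(f"{angle:3d} deg -> {simulated_rt(angle):7.0f} ms")
```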

Secondly, view-dependence also seems to be more compatible with neurophysiological data. Single-cell recordings from monkey IT show view-dependent response patterns. Although some neurons have been identified that are entirely view-invariant (the 'Bill Clinton neuron', the 'Halle Berry neuron'), these appear to be nearly "everything-invariant": because they fire reliably for both caricatures and photos (young and old alike), they appear to be more conceptual than visual.

Further, certain algorithms have been discovered which can recognize objects from novel perspectives by interpolating between (or extrapolating beyond) characteristic views of those objects. It appears that just a few orthogonal images of each object (perhaps as few as five or six, depending on expertise and geometric complexity), in combination with some sophisticated processing of those images, are sufficient to recognize an enormous array of perspectives on an enormous number of objects.
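
The interpolation idea can be sketched with ordinary least squares: a novel view counts as "recognized" if it is well reconstructed by a weighted combination of a few stored views of the same object. The feature vectors below are invented, and this is only a cartoon of the principle, not any published algorithm (view-based models such as Poggio and Edelman's use richer image features and learned basis functions).

```python
# A cartoon of view interpolation: a novel view is recognized if it is well
# approximated by a least-squares combination of a few stored views.
# Feature vectors here are made up for illustration.
import numpy as np

stored_views = np.array([   # rows: characteristic views of one object
    [1.0, 0.2, 0.0, 0.5],
    [0.8, 0.5, 0.1, 0.4],
    [0.6, 0.9, 0.3, 0.2],
])

novel_view = np.array([0.7, 0.7, 0.2, 0.3])   # hypothetical intermediate view

# Find the combination of stored views that best reconstructs the novel view
coeffs, *_ = np.linalg.lstsq(stored_views.T, novel_view, rcond=None)
error = np.linalg.norm(stored_views.T @ coeffs - novel_view)
print("interpolation coefficients:", np.round(coeffs, 2))
print("reconstruction error:", round(float(error), 3))  # small error -> recognized
```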

Related Posts:
Active Maintenance and the Visual Refresh Rate
Kosslyn's Cognitive Architecture
Language Colors Vision

1/25/2006

A Role for MicroRNA in Learning and Memory

In neural networks, learning and memory are accomplished by changes in the connection strength between neurons. This 'synaptic efficacy' can be changed in many ways: the number of vesicles released, the number of receptors, changes in the alignment of the pre- and post-synaptic membranes along the synaptic cleft, changes in the rate of neurotransmitter reuptake and metabolic breakdown, myelination, and even the growth of entirely new dendritic spines.

Dendritic remodeling can occur extremely quickly - in vivo imaging of Drosophila dendrites shows remarkable structural changes within just 3 minutes of learning (I have a video of this if anyone is curious). After looking at neuroscience textbooks and static images of complex neural networks, it's easy to forget that the mass of interconnected tissue known as 'brain' is constantly rearranging itself in space.

Researchers from Harvard and the Medical University of Vienna have shown that at least some of this complicated spatiotemporal structural change is mediated by microRNAs. MicroRNAs are small, non-coding sequences that modulate the translation of messenger RNAs (mRNAs) by binding to complementary sequences; previously, no one had identified the specific micro- and messenger RNAs involved in synaptic change. And it's no wonder, since they seem very hard to find: the microRNA they identified, miR-134, is brain-specific and localized to hippocampal dendritic spines. It was shown to negatively regulate the size of dendritic spines by repressing the translation of a protein kinase that is necessary for spine growth, while an "antisense" inhibitor of miR-134 can cause an increase in dendritic spine size.

While mainstream cognitive science is now undergoing a transformation into cognitive neuroscience, ending a long period of "agnosticism towards the brain," it does not yet embrace genetic factors (that is left to the "behavioral geneticists"). I can only imagine that thirty years from now, once cognitive neuroscience is better understood, we'll see yet another transformation take place: cognitive neuroscience will become cognitive neurogenetics, ending the period of "agnosticism towards genetics" just as the current transformation is ending the earlier "agnosticism towards the brain."

Related Posts: Molecular Basis of Memory

RIP Chris McKinstry

After expressing suicidal intentions on his blog, Mindpixel founder Chris McKinstry was found dead in Chile on January 23rd, as reported in El Mercurio.

Conference: Bio-Inspired Models and Hardware for Brain-like Intelligent Functions

This sounds like a pretty interesting conference, even if it sounds a little broad. Thanks to Positive Technology Journal for bringing it to my attention...

ISABEL 2006: Bio-Inspired Models and Hardware for Brain-like Intelligent Functions
August 24-25, 2006, Seoul, Korea

This symposium aims to bring together international researchers from the cognitive neuroscience and engineering communities for biologically-inspired models and system implementations with human-like intelligent functions. The previous meeting was held as a post-IJCNN Symposium on Bio-Inspired Models and Hardware (BIMH2005) at Montreal, Canada, on August 5, 2005. Although artificial neural networks are based on information processing mechanisms in our brain, there still exists a big gap between the biological neural networks and artificial neural networks. The more intelligence we would like to incorporate into artificial intelligent systems, the more biologically-inspired models and hardware are required. Fortunately cognitive neuroscience has developed enormously during the last decade, and engineers now have more to learn from the science. In this symposium we will discuss what engineers want to learn from the science and how the scientists may be able to provide the knowledge. Then, mathematical models will be presented with more biological plausibility. The hardware and system implementation will also be reported with the performance comparison with conventional methods for real-world complex applications. A panel will be organized for the future research directions at the end. This symposium will promote synergetic interaction among cognitive neuroscientists, neural networks and robotics engineers, and result in more biologically-plausible mathematical models and hardware systems with more human-like intelligent performance in real-world applications.

Topics include, but are not limited to:

  • Models of auditory pathway.
  • Models of visual pathway.
  • Models of cognition, learning, and inference.
  • Models of attention, emotion, and consciousness.
  • Models of autonomous behavior.
  • Hardware implementation of bio-inspired models.
  • Engineering applications of bio-inspired models.

Visit the conference website for detailed information.

1/23/2006

Anticipation and Synchronization

In their 2005 paper in the Journal of Cognitive Neuroscience, Nakatani et al. describe a peculiar EEG pattern in subjects experiencing a phenomenon known as "attentional blink." Attentional blink occurs when perceivers can report only one of two target images that are presented in quick succession. It is usually found under conditions of high processing load, such as tasks that demand both early and higher visual processing.

Attentional blink is often studied in a paradigm where a sequence of images is presented in rapid succession, for about 100 msec each. Subjects have to detect the presence of two specific images within the sequence. When the target images are present and separated by more than 5 intervening distractor images, both targets are detected without much difficulty. When the two images are presented directly after one another (no intervening images), they can also be reported without much difficulty. However, when the targets are separated by between 1 and 4 intervening images, the second target is usually missed!
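
To make the trial structure concrete, here is a small sketch of the paradigm. The per-lag report rates are hypothetical values chosen to mimic the typical blink pattern just described, not data from this study.

```python
# Sketch of an attentional-blink (RSVP) trial: a rapid stream of distractors
# with two embedded targets, separated by a variable number of intervening items.
import random

SOA_MS = 100   # roughly 100 ms per item, as described above
# Hypothetical T2 report rates by number of intervening distractors ("lag"):
# sparing at lag 0, a "blink" at lags 1-4, recovery at longer lags.
typical_t2_accuracy = {0: 0.85, 1: 0.55, 2: 0.45, 3: 0.50, 4: 0.60, 5: 0.80, 6: 0.85}

def make_trial(lag, stream_length=16):
    """Build a distractor stream ('D') with targets T1 and T2 separated by `lag` items."""
    stream = ["D"] * stream_length
    t1_pos = random.randint(2, stream_length - lag - 3)
    stream[t1_pos] = "T1"
    stream[t1_pos + lag + 1] = "T2"
    return stream

lag = 2
print(" ".join(make_trial(lag)))
print(f"T2 appears {(lag + 1) * SOA_MS} ms after T1; "
      f"hypothetical report rate ~{typical_t2_accuracy[lag]:.0%}")
```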

What happens during this time, when participants are essentially blind? No one really knows. One explanation is that this "attentional blink" is actually the switch cost associated with task switching between looking for target 1 and looking for target 2 (also known as switching the "attentional set"). Accordingly, several studies implicate the areas responsible for working memory: right posterior parietal, cingulate, and left temporal/frontal regions. Further, when the second target was detected, subjects showed a large area of phase coherence in the gamma range (30-80 Hz) throughout the task, suggesting that this synchronized activity might reflect differences in attentional focus, which subsequently translate into improved target detection.

The frequency range of 30-80 Hz has many other relationships with visual attention: synchrony within this band has been implicated in object detection, memory retention, readiness, and consciousness. The authors propose that it may be involved in anticipatory processing, or that it functions as something like a "procedural buffer" which can be used to alleviate some of the switch costs associated with changing attentional set.

Sure enough, the authors found increased baseline levels of synchrony in the experimental group (which had to find 2 targets) compared to the control group (which had to find only 1). Interestingly, global synchronization involving more than 188 different electrodes also appeared every 300-500 msec, even before the presentation of the first target, in those participants that were able to detect the second target. The authors suggest that this global, long-range synchrony may be a mechanism by which information is transmitted quickly from occipital to frontal areas, and that cycles of synchrony may correspond to processes such as visual "filter reconfiguration or, alternatively, a memory operation."

The authors continue to say that "the real challenge, rather than looking for traces of endogenous versus exogenous control mechanisms, might be to investigate how a system that is switching continuously between different intrinsic states of phase synchrony is able to adjust these rhythms in coordination with external events." Along those lines, one really interesting thing about this pattern of EEG activity is that it could be caused by a very slow oscillation (2-3 Hz) modulating a much faster rhythm (38-43 Hz). If these are actually two separable components, the slower oscillation could be a good candidate for synchronization with events in the external world (such as expectation of a target).
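
The two-component interpretation can be illustrated with a toy signal in which a slow oscillation modulates the amplitude of a 40 Hz rhythm. The sampling rate and all parameters below are arbitrary choices for the sketch, not values from the study.

```python
# A toy signal in which a slow (2.5 Hz) oscillation modulates the amplitude of a
# faster (40 Hz) rhythm, as the two-component interpretation suggests.
import numpy as np

fs = 1000                          # samples per second
t = np.arange(0, 2.0, 1.0 / fs)    # two seconds of signal

slow = 0.5 * (1 + np.sin(2 * np.pi * 2.5 * t))   # 2.5 Hz envelope in [0, 1]
gamma = np.sin(2 * np.pi * 40 * t)               # 40 Hz carrier
eeg_like = slow * gamma

# Bursts of gamma recur at the slow rhythm's period (~400 ms), roughly matching
# the 300-500 ms cycles of global synchrony reported above.
burst_times = np.where(np.abs(eeg_like) > 0.95)[0] / fs
print("times of strongest gamma bursts (s):", np.round(burst_times[:5], 3))
```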

Of course, then we enter into the infinite regress often associated with assigning "agency" to brain functions: what then modulates the slower signal?

Related Posts:
Active Maintenance and The Visual Refresh Rate
Synchrony vs Polychrony
Hypnotic Lullabies

1/21/2006

Neural Network Visualization

The easiest way to visualize processing in neural networks is probably to download and install Neural Viewer. Although certainly not the environment to be used for biologically plausible cognitive simulation, it is still a very useful tool for understanding basic computational principles like spreading activation, spatio-temporal codes, Hebbian learning, membrane thresholds, etc. And on an aesthetic level, it's just beautiful to literally see how neural oscillations give rise to abstract representations.

Related Posts:
Learning Like a Child
From Inhibitory to Excitatory And Back Again
Complexity and Biologically Accurate Neural Modeling
Synchrony vs Polychrony

1/20/2006

Active Maintenance and The Visual Refresh Rate

Does the visual system have a "refresh rate," similar to the 24 frames per second of traditional film, or the 60 Hz refresh rate of many computer monitors? In one sense the answer is obvious: film appears continuous precisely because we have a visual refresh rate somewhat below 25 fps. You can also see this in the "strobing" effect of waving your hand very fast in front of a CRT monitor: the distance between successive images of your hand corresponds to the velocity of your hand divided by the difference between the refresh rate of your monitor and the refresh rate of your visual system.

While interesting superficially, this may not seem incredibly important for something like working memory, or other components of higher cognition. But according to one view of working memory, "refresh rate" may actually be critical.

In Kosslyn's cognitive architecture, for example, there is no single component corresponding to Baddeley's "visuo-spatial sketchpad." (The sketchpad is a purely visual storage space in which humans can perform tasks like mental rotation and size comparison, and is known to be distinct from another storage space for auditory information known as the articulatory loop.) Rather than a single architectural component, the sketchpad can be viewed as an emergent property of the system: a rapid cycling through Kosslyn's architectural loop, from visual buffer to attentional window, to object/spatial encoding, to associative memory, to information lookup, to attention shifting, and back to the attentional window, ad infinitum.

Indeed, there are several reasons to suspect that such "rapid cycling" may be responsible for visual working memory. In visual tasks requiring adaptive responses from one trial to the next (such as the Dimensional Change Card Sort, or DCCS), other measures of processing speed highly correlate with successful switching performance. And neural network models of good DCCS performance consist of a layer corresponding to prefrontal cortex, in which artificial neurons project back to themselves in an explicit loop of recurrent connections. Further, we see an analogous phenomenon in measures of verbal working memory: your digit span is highly correlated with how quickly you can subvocally rehearse the digits, just as you would expect if the auditory system works on the basis of a similar loop.
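
A tiny sketch of the recurrent-loop idea: a unit that projects back onto itself can actively maintain a representation after the input disappears, while a unit without the self-connection loses it. The weights and update rule below are invented for illustration and are not taken from any of the models mentioned above.

```python
# Illustration of "explicit loop of recurrent connections": a unit with a strong
# self-connection maintains its activity after a brief input, while a unit
# without one lets the trace decay. All parameters are invented.

def run_unit(self_weight, steps=10, input_steps=3):
    activity = 0.0
    trace = []
    for t in range(steps):
        external = 1.0 if t < input_steps else 0.0   # brief input, then silence
        activity = 0.5 * external + self_weight * activity
        activity = min(activity, 1.0)                # crude saturation
        trace.append(round(activity, 2))
    return trace

print("with recurrence   :", run_unit(self_weight=0.95))
print("without recurrence:", run_unit(self_weight=0.0))
```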

Of course, it's possible that the "retinal refresh rate" and the "visual sketchpad refresh rate" are two distinct quantities, and if visual working memory span measures (such as mental rotation, size comparisons, and attentional blink measures) do not correlate with measures of retinal refresh rate, this would appear to be the case. But this is still an empirical question.

Supposing for the moment that both visual and verbal working memory are implemented in prefrontal cortex as a kind of cognitive loop with a certain refresh rate, then one particularly interesting phenomenon is the developmental emergence of verbal rehearsal of visual information at around 7 years. Why does it take so long for kids to be able to verbally rehearse information that was originally presented to them visually? Perhaps there is a slow-developing gateway between the articulatory and the visual loops in prefrontal cortex. Corresponding with the hypothesis that both the cognitive loop and the gateway between loops arises from a willful process of active maintenance, some work has shown that kids can be trained to verbally rehearse visual material before 7 years of age. What late-developing prefrontal gating mechanism could accomplish this transfer of information between the two modalities, and can it be modeled with neural networks?

1/19/2006

Kosslyn's Cognitive Architecture

Kosslyn's 1990 paper in Cognition describes a cognitive architecture that begins at low-level vision and stretches all the way up into executive control. Here is a summary of his "unified theory" - future posts will examine some "architectural remodeling" that's required given the past 15 years of new discoveries in cognitive neuroscience.

1) Low-level vision - Defined as the component that is driven by sensory input to detect lines, shapes, colors, textures, and depth, this component is topographically organized.

2) High-level vision - Defined as the component that implements mental imagery and object identification, this component is often not topographically organized.

2.A) Visual Buffer - This subsystem of high-level vision explicitly represents edges, depth, and orientation at multiple scales. As the first component of high-level vision, the visual buffer receives its input from low-level vision.

2.B) Attentional Window - This is a "window" onto the buffer, of fixed and presumably smaller size than the buffer itself, which can adjust its focus on the buffer in three dimensions. It is also subject to a scope/resolution tradeoff in which attending to a larger visual area results in decreased resolution. Window size is changed linearly through either bottom-up, preattentive mechanisms (on the basis of simple physical features) or through top-down attentional shifting. This sends spatial information to the dorsal stream, and object/identity related information to the ventral stream.

2.B.1) Dorsal Stream - this pathway receives information relevant to spatial properties from the attentional window and magnocellular input, and consists of two main stages: spatiotopic mapping and relation encoding.

2.B.1.a) Spatiotopic mapping - this process transforms retinotopic input from the dorsal stream into a spatiotopic mapping: a unified representation of the size and location of objects and their constituent parts. Multiple levels of scale and resolution can also be represented, and outputs are sent both to long-term associative memory and to the encoding processes, as described below.

2.B.1.b) Encoding Process - Receiving input from spatiotopic mapping, this process consists of two stages:

2.B.1.b.i) Categorical relation encoding - based on input from spatiotopic mapping, this left-hemispheric process encodes categorical relations between objects and object parts. These relations are of the kind "above," "next to," "behind," etc. Orientation and size are intrinsically encoded by these relations and their associated nonspecific values: "how far above" one part is from another part can tell you the relative size of that object.

2.B.1.b.ii) Coordinate relation encoding - based on input from spatiotopic mapping, this right-lateralized process represents the specific coordinate locations of objects (or an object's parts) and the specific distances between them. Either global or local coordinate systems can be used; for global coordinates, there's only a single point of origin, whereas in local coordinates every object can be represented in relation to another object (in a sense, every object is a point of origin). This process is heavily used in navigation.

2.B.2) Ventral Stream - this pathway receives information relevant to an object's physical characteristics from the attentional window and parvocellular input, and consists of three primary stages:

2.B.2.a) Preprocessing - this step extracts invariant properties from ventral input, including parallelism, geometric properties of edges and corners, etc. Bayesian methods can determine whether these properties occurred by chance alone; those that are considered "nonaccidental" (aka "trigger") features are then combined into invariant perceptions of shape, which are matched against stored representations of shape below.

2.B.2.b) Pattern Activation - Information from Preprocessing is matched against modality-specific representations of previously-seen objects. This system must be capable of both generalization and identifying unique instances of objects. It then sends the name of the potential object, along with a confidence rating, to long-term associative memory, with which it engages in a kind of dialogue. The sizes, locations, and orientations of both the stored and the input representations can be changed until the best match is found.

2.B.2.c) Feature Detection - this system works primarily with color (though probably also texture and intensity) to extract features unrelated to shape. Those detected features are then sent directly to associative memory. This subsystem is admittedly coarse.

3) Long-term associative memory - this poly-modal, non-topographic component receives inputs from a variety of dorsal and ventral subsystems, including Spatiotopic Mapping, Relation Encoding, Pattern Activation, and Feature Detection. All the various pieces of information corresponding to an object become associated with one another through a parallel process of constraint-satisfaction, where both object properties and spatial properties are integrated into a propositional representation of object identity. This information feeds back into the pattern activation subsystem, where visual memories are stored.

4) Hypothesis testing system - this component violates the hierarchical decomposition principle of Kosslyn et al., because it is not just one area but consists of an interaction between many areas. It consists of two winner-take-all competing subsystems operating in parallel, and a third final attention-shifting process:

4.A) Coordinate property look-up - this subsystem returns the spatiotopic coordinates of the parts of a specific object, based on representations in long-term associative memory. This information is then sent not only to the attention-shifting subsystem (below) but also to the pattern activation subsystem, in order to bias perception in favor of a hypothesized object. It is this latter pathway that accomplishes object recognition through a process of constraint-satisfaction between perception and prediction.

4.B) Categorical property look-up - this subsystem provides local coordinates for the parts of a specific object that has been "looked-up" in long term associative memory. Outputs from this process are sent to the pattern activation subsystem, in order to bias perception in favor of a hypothesized object. Outputs are also sent to the categorical-coordinate conversion subsystem, below.

4.B.1) Categorical-coordinate conversion subsystem - receiving input from categorical property look-up, this system transforms object size, taper, and orientation information into a set of specific coordinates. It does so through a two-stage process:

4.B.1.a) Open-Loop process - this outputs a range of coordinates to the closed-loop process via a "fast and loose" type algorithm

4.B.1.b) Closed-Loop process - this "fine tuning" process zeros in on specific coordinates to which attention can be shifted, based on location closest to current position of attention window.

4.C) Attention-shifting system - receiving input from the look-up systems above, this process shifts the attentional window, and moves the head, eyes, and body as appropriate to the new coordinates for attention. Since the attentional window is retinotopic, this system must convert coordinates from spatio- to retinotopic and then send the appropriate information to other regions. This is accomplished in two phases, and three very coarse subsystems:

4.C.1) Phases

4.C.1.a) transform coordinates from spatiotopy to retinotopy

4.C.1.b) fine-tune the new attentional window location based on attentional feedback

4.C.2) Subsystems

4.C.2.a) Shift attention to position in space - accomplished by the superior colliculus

4.C.2.b) Engage attention at that position - accomplished by the thalamus

4.C.2.c) Disengage attention as appropriate - accomplished by parietal lobe
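
To summarize the flow of information through the outline above, here is a highly simplified dataflow sketch. The function names, return values, and example object are placeholders invented for this summary, not Kosslyn's notation.

```python
# A highly simplified dataflow sketch of the architecture outlined above.
# Names and data structures are placeholders; each stage just passes a dict along.

def low_level_vision(image):                                     # 1: topographic features
    return {"edges": ..., "color": ..., "depth": ...}

def visual_buffer(features):                                     # 2.A: multiscale representation
    return {"multiscale_edges": features}

def attentional_window(buffer, focus):                           # 2.B: scope/resolution tradeoff
    return {"attended": buffer, "focus": focus}

def dorsal_stream(window):                                       # 2.B.1: spatial properties
    return {"spatiotopic": {"locations": ..., "sizes": ...},     # 2.B.1.a spatiotopic mapping
            "categorical": "above / next-to / behind",           # 2.B.1.b.i (left hemisphere)
            "coordinate": "metric positions and distances"}      # 2.B.1.b.ii (right hemisphere)

def ventral_stream(window):                                      # 2.B.2: object properties
    return {"nonaccidental": "parallel edges, corners",          # 2.B.2.a preprocessing
            "candidate_object": "cup", "confidence": 0.7,        # 2.B.2.b pattern activation
            "color_texture": "white, glossy"}                    # 2.B.2.c feature detection

def associative_memory(dorsal_out, ventral_out):                 # 3: constraint satisfaction
    return {"identity": ventral_out["candidate_object"],
            "properties": {**dorsal_out, **ventral_out}}

def hypothesis_test(memory_entry):                               # 4: look-up + attention shifting
    return {"shift_attention_to": "expected location of a diagnostic part",
            "bias_perception_with": memory_entry}

# One pass of the recognition loop (the real architecture iterates until
# perception and the memory-driven prediction agree):
window = attentional_window(visual_buffer(low_level_vision("retinal image")), focus="center")
guess = associative_memory(dorsal_stream(window), ventral_stream(window))
print(hypothesis_test(guess)["shift_attention_to"])
```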

1/18/2006

Giving the Ghost a Machine

While most researchers interested in hybrid systems use parts of natural brains to control robots, a few are taking the opposite approach: using computer parts to control animals. This form of "remote control" mostly relies on basic conditioning paradigms, such as pairing a rat's responses to particular sensations with electrical stimulation, or relying on unconditioned stimuli like a cockroach's fear of light. Most explorations of neuron-to-silicon technology focus on brain-to-machine communication. The reverse - machine-to-brain communication - has garnered interest from DARPA and academia alike, though neuroengineering work with this focus is still rare.

In the May 2, 2002 issue of Nature, Talwar and colleagues from SUNY reported they could successfully train and wirelessly control rat behavior using brain microstimulation from up to 500 m away. Rats were trained via positive reinforcement, in which rewards were delivered via electrical stimulation to the medial forebrain bundle, and cues were delivered via stimulation of the somatosensory cortical areas that normally receive input from the whiskers (a natural navigational guide in rats). After as few as ten training sessions, the rats were able to successfully navigate a variety of terrain, and remembered the stimulus-response contingencies up to several months later. As they write, "the ability to receive brain activity remotely and interpret it accurately could allow a guided rat to function as both a mobile robot and a biological sensor." The researchers were not reluctant to point out the possible military applications for their work ("pest control, military surveillance, and mapping of underground areas," said Talwar), and therefore it is perhaps not surprising that little has since been published on the wireless control of small animals.

This and similar research had a long history of funding through DARPA's multidisciplinary "computational neuromechanics" grant (1998-2004). Some groundbreaking experiments by Miguel Nicolelis probably precipitated this funding. At Duke in the mid-90's, Nicolelis's team implanted electrodes in a rat which then underwent operant conditioning, where a lever press was paired with a drink of water. The electrodes recorded the activity of 46 motor cortex neurons, and the researchers then unpaired the lever from the water while a new contingency was put in place: the same 46 neurons had to be activated in order for the drink of water to be delivered. The rat soon learned to receive water through thought alone.
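
The contingency in that experiment can be sketched abstractly: reward is delivered whenever the recorded population's activity crosses a criterion, with no lever press required. The firing rates, threshold, and summation rule below are invented; the actual study used more sophisticated population decoding.

```python
# Abstract sketch of the neuroprosthetic contingency described above: water is
# delivered when the summed activity of the recorded motor-cortex population
# crosses a criterion. Rates and threshold are invented stand-ins.
import random

N_NEURONS = 46
THRESHOLD = 55.0        # hypothetical criterion on summed firing rate (spikes/s)

def population_rates(intending_movement):
    """Hypothetical firing rates; neurons fire harder when movement is intended."""
    base = 1.5 if intending_movement else 0.8
    return [random.expovariate(1.0 / base) for _ in range(N_NEURONS)]

for intending in (False, True):
    total = sum(population_rates(intending))
    print(f"intending={intending}: summed rate={total:5.1f} -> "
          f"water delivered: {total > THRESHOLD}")
```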

Despite the somewhat frightening implications of this work, most of it has possible medical applications as well, such as the development of artificial prostheses. Several challenges remain for any attempt to implant electrode arrays for prolonged periods, however: the brain usually views these electrode arrays as foreign objects, and will attempt to expel them.

1/16/2006

Neurorobotics

Several researchers at Northwestern have created a hybrid neurorobotic system, consisting of a brain-machine interface between a Khepera base module and lamprey brainstem. These two components are connected in a closed loop, so that the resulting "neurobot" is entirely autonomous and displays behavior eerily reminiscent of the animal from which its brain was removed.

Lampreys are jawless, eel-like fish whose locomotor systems have been extensively studied. The researchers explanted a region of the lamprey brainstem that is known to stabilize swimming and keep the lamprey upright by receiving vestibular input. In the robot, two electrodes instead applied stimulation to the reticular formation (specifically to the intermediate and posterior octavomotor nuclei), with stimulation rates proportional to the light intensity measured on each side of the robot. Two more electrodes, placed on the part of the reticular formation that normally sends swimming motor commands (the right and left posterior rhombencephalic reticular nuclei), recorded the output of the explanted brainstem and drove the robot's two wheels at speeds proportional to the spike rates they measured.

These modifications caused the neurobot to become phototactic and move towards light - just as this brainstem region tracks and maintains the vertical axis in intact, swimming lampreys. It is an elegant demonstration of the abstract, content-invariant information processing performed by neural networks. Current work by these researchers is exploring the use of combinations of electrical stimuli and pharmacological agents, such as those found in vivo, to manipulate plasticity and other neural functions.
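
The closed loop just described maps naturally onto a few lines of control code: light intensity on each side sets the stimulation rate delivered to the brainstem, and the spike rates recorded from its output nuclei set the wheel speeds. The linear "brainstem" stand-in, the crossed wiring, and all gains below are invented for illustration, not measured from the preparation.

```python
# Schematic of the closed loop: light -> stimulation -> explanted brainstem ->
# recorded spike rates -> wheel speeds. All functions and gains are toy stand-ins.

def brainstem(stim_left, stim_right):
    """Toy stand-in for the explanted tissue: output favors the more stimulated side."""
    gain, crosstalk = 1.0, 0.3
    out_left = gain * stim_left + crosstalk * stim_right
    out_right = gain * stim_right + crosstalk * stim_left
    return out_left, out_right

def control_step(light_left, light_right, stim_gain=0.5, wheel_gain=2.0):
    stim_l, stim_r = stim_gain * light_left, stim_gain * light_right
    spikes_l, spikes_r = brainstem(stim_l, stim_r)
    # In this toy version, crossed wiring makes the robot turn toward the brighter side:
    wheel_left, wheel_right = wheel_gain * spikes_r, wheel_gain * spikes_l
    return wheel_left, wheel_right

# Brighter light on the right -> faster left wheel -> the robot turns right (phototaxis)
print(control_step(light_left=0.2, light_right=0.9))
```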

Researchers at the biologically-inspired robotics group of Ecole Polytechnique Fédérale de Lausanne (the same university to work with IBM on Blue Brain) have taken this work a step further - by adding anatomically correct biomechanics. In addition to similar work with lampreys, they have also used neural networks and genetic algorithms in tandem to reproduce the neuromechanics of salamanders, one of the first vertebrates thought to have made the transition from aquatic to terrestrial life. There are several animations and diagrams of the resulting network structure available here. Using some of the same networks seen in lampreys, they claim to faithfully reproduce not only salamander locomotion, but also the evolution of this terrestrial locomotion from more basic swimming behavior.

In many ways, neurorobotics is a natural complement to computational modeling. Some have stressed the importance of physically instantiating neural network models as a way to verify their plausibility: the integration of neurophysiology can bridge the philosophical divide between abstract simulation and empirical research. Others might argue for the long-term advantages of neurorobotics: binding and embodiment are likely properties that emerge only in the interaction of an agent with its physical environment. Even areas of the brain responsible for language (e.g., Broca's area) are located within motor cortex, suggesting a rather deep connection between our physical interactions with the environment and the way that higher cognitive functions are structured and sequenced.

Related Posts:
Emotional Robotics
A Mind of Its Own: Wakamaru
Imitation vs Self-Awareness: The Mirror Test
Mind Games: Humans, Dolphins and Computers

Nature vs. Nurture and Materialism

Oxford University pharmacologist Susan Greenfield was interviewed by ABC Radio's "Science Friday" on a wide spectrum of neuroscience topics, including free will, nature versus nurture, adult neurogenesis, the genetics of depression, fMRI and the law, and neuroscience as materialism.

One of the most interesting parts of the discussion, paraphrased below, was triggered by the question, "What is the current state of the argument between nature and nurture?"

"We now know that there is a strong interaction. A gene - all a gene is - is something that makes a protein, but it makes more than one protein - it makes tens of thousands of different proteins. They then will switch on or trigger other genes to be switched on or off. [So] you can't really disentangle the interaction between nature and nurture. And then, the big issue is what those proteins are doing - those big chemicals. They don't have 'good housekeeping', or 'being witty' trapped inside them. So you can't have a gene equals a protein equals 'good housekeeping', or 'dignity', or something like this. It's really much more interesting than that."

And later, when asked to respond to the idea that neuroscience represents a kind of ultimate materialism:

"I would query what the alternative to [neuro-materialism] would be. If it's not the physical brain, what else could it be? An emotion floating out there in the ether? I think the problem, and why it seems repugnant to many, and difficult to accept for most, is that 'nothing-but-ism'. I do embrace materialism, but not 'nothing-but-ism.'

"The mind is more than merely the brain. There's a difference - we can reduce your body to ten cents worth of chemicals, or something like this, which it may well be, but we know when you put things together there's things called emergent properties where the whole is more than the sum of the parts. So imagine you're drawing half a circle, a straight line and two dots. If you configure them in a certain way, you'd have a face because of the relation between those elements. Nothing has been added, but by a certain pattern, they are more than just 'nothing-but' a line and a couple of dots: they're a face.

"We know that in the brain, although you can reduce it to chemicals, genes and brain circuits, the way they are put together - in ways that we poorly understand at the moment - result in these emergent properties."

1/15/2006

Intelligent Adaptive Toys

A topic of intense interest within human-computer interaction is the design of technology for children, often through a fusion of ubiquitous computing with educational principles. Such research is guided by several assumptions: computers have latent potential for enhancing education; children possess latent learning capacity not fully engaged by current methods of teaching (particularly in math and science); and as future users, children have been hitherto ignored by mainstream HCI, which has largely focused on usability for older populations.

Seymour Papert was one of the first people to consider how computers might revolutionize learning. After graduating from Cambridge and working with Jean Piaget, he founded MIT's Artificial Intelligence Laboratory and wrote Perceptrons with Marvin Minsky. He then turned to researching the ways in which computers could enhance learning and creativity, and soon authored Mindstorms (after which Lego Mindstorms is named). Papert's basic premise was that children need objects to think with, and that the computer could be the ultimate instrument of learning: "the computer is the Proteus of machines. Its essence is its universality, its power to simulate." Although the premise guiding much modern research in the field has evolved to objects that think with children, the basic principles of current research in this field remain the same.

The largest research group at MIT's Media Lab remains focused on children, but this focus is by no means limited to Cambridge. At the Craft Technologies Group at the University of Colorado, Boulder, Mike Eisenberg and others are developing various low-cost options for integrating computation with educational activities. They have refined inexpensive techniques for visualizing math in three dimensions, created cellular-automaton construction kits, and even developed low-tech three-dimensional printers. These tools are hypothesized to exercise spatial thinking and therefore the parietal lobe, a brain region that is implicated in normal arithmetic ability and impaired in dyscalculics.

In contrast, much of the work done at University of Maryland's Human-Computer Interaction Lab centers around the use of language. In the "StoryRooms" project, Allison Druin and others have developed a system of wirelessly mesh-networked intelligent toys in which children can act out their own narratives using computational props. Another project, "PETS," allows child-friendly programming of various robots to enact stories written by children: think Teddy Ruxpin with WiFi and an API.

Nor is this field purely academic; several private companies have begun developing similar products. Anthrotronix, a Maryland-based company, has developed programmable robots for rehabilitation, as well as various systems designed to exercise attention, visual learning, and social interaction in both neuro-normals and autistics. Leapfrog has created a "pen-top" interactive game that has been shown to help special needs children. Kaplan has an immense variety of toys that purport to enhance problem solving. And Wonderbrains develops toys based on MacArthur "genius grant" recipient and Harvard psychologist Howard Gardner's theory of multiple intelligences.

Many of these toys, however, have yet to scientifically demonstrate any positive cognitive effects on children. Given how little is currently known about the mechanisms of brain development, the causes (and effects) of developmental delays, and the role of so many other aspects of popular culture in cognitive development, it's hard to know how these adaptive and intelligent toys may influence children.


Related posts:

Mind Games: Humans, Dolphins and Computers
Embryogenesis (and mechanisms of brain development)
Learning Like a Child (and possible effects of developmental delay)
Review: Everything Bad is Good For You (cognitive effects of popular culture)

1/13/2006

Molecular Basis of Memory

Harvard biologists have identified a protein, called Armitage, which regulates the formation of long-term memories. By manipulating this protein, the researchers were able to both enhance and impair long-term memory formation in fruit flies, as measured by a classical conditioning paradigm using smells and electric shock.

Armitage - and other proteins comprising the RISC pathway - is also present in both mice and humans. It appears to play a regulatory role in memory formation such that destruction of the molecule is necessary for additional protein synthesis at the synapse, which ultimately results in behavior consistent with long-term memory formation.

While it is possible that this could lead to drugs that would enhance human memory, there are several complicating factors. First, Armitage is localized to the synapse, and so no one knows what its effects might be in other regions of the body and brain.

Second, what are the natural conditions that trigger the destruction of Armitage so as to consolidate a given memory? In other words, what "chooses" to remember? The authors propose that "an integrated sensory trigger" induces Armitage destruction, and that it "is triggered with neuronal specificity in order to produce memory-specific patterns of protein synthesis," but these are just descriptions, not mechanistic explanations of how this process occurs. We may indeed increase memory by interfering with this pathway, yet possibly with disastrous effects on memory selectivity: what would it be like to experience life without forgetting anything?

Jorge Luis Borges wrote about such a character, named Funes, who "remembered the shapes of the clouds in the south at dawn on the 30th of April of 1882, and he could compare them in his recollection with the marbled grain in the design of a leather bound book which he had seen only once, and with the lines in the spray which an oar raised in the Rio Negro on the eve of battle of the Quebracho ... These recollections were not simple; each visual image was linked to muscular sensations, thermal sensations. ... He told me, 'I have more memories in myself than all men have had since the world was a world..."

A related third point is that the study was limited to the olfactory bulb. Given that protein expression is stimulus-specific, the pathways involved in the formation of other kinds of memories could be different. The "neuronal specificity" and "sensory triggers" that cause formation of long term memories from specific experiences could also be drastically different depending on the type of stimulus.

Nonetheless, this is an interesting advancement in memory proteomics precisely because it allows us to ask questions about how memory selectivity is accomplished. This question is also of increasing interest to cognitive neuroscientists, given that "selection efficiency" appears to be one of the most important factors in individual differences in working memory capacity.

Review: Everything Bad Is Good For You

Steven Johnson's newest book, "Everything Bad Is Good For You" makes the controversial claim that popular culture engages us in a kind of mental calisthenics, resulting in the drastic changes in IQ distribution seen in the last 50 years. He describes beneficial effects of changes in popular culture - changes that have often been decried as hallmarks of societal demise - and shows how these new forms of media exploit our natural reward circuitry. Echoing Marshall McLuhan, Johnson says it's not so much the content (or 'message') of cultural media like Grand Theft Auto and The Sopranos, but the multi-threaded, interactive style of delivery (the 'medium') that engages us in a cognitive workout, and ultimately results in the drastic IQ increases of post-World War II America.

Johnson begins his book with a vitriolic quote from George Will: "Ours is an age besotted with graphic entertainments. And in an increasingly infantilized society, whose moral philosophy is reducible to a celebration of 'choice,' adults are decreasingly distinguishable from children in their absorption in entertainments and the kinds of entertainments they are absorbed in - video games, computer games, hand-held games, movies on their computers and so on. This is progress: more sophisticated delivery of stupidity." This quote characterizes the dominant perspective on popular culture. But contrary to intuition, Johnson argues, today's most popular entertainment is enormously complex according to several different metrics, such as number of concurrent plot lines, the interdependence or 'nesting' of those plot lines, the Kolmogorov complexity of the networks relating the characters, and the kind of thinking required to make sense of all this complexity. And what's more, popular media has been trending towards increased complexity for the past half-century.

The economics driving these developments relate to a shift from "least objectionable" programming into "most repeatable" programming, rewarding those games/movies/narratives that embrace ambiguity, those that require the entertained to take a more active and exploratory role in comprehension, and those that reward the inquisitively entertained with yet more ambiguity to resolve upon the next viewing. This neuroeconomic "device" is perfectly designed to hijack the pleasure system by establishing an expectation of reward. It is precisely this type of cognition which has been shown to modulate dopamine levels in the nucleus accumbens, providing the fix craved by pack-a-day smokers, ice-cream fanatics, and gambling addicts alike.

And while the violence illustrated in games like Grand Theft Auto may seem to provide the cognitive nutrition equivalent to gambling, Johnson emphasizes (to use McLuhan's phrase) that the "medium is the message." It is not the content so much as the method of delivery that determines its most important effects: that of rewarding critical thinking and emphasizing interactivity, whether purely cognitive (as in complex narratives) or integrating motor skills as well (as in games). Whatever the detrimental effects of prime-time depravity might be, the positive effect of this new interactive media trend takes the form of "the Sleeper Curve": an average IQ gain of roughly three points per decade, sustained over the past century. To put this change in perspective, consider this: a person placing in the 90th percentile of IQ in 1920 would place in the bottom third of an IQ test normed in 2000.
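
One way to make the percentile comparison concrete is to assume a gain of roughly three points per decade and normally distributed scores with a mean of 100 and a standard deviation of 15; both are simplifications of the data Johnson cites, and the code below only illustrates the arithmetic.

```python
# Rough arithmetic behind the percentile comparison, under simplifying assumptions:
# ~3 IQ points gained per decade, scores normally distributed (mean 100, SD 15).
from statistics import NormalDist

iq_dist = NormalDist(mu=100, sigma=15)
gain = 3 * (2000 - 1920) / 10                      # about 24 points over eight decades

score_1920 = iq_dist.inv_cdf(0.90)                 # 90th percentile on 1920 norms (~119)
percentile_2000 = iq_dist.cdf(score_1920 - gain)   # the same raw ability on 2000 norms
print(f"Percentile in 2000: {percentile_2000:.0%}")  # roughly the bottom third
```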

"Everything Bad Is Good For You" is an incredibly provocative piece of cultural criticism, and while light on experimental evidence for causal relationships between IQ increases and changes in popular culture, it more than makes up for that shortcoming by illuminating ways in which this evidence might be attained. The book's best moments call to mind the optimism of the early 90s for engineering an interactive techno-topia, but these moments are thankfully tempered with a rigorously historical perspective and a firm grounding in relevant neuroscience. The book should be required reading for anyone with even a passing interest in communication theory, and is highly recommended for those with an interest in integrating neuroscientific principles with entertainment and education.

1/12/2006

Learning Like a Child

There's a fascinating post over at MindPixel about Elman's neural network modeling of cognitive development, in which he argues that maturational increases in working memory span may provide computational advantages not realized by neural networks that always had a mature-sized span. The evidence comes from computational models of language, in which Elman attempts to recreate the mechanism behind so-called "critical periods" (or "sensitive periods," as qualified by Steven Rose) using simple recurrent networks.

First, Elman begins with staged input: he increases the complexity of the training corpus in discrete steps. The corpus consists of roughly 25,000 sentences from an artificial generative grammar (including verbs, nouns, prepositions, relative clauses, and various rules of agreement between them). This grammar permits utterances of the type "boys who chase dogs see girls," "girl who boys who feed cats walk," "cats chase dogs," "mary feeds john," and "dogs see boys who cats who mary feeds chase." Though clearly not as complicated as real English, the grammar is constrained enough that a network performing at chance would produce ungrammatical predictions far more often than not.

Only the networks trained in a discretized, staged way - a five-stage process of gradual change in the training corpus from mostly simple utterances to mostly complex ones - were able to correctly predict novel grammatically-correct utterances. Remember that these networks are never told the grammatical rules, but learn them autonomously, just as toddlers do. As Elman puts it, "this is a pleasing result, because the behavior of the network partially resembles that of children. Children do not begin by mastering the adult language in all its complexity. Rather, they begin with the simplest of structures, and build incrementally until they achieve the adult language." Essentially, Elman gave his network carefully designed grammar lessons in some non-existent language.

Yet there is a problem with this training regime: real children are not exposed only to some clearly-defined, age-appropriate level of linguistic complexity, but instead are faced with a much more diverse mix of language. To address this problem of realism, Elman then built a network in which the training corpus remains constant, but memory increases through discrete stages, modeled as changes in the amount of recurrent feedback available to the network. Remarkably, this network performed much like the staged-input network, which had complete feedback available to it from the start. In other words, limiting the network's memory actually produced better learning than granting it full capacity from the outset!
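
For readers who want to see the idea in code, here is a minimal sketch (in Python/NumPy) of a simple recurrent network whose context "memory" is wiped more or less frequently across training stages. This is not Elman's actual simulation: the toy vocabulary, network sizes, reset schedule, and one-step gradient updates below are all illustrative assumptions.

    # Minimal sketch of "starting small" via limited recurrent memory.
    # Not Elman's original model: corpus, sizes, and schedule are toy assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    VOCAB = ["boys", "girls", "dogs", "cats", "chase", "see", "feed", "."]
    IDX = {w: i for i, w in enumerate(VOCAB)}

    def make_sentence():
        # Trivial subject-verb-object "grammar" standing in for Elman's corpus.
        words = [rng.choice(["boys", "girls"]),
                 rng.choice(["chase", "see", "feed"]),
                 rng.choice(["dogs", "cats"]),
                 "."]
        return [IDX[w] for w in words]

    V, H = len(VOCAB), 16
    Wxh = rng.normal(0, 0.1, (V, H))   # input -> hidden
    Whh = rng.normal(0, 0.1, (H, H))   # context -> hidden (the recurrent "memory")
    Why = rng.normal(0, 0.1, (H, V))   # hidden -> next-word prediction
    lr = 0.05

    def one_hot(i):
        v = np.zeros(V)
        v[i] = 1.0
        return v

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # "Starting small": the context layer is wiped every `window` tokens,
    # so the network's effective memory span grows across training stages.
    for stage, window in enumerate([1, 2, 3, 4, 1000], start=1):
        for _ in range(1000):
            sent = make_sentence()
            h = np.zeros(H)
            for t in range(len(sent) - 1):
                if t % window == 0:
                    h = np.zeros(H)                  # limited "working memory" span
                x = one_hot(sent[t])
                h_prev = h
                h = np.tanh(x @ Wxh + h_prev @ Whh)  # Elman-style context update
                p = softmax(h @ Why)                 # predicted next-word distribution
                dy = p - one_hot(sent[t + 1])        # cross-entropy gradient
                dh = (Why @ dy) * (1 - h ** 2)       # backprop into the hidden layer
                Why -= lr * np.outer(h, dy)
                Wxh -= lr * np.outer(x, dh)
                Whh -= lr * np.outer(h_prev, dh)
        print(f"finished stage {stage} (memory window = {window} tokens)")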

This work has fascinating implications for approaches to development, both natural and artificial: If capacity increases are gradual, as physiology would suggest, what causes the clearly-defined sensitive periods of development? What environmental or internal conditions have evolved to precipitate increases in memory capacity? Can we induce these changes artificially? What happens to learning when increases in corpus complexity are synchronized with increases in memory capacity, and what does this say about our institutionalized system of "staged input", a.k.a. public education?

And the strangest question of all: could some developmentally-delayed children actually be better off than their peers? Some anecdotal evidence might suggest so: Albert Einstein, after all, was four years old before he could speak and seven before he could read.

Embryogenesis

Several new advancements in our understanding of development were reported this week, including a startling revelation about the neural development of gender, the discovery of a signalling molecule that transforms midbrain to hindbrain (the region that becomes the cerebellum), and a glimpse into the cause of a common neural tube birth defect.

The "neural tube" is a stage in the development of the embryonic brain; the brain begins as a relatively undifferentiated sheet of cells on the dorsal surface of the embryo. Known as the neural plate, this structure is visible when the embryo is less than 1 cm long. The plate then folds in on itself to become the neural groove, which fills with cerebrospinal fluid and eventually closes to become the neural tube. This process completes within only 25 days of conception.

This structure will begin to swell at three distinct points along its length (one for forebrain, one for midbrain, and one for cerebellum & pons) and eventually will bend backwards to create a shape more characteristic of the adult brain. As each swelling differentiates into various brain regions, the newest cells (called neuroblasts) begin in the center of the neural tube and must migrate outwards: miraculously, the brain constructs itself inside-out. How each baby neuron finds its way from the center of the neural tube, crawling past all the other neurons, to its resting place in the outermost layer of the new brain is still a topic of intense debate.

One of the most common defects in this complex process occurs when new cells fail to begin at the midline of the neural tube. This causes 1 in every 20 spontaneous abortions, and may account for the neural tube defects seen in roughly 1 out of every 1,000 births. These "lost neuroblasts" may become misplaced because the polarity of the cell is lost during cell division; the new cells therefore lack the information to determine which way is up. A paper from Nature demonstrates how polarity is restored to new cells, based on an asymmetrical migration of proteins shortly after cell division.

Another new paper this week shows how subtypes of a signalling molecule known as FGF8 have differential effects on the neural tube, such that FGF8a causes midbrain to grow, while FGF8b can transform midbrain cells to hindbrain cells. Given that other FGF8 isoforms exist throughout the embryo, this has implications for our understanding of the mechanisms modulating embryogenesis.

Yet another paper demonstrated how estrogen interacts with a protein known as alpha-fetoprotein (AFP) to "feminize" or "masculinize" the developing brain. When female mice that were incapable of producing AFP were injected with extra estrogen and surrounded by sexually active males, they showed no interest in sex and furthermore would even try mounting other females! Female mice that were both AFP-deficient and denied estrogen in the womb, however, showed normal female behavior. These findings indicate that the presence of estrogen masculinizes the brain, and the presence of AFP counteracts that effect. The role of other sex hormone binding proteins may be similar.

1/11/2006

Emotional Robotics

It may sound like science fiction, but a team led by David Bell from Queen's University is using emotions to guide robotic behavior. Their robot responds to new objects with a cascade of feelings; initial fearfulness gives way to caution and inquisitiveness. After a certain amount of observation, the robot will decide whether those new objects should be avoided or approached, and whether they can be categorized as instances of old objects or as entirely new ones.

Antonio Damasio, among others, has long held that emotions are a critical part of intelligence. According to this view, emotions and feelings are the "immune system" of the brain; they interface our internal worlds with the external world, and guide us towards responding adaptively to it. Emotions motivate many of our responses to external stimuli; they are easily conditioned and thereby altered with experience; and the experiences that elicit strong emotions are likely to affect our behavior for quite some time.

But these researchers had a dual focus. Not only did they emphasize the role of emotions, but also the importance of childlike cognition - of which heightened emotions are certainly a part. As David Bell put it, "A system that can observe events in an unknown scenario, learn and participate as a child would is a major challenge in AI. We have not achieved this, but we think we've made a small advance." No tantrums, I hope.

For this small advance his team has won the British Computer Society's 2005 Machine Intelligence Prize. The robot, named "IFOMIND," is based on the Khepera platform. The BBC is also running a short article on the team.

This work builds on earlier advances in neural network and neurorobotic modeling of the mammalian dopamine system, also using the Khepera platform. Olaf Sporns and Will Alexander created a robot capable of navigating through an environment, avoiding obstacles, and gripping objects with a moveable arm. Sensory inputs included color vision and "taste," defined as the conductivity of the objects it encountered. The robot's rate-coded neural network used layers based on real mammalian neural structures, and was provided with artificial "dopamine" fluctuations depending on the rewards it encountered, such that objects with lower conductivity were more rewarding. After exploring its environment, the robot learned to stay within the areas of highest reward density. This behavior was never explicitly programmed, but developed autonomously as a result of the interaction between environment and (intelligent?) agent.
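
To make the learning principle concrete, here is a minimal sketch of reward-prediction-error ("dopamine"-like) learning in a toy grid world. It is not the Sporns and Alexander implementation: the environment, the conductivity-to-reward mapping, and the temporal-difference rule below are illustrative assumptions only.

    # Toy grid-world agent that learns a value map from a "dopamine"-like
    # prediction-error signal. All parameters here are made-up assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    SIZE = 8
    # "Taste" of each cell: lower conductivity is assumed to mean higher reward,
    # loosely mirroring the setup described above.
    conductivity = rng.random((SIZE, SIZE))
    reward = 1.0 - conductivity

    value = np.zeros((SIZE, SIZE))     # learned value of each location
    alpha, gamma, epsilon = 0.2, 0.9, 0.2
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    pos = (0, 0)
    for step in range(5000):
        # Candidate neighbouring cells that stay on the grid.
        neighbours = [(pos[0] + dr, pos[1] + dc) for dr, dc in moves
                      if 0 <= pos[0] + dr < SIZE and 0 <= pos[1] + dc < SIZE]
        # Mostly exploit the current value map, occasionally explore.
        if rng.random() < epsilon:
            nxt = neighbours[rng.integers(len(neighbours))]
        else:
            nxt = max(neighbours, key=lambda p: value[p])
        # Temporal-difference error: the stand-in for a phasic "dopamine" signal.
        delta = reward[nxt] + gamma * value[nxt] - value[pos]
        value[pos] += alpha * delta
        pos = nxt

    # After learning, high-value cells tend to be the high-reward (low-conductivity) ones.
    top = value > np.percentile(value, 90)
    print("mean reward over the top-valued cells:", reward[top].mean())
    print("mean reward over the whole grid:      ", reward.mean())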

One might claim that the relationship between these "artificial dopamine" fluctuations and real, human emotion is purely metaphorical, and that the analogy is really just an extreme case of anthropomorphism. As Dylan Evans points out in his article on emotional robotics, however, the same thing was once thought of animals: "Descartes, for example, claimed that animals did not really have feelings just like us because they were just complex machines, without a soul. When they screamed in apparent pain, they were just following the dictates of their inner mechanism. Now that we know the pain mechanism in humans is not much different from that of other animals, the Cartesian distinction between sentient humans and 'machine-like' animals does not make much sense." The same might now be said of the distinction between sentient humans and 'human-like' robots; are we making an artificial distinction between the emotional circuits of the human brain, and simpler versions of it in silico?

1/10/2006

From Inhibitory to Excitatory and Back Again

Here's something you don't see every day: one of the fundamental "textbook" claims of neuroscience appears to be false.

Most neuroscience texts will tell you that a neuron is either excitatory or inhibitory; that is, it will release either an inhibitory neurotransmitter (such as GABA) or an excitatory one (such as glutamate). Yet Erik Fransen has observed co-release of both excitatory and inhibitory neurotransmitters from the same synaptic terminal. He has also observed "backpropagating dendritic action potentials" which can actually cause neurotransmitters to flow backwards from dendrite to axon! This in turn causes "conditioning depression," in which the presynaptic response can drop to below 50% of its initial amplitude.

So, what happens for the really freakish neurons that undergo both co-release of transmitters AND show conditioning depression? From evidence with computational models, neurons with both features can actually switch between the release of GABA and glutamate. This reversal can even be stable over time - presumably until the next backpropagating dendritic potential.

How do these changes affect neurocomputation? It's difficult to say. The backpropagation algorithm has been a feature of artificial neural network models for quite some time, even though it has been criticized for a lack of biological plausibility. Co-release, on the other hand, is certainly not a standard feature of any artificial neural network; it was thought to be impossible. Given the emergent nature of neural processing, and the enormous number of neural interactions that could be affected, it's difficult to speculate on how these observations may affect neural computation.
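
Purely as an illustration of what sign-switching might look like in a highly simplified model, consider the following sketch of a rate-coded synapse that flips between excitatory and inhibitory states. The switching rule here is an assumption for demonstration only, not the mechanism reported by Fransen.

    # Illustrative sketch only: a rate-coded synapse whose effective sign can
    # switch (excitatory <-> inhibitory), loosely inspired by the co-release
    # findings above. The flip-on-event rule is an assumed toy mechanism.
    class SwitchableSynapse:
        def __init__(self, weight=0.5):
            self.weight = weight          # positive = "glutamatergic", negative = "GABAergic"

        def transmit(self, presynaptic_rate):
            # Contribution of this synapse to the postsynaptic unit's input.
            return self.weight * presynaptic_rate

        def backpropagating_event(self):
            # A backpropagating dendritic event flips the effective sign;
            # the new state is assumed stable until the next such event.
            self.weight = -self.weight

    syn = SwitchableSynapse(0.5)
    print(syn.transmit(1.0))   #  0.5  (excitatory drive)
    syn.backpropagating_event()
    print(syn.transmit(1.0))   # -0.5  (same synapse, now inhibitory)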

It is far easier to speculate on the implications for modeling. Obviously, biologically plausible models will need to be updated with this surprising new feature of neural networks. Secondly, the philosophy of "biological plausibility" may require reexamination. As a form of "Occam's razor," it can be useful in constraining neural network models, but it carries the risk that we deliberately simplify away features of real neural networks that turn out to matter.

Further, plausibility is a difficult thing to judge when we're still guessing about so much of neural computation! There may even be some validity to the idea that Occam's razor shouldn't apply to complex biological systems, given that adaptation generally proceeds by complicating, rather than simplifying, existing biological structures. And how can we even posit the "simplest" explanation for a phenomenon like intelligence or consciousness, when we're not sure of any mechanism that could cause it to occur in the first place?

1/09/2006

Evolution of the Brain

Evolution was certainly the most powerful force in the long-term development of the human brain, but how much can "evolutionary psychology" (henceforth evo-psych) really help us understand brain functioning? It is appealing because of its frequent support of common intuitions, its succinct and seductive logic, and an abundance of neatly-packaged explanations for any human behavior. But all too frequently, evo-psych is filled with unfalsifiable and non-mechanistic just-so stories based on misconceptions and statistical half-truths. Just as the "intelligent design" movement emerged as "creation science" failed, so too has evo-psych attempted to restate the failed claims of sociobiology.

As Steven Rose writes in "The Future of the Brain: The Promise and Perils of Tomorrow's Neuroscience," "... like their predecessor sociobiologists, a group of highly articulate overzealous theorists have hijacked the term evolutionary psychology and employed it to offer yet another reductionistic account in which presumed genetic and evolutionary explanations imperialise and attempt to replace all others. For evolutionary psychology, minds are thus merely surrogate mechanisms by which the naked replicators enhance their fitness. Brains and minds have evolved for a single purpose, sex ... and yet in practice evolutionary psychologists show as great a disdain for relating their theoretical constructures to real brains as did the now discredited behaviorist psychologists they so despise." As Rose eloquently points out, evo-psych is fundamentally non-mechanistic: it purports to explain the "why" but never the "how" of brain function.

Further, most of evo-psych's claims are built on hunter-gatherer societies, of which we have little direct knowledge. This theoretical basis is problematic, given that as many as 11,000 generations could have elapsed between now and the Pleistocene era, commonly seen as the last great period of human evolutionary change. To further complicate matters, recent reprints of classic evo-psych papers that described the behavior of tribal cultures show that many of these "adaptive" or "universal" behaviors aren't even stable over a 20-30 year period. This incredible plasticity of human behavior, preferences, and tendencies causes serious problems for any attempt at evolutionary logic.

Similarly, David Buller has debunked the common explanation of age asymmetries in sexual relationships, which is that men have evolved to prefer women in their peak reproductive years, while women evolved to prefer high-status men. However, the data supporting this claim are ambiguous: other factors, such as appearance and closeness in age, explain as much or more of the variance in mate selection as reproductive potential does. Buller also shows, in contrast to popular evo-psych explanations of child rearing, that households with no genetic parents have the lowest incidence of abuse.

As noted in David Buller's "Evolutionary psychology: the emperor's new paradigm," even Buss's classic work on gender differences in sexual jealousy (women reportedly find emotional infidelity more distressing than sexual infidelity, while men show the reverse pattern) is based on altogether questionable data. For one, the data do not actually show that males care more about sexual infidelity than they do about emotional infidelity - in fact, over half chose the opposite response. Only a very narrow interpretation of the results supports Buss's original claims.

In summary, evo-psych has made minimal positive contributions to understanding the brain, and often buttresses claims that are unsubstantiated by experiment or real data. In other cases, the claims are simply unfalsifiable. However, given the intuitive appeal of evo-psych's logic and its widely-reported "findings," it's unlikely to disappear any time soon.

1/06/2006

Thinking about "Thinking Harder"

At its best, cognitive psychology can seem like magic. We can use techniques like pulsation threshold masking, Stroop tasks, or phonological suppression to infer the kinds of hidden mental processes that modulate behavior. Here's one of the least used instruments in the cognitive psychologist's toolbox: pupillometrics.

Pupil dilation is consistently sensitive to mental effort (a.k.a. "capacity utilization," some function of both absolute task demand and individual differences in ability) in mental arithmetic, sentence comprehension, letter matching, Stroop, human-computer interaction (drag-and-drop, search), problem solving, imagery, rehearsal and retrieval from STM, and delayed tone discrimination tasks. Confounding variables include ambient lighting, baseline pupil diameter, spontaneous variations in pupil diameter (which seem to be suppressed during cognitive load), and direction of eye gaze, all of which can be controlled with various procedures. Dilation occurs within about 1200 ms of the onset of cognitive demand, and constriction occurs with similar speed, though both depend on ambient lighting level. The absolute magnitude of diameter change differs between individuals, but the trends are consistent as typically measured (average pupil diameter, or percent change in pupil diameter from baseline, at various sample rates). The mechanism relating cognitive load to pupil diameter is unknown, but it is hypothesized to involve cortical modulation of the reticular formation, which is itself thought to modulate the pupillary control system.
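
As a concrete example of one of the measures mentioned above - percent change in pupil diameter from a pre-stimulus baseline, averaged over trials - here is a minimal sketch in Python. The sampling rate, window lengths, and synthetic data below are assumptions for illustration only.

    # Minimal sketch of a common pupillometric measure: trial-averaged percent
    # change in pupil diameter from a pre-stimulus baseline. Synthetic data.
    import numpy as np

    def percent_dilation(trials, baseline_samples=30):
        """trials: (n_trials, n_samples) pupil diameters, with stimulus onset
        at index `baseline_samples`. Returns the trial-averaged % change trace."""
        baseline = trials[:, :baseline_samples].mean(axis=1, keepdims=True)
        pct = 100.0 * (trials - baseline) / baseline
        return pct.mean(axis=0)

    # Synthetic example: 20 trials sampled at 60 Hz, 0.5 s baseline + 2 s task.
    rng = np.random.default_rng(2)
    n_trials, fs = 20, 60
    t = np.arange(int(2.5 * fs)) / fs
    signal = 4.0 + 0.2 * np.clip(t - 0.5, 0, None)     # slow task-evoked dilation (mm)
    trials = signal + 0.05 * rng.standard_normal((n_trials, t.size))
    trace = percent_dilation(trials, baseline_samples=int(0.5 * fs))
    print("peak dilation: %.1f%% above baseline" % trace.max())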

Pupil dilation can measure what it means to "think harder" because it is a function of capacity utilization; it reflects not only task demands, and not only individual ability, but some combination of the two. In fact, one can even track moment-by-moment cognitive effort in digit span tasks: progressive pupil dilation occurs as each digit is presented, and progressive constriction occurs as each digit is recalled. Maximal pupil dilation corresponds both with the reported time period of maximal effort, and with cognitive components of task structure as determined through GOMS analysis. For all the non-believers, you can even see practice effects, such that tasks will show less absolute dilation after extended practice, despite "task demand" remaining ostensibly constant.

Pupil dilation is certainly a valuable addition to other neuroindices (EEG, fMRI) and appears to be relatively stable across development. Pupillary responses can even be seen in infants less than 4 months old, in response to various social stimuli (such as pictures of their mothers).

1/05/2006

Review: The Future of the Brain

Neuroscientist Steven Rose goes to great lengths to correct common misperceptions about the explanatory potential of current genetics, evolutionary psychology, and molecular neuroscience. Ultimately, only the last two chapters cover the "future" of the neurosciences, delving into topics like transcranial magnetic stimulation, pharmacological cognitive enhancement, and neuroethics. But before telling us where we're headed, Rose spends 10 chapters telling us where we've been: in terms of cognitive change across the lifespan, the cascading processes of synaptogenesis and apoptosis seen in utero and in early childhood, and the changes in brains both across species and across evolutionary time. If "The Future of the Brain" could be said to have a central principle, it's that "the past is the key to the present," and it is here that Rose's talents as a writer truly shine: he integrates the histories of neurons, individuals, psychopharmacology, sociobiology, cognitive psychology and genetics into a coherent narrative, with both appropriate subtlety and engaging clarity.

Rose begins with theories of the origins of life, proto-cells, and nucleic acids. He uses this broad introduction to debunk the simplifications we often make without hesitation: thinking of humankind as the highest on some evolutionary scale of nature; considering organisms to be passive players in evolution; believing that evolution strives for increased complexity as time continues. As he writes, "all living forms on earth ... are more or less equally fit for the environment and life style they have chosen. I use the word chosen deliberately, for organisms are not merely the passive products of selection; in a very real sense they create their own environments ... The grand metaphor of natural selection suffers from its implication that organisms are passive, blown hither and thither by environmental change as opposed to being active players in their own destiny." In this way, Rose complicates the popular notion of causality frequently seen in news articles, where researchers claim to have discovered a gene "for" this or that; to Rose, every result has multiple causes, both genetic and environmental.

After reviewing how neural nets may have initially developed in the first multicellular animals (Coelenterates), Rose describes the development of the mammalian cortex during gestation as autopoiesis, the process of continual self-creation. The reader is whisked from fertilisation to the embryonic formation of the neural groove, to the birth of neurons and glia in the neural tube, to the migration of neurons as they follow concentration gradients of neural growth factors. We then follow changes in brain structure seen in hominins, then hominids, and finally Homo sapiens.

The later chapters document the development of psychopharmacology and the rise of Big Pharma, from aspirin to valium and now Ritalin and Strattera. Rose winds up with fascinating predictions about the future of neurotechnology, all of them well-tempered by a thorough understanding of our past.

Rose's book is quite simply the best popular neuroscience writing I have read. It is hard to imagine another writer who could so seamlessly weave together the fields of genetics, cognitive science, neurophysiology, and pharmacology into such an entertaining yet informative book. Highly recommended...

1/04/2006

Complexity and Biologically Accurate Neural Modeling

While biological plausibility is an important characteristic of cognitive models intended to simulate human thought, simplifying assumptions must be made. Biological accuracy, as distinct from plausibility, is simply unattainable (though I wish Blue Brain the best of luck).

By most estimates, there are 100 billion neurons in the brain. Some neurons are known to have more than 1,000 dendrites, and up to about 1,000 different branchings of their axons. There are some 50 known neurotransmitters, and who knows how many other neuromodulators may exist (hormones, neural growth factors, neurosteroids). There are also many different receptor types for each of the neurotransmitters. A conservative estimate of the number of interactions you'd have to model to be biologically accurate is somewhere around 225,000,000,000,000,000 (225 million billion).
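
As a back-of-envelope illustration of how estimates of this magnitude arise, consider the product of a few plausible counts. The parameter values below are assumptions chosen for illustration, not necessarily the figures behind the number quoted above; depending on the counts assumed, such products land in the tens to hundreds of millions of billions.

    # Back-of-envelope sketch of how estimates of this order of magnitude arise.
    # All counts below are illustrative assumptions, not established figures.
    neurons = 1e11               # ~100 billion neurons
    inputs_per_neuron = 1e3      # ~1,000 synaptic inputs per neuron (conservative)
    transmitters = 50            # roughly 50 known neurotransmitters
    receptor_types = 10          # assumed receptor subtypes to track per transmitter

    interactions = neurons * inputs_per_neuron * transmitters * receptor_types
    print(f"~{interactions:.2g} interactions to model")   # ~5e16: tens of millions of billions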

This isn't even counting the fact that some synapses form on dendritic spines, while others form on the shaft of dendrites. Also, synapses may not only make contact with dendrites and cell bodies, but in some cases also connect with other synapses. It is also difficult to quantify the influence of dendritic geometry, in which those synapses further from the axon hillock will transmit signals more weakly than closer synapses, as well as undergo conduction delays. As if this didn't complicate the picture enough, dendritic geometry changes over time, such that substantial restructuring occurs within minutes. And then there's the "astrocyte hypothesis," which suggests that the 1 trillion glial cells may also be involved in computation, though there's little proof they are anything but support cells.

Clearly, simplifying assumptions need to be made in cognitive simulations. Accordingly, different simulation environments have focused on different aspects of the complex geometric, metabolic and electro-chemical features of biological neural networks.

We may hope that only a few of these features are essential for simulating intelligence, but if complexity science has taught us anything, it's that even small changes can have enormous effects on sufficiently complex systems. How then do we determine which factors to include and which to "simplify out" through shortcuts like rate-coding, or the use of "point-neuron" models?

1/03/2006

Synchrony vs. Polychrony

What is the neural code? Some claim information is encoded in the firing rate of neurons (often simulated via rate-coded neural networks), while others point to the precise timing of individual spikes - the "inter-spike intervals" - which can only be captured in pulsed (spiking) neural networks. Yet others maintain that it's some combination of the two.

One well-known possibility, popularized by authors like Steven Strogatz, is that information is encoded through synchrony of firing. Self-synchronization is a pervasive property of natural systems (from pacemaker cells to crickets to fireflies) and could be useful computationally. Synchronous firing has been implicated in visual selection, attention, and prediction. Others have gone a step further and concluded that synchrony accomplishes binding. Only a pulsed network could include information carried by synchrony, since phase information is lost in rate-coded networks.

Unfortunately, using synchrony as a computational mechanism has several problems, as pointed out by O'Reilly, Busby & Soto (2001). For one, synchrony is a transient phenomenon and yet binding is persistent. Further, as the authors put it, "The problem is that if one is truly binding the features of multiple objects at the same time, but out of phase with each other, a downstream neuron will receive synchronous inputs from the features associated with both objects! How can it decode which object to respond to, when it will be strongly driven by the synchrony associated with both sets of features?" Finally, synchrony requires that neurons produce reliable firing rates both across time and across various contexts, yet this contradicts evidence from studies with alcohol, aging, and ERP.

Yet another possibility remains, however, as proposed by Izhikevich in a new paper in Neural Computation and supported by another in-press paper on theta-phase locking in the hippocampus. Izhikevich terms this "polychronization": the generation of reproducible, time-locked but not synchronous spiking patterns. Polychrony may inherit many of synchrony's problems, but it has distinct computational advantages and is more in line with neurophysiology (such as conduction delays) than synchrony. Finally, because it is the phase transition from asynchronous to syn- or polychronous behavior (and back again) which allegedly accomplishes cognitive functions like attention, both rate-coded and pulsed network simulations carry sufficient information for accurate modeling.
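
As a toy illustration of the distinction, consider a single downstream "detector" neuron that fires only when spikes from several upstream neurons arrive together, given different axonal conduction delays. The delays and coincidence window below are made-up values, not Izhikevich's model, but they show why a time-locked (polychronous) firing pattern can drive a detector that perfectly synchronous firing cannot.

    # Toy synchrony vs. polychrony demo: three source neurons project to one
    # detector through different (assumed) conduction delays; the detector
    # fires only when the spikes *arrive* within a narrow coincidence window.
    delays = {"A": 5, "B": 3, "C": 1}    # conduction delays in ms (assumed)

    def detector_fires(spike_times, window=1):
        """spike_times: dict neuron -> spike emission time (ms). The detector
        fires if all arrival times (emission + delay) fall within `window` ms."""
        arrivals = [t + delays[n] for n, t in spike_times.items()]
        return (max(arrivals) - min(arrivals)) <= window

    # Synchronous firing (all at t=0) does NOT drive the detector:
    # arrivals are spread out at 5, 3, and 1 ms.
    print(detector_fires({"A": 0, "B": 0, "C": 0}))   # False

    # A polychronous pattern - time-locked but not synchronous - does drive it:
    # spikes at 0, 2, and 4 ms all arrive together at 5 ms.
    print(detector_fires({"A": 0, "B": 2, "C": 4}))   # True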