Complexity and Biologically Accurate Neural Modeling

While biological plausibility is an important characteristic of cognitive models intended to simulate human thought, some simplifying assumptions must always be made. Full biological accuracy, as distinct from plausibility, is simply unattainable (though I wish Blue Brain the best of luck).

By most estimates, there are 100 billion neurons in the brain. Some neurons are known to have more than 1,000 dendrites, and up to about 1,000 different branchings of their axons. There are some 50 known neurotransmitters, and who knows how many other neuromodulators may exist (hormones, neural growth factors, neurosteroids). There are also many different receptor types for each of the neurotransmitters. A conservative estimate of the number of interactions you'd have to model to be biologically accurate is somewhere around 225,000,000,000,000,000 (225 million billion).
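As a sanity check, here's one way that back-of-envelope arithmetic could shake out. The neuron, dendrite, and transmitter counts are the ones quoted above; the ~45 receptor types per transmitter is an illustrative assumption I'm adding to make the product come out to the quoted figure, not a number from the literature:

```python
# Back-of-envelope reconstruction of the "225 million billion" figure.
neurons = 100e9        # ~100 billion neurons (quoted above)
dendrites = 1_000      # up to ~1,000 dendrites per neuron (quoted above)
transmitters = 50      # ~50 known neurotransmitters (quoted above)
receptor_types = 45    # ASSUMED average receptor types per transmitter

interactions = neurons * dendrites * transmitters * receptor_types
print(f"{interactions:.3e}")  # prints 2.250e+17, i.e. 225 million billion
```

The point isn't the exact factorization; it's that multiplying even a handful of these per-neuron quantities blows past anything tractable.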

This isn't even counting the fact that some synapses form on dendritic spines, while others form on the shafts of dendrites. Nor do synapses make contact only with dendrites and cell bodies; in some cases they connect with other synapses. It is also difficult to quantify the influence of dendritic geometry, whereby synapses farther from the axon hillock transmit signals more weakly than closer ones and are subject to longer conduction delays. As if this didn't complicate the picture enough, dendritic geometry changes over time, with substantial restructuring occurring within minutes. And then there's the "astrocyte hypothesis," which suggests that the 1 trillion glial cells may also be involved in computation, though there's little proof that they are anything but support cells.
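To get a feel for the geometry problem: passive cable theory says a steady-state signal attenuates roughly as exp(-x/λ) with distance x from the synapse, where λ is the membrane length constant. A toy sketch, assuming an illustrative λ of 0.5 mm (real dendrites vary widely, and active conductances complicate the picture further):

```python
import math

LAMBDA_MM = 0.5  # assumed membrane length constant, in millimeters

def steady_state_attenuation(distance_mm, v0=1.0):
    """Passive-cable steady-state attenuation: V(x) = V0 * exp(-x / lambda)."""
    return v0 * math.exp(-distance_mm / LAMBDA_MM)

for d in (0.1, 0.5, 1.0):
    frac = steady_state_attenuation(d)
    print(f"synapse {d} mm from the hillock: {frac:.2f} of original amplitude")
```

Even this ignores conduction delay and the minute-by-minute restructuring mentioned above; it only illustrates why synapse location can't simply be ignored.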

Clearly, simplifying assumptions need to be made in cognitive simulations. Accordingly, different simulation environments have focused on different aspects of the complex geometric, metabolic and electro-chemical features of biological neural networks.

We may hope that only a few of these features are essential for simulating intelligence, but if complexity science has taught us anything, it's that even small changes can have enormous effects on sufficiently complex systems. How then do we determine which factors to include and which to "simplify out" through shortcuts like rate-coding, or the use of "point-neuron" models?
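For concreteness, here's what the "point-neuron" shortcut looks like in practice: a leaky integrate-and-fire model collapses everything above, i.e., dendritic geometry, receptor diversity, glia, into a single membrane voltage. All parameter values below are illustrative defaults, not fitted to any real neuron:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Return spike times (ms) for a point-neuron leaky integrate-and-fire model.

    input_current: sequence of input values, one per dt-millisecond step.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of the membrane equation: dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:           # threshold crossing: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

spikes = simulate_lif([20.0] * 200)  # 200 ms of constant drive
print(len(spikes), "spike times:", spikes)
```

Everything this model throws away, including where on the dendrite each input arrives, which transmitter carried it, and how the tree rewires itself, is exactly what's at stake in the question above.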

