Biological Bases of Mind and Behavior
In the 19th century, William James (1842-1910) defined psychology as the science of mental life, attempting to understand the cognitive, emotional, and motivational processes that underlie human experience, thought, and action. But even in the earliest days of scientific psychology, there was a lively debate about how to go about studying the mind. Accordingly, there arose the first of several "schools" of psychology: structuralism and functionalism.
Structuralism and Functionalism
One point of view, known as structuralism, tried to study the mind in the abstract. Structuralists argued that complex mental states were assembled from elementary sensations, images, and feelings. They used a technique known as introspection (literally, "looking into" one's own mind) to determine, for example, the elements of the experience of a particular color or taste. The major proponents of structuralism were:
- Wilhelm Wundt (1832-1920), a German physician and physiologist who became interested in philosophical and psychological problems. Wundt wrote a groundbreaking textbook, Principles of Physiological Psychology (1873-1874), and established the first psychology laboratory in 1875, at the University of Leipzig (the first American laboratory was established by G. Stanley Hall, a student of James's, at Johns Hopkins University in 1883). For that reason, he is often called the founder of scientific psychology (see A.L. Blumenthal, "A Re-Appraisal of Wilhelm Wundt", American Psychologist, 1975). Despite his training in medicine and physiology, Wundt believed that psychology and physiology were complementary disciplines. He thought there was a clear and obvious connection between them -- for example, sensation is mediated by the sense organs -- but the former was not to be reduced to the latter.
- Edward B. Titchener (1867-1927), one of several American students of Wundt. With Oswald Külpe, another student of Wundt's, Titchener defined psychology as the study of the facts of experience as dependent on the experiencing individual -- in contrast to physics, which in their view studied the same facts independent of those who experience them.
- Edwin G. Boring, Titchener's student at Cornell, built his career at Harvard as a historian of pre-scientific "philosophical" and early scientific psychology. A beloved teacher of the introductory psychology course, Boring also hosted "Psychology 1 with E.G. Boring" on National Educational Television, one of the first educational television courses offered on the predecessor to our modern Public Broadcasting System.
Structuralism was beset by a number of problems, especially the unreliability and other limitations of introspection, but the structuralists did contribute to our understanding of the basic qualities of sensory experience (see Boring's The Physical Basis of Consciousness, 1927).
Another point of view, known as functionalism, was skeptical of the structuralist claim that we can understand mind in the abstract. Based on Charles Darwin's (1809-1882) theory of evolution, which argued that biological forms are adapted to their use, the functionalists focused instead on what the mind does, and how it works. While the structuralists emphasized the analysis of complex mental contents into their constituent elements, the functionalists were more interested in mental operations and their behavioral consequences. Prominent functionalists were:
- William James, the most important American philosopher of the 19th century, who taught the first course on psychology at Harvard. James's seminal textbook, Principles of Psychology (1890), is still widely and profitably read by new generations of psychologists. True to his philosophical position of pragmatism, James placed great emphasis on mind in action, as exemplified by habits and adaptive behavior.
- John Dewey (1859-1952), now best remembered for his theories of "progressive" education, who founded the famous Laboratory School at the University of Chicago.
- James Rowland Angell (1869-1949), who was both Dewey's student (at Michigan) and James's student at Harvard, and who rejoined Dewey after the latter moved to the University of Chicago; later Angell was president of Yale University, where he established the Institute of Human Relations, a pioneering center for the interdisciplinary study of human behavior. In contrast to Titchener, who wanted to keep psychology a "pure" science, Angell argued that basic and applied research should go forward together.
Psychological functionalism is often called "Chicago functionalism", because its intellectual base was at the University of Chicago, where both Dewey and Angell were on the faculty (functionalism also prevailed at Columbia University). It is to be distinguished from the functionalist theories of mind associated with some modern approaches to artificial intelligence (e.g., the work of Daniel Dennett, a philosopher at Tufts University), which describe mental processes in terms of the logical and computational functions that relate sensory inputs to behavioral outputs.
The functionalist point of view can be summarized as follows:
- Adaptive value of mind. Functionalists assume that the mind evolved to serve a biological purpose -- specifically, to aid the organism's adaptation to its environment. Thus, functionalists are interested in what James called (in the Principles) "the relationship of mind to other things" -- how the mind represents the objects and events in the environment. Functionalism also laid the basis for the application of psychological knowledge to the promotion of human welfare.
- Mind in context. From a functionalist point of view, the mind essentially mediates between the environment and the organism. Therefore, the functionalists were concerned with the relations between internal mental states and processes and the states and processes in the internal physical environment (i.e., the organism) on the one hand, and the external social environment (i.e., the real world) on the other.
- Operations over content. Whereas structuralism attempted to analyze the contents of the mind into their elementary constituents, functionalism attempted to understand mental operations -- that is, how the mind works. It's this sense of functions as operations that gives functionalism its name.
- Individual differences. For Wundt and other structuralists, it didn't matter who the observer was: so long as observers were properly trained, they were interchangeable. But the functionalists, with their roots in Darwin's theory of natural selection, were interested in variation.
- Mind and body. Because the mind is what the brain does, functionalists assumed that understanding the nervous system, and related bodily systems, would be helpful in understanding the workings of the mind.
In these lectures, we pick up this last point, by examining the biological foundations of our mental lives in the brain and the rest of the nervous system.
For an interactive tour of the brain and behavior, see "The Brain from Top to Bottom", a website developed by Bruno Dubuc of McGill University and the Canadian Institute of Neurosciences, Mental Health, and Addiction.
The "Neuron Doctrine"
Well into the 17th century, following Descartes' arguments for dualism, the mind (or the soul) was usually considered to be composed of an immaterial substance. However, this view began to change with the explosion of knowledge of anatomy and physiology that occurred, largely in England, in the latter decades of that century. Among the principal players in this revolution were the "virtuosi" organized by the physician Thomas Willis (1621-1675) into the Oxford Experimental Philosophy Club (Willis later became Professor of Natural Philosophy at Oxford). Among the members of Willis' "club" were William Harvey (1578-1657), who showed how the heart functioned to circulate blood through the body; Robert Hooke (1635-1703), who identified the cell as the smallest unit in living matter; Robert Boyle (1627-1691), the father of modern chemistry; and Christopher Wren (1632-1723), the architect who rebuilt London's churches after the Great Fire of 1666 (he also illustrated many of the virtuosi's books).
For his part, Willis described the network of blood vessels covering the brain, proposed that various parts of the brain were specialized for particular functions, and concluded that the ventricles of the brain were not functional. Most importantly, he developed the neuron doctrine -- the view that the network of nerves connecting various parts of the brain with each other and with other tissues and organs permitted the brain to control the body (knowing nothing about electricity, Willis thought that this communication was achieved by the transmission of "animal spirits"). Still under the influence of Cartesian dualism, Willis believed that the mind and the brain were separate, though he asserted that the immaterial, rational mind was housed in a material brain that performed most of its functions by mechanical principles.
Willis's story is detailed in Soul Made Flesh: The Discovery of the Brain -- and How It Changed the World by Carl Zimmer (2004).
Organization of the Nervous System
- At the lowest level, cells are the smallest units that can function independently.
- Tissues are groups of cells of a similar type.
- Organs are groups of tissues that perform a particular function.
- Systems are groups of organs that perform related functions.
- At the highest level, there is the organism -- a living individual composed of many separate, but mutually interacting systems.
The nervous system is one such system, allowing various organs and tissues to communicate with, and influence, each other. This communication is accomplished by means of electrical discharges. The other systems of the body include:
- Endocrine system, consisting of the pituitary, adrenal, and other glands, plus the ovaries and testes.
- Integumentary system, consisting of the skin, hair, nails, etc.
- Skeletal system, consisting of the bones and joints.
- Muscular system, consisting of the muscles and tendons.
- Cardiovascular system, consisting of the heart and blood vessels.
- Lymphatic system, consisting of the lymph nodes, thymus, spleen, and tonsils.
- Respiratory system, consisting of the nose, pharynx and larynx, and lungs.
- Digestive system, consisting of the mouth, stomach, intestines, and anal canal, as well as the teeth, tongue, liver, gallbladder, and pancreas.
- Urinary system, consisting of the kidneys, urinary bladder, and urethra.
- Reproductive system, consisting of the testes and ovaries, internal and external reproductive apparatus, and the mammary glands.
Whereas the traditional systems of the body all consist of tissues and organs, the immune system, consisting of Toll-like receptors, B cells, and T cells, is found at the cell level.
Examples in the Nervous System
- Systems: the nervous system(s), consisting of the nerves, spinal cord, and brain.
- Organism: the living, thinking, behaving person, composed of several overlapping and interconnecting systems, including the nervous system.
This hierarchical organization of biological structures continues above the level of the individual organism:
- species, a class of individuals having common attributes (like our species, Homo sapiens);
- genus, a group of related species (H. sapiens is part of the genus Homo, along with H. erectus, H. habilis, and H. sapiens neanderthalensis);
- family, a group of related genera (Homo is part of the family Hominidae, along with other early hominids, such as Australopithecus);
- order, a group of related families (Hominidae are part of the order Primates, along with chimpanzees, rhesus monkeys, and other apes and monkeys);
- class, a group of related orders (Primates are part of the class Mammalia, along with cats and dogs);
- phylum (or division), a group of related classes (Mammalia are part of the phylum Chordata, along with all other animals that have a spinal cord, like lizards and birds); and
- kingdom, a group of related phyla (Chordata are part of the "animal" kingdom Animalia, as distinct from the members of the "vegetable" kingdom Plantae).
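The nesting of these ranks can be made concrete with a small Python sketch; this is purely illustrative, and includes only the taxa named above, ordered from broadest to narrowest.

```python
# The human lineage as an ordered list of (rank, taxon) pairs, from the
# broadest grouping to the narrowest.
human_lineage = [
    ("kingdom", "Animalia"),
    ("phylum",  "Chordata"),
    ("class",   "Mammalia"),
    ("order",   "Primates"),
    ("family",  "Hominidae"),
    ("genus",   "Homo"),
    ("species", "Homo sapiens"),
]

# Each rank is contained in the one above it, so walking the list prints
# ever-narrower groups that all include our species.
for rank, taxon in human_lineage:
    print(f"{rank}: {taxon}")
```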
The Endocrine and Immune Systems
The nervous system is not the only means of communication within the body. There is also the endocrine system, whose glands include the:
- pituitary gland
- adrenal glands
- ovaries (in females) and testes (in males).
Whereas the nervous system allows various organs to communicate with each other by means of electrical signals, with chemical neurotransmitters carrying those signals across synapses, the endocrine system permits communication by means of chemical substances called hormones, secreted by the endocrine glands and carried through the body by the bloodstream to various organs. Communication via the nervous system is relatively quick, while communication via the endocrine system is relatively slow.
One behaviorally important hormone is oxytocin, sometimes called the "love hormone" because of the role it plays in promoting social bonding. In a classic series of studies, Thomas Insel and his colleagues (he is now the Director of the National Institute of Mental Health) discovered the role of oxytocin in prairie voles -- which, unlike most mammals, form monogamous bonds that last long after mating, so that the couples continue to cohabit and the males assist in raising the young. This bonding is promoted by two hormones released during mating itself: oxytocin in females and vasopressin, which promotes territoriality, in males. Later studies showed that, in addition to their direct effects on behavior, there are receptors for both oxytocin and vasopressin in those areas of the prairie voles' brains that serve as "reward" centers. Stimulation of these centers, in turn, can lead to addiction. And better yet, activation of the reward centers becomes conditioned to the mere presence of the partner. So, the mating pairs quite literally become addicted to each other.
- Oxytocin and vasopressin bind to receptors in the reward system in prairie voles, but not in other vole species -- which are not monogamous.
Pair-bonding affects social behavior, what you might call couplehood, but that doesn't mean that male and female prairie voles aren't capable of mating outside the pair bond. They can and they do, with the result that males can father young with other females, and can also end up raising young that are not their own. The differences have to do with the details of their genetic endowments.
- Not all male prairie voles pair-bond in this manner. But most of them do, and the differences between them and other vole species are particularly striking.
- Parental behaviors such as grooming stimulate oxytocin production in young prairie voles, such that infants that are denied these social interactions have difficulty bonding with mates as adults.
- Somewhat inevitably, perhaps, these results have been generalized to humans. There's a self-help book entitled Make Love Like a Prairie Vole: Six Steps to Passionate, Plentiful, and Monogamous Sex, as well as a sort of aphrodisiac spray called "Liquid Trust", which contains synthetic oxytocin. Synthetic oxytocin has also been used experimentally as a treatment for people with autism, in an attempt to stimulate social behavior. But, as Insel himself has noted, "You have to be very careful and not assume that we are very, very large prairie voles" (quoted in "What Can Rodents Tell Us About Why Humans Love?" by Abigail Tucker, Smithsonian Magazine, 02/2014, from which some of this material, and the cute picture of a prairie vole, is drawn). Human pair-bonding is likely more complicated, and more influenced by social factors, than a simple matter of a couple of hormones.
- Oxytocin may be the "love hormone", but it isn't just a "love hormone". It also performs a number of other functions, such as the maintenance and repair of muscle tissue. (There's a lesson here about specialization: the "specialty" of some gene, or neurotransmitter, or hormone may depend a lot on the location on which it's acting.)
A system of internal organs known as the hypothalamic-pituitary-gonadal axis, or HPG axis, is important for the control of reproductive behaviors. Each of the organs named secretes a different hormone, but the three of them work in an integrated fashion, so it makes sense to think of them as constituting a system.
- A portion of the hypothalamus releases gonadotropin-releasing hormone (GnRH).
- GnRH, in turn, stimulates the pituitary gland to release luteinizing hormone (LH) and follicle-stimulating hormone (FSH).
- LH and FSH, in their turn, stimulate the gonads to release the female and male sex hormones, estrogen and testosterone.
- In particular, testosterone levels regulate levels of dominance and aggression, and thus status-seeking or status-maintaining behaviors, especially in males -- which is why you see so many advertisements for drug treatments for "Low-T" syndrome in men.
Another system beginning with the hypothalamus, the hypothalamic-pituitary-adrenal (HPA) axis, is important for the stress response -- discussed later in these lectures, when I take up the autonomic nervous system. Again, each of the organs releases a different hormone, and the three of them work in an integrated fashion.
- During stress, a different portion of the hypothalamus secretes vasopressin and corticotropin-releasing hormone (CRH).
- Vasopressin and CRH, in turn, stimulate the secretion of adrenocorticotropic hormone (ACTH) by the pituitary gland.
- ACTH then stimulates the adrenal cortex to produce glucocorticoid hormones such as cortisol.
- These glucocorticoids in turn act back on the hypothalamus and pituitary to suppress the production of CRH and ACTH, thus creating a cycle of negative feedback which modulates the stress response.
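The negative-feedback cycle described in these steps can be sketched as a toy simulation. This is a minimal illustration of the feedback logic only; the update rules and rate constants are invented, not physiological values.

```python
# Toy discrete-time sketch of the HPA axis: CRH drives ACTH, ACTH drives
# cortisol, and cortisol feeds back to suppress CRH and ACTH production.
# All equations and constants are illustrative inventions.

def hpa_step(crh, acth, cortisol, stress):
    """Advance the loop one time step."""
    feedback = 1.0 / (1.0 + cortisol)      # more cortisol -> less drive
    new_crh = 0.5 * crh + stress * feedback
    new_acth = 0.5 * acth + new_crh * feedback
    new_cortisol = 0.5 * cortisol + new_acth
    return new_crh, new_acth, new_cortisol

def run(stress, steps=50):
    """Return the (approximate) steady-state cortisol level."""
    crh = acth = cortisol = 0.0
    for _ in range(steps):
        crh, acth, cortisol = hpa_step(crh, acth, cortisol, stress)
    return cortisol

# Negative feedback keeps the response bounded: doubling the stressor
# less than doubles the steady-state cortisol level.
print(run(stress=1.0), run(stress=2.0))
```

Removing the feedback term (setting `feedback = 1.0`) makes cortisol scale linearly with the stressor, which is exactly what the glucocorticoid feedback on the hypothalamus and pituitary prevents.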
You'll see more of how this works later in these lectures, when I discuss how the sympathetic and parasympathetic branches of the autonomic nervous system act in response to stress.
The HPA and HPG axes interact with each other. According to the dual-hormone hypothesis, cortisol, a product of the HPA, alters the behavioral effects of testosterone released by the HPG (Mehta & Josephs, 2010). In particular, levels of testosterone are correlated with levels of dominance and aggression, but only in individuals who are low in cortisol. For individuals who are high in cortisol, the relationship between testosterone and dominance/aggression disappears, or may even be reversed.
The nervous and endocrine systems interact in interesting ways, especially in the operation of the autonomic nervous system.
In the first place, some hormones are also neurotransmitters. For example, adrenalin (epinephrine) and noradrenalin (norepinephrine), neurotransmitters that are important for the autonomic nervous system, are also secreted by the adrenal glands (hence their name).
For a history, see Adrenaline by Brian B. Hoffman.
- If a chemical is secreted into the bloodstream, it's a hormone.
- If it's released by the terminal fibers of a neuron, it's a neurotransmitter.
In the second place, neural impulses can stimulate the endocrine glands to release hormones. For example, activity in a subcortical structure called the hypothalamus stimulates the pituitary gland to release various hormones.
- Release of a "thyrotropin releasing factor" (TRF) stimulates the pituitary gland to release yet another substance, which acts on the thyroid gland to control the process of thermoregulation -- warming the animal when its body temperature falls too far, or cooling a body that has become overheated.
- A "luteinizing releasing factor" (LRF), also released by the hypothalamus, acts on the pituitary to regulate aspects of reproductive behavior.
- Somatostatin, a "somatotropin release-inhibiting factor" (SRIF), also released by the hypothalamus, acts on the pituitary to inhibit the release of growth hormone, and thus inhibits the growth of the body.
Hormones can affect the structure and function of the nervous system, as we'll see in the lectures on Development, when we discuss gender dimorphism.
Certain hormones are also implicated in emotion and motivation:
- Dopamine plays an important role in the reward systems by which we derive pleasure from eating, sex, and other activities.
- Oxytocin mediates feelings of trust and affection toward others, and also has a calming effect during periods of stress.
- Vasopressin activates the "fight or flight" response to stress.
These interactions are now studied by a new interdisciplinary field called psychoneuroendocrinology.
There is also an immune system, responsible for the recognition and destruction of foreign material (like viruses and bacteria) that enters the body, causing infectious diseases. Like the nervous system, the immune system is organized into subsystems:
- The innate immune system includes antimicrobial molecules and various phagocytes that ingest and destroy pathogens. The innate immune system is a nonspecific "rapid response system" for detecting and fighting infectious agents, mediated by "Toll-like receptors" that identify incoming pathogens and initiate the body's reflex-like response to them.
- The adaptive immune system, consisting of "B" and "T" cells, which adapt themselves to specific pathogens, and then remain in the body to fight off subsequent attacks by the same pathogens. In a sense, the adaptive immune system is a "memory" that records information about past infections; it is the basis for vaccines that protect us against viral or bacterial diseases like polio.
Interactions between the nervous system and the immune system, particularly the adaptive immune system, are commonly thought to be responsible for the well-known association between stress and disease, and are studied by another interdisciplinary field, psychoneuroimmunology.
Neurons, the elementary units of the nervous system, come in three types:
- Afferent neurons carry sensory signals from the receptor organs to the spinal cord and brain.
- Efferent neurons carry motor signals from the spinal cord and brain to the muscles and other internal organs.
- Interneurons connect afferent and efferent neurons.
Note: If you want to reduce confusion between "afferent" and "efferent", think of "affect" as a "feeling", and of something having an "effect" on something else.
The human brain contains about 86-100 billion neurons -- most, interestingly, in a structure known as the cerebellum, rather than the cerebral cortex itself. The upper estimate is the conventional one; the lower estimate comes from Suzana Herculano-Houzel, a Brazilian neuroscientist who developed a new method of counting neurons.
Structure of the Neuron
No matter what its type, every neuron consists of a cell body, with its nucleus, a number of dendrites stemming out from the cell body, and a long axon, ending in a number of terminal fibers.
The axon may be covered with a myelin sheath, and this sheath may be interrupted by breaks, called the Nodes of Ranvier.
An individual neuron may be drawn schematically as follows:
Each of these elements has a particular function:
- The dendrites (or branches) of a neuron receive stimulation from adjacent neurons (they are, in a sense, the "afferent" portion of the neuron). Actually, it's not the dendrites themselves that receive stimulation, but rather dendritic spines, which stick out of the dendrites.
- When sufficiently stimulated, the cell body discharges an electrical impulse.
- This impulse is then carried along the axon to the terminal fibers, enabling the neuron to stimulate other adjacent neurons (in a sense, the terminal fibers are the "efferent" portion of the neuron, stimulating other neurons).
- The myelin sheath covers the axons of most afferent and efferent neurons, and plays a role in the regeneration of damaged peripheral nerve tissue.
- Myelinated axons comprise the "white matter" of the central nervous system.
- The "grey matter" consists of cell bodies, which are not covered in myelin.
- The Nodes of Ranvier are interruptions in the myelin sheath, and affect the speed with which the neural impulse is carried down the axon.
In this way, neural impulses are carried from one point to another in the nervous system.
A sequence of these three types of neurons constitutes the reflex arc -- described by Charles Sherrington (1857-1952), a British physiologist who won the Nobel Prize for Physiology or Medicine in 1932 (shared with Edgar D. Adrian). Essentially, the reflex arc processes sensory inputs and generates motor outputs. Although things are actually more complicated than this, conceptually, at least, the reflex arc is the basic building block of behavior.
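Conceptually, the reflex arc amounts to composing the three neuron types into a pipeline. The sketch below is schematic only; the function names, threshold, and outputs are invented for illustration.

```python
# A withdrawal reflex sketched as afferent -> interneuron -> efferent.
# Names, threshold, and outputs are illustrative inventions.

def afferent(stimulus):
    """Carry the sensory signal toward the spinal cord."""
    return stimulus

def interneuron(signal, threshold=0.5):
    """Relay the signal onward only if it is strong enough."""
    return signal if signal > threshold else 0.0

def efferent(signal):
    """Carry the motor command out to the muscle."""
    return "withdraw" if signal > 0 else "rest"

def reflex_arc(stimulus):
    return efferent(interneuron(afferent(stimulus)))

print(reflex_arc(0.9))  # strong stimulus -> "withdraw"
print(reflex_arc(0.2))  # weak stimulus below threshold -> "rest"
```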
In addition to neurons, glial cells (also known as neuroglia) constitute another elementary unit of the nervous system. There are roughly a trillion glial cells in the human brain. That's about 10 times the high estimate of the number of neurons, which might be the source of the common assertion that we use only about 10% of our brains. But this is a complete misunderstanding: all of our neurons are actively engaged in mental activity.
Glial cells aid neural functioning in several ways:
- They build the myelin sheath (white matter) around some axons, which provides insulation.
- They guide the migration of axons and dendrites during development, so that individual neurons "connect up" effectively.
- They provide a kind of "packing tissue" for neurons, keeping them in their proper places for synaptic transmission.
- They transfer nutrients from blood vessels to the neurons; and maintain a proper ion balance.
- They remove waste material left over when neurons die, and fill up the vacant space thus created.
Although it was once thought that glial cells are not directly involved in mental processes, recent evidence suggests that, in addition to the support functions outlined above, glia may form an information-processing and communication network of their own, running parallel to the neurons (see "The Other Half of the Brain" by R. Douglas Fields, Scientific American, 04/2004). For example, UCB's Prof. Marian Diamond, famous for her undergraduate neuroanatomy course, discovered that while there was nothing remarkable about Einstein's brain so far as neurons were concerned, certain parts of the cerebral cortex had a remarkably high concentration of glial cells. Diamond interpreted this as reflecting a greater metabolic activity in his neurons.
For the saga of Einstein's brain, which is an even better tale than the saga of Descartes' head, see "Genius in a Jar" by Brian D. Burrell, Scientific American, 9/2015.
Even if they do not play a direct role in mental processes, glia cells do appear to be implicated in certain forms of brain disease.
- In Alzheimer's disease the patient suffers dementia, a general loss of intellectual functions. This affects some elderly persons (in which case it is sometimes known as "senile dementia"), but it can also affect younger individuals ("presenile dementia"). AD results from the degeneration of nerve cells -- either natural, due to normal aging, or accelerated. In either case, the dead neural tissue is replaced with glia cells, resulting in the characteristic "plaques and tangles" that can be observed on autopsy.
- Similarly, certain brain tumors are caused by an abnormal growth of glia cells, which produce damaging pressure on other brain structures, or prevent normal functioning of neurons.
The Challenge of Alzheimer's Disease
Alzheimer's Disease (AD) is a serious public health problem: in 2003, it was estimated that AD affected some 4.5 million Americans, 10% of the population over 65 -- not to mention their extended families. And because the elderly population is increasing due to the Baby Boom following World War II (and the echoing baby boom produced by the Baby Boomers' children), not to mention increasing immigration, the problem is going to get worse -- straining budgets and frazzling nerves.
AD was first described by Alois Alzheimer, a German psychiatrist and neuropathologist, in 1906. Alzheimer's Disease is a form of dementia, a set of syndromes all of which involve loss of multiple cognitive functions, with preserved clarity of consciousness (i.e., the patient is not delirious). Put bluntly, dementias involve a loss of general intelligence, including abstract thinking, judgment, object recognition, and comprehension, which interferes with work, social activities, and social relationships. Dementia takes a number of different forms, including:
- Alzheimer's Disease
- Pick's disease
- Huntington's Disease
- Parkinson's Disease
The chief feature of AD is a progressive loss of memory, including "short-term" as well as "long-term" memory, which is why clinics specializing in the diagnosis and treatment of AD are often referred to as "memory disorders" clinics.
Alzheimer was also the first to describe the neuropathology of AD, based on autopsies of his patients. Alzheimer's disease is a neurodegenerative disease, meaning that it is caused by the progressive loss of brain cells, which naturally entails a loss of synapses (and thus of synaptic connections). This cell death is accompanied by the development of neurofibrillary tangles composed of tau protein and senile plaque composed of amyloid-beta protein (abbreviated Aβ or A beta; the same protein found in glia cells). In fact, prospective studies of individuals at risk for AD (e.g., by virtue of family history) show that an accretion of amyloid can be seen more than 5 years before AD is first diagnosed; tau buildup and brain shrinkage can be observed more than 1 year before diagnosis. The brain atrophy is widespread, not limited to a single area, resulting in general dementia as opposed to some specific deficit (though, especially in the early stages, memory is hit hardest). The cell death and synaptic loss result in reduced levels of particular neurotransmitters, such as ACh.
The most popular theory is that AD is caused by the buildup of amyloid-beta plaque in the cortex, especially the frontal and parietal lobes, and that the tangles are a kind of adventitious consequence. However, it is possible that the actual cause is amyloid-beta that floats freely, rather than inside plaque. In a mouse model, animals with free-floating amyloid-beta, but no plaque, showed the same behavioral deficits as a comparison group that had both types. It is possible that Alzheimer's patients produce too much amyloid, more than can be metabolized; alternatively, they may produce normal amounts, but dispose of it at a slower rate than normal. Either way, amyloid-beta builds up, and eventually impairs proper neuron function.
Other researchers believe that the problem lies with the tau protein, which accumulates in the hippocampus. While beta-amyloid is responsible for the "plaque" element in the "plaques and tangles" that are central to the neuropathology of AD, tau is responsible for the "tangles". These competing groups are known in neuroscientific circles as the "tauists" and the "baptists" (the latter a play on beta-amyloid protein).
- Interestingly, tau also builds up in boxers and football players who have received repeated blows to the head, leading to a condition called chronic traumatic encephalopathy (CTE). The severity of CTE is correlated with the number of head injuries the person has suffered, and can be a particular problem for young children's developing brains -- a good argument for wearing really good football helmets (or for playing touch football instead!), and a good reason not to let young soccer players head the ball too often.
- On the other hand, some elderly persons show considerable buildup of both amyloid-beta and tau in their brains, yet show no signs of serious cognitive impairment.
- Some researchers now focus on a protein known as REST (RE1-Silencing Transcription factor) or NRSF (Neuron-Restrictive Silencer Factor), which is absent in AD patients.
- Other researchers look for the causes of AD in the physical and social environment, such as chemical toxins, and even economic inequality.
Still, most theorists focus on beta-amyloid and
tau. One prominent theory is that both are
necessary for AD to develop. It is not known, however,
whether beta-amyloid promotes the spread of tau, or tau
promotes the spread of beta-amyloid.
But where do beta-amyloid and tau come from? Apparently, they are waste products created by brain activity. It turns out that the brain has its own waste-removal system. To make a long story short, a newly discovered glymphatic system employs cerebrospinal fluid, which enters the brain along the arteries, to carry waste products, toxins, and the like away from the brain. Most of this waste-disposal activity occurs during sleep, when the brain is relatively inactive (the glymphatic system may also carry nutrients to the brain in much the same way). Defects in this glymphatic system may contribute to the buildup of tau and beta-amyloid that, apparently, causes AD.
For more details on the glymphatic system, see "Brain Drain" by Maiken Nedergaard and Steven A. Goldman, Scientific American, 03/2016.
Although it is common to hear that people have been diagnosed with AD, in fact this is only presumed AD, because there are no diagnostic tests that can distinguish AD from other forms of dementia. At present, AD can be definitively diagnosed only at autopsy, where it is possible for a pathologist to visually confirm the presence of senile plaques and neurofibrillary tangles that define the disease in neurological terms. Until the patient dies, a provisional diagnosis of AD can be made based on the patient's performance on certain behavioral tests of mental function (there are other forms of dementia besides AD), age of onset (AD sometimes has an early age of onset, and was formerly called presenile dementia), and information from family members.
CT, MRI, and other brain-imaging techniques are of limited use in diagnosing AD because the plaques and tangles do not show up in them. Somewhat paradoxically, a provisional diagnosis of AD has been made when dementia occurs, especially with an early onset, in the absence of evidence of tissue damage in conventional CT or MRI brain scans.
Another approach is to take repeated brain scans of individuals at risk for AD, or showing some of the behavioral signs of AD, much as mammograms are sometimes used to track breast cancer. It is known that elderly individuals who are healthy lose about 0.2% of their brain volume every year. However, Dr. Nick Fox of the National Hospital for Neurology and Neurosurgery, in London, has reported that individuals who end up with AD lose much greater amounts of brain volume, about 2-3% per year, especially in the hippocampus. Accordingly, it may be possible to detect AD by conducting repeated brain scans of at-risk individuals. However, this approach would be extremely expensive, and it is not clear that these brain scans would discriminate between AD and other forms of dementia.
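Those annual loss rates compound over time. A quick sketch, using the approximate figures quoted above, shows how far the two trajectories diverge over a decade (the 2.5% figure is simply the midpoint of the reported 2-3% range):

```python
# Compound annual brain-volume loss, normalized to a starting volume of 1.0.
# Rates are the approximate figures quoted in the text, not precise data.
def volume_after(years, annual_loss_rate, start=1.0):
    """Volume remaining after compounding an annual fractional loss."""
    return start * (1 - annual_loss_rate) ** years

healthy = volume_after(10, 0.002)   # ~0.2% per year, healthy elderly
ad_like = volume_after(10, 0.025)   # midpoint of the ~2-3% reported in AD
print(f"After 10 years: healthy {healthy:.3f}, AD-like {ad_like:.3f}")
```

Over ten years the difference is dramatic -- roughly 2% total loss versus more than 20% -- which is why repeated scans of at-risk individuals could, in principle, reveal the disease.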
There is another possibility. It is known that the chief cause of senile plaques in AD is a protein called amyloid-beta. This protein appears in the brain even before behavioral symptoms appear, and is presumed to cause the actual damage to nerve cells that results in AD. For this reason, it may be possible to diagnose AD by testing for the presence of amyloid-beta in patients' brains. In 2002, Dr. Henry Engler of the University of Uppsala, in Sweden, reported a study in which he injected PIB, a molecule that can cross the blood-brain barrier and attach itself to amyloid-beta, along with a radioactive tag (carbon-11), into the bloodstreams of patients with (presumed) mild AD and healthy controls. Subsequent PET scans showed no trace of PIB in the brains of controls, but large amounts of PIB in the frontal and temporo-parietal lobes of the patients.
This PET brain image shows unattached amyloid-beta leaving the brain of a healthy control subject through the ventricles (left panel), but amyloid-beta accumulating in the frontal and temporo-parietal areas of a patient with mild AD (right panel).
According to a news article in Science magazine, Engler's initial report of his research, at the 2002 International Conference of Alzheimer's Disease and Related Disorders, "audibly took the audience's breath away" ("Long-Awaited Technique Spots Alzheimer's Toxin" by Laura Helmuth, Science, 297:752-753, 08/02/02; this article was also the source of the graphic). Interestingly, amyloid-beta also shows up in the lens of the eye, and can cause a rare type of cataract. Whether through PET scans or eye exams, it may someday be possible to definitively diagnose AD while patients are still living. The availability of a biological marker, if that's what amyloid-beta proves to be, will also be of use in tracking the progress of patients who are receiving various types of treatment for AD.
One problem with Engler's technique is that the radioactive isotope, carbon-11, has only a very short half-life. It has to be produced on site, and then used very quickly. In 2010, Dr. Daniel Skovronsky employed another isotope, fluorine-18, which is more readily available and lasts longer. Using this preparation, the researchers were able to see the buildup of amyloid plaque (the yellow and red areas in the lower images) only in patients with a presumptive diagnosis of Alzheimer's disease (based on standard memory testing), compared to normal controls. Interestingly, some degree of plaque buildup was also seen in those normal subjects who performed relatively poorly on the diagnostic tests ("Promise Seen for Detection of Alzheimer's" by Gina Kolata, New York Times, 06/24/2010). In 2011, the Food and Drug Administration approved a new diagnostic technique based on this research -- the first to permit diagnosis of AD without an autopsy, and promising early identification of patients at risk for AD. This raises the question of whether anyone would want to know if they were getting AD, and also the question of how to prevent insurance companies and other third parties from misusing such information. But the advent of the diagnostic technique itself is a great advance for neurology.
One question is whether use of PET imaging
for diagnosing AD is cost-effective. In 2003, a diagnostic
PET scan cost approximately $1500, and was considered "90%
accurate" in diagnosing AD as opposed to other forms of
dementia. But standard diagnostic procedures, involving
behavioral tests of mental function similar to those used
in experimental cognitive psychology, cost approximately
$300 and were considered "80 to 90% accurate". The
question for health consumers (and specialists in health
economics and health policy) is whether the 5-10%
improvement in accuracy is worth a five-fold increase in cost
("Finding Alzheimer's Early" by Laura Johannes, Wall
Street Journal, 10/16/03).
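The tradeoff described in that article comes down to simple arithmetic. A sketch, taking the midpoint of the "80 to 90%" figure as an assumption:

```python
# Cost-per-accuracy arithmetic for the two diagnostic approaches quoted
# above. The 0.85 figure is the midpoint of "80 to 90%" -- an assumption.
pet_cost, pet_accuracy = 1500, 0.90      # diagnostic PET scan (2003 dollars)
behav_cost, behav_accuracy = 300, 0.85   # standard behavioral testing

cost_ratio = pet_cost / behav_cost       # PET costs five times as much
accuracy_gain = pet_accuracy - behav_accuracy
print(f"{cost_ratio:.0f}x the cost for {accuracy_gain:.0%} more accuracy")
```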
A related approach is to use PET imaging to track the spread of tau in the brain. Research by UCB Prof. William Jagust suggests that the buildup of beta-amyloid in the brain is a characteristic of normal aging, while the buildup of tau is specifically related to AD.
Pharmaceutical Treatments, and the
Possibility of Prevention
Although advances in understanding brain development make the prospects for recovery of function somewhat better than previously thought, cures for AD and other neurological diseases are a long time away. For all practical purposes, it remains true that brain damage is forever. However, given early diagnosis, it may be possible to administer drugs that will retard the progression of the disease. Among the most popular medicines currently used for treating AD are Aricept, Reminyl, and Exelon: all these drugs increase levels of the neurotransmitter acetylcholine (ACh).
Another drug, Namenda, has a different mechanism of action. Drugs like Aricept ostensibly replace the missing ACh. This may slow the progress of the disease, but does nothing for the underlying pathology, which is premature brain cell death. A genuine cure would operate on that level, retarding neuronal death.
In fact, a whole host of quite different drugs have been considered in the prevention and treatment of AD:
- Some inhibit the enzymes that produce amyloid-beta.
- Some clear amyloid-beta that has already begun to build up.
- Some prevent amyloid-beta from aggregating into clumps that damage neurons.
- Some block the production of toxic tau proteins.
- Some enhance the health of neural tissue, in an attempt to protect it against the damaging effects of amyloid-beta and tau.
As of 2012, clinical trials are underway on
a novel drug regime for the prevention of Alzheimer's
disease. The subjects are members of a group of extended
families in Medellin, Colombia, large numbers of whom
harbor a genetic mutation, APOE4, that greatly increases
susceptibility to a particular early-onset form of
Alzheimer's disease. Not everyone who carries APOE4 will
get Alzheimer's, and many more people (about 99%) who get
Alzheimer's don't have APOE4, but it is hoped that success
with this very narrowly defined group will lead to
preventative programs that are more generally applicable.
For more information, see:
- "Alzheimer's: Forestalling the Darkness" by Gary Stix, Scientific American, June 2010.
- "Seeds of Dementia" by Larry C Walker & Mathias Jucker, Scientific American, May 2013.
Vanishing Mind", a series of articles in the New
York Times by Gina Kolata and others
Other neural structures are built up from neurons, but neurons themselves are not physically connected; instead, they are separated by a gap, called the synapse -- a term coined by the physiologist Charles Sherrington. The synapse separates the terminal fibers of one neuron (the presynaptic neuron) from the dendrites and cell body of the next (the postsynaptic neuron). Each neuron contains a large number of terminal fibers and dendrites: thus, each neuron synapses onto a large number (1,000 or so, on average) of other neurons, producing a rich network of interconnections. Given that the adult human brain contains about 100 billion neurons, each making approximately 1,000 synaptic connections with other neurons, there are roughly 100 trillion synapses -- more synapses in a single brain than there are stars in our Milky Way galaxy.
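The numbers in that comparison multiply out easily; note that the Milky Way star count (estimates run from roughly 100 to 400 billion) is an outside figure, not from this text:

```python
# The synapse arithmetic quoted above. The star-count estimate for the
# Milky Way is an outside assumption (~100-400 billion stars).
neurons = 100e9               # ~100 billion neurons in the adult brain
synapses_per_neuron = 1_000   # ~1,000 connections each, on average
total_synapses = neurons * synapses_per_neuron

milky_way_stars = 400e9       # high-end estimate
print(total_synapses)                    # 1e14 -- about 100 trillion
print(total_synapses > milky_way_stars)  # True, even at the high end
```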
Several neurons, each separated by a synapse from the others, may be drawn schematically as follows.
The neuron may be thought of as a wire
that conducts an electrical charge from the cell body down
the axon to the terminal fibers. The electrical discharge of
a neuron is based on an electrochemical process.
- Initially, the neuron is in a resting state, or resting potential, in which the cell membrane carries a negative polarization.
- When the neuron is stimulated (e.g., by the discharge of a presynaptic neuron, the action of a sensory receptor, or an electrode implanted in the cell body), ion channels open that allow positively charged sodium ions (Na+) outside the cell to join positively charged potassium ions (K+) already inside.
- The resulting increase in positive charge inside the neuron induces a small electrical current, a process known as depolarization.
- The process creates a positive action potential in the cell body that moves along the axon to the terminal fibers.
At this point the action potential meets the synapse, and the problem is for the action potential to get across the synaptic gap, from the presynaptic neuron to the postsynaptic neuron. (Here's where the "wire" analogy breaks down, because individual neurons aren't connected to each other, the way electrical wires are.)
The precise mechanism by which the neural impulse propagates down the axon from the cell body to the synapse is too complicated to detail here: you'll learn all about this if you take a more advanced course in biological psychology or neuroscience. Put briefly, it's not exactly like an electrical current. Rather, the impulse travels by means of a series of depolarizations involving the exchange of sodium and potassium ions. This process is described by the Hodgkin-Huxley model, described by Alan Hodgkin and Andrew Huxley in a classic 1952 paper (Andrew Huxley is a descendant of T.H. Huxley, known as "Darwin's bulldog" for his vigorous defense of evolutionary theory in the 19th century; he's also related to Aldous Huxley, who wrote Brave New World as well as The Doors of Perception, a report of his experiences with psychedelic drugs that gave The Doors their name). For their discovery, Hodgkin and Huxley shared the 1962 Nobel Prize in Physiology or Medicine with John Eccles.
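The full Hodgkin-Huxley equations are indeed beyond our scope, but the basic idea -- the membrane depolarizes toward a threshold, fires all-or-none, and resets -- can be caricatured in a few lines. This "leaky integrate-and-fire" toy is a drastic simplification, and every number in it is invented for illustration:

```python
# A leaky integrate-and-fire caricature of a neuron (NOT the full
# Hodgkin-Huxley model): the membrane potential drifts back toward rest,
# inputs depolarize it, and crossing threshold triggers a spike.
# All parameters are illustrative, not physiological measurements.
def simulate(inputs, rest=-70.0, threshold=-55.0, leak=0.9):
    v = rest
    spikes = []
    for t, stim in enumerate(inputs):
        v = rest + leak * (v - rest) + stim  # leak toward rest, add input
        if v >= threshold:                   # all-or-none: fire and reset
            spikes.append(t)
            v = rest
    return spikes

# A strong (super-threshold) input fires the cell at once; a weak one
# simply decays away without a spike.
print(simulate([20.0, 0, 0]))   # -> [0]
print(simulate([5.0, 0, 0]))    # -> []
```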
Eccles, for his part, won the Prize for discovering the mechanism of synaptic transmission. Traditionally, neurophysiologists believed that synaptic transmission occurred electrically, by something like a spark. Eccles demonstrated that the mechanism was actually chemical in nature. He and his colleague Bernard Katz also identified the first neurotransmitter, acetylcholine. Katz, for his part, got his own Nobel Prize in 1970, for his "kiss and run" model of synaptic activity -- which really is too complicated for a course at this level!
This feat is accomplished by means of synaptic transmission. The arrival of a neural impulse at the terminal fiber of the presynaptic neuron induces the discharge of a chemical known as a neurotransmitter substance. Neurotransmitter flows into the synapse, and is taken up by the dendrites (technically, the dendritic spines) of the postsynaptic neuron. If a sufficient quantity is taken up, the postsynaptic neuron is depolarized. Another substance clears used neurotransmitter out of the synaptic cleft, permitting the cycle to start all over again. In this way, neural impulses travel from the peripheral sensory receptors to the spinal cord, up the spinal cord to the brain, from one brain structure to another, back down the spinal cord from the brain, and out from the spinal cord to the muscles and glands.
Strictly speaking, the presynaptic and postsynaptic neurons are not completely separated. Specialized cell-adhesion molecules form a physical connection between them, thus stabilizing their connection. But the neural impulse isn't transmitted by these physical connections. The neural impulse passes from one neuron to another chemically, by means of synaptic transmission.
Neurotransmitters are differentiated according to function. Excitatory neurotransmitters depolarize a postsynaptic neuron, while inhibitory neurotransmitters hyperpolarize it, making it more difficult to discharge. Some neurotransmitters have both excitatory and inhibitory effects, depending on the presence of other transmitters. There are also excitatory and inhibitory synapses -- presumably because they release excitatory and inhibitory neurotransmitters; but the point is that there's a difference between the synapse and the neurotransmitter.
- Glutamate is a good example of an "excitatory" neurotransmitter: it is used in over 90% of the synapses in the brain.
- Gamma aminobutyric acid, or GABA, is a good example of an "inhibitory" neurotransmitter: it is used at more than 90% of the remaining synapses (so that's 99% of synapses, right there).
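Those two percentages combine just as the parenthetical says:

```python
# The two shares quoted above, combined.
glutamate_share = 0.90                     # ~90% of all synapses
gaba_share = 0.90 * (1 - glutamate_share)  # ~90% of the remaining 10%
total = glutamate_share + gaba_share
print(f"{total:.0%}")  # ~99% of synapses use glutamate or GABA
```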
So glutamate and GABA are the primary
neurotransmitters, operating throughout the nervous
system. There are many more neurotransmitters,
however, which perform specialized functions.
Acetylcholine (ACh), another prominent "excitatory"
neurotransmitter, provides a good example of the ways in
which synaptic transmission works. ACh has an inhibitory
effect on the vagus nerve, which serves the heart; and an
excitatory effect on the nerves that serve the voluntary
musculature, including the muscles that supply the lungs.
Consider the following toxic effects on the function of ACh:
- Botulinum toxin, which causes botulism, prevents the release of excitatory ACh from the terminal fibers of the presynaptic neuron. The cosmetic drug Botox is a derivative, which is where it gets its name.
- Curare prevents the uptake of excitatory ACh by the dendrites of the postsynaptic neuron;
- Nerve gas prevents used ACh from being cleared away from the synapse -- and thus effectively prevents new ACh from being released by the presynaptic neuron.
Biological sleep aids (sometimes called hypnotics, although they have nothing to do with hypnosis) provide another, perhaps less threatening, example.
- The first generation of sleep aids consisted of barbiturates, such as sodium pentothal or sodium amytal. Barbiturates bind to receptors for GABA -- which, as noted above, is a major inhibitory neurotransmitter -- and they also block the receptors for glutamate, which is a major excitatory neurotransmitter. The result is a general suppression of nervous system activity -- which is why barbiturates are often used to induce general anesthesia (in considerably higher doses, they're also used in physician-assisted suicide and capital punishment). The barbiturates are sedatives and muscle relaxants, so it's not surprising that they help people get to sleep. Even at the low doses used to promote sleep, however, barbiturates pose serious risk of addiction, and thus overdoses leading to death, and so they're not generally used for this purpose anymore.
- The second generation of sleep aids consisted of benzodiazepines, such as diazepam, lorazepam, and midazolam, which are also used to induce anesthesia, because they're also sedative muscle-relaxants. Benzodiazepines also potentiate the effects of GABA by binding to the same receptors, but in a different way. The barbiturates increase the amount of time that the receptor channel is open, while the benzodiazepines increase the frequency with which the channel opens. The ultimate effect is the same, however, which is an increase in inhibitory action. Benzodiazepines also pose a risk of addiction, but they offer considerably less risk of accidental overdose.
- The third generation of sleep aids are -- get this --
nonbenzodiazepines, so called because they have
an entirely different molecular structure than the
benzodiazepines. They are also known as the "Z-drugs":
zopiclone, zolpidem, and zaleplon. The most familiar
example is Ambien, which is by far the most popular
sleep aid currently on the market. The Z-drugs
have the same mechanism of action as the
benzodiazepines: they bind to GABA receptors and keep
them open. They pose less risk of overdosing than the
barbiturates, and less risk of addiction than the
benzodiazepines (though they can increase risk for
depression and psychological dependence).
Ambien can also induce a blackout-like state, in which
the patient engages in various (mostly highly
automatized) activities, with no memory of doing so.
- A fourth generation of sleep aids is currently under development, which takes an entirely different approach. An example is Suvorexant. Instead of potentiating inhibitory neurotransmitters like GABA, these drugs work on another neurotransmitter, orexin, which is important in the system that promotes wakefulness. Suvorexant effectively prevents orexin from reaching its receptors. So, instead of suppressing brain activity in general, suvorexant inhibits the activity of a neurotransmitter that keeps us awake. As Ian Parker puts it in an article describing the developmental and approval process for Suvorexant, the drug "ends the dance by turning off the music, whereas a drug like Ambien knocks the dancer senseless" ("The Big Sleep", New Yorker, 12/09/2013). All the benzodiazepine-like drugs, which act on the GABA system, can put the patient to sleep, but Suvorexant, which acts on the system that keeps us awake, will maintain sleep as well. And there is little of the post-sleep grogginess that usually comes as a side effect of the other drugs. Suvorexant is the first drug specifically designed as a sleep aid -- the first to be specifically engineered to address the physiology of the sleep-wake cycle. But, as of late 2013, it was still working its way through the FDA approval process.
- Note: Because sleep is such an issue
for college students, it's important to understand
that the best treatments for various kinds of insomnia
are not pharmacological but psychological. A
specific form of psychotherapeutic intervention, known
as Cognitive-Behavioral Therapy for Insomnia (CBT-I),
has been shown to be at least as effective as any
medication currently on the market, with none of the
adverse side-effects. You can read more about
this in the Lecture Supplements on Psychopathology and Psychotherapy.
In any case, the result is the cessation of neural transmission in the nerves supplying the skeletal musculature, including the lungs. The person cannot breathe, and will die of suffocation unless he or she is artificially respirated until the toxin is metabolized and washes out of the system. Thus, skeletal paralysis, and death, is the final common pathway uniting the three toxic effects.
Technically, the terms excitatory and inhibitory apply to the receptors on the postsynaptic neuron, rather than to the neurotransmitters themselves -- which is why some neurotransmitters can have both excitatory and inhibitory functions. But, at our level, it is convenient to think of the neurotransmitters themselves as being excitatory, or inhibitory, or both.
In the lock-and-key model of neurotransmitter function, each neurotransmitter has a particular molecular structure, or shape, which fits only certain receptor molecules. The neurotransmitter is like a key, and the receptor is like a lock. Only if the neurotransmitter fits into a particular receptor can it excite or inhibit the postsynaptic neuron. Some drugs used in the treatment of schizophrenia work by binding themselves to certain receptors, taking the place of certain neurotransmitters, and thus effectively jamming the lock in the lock-and-key system, and preventing the neurotransmitter from functioning properly. So-called designer drugs are deliberately structured to take the place of particular neurotransmitters.
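The lock-and-key idea, including the way a receptor-blocking drug "jams the lock", can be sketched schematically. Everything here -- the names and the matching rule -- is an invented placeholder, not real pharmacology:

```python
# Toy lock-and-key matching: a transmitter "key" activates a receptor
# only if its shape matches the receptor's "lock" and nothing else is
# already bound there. All names and shapes are invented placeholders.
def can_activate(transmitter, receptor_lock, bound_drug=None):
    if bound_drug is not None:      # a blocking drug occupies the lock
        return False
    return transmitter == receptor_lock

print(can_activate("DA", "DA"))                           # True: key fits
print(can_activate("ACh", "DA"))                          # False: wrong key
print(can_activate("DA", "DA", bound_drug="antagonist"))  # False: jammed
```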
There are many different
kinds of neurotransmitters, each with a particular chemical
structure and range of action. You don't have to memorize
all these neurotransmitters -- this is a course in
psychology, not molecular and cellular neuroscience! But
there are some whose names will crop up from time to time,
and you should recognize them as familiar.
- Acetylcholine (ACh)
- Epinephrine (Adrenaline)
- Norepinephrine (NE)
- Dopamine (DA)
- Serotonin (5-HT)
- Amino Acids
- Substance P
- Corticotropin (ACTH).
- The symptoms of Myasthenia gravis are a weakness in the muscles, and easy fatigability. It is due to an autoimmune disease (a disease in which the immune system attacks the body's own tissues) that affects ACh receptors at the nerve-muscle junctions. It is treated by a drug that prevents the breakdown of ACh.
- The symptoms of Parkinson's disease include extremely slow movements, rigidity, and tremor. It is due to the degeneration of axons containing dopamine. It is treated by l-DOPA, which supplies the missing substance, and restores proper function.
- For an interesting account written by a person with
Parkinson's disease, focusing on the way it is
diagnosed through neuropsychological testing, see
"Have Your Lost Your Mind?" by Michael Kinsley (New
Yorker, 04/28/2014); also the letters responding
to this article published on 05/12/2014.
- The symptoms of chorea include rapid, jerky, involuntary movements. It is caused by an excess of dopamine at the synapses, and is treated by haloperidol, which blocks the release of DA.
- The symptoms of Huntington's disease include severe tremors that may make walking and speaking impossible. It appears to be related to reduced levels of ACh, catecholamines, serotonin, and GABA. There is no effective treatment yet, but any drug will probably have to make up for these deficiencies.
- The symptoms of Gilles de la Tourette's syndrome, as portrayed in many television series, including "St. Elsewhere" and "LA Law", include tics, repetitive movements, and involuntary noises, including obscenities and epithets. It is apparently caused by imbalances among several neurotransmitters: excessive DA and norepinephrine; deficits in ACh and serotonin. It is treated with haloperidol, which blocks the release of DA, helping to restore the proper balance.
- The symptoms of schizophrenia include thought disorder, language disorder, hallucinations, and delusions. One theory, known as the dopamine (or catecholamine) hypothesis, is that it is caused by excessive levels of dopamine. Interestingly, doses of amphetamine, a drug that increases DA levels, worsens these symptoms (and can create them out of whole cloth in individuals who are not at risk for schizophrenia). Schizophrenia is often treated with chlorpromazine, which apparently blocks DA receptors.
- The symptoms of affective disorder (manic-depressive illness) include extreme dysphoria or euphoria, or alterations between these mood states. One theory, known as the serotonin hypothesis, is that it is caused by abnormal levels of serotonin and/or catecholamines: diminished levels produce depression, excessive levels produce mania. Presumably, the antidepressant drugs correct these problems. One class of antidepressant medications, the selective serotonin reuptake inhibitors (SSRIs) -- as their name implies -- inhibit the reuptake of serotonin at the synapse, and thus increase the amount of serotonin available for use.
Dynamics of the Neural Impulse
Returning to normal neural functioning, the neural discharge conforms to the all-or-none law formulated by Edgar D. Adrian (1889-1977) -- work for which he won the Nobel Prize in Physiology or Medicine for 1932 (shared with Charles Sherrington, another neurophysiologist). According to the all-or-none law, either a neuron fires or it does not, and if it fires it does so in a single burst of activity. Each neuron has a certain threshold for discharge -- a certain amount of stimulation that is necessary for depolarization to occur.
After firing, the neuron enters a refractory period, during which it must recover before it can discharge again.
- The absolute refractory period is the short interval during which no further discharge can occur, regardless of the amount of stimulation.
- The relative refractory period is the somewhat longer interval during which the neuron is capable of firing, but requires extra stimulation to do so. In either case, the total refractory period is very short, on the order of a millisecond.
One way in which stimulation crosses threshold, obviously, is from a single super-threshold stimulus from a single presynaptic neuron. This action releases enough neurotransmitter to depolarize the postsynaptic neuron. In many cases, however, such presynaptic stimulation is subthreshold. If so, the all-or-none law comes into effect: the postsynaptic neuron doesn't discharge "just a little bit"; rather, the neural impulse simply travels no further.
However, subthreshold stimulation can depolarize the postsynaptic neuron by virtue of summation. That is, under some circumstances presynaptic activity can accumulate. If the aggregate presynaptic stimulation crosses the threshold, the postsynaptic neuron will fire. If not, again transmission stops.
- In temporal summation, the post-synaptic neuron receives several impulses from a single presynaptic neuron. Each impulse by itself is subthreshold. However, the impulses occur in rapid succession, too fast for all the neurotransmitter substance to be cleared away after each discharge. In this case, neurotransmitter will accumulate, and eventually the total may be enough to depolarize the postsynaptic neuron.
- Spatial summation occurs by virtue of the fact that many presynaptic neurons synapse onto every postsynaptic neuron. Thus, each postsynaptic neuron receives impulses from many presynaptic neurons. Again, even if the individual impulses are subthreshold, each releases some neurotransmitter substance, and the total may be enough to depolarize the postsynaptic neuron.
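Both kinds of summation can be sketched with a toy accumulator. The threshold, the decay rate, and the input sizes below are all made up for illustration; only the logic matters:

```python
# Toy summation at a postsynaptic neuron. Each inner list gives the
# (subthreshold) input one presynaptic neuron delivers at each time step.
# All numbers are illustrative, not physiological.
THRESHOLD = 10.0
DECAY = 0.5   # fraction of accumulated transmitter surviving each step

def first_firing_step(presynaptic_trains):
    """Return the first step at which summed input crosses threshold,
    or None if it never does."""
    accumulated = 0.0
    for step, inputs in enumerate(zip(*presynaptic_trains)):
        accumulated = accumulated * DECAY + sum(inputs)  # spatial sum
        if accumulated >= THRESHOLD:
            return step
    return None

# Temporal summation: rapid subthreshold inputs from ONE neuron add up.
print(first_firing_step([[6, 6, 6, 6]]))            # -> 2
# Spatial summation: three neurons firing together cross at once.
print(first_firing_step([[4, 0], [4, 0], [4, 0]]))  # -> 0
```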
Temporal and spatial summation are extremely important. Consider one implication of the all-or-none law. If stimulation is super-threshold, the neuron fires, no matter how far above threshold the stimulation is. If stimulation is subthreshold, the neuron does not fire, no matter how close to the threshold it is. In other words, the firing of individual neurons cannot record the intensity of stimulation. But, obviously, we are able to distinguish between different intensities of stimulation: lights are bright or dim, sounds are loud or soft, tastes and smells are strong or weak. Note, however, that while increasing stimulation has no effect on the magnitude of the impulse of any single neuron, it does have two other effects. (1) Increasing intensity increases the rate at which single neurons fire. (2) Increasing intensity increases the involvement of adjacent neurons. Thus, the nervous system as a whole records intensity in terms of both the rate at which individual neurons discharge, and the number of individual neurons that are discharging simultaneously.
Nerves, Ganglia, and Nuclei
At the tissue level, the nervous system is comprised of nerves, ganglia, and nuclei.
The nerves come in two forms, depending on the type of neurons they're composed of.
- Afferent nerves, composed of afferent neurons, carry information from sensory receptors toward the central nervous system. Afferent nerves also comprise the ascending tract of neural tissue in the spinal cord.
- Efferent nerves, composed of efferent neurons, carry information from the central nervous system to the muscles and glands. Efferent nerves also comprise the descending tract of neural tissue in the spinal cord.
The ganglia and nuclei are composed of interneurons. The distinction between these two types of tissues is based on location.
- Ganglia are located outside the brain and spinal cord (except for the basal ganglia, which are part of the forebrain, and probably should be called "basal nuclei", but aren't).
- Nuclei are located inside the brain and the spinal cord.
Ganglia and nuclei serve as the central nervous system in so-called "primitive" organisms, but are also found in the nervous systems of more complex organisms.
Hierarchical Organization of the Nervous System
The nervous system itself is also hierarchically organized, depending on the particular tissues and organs involved. We distinguish first between the central nervous system (CNS), composed of the brain, the spinal cord, and the brainstem that joins them, and the peripheral nervous system (PNS), consisting of all the nerves leading to and from the CNS.
The Peripheral Nervous System
There are two branches of the PNS:
- the somatic nervous system (SNS), consisting of all the nerves running from the sensory receptors inward to the spinal cord and the brain, as well as the nerves running outward to the muscles; and
- the autonomic nervous system (ANS), consisting of the nerves running to and from the glands and other internal organs.
The Autonomic Nervous System
The sympathetic nervous
system mobilizes the body to meet
emergencies. In this so-called flight or fight
response (so named by Walter B. Cannon), the secretion
of the hormone adrenalin (epinephrine) leads to emotional
arousal, while the secretion of noradrenalin
(norepinephrine) releases stored sugar into the bloodstream,
providing more energy to the muscles. At the same time,
blood is re-channeled from the surface of the body to the
muscles: this promotes physical activity (sugar gets to the
muscles faster), and lessens bleeding in the case of an injury.
Actually, "flight or fight" is
something of a misnomer, because the initial response of
many organisms to a stressor is to freeze in
place. So, "flight or fight" should really be "freeze,
flight, or fight". But the shortened version, "flight
or fight" has become commonplace, so we'll stick with
it. However, in a moment we'll see that even the
fuller formulation isn't quite right.
The parasympathetic nervous system normally mediates vegetative functions such as digestion, elimination, and reproduction.
There are important differences in function between the sympathetic and parasympathetic branches of the autonomic nervous system.
- The sympathetic and parasympathetic branches of the ANS are in an antagonistic relationship with each other. Sympathetic activation depletes the body's resources, while parasympathetic activation restores resources depleted by sympathetic activation.
- The sympathetic branch tends to act as a unit, mobilizing the entire body to meet the stressor, while the parasympathetic branch tends to act on one organ at a time, depending on where it is needed most.
- Autonomic arousal decreases when the stressor is removed, but the two branches differ here as well: sympathetic activity terminates immediately, while parasympathetic activity diminishes more slowly, in order to finish the job of restoring depleted resources.
Prolonged exposure to a stressor produces a characteristic sequence of responses:
- a gross emotional reaction, mediated by sympathetic activation;
- decreased emotionality, mediated by parasympathetic counter-activation; and
- ending in exhaustion and death, when the body's resources are completely depleted.
Sex Differences in Stress Response? The classic view of the autonomic nervous system, which we owe to Walter B. Cannon, is that it mediates "flight or fight" responses of the organism to stressful events. More recently, Shelley Taylor and her associates at UCLA discovered that almost all of this research was based on the responses of male organisms. The female of the species, Taylor et al. argue, typically responds with a different pattern of behavior, which they have labeled "tend and befriend": that is, under stress females either protect their young or provide support to others. Of course, there are individual differences within each sex: these labels apply to group trends only. Apparently, the "choice" between these reaction patterns is influenced by the organism's hormonal endowment (e.g., the presence of the male hormone testosterone or the female hormone estrogen).
The details of an organism's stress response may differ in other ways as well, depending on the nature of the stress and the organism's appraisal of it. For an overview of the stress response, see M.E. Kemeny, "The Psychobiology of Stress", in Current Directions in Psychological Science, 2003, 12(4), 124-129.
The Enteric Nervous System?
A major portion of the autonomic nervous system connects with various parts of the gastrointestinal system -- the esophagus, stomach, and large and small intestines. The spinal nerves emanating from the thoracic segment of the spinal cord, at levels T5-T12, connect with nerve cells embedded in these organs, tracking the movement of food into and through the gut and sending signals to the brain that control eating behavior. These levels are colored magenta in the diagram to the left.
- Afferent neurons sensitive to stretching of tissue signal when food has entered the stomach.
- Other afferent neurons signal the presence of nutrients.
- Afferent neurons release peptides that control muscle contractions in the intestine that move food through the system (peristalsis and churning).
- Excess (undigested) fat reaching the ileum in the lower intestine is picked up by afferent neurons, which send an "I'm full" signal to the brain.
The system is very extensive, consisting of about 100 million neurons -- fewer than the brain but more than the spinal cord -- and involving some 30 different neurotransmitters. Moreover, the nerve cells in the gut connect to ganglia located outside the spinal cord (also colored in magenta), permitting the system to function somewhat autonomously even when the vagus nerve is severed, disrupting communication with the brain. The gut is so extensively and autonomously innervated that some physiologists refer to it as a separate nervous system -- a "second brain" known formally as the enteric nervous system (Gershon, 1998). But really, it's all part of the autonomic nervous system, involved in homeostatic regulation.
The Somatic (or Skeletal) Nervous System
The SNS includes 12 pairs of cranial nerves, which connect directly to the brain rather than through the spinal cord. The cranial nerves vary in function:
- Some cranial nerves are exclusively afferent, such as the olfactory nerve (Cranial Nerve I, mediating the sense of smell) or the optic nerve (Cranial Nerve II, mediating the sense of sight).
- Other cranial nerves are exclusively efferent, such as the oculomotor nerve (III, which turns and focuses the eyes) or the hypoglossal nerve (XII, which moves the tongue).
- Some cranial nerves mix afferent and efferent functions, such as the trigeminal nerve (V, which mediates the sense of touch around the face and eyes as well as chewing) and the facial nerve (VII, which mediates the sense of taste as well as the movement of muscles in the face).
- The trigeminal nerve may be implicated in migraine headaches. The characteristic symptom of a migraine headache is an intense, throbbing pain on one side of the head and behind one eye; migraines are sometimes preceded by an aura: visions of spots, wavy lines, or flashing lights, or numbness in the hands or face. Migraines have traditionally been attributed to dilation of blood vessels in the brain, particularly the temporal arteries (hence the throbbing); and they've been treated with triptans, vasoconstricting drugs. But a new theory holds that in migraine sufferers the afferent portion of the trigeminal nerve also responds to visual, auditory, and olfactory stimuli, releasing a neurotransmitter known as calcitonin gene-related peptide (CGRP). Drugs that block the activity of CGRP have proved effective in preventing migraine episodes.
- The other major form of headache, more common than migraine, is tension headache: a dull, diffuse, aching pain frequently described as feeling that there's a tight band or other pressure around the head (hence the name).
By contrast, the 31 pairs of spinal nerves (extending to the left and right sides of the body) all combine sensory and motor functions: the afferent branch of each nerve, composed of afferent neurons, meets the spinal cord at its dorsal root, at the back of the body (think of a shark's dorsal fin), while the efferent branch, composed of efferent neurons, connects at the ventral root, along the front.
- There is a regular association between the portion of the body supplied by a spinal nerve and the level at which that nerve connects to the spinal cord. Nerves that go out to the arm and hand connect at a higher level than nerves that go out to the legs and feet.
- The 8 nerves of the cervical division (C1-C8) supply the neck, the arms, and the respiratory system;
- The 12 nerves of the thoracic division (T1-T12) control posture and supply the internal organs;
- The 5 nerves of the lumbar division (L1-L5) supply the legs;
- The 5 nerves of the sacral division (S1-S5) supply the bowel, bladder, and sexual organs; and
- The single nerve of the coccygeal division (Co1) supplies the coccyx, which doesn't function in humans (at least, those that don't have tails).
- More spinal nerves project to the extremities (e.g., the hands and feet), which need more acute tactile sensitivity and finer motor control, than to the trunk of the body.
- 4 spinal nerves (C6-8 and T1) supply just the wrist, elbow, hand, and fingers.
- 4 more spinal nerves (L3-5 and S1) supply the knee, foot, and toes.
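The five divisions listed above can be collected into a small lookup table. This is just an illustrative sketch of the counts and territories given in the text; the variable names are mine, not standard anatomical nomenclature:

```python
# Divisions of the 31 spinal nerves, as summarized in the text.
# Each entry: (number of nerves, body regions supplied).
SPINAL_DIVISIONS = {
    "cervical (C1-C8)":  (8,  "neck, arms, respiratory system"),
    "thoracic (T1-T12)": (12, "posture, internal organs"),
    "lumbar (L1-L5)":    (5,  "legs"),
    "sacral (S1-S5)":    (5,  "bowel, bladder, sexual organs"),
    "coccygeal (Co1)":   (1,  "coccyx"),
}

def total_spinal_nerves():
    """Sum the nerve counts across all five divisions."""
    return sum(count for count, _regions in SPINAL_DIVISIONS.values())

print(total_spinal_nerves())  # 8 + 12 + 5 + 5 + 1 = 31
```

The total confirms the figure of 31 spinal nerves given above.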
The Central Nervous System
The CNS consists of the spinal cord, brainstem, and brain. The spinal cord itself contains several longitudinal tracts, or bundles of neurons:
- The dorsal tract, running along the back of the spinal cord (again, think dorsal fin), is composed of afferent neurons, and conducts sensory impulses to the brain.
- The ventral tract, running along the front of the body, is composed of efferent neurons, and conducts motor impulses from the brain.
- There are also a few mixed tracts, composed of both afferent and efferent neurons.
- Finally, there are bundles of interneurons right in the spinal cord. Remember the reflex arc, which consists of an afferent neuron, an efferent neuron, and an interneuron joining them? The presence of interneurons in the spinal cord makes it possible for the spinal cord to mediate certain spinal reflexes, without the involvement of the brain.
A break in the spinal cord causes a neurological syndrome known as paraplegia, which entails a loss of conscious sensation and voluntary movement in body sites served by spinal nerves that connect to the spinal cord below the site of the cut. A break in the middle or lower portion of the spinal cord may result in a loss of sensory and motor functions only in the lower part of the body, including the legs and feet. If the break is very high, impairing the upper portion (including the arms and hands) as well as the lower part, the condition is called quadriplegia.
Even when the break is very high in the spinal cord, a cranial nerve, the vagus nerve, maintains vital functions such as heart rate and respiration.
Despite the break in the spinal cord, not all functions are lost: many spinal reflexes remain intact, because they are mediated solely at the level of the spinal cord, and do not require the involvement of higher nervous centers in the brain.
Such spinal reflexes include the patellar reflex (the jerking movement of your leg when your doctor taps your knee), and scratching at the site of an irritation, as well as penile erection, ejaculation, urination, and defecation. All these behaviors can occur, as involuntary, reflexive responses to appropriate stimulation, in paraplegic individuals.
The spinal reflexes remain intact because all the necessary interneurons needed to mediate the stimulus-response connection are contained in the spinal cord itself -- in the gray matter, to be exact. But, in the case of paraplegia, they do have some special properties:
- They are exaggerated. Apparently, in intact organisms, the strength of the spinal reflexes is dampened by inhibitory impulses arising in the brain. If the brain is disconnected from the corresponding location in the spinal cord, the spinal reflex is disinhibited.
- They are unconscious. Because the brain is disconnected from the spinal cord, the person will have no direct conscious experience of either the sensory stimulus or the motor reflex.
- They are involuntary. Spinal reflexes are part of the "hard wiring" of the spinal cord, and they are entirely automatic. They can be neither initiated nor inhibited voluntarily, because voluntary activity arises from the brain, from which the spinal reflex is disconnected.
Regaining Function in Paraplegia Through Human-Computer Interface
The brainstem is the portion of the central nervous system situated between the spinal cord and the brain. Hard to differentiate precisely from the brain itself, it contains primitive structures often assigned to the hindbrain and the midbrain, as described below; above them lies the forebrain. These divisions are so named because, in most animals, the spinal cord runs horizontally: the hindbrain is toward the back of the animal, and the forebrain toward the front. However, humans walk upright, so the hindbrain is at the bottom, closest to the spinal cord, while the forebrain is at the top, furthest away.
The hindbrain, comprising the lower portion of the brain stem, consists of:
- the medulla (also known as the medulla oblongata), which regulates vegetative functions involving the cardiovascular and respiratory systems; and
- the pons, important for regulating cortical arousal, especially the cycle of sleeping and waking.
Separate from the brain stem, but also part of the hindbrain, is the cerebellum (from the Latin for "little cerebrum", because that's what it looks like -- similar shape, lots of folds). A major function of the cerebellum is to integrate sensory and motor data to help the organism maintain a sense of balance and exercise fine motor control. Cerebellar functioning can be impaired by alcohol, leading to loss of balance (in acute intoxication) or tremors (as a chronic effect). Although the cerebellum is the "little" cerebrum, in fact it contains many more neurons than the cerebral cortex -- by one popular estimate, about three times more.
Although the cerebellum has traditionally been construed as a "primitive" neural structure, recent research has revealed that it can be involved in a host of fairly complex mental functions, including the timing of both sensory signals and motor movements. The cerebellum is actively involved in the classical conditioning of motor responses, as in eyeblink conditioning. Other theorists suggest that the cerebellum provides support functions for the rest of the brain, including monitoring sensory data and maximizing the quality of sensory input -- particularly with respect to tactile exploration of the environment (see "Rethinking the 'Lesser Brain'" by J.M. Bower and L.M. Parsons, Scientific American, 08/03). As we acquire a motor skill, such as grasping an object or playing a musical instrument, control of motor activity gradually shifts from the cerebral cortex to the cerebellum -- a process that will be described further in the lectures on Learning, to follow.
The midbrain, comprising the middle portion of the brainstem, includes the reticular formation (from the Latin word reticulum, meaning "net"). The reticular formation is also important in regulating cortical arousal. Cats that have suffered surgical destruction of the reticular formation lapse into a constant state of sleep, and can be awakened only by loud noises or other intense stimulation, if at all. Cats whose reticular formation is continually stimulated by implanted microelectrodes remain constantly awake; when the current is turned off, they return to their regular sleep-wake cycle.
The forebrain also includes several subcortical structures:
- the thalamus, which serves to relay incoming sensory signals to the appropriate parts of the brain;
- the hypothalamus, involved in biological motives such as hunger, thirst, predation, and parenting;
- the basal ganglia, which coordinate movement, and may be implicated in the neurological disorder known as Tourette's syndrome; and
- the limbic system (from the Latin word limbus, meaning "border"), which is involved in regulating emotion.
The thalamus and hypothalamus together are sometimes referred to as the diencephalon.
The Blood-Brain Barrier
Like every other organ of the body, the brain is kept alive by an extensive system of capillaries that deliver blood, and the oxygen and nutrients it contains, to its various tissues. In principle, these same capillaries can serve as vehicles for the delivery of drugs that can correct various problems in brain function. A few such drugs are available for the control of epilepsy, pain, affective disorder (both mania and depression), and schizophrenia. But there are relatively few of them, compared to the large number of drugs available for treating diseases in other parts of the body. Why -- especially when there is so much money to be made by pharmaceutical companies in the treatment of neurological and psychiatric disorders?
One answer is purely biological: the blood-brain barrier (BBB). The capillaries that supply blood to the brain are lined with endothelial cells that are so tightly packed together that they create an extremely fine filter that allows only the smallest molecules to get through. Substances called transport proteins allow blood sugar, amino acids, and other substances to reach the brain, but effectively prohibit toxins and viruses from getting through. This same filtering mechanism also bars the door to most pharmaceutical agents, whose molecules are simply too big to get through, or incompatible with the lipid proteins that carry molecules from one side of the endothelial cells to the other.
The drugs that are currently used for epilepsy and the like consist of very small molecules, which is why they can cross the BBB. But there are limits to what very small molecules can do. A major challenge in the development of more effective pharmaceuticals for the treatment of brain disorders is to devise "escort services" that will carry larger molecules across the BBB.
- One procedure in current use for cases of brain cancer is called blood-brain barrier disruption (BBBD), in which a sugar solution is injected into the arteries in the neck. The resulting high concentration of sugar in the capillaries temporarily shrinks the endothelial cells, opening up the filter and letting larger molecules into the brain. But BBBD is an extremely invasive and difficult procedure, and it works only for a very short period of time with a relatively short list of molecules, so its usefulness for chemotherapy is severely limited.
- Another invasive method involves injecting a carbonated saline solution; using ultrasound radiation, the gas bubbles are set in vibration, causing the BBB to open up enough to let the drugs enter.
- Or, in a variant on a technique used to dissolve blood clots following a stroke, you can insert a microcatheter right into the brain.
- Another approach, still under investigation, attempts to trick the transporter proteins into taking additional molecules across the BBB -- molecules that they were not originally "designed" to carry. In animal research, such an "escort service" or "Trojan horse" method has been successfully used to deliver neurotrophic drugs that reduce the damage caused by experimentally induced strokes.
- Yet another approach successfully used in animal studies has been to use "polymer nanoparticles" (essentially, extremely small bubbles of silicon that are coated with therapeutic molecules) to carry drugs across the BBB. But the mechanism by which this technique works isn't completely understood.
Sources: "Breaking Down Barriers" by Greg Miller, Science, 297, 1116-1118; "Breaking the Brain Barrier" by Jeneen Interlandi, Scientific American, June 2013.
The Cerebral Cortex
The cerebral cortex looks like one big mass of tissue, but it is possible to differentiate among its various major parts by locating three major orientation points which, together, form a map of the cerebral cortex.
- The posterior portion of the cerebral cortex, the occipital lobe, is (not very clearly) separated from the parietal lobe by the parieto-occipital sulcus and from the temporal lobe by the pre-occipital notch.
Major Cortical Structures Viewed Through Structural MRI Imaging.
The top panel shows a single "slice" of cortex in lateral (side), coronal (back), and medial (inside) views. The middle panel shows the four major lobes of the brain in both lateral and medial views. Notice how the frontal lobe (in red) "tucks under" the front of the brain, curling around the cingulate cortex. Each view is divided into 10 "slices", labeled a-j. The lower panel shows the distribution of cerebral cortex at each "slice". In slice "a" (upper left), the frontal lobe predominates; in slice "e" (upper right), the parietal cortex is just beginning to appear; in slice "f" (lower left), the frontal, parietal, and temporal lobes are visible in about equal proportion; in slice "j" (lower right), the occipital lobe predominates. From "The Structure of the Human Brain" by J.S. Allen, J. Bruss, & H. Damasio, American Scientist, 92, 246-253, 05-06/04.
Carl Schoonover, a neuroscientist with artistic inclinations, has assembled a series of images of the brain in a " coffee-table" art book, Portraits of the Mind: Visualizing the Brain from Antiquity to the 21st Century (2010). You can view selected images at "The Beautiful Mind", a slide show hosted by the New York Times, which carried an article on the project (11/30/2010).
The map of the cerebral cortex shows two different views of the brain: the lateral view simply looks at the brain from the side. The medial view looks at the brain from the midline, or the longitudinal fissure; this view is, essentially, "from the inside out". If two structures are named, the medial structure is closer to the midline, and the lateral one is farther from it. Other anatomical directions:
- anterior, rostral, or cranial means toward the front, or "head end", of the organism;
- posterior or caudal means toward the back, or "tail end";
- dorsal means oriented toward the back;
- ventral means oriented toward the abdomen;
- inferior means lower;
- superior means higher.
Within each of these lobes, further "territories" are demarcated by the folds which mark their surface. These come in two types: the outward folds are called gyri (plural for gyrus), while the inward folds are called sulci (plural for sulcus). These folds are necessary because, despite appearances, the cerebral cortex isn't a big spongy glob of neurons, but rather a thin sheet of neural tissue that has to be folded and crumpled in order to fit inside the skull -- much as a large piece of cloth must be wadded up so it can fit inside a small bowl. Actually, it's like two thin sheets of tissue, one for each hemisphere, each about 12 inches in diameter. Despite their thinness, each sheet consists of about six distinct layers of neurons.
Why the brain folds precisely the way it does is a mystery. It's not a random process, because some folds are the same in all brains, and even have names -- like the inferior or third frontal gyrus (F3) and the superior or first temporal gyrus (T1), which lie on opposite sides of the lateral (Sylvian) fissure. Other folds differ from brain to brain. According to one theory, the cerebral cortex folds the way it does because of mechanical stresses and strains on individual axons produced as the cortex grows inside the brain case. In any event, the pattern of folds is pretty much established at the time of birth. It is possible that the precise pattern of folding differs in individuals with such major mental illnesses as schizophrenia and autism.
For more on the development of gyri and sulci, see "Sculpting the Brain" by Claus C. Hilgetag and Helen Barbas, Scientific American, 02/2009.
Microscopic studies reveal even more fine-grained subdivisions of cerebral cortex, within each lobe and subcortical structure. For example, Korbinian Brodmann (1909) described the cytoarchitectonics (from the Greek cyto, meaning "cell", and architekton, meaning "organized structure") of the brain, distinguishing some 50 areas, mostly demarcated by gyri and sulci, but more importantly differentiated according to cell structure or organization. These Brodmann areas are analogous to separate tissues or organs within major portions of the brain, invisible to the naked eye. Furthermore, these different structures in the nervous system, sometimes marked by gyri or sulci, are linked by specific neural pathways to form brain circuits or brain systems -- a concept originally formulated by the American neuroscientist Roger Sperry, who shared the Nobel Prize in Physiology or Medicine in 1981. These pathways permit structures within the circuit/system to communicate with each other.
The cerebral cortex (forebrain) is also called neocortex, because it was the last organ in the nervous system to develop. This is true whether you view development phylogenetically (from the Greek word phylon, meaning "tribe" or "race"), in terms of the evolution of species, or ontogenetically (from the Greek word onto-, meaning "existence" or "being"), in terms of the development of the individual species member. This is particularly true for the prefrontal cortex, the "foremost" part of the forebrain.
If you look at the brains of many different species, from snake to cat to human, the brain gets larger, but so of course does the animal. The important observation is that in the most recently evolved species, such as mammals and especially primates, a greater proportion of CNS tissue is devoted to cerebral cortex -- and especially the prefrontal cortex, responsible for executive functions. Expansion of prefrontal cortex is also characteristic of hominid evolution, as in the comparison of "Neanderthal" and "Cro-Magnon" man.
This is the standard story, found in almost all neuroscience textbooks: the human brain, and in particular the cerebral cortex and the cerebellum, and especially the prefrontal cortex, is much larger than would be expected given our body size. For example, gorillas generally exceed humans with respect to body size, but their brains are only about 1/3 the size of ours. It is sometimes added that the human brain is also special in terms of the amount of white matter, or in terms of certain types of neurons (such as "von Economo neurons", named for the researcher who first described them, and thought to be important for social cognition). But the basic story is summarized by the encephalization quotient, or EQ -- roughly, the ratio of an animal's actual brain size to the brain size that would be expected for an animal of its body size. In the chimpanzee, our closest living primate relative, EQ = about 3. In humans, EQ = about 7.5. No other animal comes even close. If you plot brain size as a function of body (or just head) size, humans are outliers. This high EQ is, traditionally, held to be the key to human intelligence.
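One common way to compute EQ, due to Harry Jerison, predicts expected brain mass as 0.12 times body mass (in grams) raised to the 2/3 power; EQ is then the ratio of actual to expected brain mass. The sketch below applies that formula with approximate masses -- the specific body and brain weights are rough illustrative values, not figures from the text:

```python
def expected_brain_mass(body_mass_g):
    """Jerison's allometric prediction: 0.12 * (body mass)**(2/3), in grams."""
    return 0.12 * body_mass_g ** (2 / 3)

def encephalization_quotient(brain_mass_g, body_mass_g):
    """Ratio of actual brain mass to the mass expected for the body size."""
    return brain_mass_g / expected_brain_mass(body_mass_g)

# Approximate values: human brain ~1350 g, body ~65 kg;
# chimpanzee brain ~400 g, body ~45 kg.
human_eq = encephalization_quotient(1350, 65_000)
chimp_eq = encephalization_quotient(400, 45_000)
print(round(human_eq, 1), round(chimp_eq, 1))  # roughly 7 and 2.6
```

With these inputs the formula yields EQs close to the "about 7.5" and "about 3" cited above; the exact values shift with the body and brain masses assumed.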
- As noted earlier, she counted about 86 billion neurons in the human brain. That's fewer than the number usually given, which is 100 billion.
- She also found that we have fewer glial cells than we thought. It's often asserted that we have about 10 times as many glia as neurons, but in fact the ratio of neurons to glia appears to be closer to 1:1.
Using her novel technique, H-H counted the neurons in representative samples of animal brains, from rodents and African elephants to primates and humans, and made a number of remarkable findings -- the first of which is that brain size does not necessarily correlate with the number of neurons in the brain. The correlation between brain size and neuronal endowment differs from one order of mammals to another. In particular, primate brains, including human brains, contain more neurons when compared to other mammalian brains of even larger size (e.g., rodents and insectivores). What makes primates, including humans, special are what H-H calls the neuronal scaling rules -- that is, the relationship between cortical mass (white and gray matter combined) and the number of neuronal cells. Primate neurons are also smaller than those of other mammals, so that more of them can be packed into a brain of a given size. Put bluntly, humans are not special because we have bigger brains (for our body size). Our 86 billion neurons are just what would be expected for primates with brains of our size (roughly 1500 grams). And it's not even clear that our prefrontal cortex is bigger, neuron-wise, than that of other primates -- though because we have more neurons than other primates, we have the most neurons there, too. H-H thus argues for a "neuron-centered" rather than a "body-" or even "brain-centered" view of brain evolution. The key to human intelligence is not the size of our brains, nor even the connectivity among individual neurons -- both of which are typical for mammals of our size. The key, simply, is that we pack more neurons into the cortical space available for them. But the neuronal scaling rules for humans are, apparently, the same as for other primates.
This, H-H suggests, is the true key to the evolution of the brain. For most mammalian species, brain evolution involved expansion of the cerebral cortex. At some point, however, primates branched from other mammals, who continued to follow the standard neuronal density rules, and began packing more neurons into less space.
(For details, see "The Human Brain in Numbers: A Linearly Scaled-Up Primate Brain" by Suzana Herculano-Houzel, Frontiers in Human Neuroscience, 2009; "Brain scaling in mammalian evolution as a consequence of concerted and mosaic changes in numbers of neurons and average neuronal cell size" by Herculano-Houzel et al., Frontiers in Human Neuroscience, 2014; The Human Advantage: A New Understanding of How Our Brain Became Remarkable by Herculano-Houzel, 2016; a nicely readable summary of her research appears in "The Remarkable (But Not Extraordinary) Human Brain", by Herculano-Houzel, Scientific American Mind, 03-04/2017.)
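The contrast between scaling rules can be sketched numerically. In rodents, brain mass grows faster than neuron number (roughly as the 1.6 power, in the spirit of Herculano-Houzel's estimates), while in primates the two grow about linearly. The coefficients below are my own calibrations (a mouse-like brain of ~71 million neurons at ~0.4 g; a human brain of ~86 billion neurons at ~1500 g), chosen only to show the qualitative difference, not her fitted values:

```python
def rodent_brain_mass_g(n_neurons):
    """Rodent-style rule: brain mass grows superlinearly (~neurons**1.6).
    Coefficient calibrated to a mouse-scale brain (assumption)."""
    return 1.1e-13 * n_neurons ** 1.6

def primate_brain_mass_g(n_neurons):
    """Primate-style rule: brain mass grows roughly linearly with neurons.
    Coefficient calibrated to the human brain (assumption)."""
    return 1.74e-8 * n_neurons

n_human = 86e9  # Herculano-Houzel's human neuron count
print(round(primate_brain_mass_g(n_human)))  # ~1500 g, the actual human brain
print(round(rodent_brain_mass_g(n_human)))   # tens of kilograms: a rodent-style
                                             # brain holding that many neurons
```

The point of the sketch is the divergence: under the superlinear rodent rule, a brain with a human neuron count would weigh orders of magnitude more than it actually does, which is why packing density, not sheer size, carries the explanatory weight in H-H's account.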
Similarly, it is no surprise that as the fetus and child matures, the brain gets larger. What is interesting is that the relative size of the cerebral cortex gets larger, compared to other CNS structures. As the brain matures, more folds appear as more cortical tissue has to fit into the skull.
Development of the brain continues after birth, at least through adolescence, perhaps until as late as the mid-20s (see "Crime, Culpability, and the Adolescent Brain" by Mary Beckman, 07/30/04, from which the graphic is taken). A longitudinal MRI study by Nitin Gogtay and his associates (2004) shows a spurt of growth in gray matter immediately prior to puberty, followed by a progressive replacement of gray matter by white matter -- perhaps a reflection of progressive myelinization of the brain, which we know continues after birth. The density of interconnections among brain areas also increases during adolescence. Development also proceeds from back to front, with the frontal lobes of the brain being the last to mature. Because the frontal lobes are associated with executive control functions, this immaturity of frontal-lobe structures in the brains of adolescents and young adults has been linked to problems in impulse control. Some neuroscientists and legal scholars have gone so far as to suggest that, because of such brain immaturity, adolescents and even young adults should not be held entirely culpable for criminal acts and other elements of misbehavior. The US Supreme Court, which has already decided that mentally retarded individuals are less culpable for their crimes by virtue of their diminished intellectual capacity (in the Atkins case, 2002), will address this issue of diminished responsibility for "normal" adolescents in the Simmons case (2004). As Beckman puts it, the Supreme Court may well decide that, in some respects, adolescence is "akin to" the diminished capacity recognized in Atkins.
When do you die? In the past, people were pronounced dead when they stopped breathing, or their hearts stopped pumping blood, but by the standards of modern medicine, death means the death of the brain. When the patient's brain has stopped functioning, and he or she has entered what medicine calls "irreversible coma", the person can be pronounced dead -- even though other vital organs can be kept functioning through artificial life supports (in one case, a pregnant woman who suffered brain death was kept on life supports for nine weeks, until her fetus could be safely delivered by Caesarian section).
One reason we determine death by the state of brain functioning is that the brain is an organ that can't be replaced. If my heart stops working, I can (at least in principle) get another one through transplant surgery. So, too, with my liver or kidney. Or (again, at least in principle) I can get an artificial organ to do the needed work: we already have artificial lungs, kidney dialysis, and artificial hearts.
But a more substantive reason for defining death as brain death is that, at least since the Enlightenment, we have considered the brain to be the biological seat of consciousness and identity. When you get a new heart (or a substitute), you are still the same person that you were before. But if you should get another brain, or perhaps a substitute based on silicon chips, there is a sense that you wouldn't be the same person any more at all. Some philosophers disagree with this point (see, e.g., Raymond Kurzweil's The age of spiritual machines : when computers exceed human intelligence, 1999), but it seems self-evident to the rest of us.
Note, however, that brain death isn't death everywhere, even in the economically developed, technologically sophisticated cultures. In Japan, some doctors have been prosecuted after removing brain-dead patients from artificial life supports. Death is a process, with different parts of the body dying at different times. Japanese culture is very concerned about the social rituals surrounding dying, and is very concerned about the desecration of the body. Because the boundary between life and death is fuzzy, Japanese culture requires "more evidence" before declaring a person dead.
This cultural difference, coupled with the fact that we can keep brain-dead patients "alive" indefinitely by means of artificial life supports, has suggested to some commentators that, at least to some extent, death is a social construction: you're dead when authorities say you're dead (see Margaret Lock, Twice Dead: Organ Transplants and the Reinvention of Death, University of California Press, 2002).
This is not a joke: in an age where we can harvest organs from deceased individuals, including "anencephalic" infants born without brains, for transplanting into other people's bodies; and when advances in medical technology allow us to keep comatose patients alive indefinitely (though at great cost), even if they never regain consciousness, it becomes clear that the standards for determining whether a person is dead are not fixed forever, closed to debate.
So how do physicians determine whether you're dead? In general, the following physical signs must be present:
- The patient has a known disease, and is currently in a "deep coma" from which recovery is extremely unlikely.
- The patient must be unable to breathe independently, without artificial ventilation.
- The patient has no motor reflexes arising from the brainstem.
- There must be no evidence of activity in the cerebral cortex or the brainstem.
But why the brainstem, a relatively
primitive part of the nervous system, as opposed to the
cerebral cortex, where human consciousness and identity
surely reside? Because in a very real sense the brainstem
is the link between life below and above the neck. The
brainstem sends signals "down" to the vital organs
(through the vagus nerve) to regulate their functioning,
as well as "up" (through the reticular activating system)
to maintain cortical arousal and alertness. So, when the
brainstem is no longer working, it takes the rest of the
body, including the brain, down with it. For more on the
"diagnosis" of death, see "The Diagnosis of Brain Death" (New
England Journal of Medicine, 344, 1215-1221, 2001)
and "Brain Death Worldwide: Accepted Fact but No Global
Consensus in Diagnostic Criteria" (Neurology, 58,
20-25, 2002), both by E.F. Wijdicks.
Brain death doesn't happen all at once. It progresses (that's the word) in stages. Here's how it goes in cardiac arrest, in which blood stops circulating through the body:
- First to go is the brain, in stages, beginning within about 4-5 minutes of the cessation of blood flow:
- And the first of the brain structures to be damaged is the hippocampus, a structure mediating memory consolidation. A person who is resuscitated at this point will probably not remember anything of the experience.
- Then the cerebral cortex is affected, resulting in a general loss of cognitive function.
- Then the basal ganglia in the forebrain, resulting in an inability to control voluntary motor activity.
- Then the thalamus, which serves as a sort of sensory relay station, resulting in loss of vision, hearing, and touch.
- And finally, the brain stem, which regulates the respiratory and cardiovascular systems, resulting in a cessation of breathing.
- Loss of cardiac and kidney function occurs next.
- Then liver function is lost.
- And finally, as long as a few hours later, the lungs fail.
So by the time patients' hearts stop beating, and they
stop breathing on their own, the brain has been dead for a
considerable period of time. The paradox is that
brain-dead patients do not generally appear to be
dead. It looks like their bodies are functioning
normally, and they've just -- just! -- lost
consciousness. But unless they are resuscitated or
placed on artificial life supports, the rest of
their bodies will quickly follow, and their organs may be
unsuitable for transplantation.
Recently, however, popular media outlets gave
considerable attention to a study which seemed to show
that the brain continued to function after the
heart stopped beating -- a reversal of the sequence
outlined above (Norton et al., Canadian Journal of the
Neurological Sciences, 2017). But that's not
what the study actually found. The researchers
monitored various physiological functions in four patients
whose life supports were being terminated -- in other
words, the researchers watched these people die. For
three of the patients, EEG activity ceased before
cessation of cardiovascular activity, as reflected in EKG
monitoring and blood pressure. For the fourth
patient, however, this normal sequence was reversed: the
EEG continued to record some brain activity even after the
heart had stopped beating and blood pressure had dropped
to zero. Although some popular-press accounts
speculated that this suggested that consciousness could
continue after death, the authors themselves were inclined
to attribute this result to a "nonneuronal
artefact". In the first place, the EEG activity
observed consisted of low-frequency "delta" and "theta"
activity, normally observed during NREM sleep and
the persistent vegetative state. In the second
place, "nonneuronal artefacts" are hardly unknown in EEG
research. In fact, Adrian Upton, a neurologist at
McMaster University, found that the EEG could also record
electrical activity when the electrodes were placed on
a mold of Jell-O: the apparent electrical activity was
actually an artifact of movement by the gelatin in
response to air currents (this is a true story, not one of
those urban legends you find on the internet, although
unfortunately Upton never published his findings).
The point is that if you can get an EEG signal from a
bowl of Jell-O, getting an EEG signal from one dead
body doesn't mean much. It probably was
a "nonneuronal artefact", and not evidence of anything
like the survival of consciousness after bodily death.
From Structure to Function
The brain is an amazing structure:
it's been called the most complex entity in the universe.
- Roughly 86 billion neurons, and 100 thousand miles of axons, making 10 trillion synapses, supplied by 400 miles of capillaries, and supported by about a trillion glial cells, all of it packed into 3 pounds of organic matter.
- The cerebral cortex (neocortex) is only about 4 millimeters thick (a credit card is roughly 1 mm thick). If unfolded -- flattening out all those gyri and sulci -- it would spread across 2500 square centimeters -- about 20" x 20" -- a little bit bigger than a large pizza.
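As a quick sanity check on those round numbers (the 2500-square-centimeter figure is the one quoted above; the conversion factor is exact):

```python
# Sanity-check the quoted dimensions of the unfolded cerebral cortex.
# The 2500 square-centimeter figure is the round number quoted in the text.

CORTEX_AREA_CM2 = 2500
CM_PER_INCH = 2.54

# If the flattened sheet were square, each side would measure:
side_cm = CORTEX_AREA_CM2 ** 0.5      # 50 cm per side
side_in = side_cm / CM_PER_INCH       # about 19.7 inches per side

print(f"{side_cm:.0f} cm per side, or about {side_in:.1f} inches")
# Consistent with the "about 20 x 20 inches" comparison quoted above.
```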
That's pretty amazing, but we're doing more than anatomy here. We really want to know how these organs, tissues, systems, and circuits are related to mind and behavior. In other words, what are the psychological functions of these different brain structures?
According to the general principle
known as localization of function, or functional
specialization, different parts of the brain
serve different psychological functions. This idea
originated with Franz Joseph Gall (1758-1828), Johann
Spurzheim (1776-1832), George Combe, and others who promoted
a pseudoscience known as phrenology.
Phrenology was popularized in America by Nelson Sizer and
Orson Squire Fowler (and his brother Lorenzo), and you can
sometimes purchase a ceramic copy of Fowler's "phrenological
head" in novelty stores. Well into the 19th century, the
brain was thought to be a single, massive, undifferentiated
organ. But according to Gall and his
followers, the brain is composed of discrete organs, each
associated with some psychological faculty, such as morality
or love. In classic phrenological doctrine, there were about
35-40 such faculties, and the entire set of faculties
comprises a common-sense description of human abilities and traits.
The Phrenological Faculties
- Amativeness or Physical Love: the reproductive instinct, sexual attraction, and sexual desire (wouldn't you know that the phrenologists would name this one first!).
- Philoprogenitiveness or Parental Love: A particular feeling which watches over and provides for helpless offspring, or parental love.
- Adhesiveness or Friendship: A feeling or attraction to become friendly with other persons, or to increase social contacts.
- Combativeness: The disposition to quarrel and fight.
- Destructiveness: The propensity to destroy.
- Secretiveness: The propensity to conceal, predisposes the individual to Cunning and Slyness.
- Acquisitiveness: The propensity to acquire.
- Self-Esteem: This sentiment gives us a great opinion of ourselves, constituting self-love.
- Approbativeness: This faculty seeks the approbation of others. It makes us attentive to the opinion entertained by others of ourselves.
- Cautiousness: This organ incites us to take precautions.
- Individuality: This faculty contributes to the recognition of the existence of individual beings, and facilitates the embodiment of several elements into one.
- Locality: This faculty conceives of places occupied by the objects that surround us.
- Form: This allows us to understand the shapes of objects.
- Verbal Memory: The memory for words.
- Language: Philology in general.
- Coloring: This organ cognizes, recollects, and judges the relations of colors.
- Tune: The organ of musical perception.
- Calculativeness or Number: The organ responsible for the ability to calculate and to handle numbers and figures.
- Constructiveness: The faculty leading to the will of constructing something.
- Comparison: This faculty compares the sensations and notions excited by all other faculties, points out their similitudes, analogies, differences or identity, and comprehends their relations, harmony or discord.
- Causality: This faculty allows us to understand reason behind events.
- Vitativeness or Wit: This faculty predisposes men to view every thing in a joyful way.
- Ideality: This faculty vivifies the other faculties and impresses a peculiar character called ideal.
- Benevolence: This power produces mildness and goodness, compassion, kindness, and humanity.
- Imitativeness: This organ produces a fondness for acting and for dramatic representation.
- Generation: This faculty allows us to come up with new ideas.
- Firmness: This faculty gives constancy and perseverance to the other powers, contributing to maintain this activity.
- Time: The faculty of time conceives the duration of phenomena.
- Eventuality: This faculty recognizes the activity of every other, and acts in turn upon all of them.
- Inhabitiveness: The instinct that prompts one to select a particular dwelling, often called attachment to home.
- Reverence or Veneration: By this organ's agency man adores God, venerates saints, and respects persons and things.
- Conscientiousness: This organ produces a feeling of justice and conscientiousness, or the love of truth and duty.
- Hope: Hope induces a belief in the possibility of whatever the other faculties desire, it inspires optimism about future events.
- Marvelousness: This sentiment inspires belief in the true and the false prophet, and aids superstition, but is also essential to the belief in the doctrines of religion.
- Size: This organ provides notions of the dimensions or size of external objects.
- Weight and Resistance: This faculty procures the knowledge of the specific gravity of objects, and is of use whenever weight or resistance are worked upon with the hands, or by means of tools.
- Order: This faculty gives method and order to objects only as they are physically related.
From: "Phrenology: The History of Brain Localization" by Renato M.E. Sabbatini (Brain & Mind, March 1997)
Gall and other phrenologists inferred
the associations between these abilities and traits and
brain locations by looking at exemplary cases: if, for
example, an obese person had a bulge in some part of his
skull, the area of the brain underneath must control eating.
The famous case of Phineas Gage, a railway-construction
foreman who in 1848 sustained severe damage "in the
neighborhood of Benevolence and the front part of
Veneration" when a tamping iron was driven through his left
eye socket and out the top of his head (and lived to tell
about it), was a linchpin of arguments about phrenology in
particular and functional specialization in general.
(Interestingly, Gage is actually an ancestor of Fred Gage, a
Salk Institute neuroscientist who studies neurogenesis.)
Phrenological San Francisco
On the theme of phrenology, Rebecca Solnit's book, Infinite City (2010), includes this satirical map of San Francisco, mapping the character of various neighborhoods in a manner inspired by Fowler's phrenological head.
Methods of Neuropsychology
Gall's basic idea was pretty good, but his science was very bad (which is why we call phrenology a pseudoscience). Nobody takes phrenology seriously these days. However, evidence from neuropsychology favors Gall's idea of functional specialization, though not in Gall's particular form. The modern, scientifically based list of mental "faculties", and corresponding brain regions, is much different from his.
Neuropsychology offers a number of techniques for studying functional specialization.
Historically, the most important method for neuropsychology involves brain lesions, in which scientists observe the mental and behavioral consequences of damage to some portion of the brain.
In humans, this damage is almost always accidental, resulting from some insult, injury, or disease treated by neurologists. Obviously, ethical constraints prevent investigators from intentionally damaging the brains of human subjects, but studies of neurological patients can provide valuable evidence about functional specialization.
- For example, in 1861 the neurologist Paul Broca (1824-1880) discovered that lesions in the posterior portion of the inferior frontal gyrus, in the frontal lobe, were associated with certain disorders of speech, leading to the notion that this area was somehow specialized for speech. Broca studied a single patient, Louis Victor Leborgne, who had lost his speech function at age 30; Leborgne was also known as "Tan", because this was the only word he could speak. When Tan died 21 years later, Broca autopsied his brain, and found a distinct lesion in the left frontal lobe -- an area now known as "Broca's area". With his finding, Broca clinched the case for functional specialization. Tan thus became the first "named case" in the history of cognitive neuropsychology. His story is detailed by Cezary Domanski, a Polish historian of psychology, in a paper in the Journal of the History of the Neurosciences (2013).
- In 1874, another neurologist, Carl Wernicke (1848-1905), discovered that lesions in a different location -- roughly, in the posterior portion of the superior temporal gyrus near the boundary of the temporal lobe with the occipital and parietal lobes -- were associated with a different form of language disorder. "Wernicke's area" is now known to be important for the comprehension of speech.
Sometimes, the lesions result from
desperate medical procedures intended to alleviate disease,
pain, or suffering. An example is the patient H.M.,
discussed below, who had his hippocampus and related
structures destroyed in an attempt to control otherwise
intractable epileptic seizures. At the time, the function of
the hippocampus was unknown, and the surgeons were very
careful to leave intact areas, such as the auditory
projection area in the nearby temporal lobe, whose functions
were known. It was only later that the surgeons and
neuropsychologists realized that the hippocampus really did
have a function -- and a rather critical one at that. This
was because H.M. subsequently lost his ability to remember
recent events -- indeed, anything at all that happened to
him subsequent to his operation.
I'll have more to say about H.M. later in these lectures, and also in the lectures on Memory.
Link to an interview with Brenda Milner, who did the first experimental studies of H.M.
What Was It Like To Be H.M.?
Henry Molaison was the subject of a great deal of important memory research, but all of those scientific papers gave readers little idea of what his life was like -- what it was like, that is, to have no memory. Cognitive psychologists were very interested in H.M., but, until Prof. Stanley Klein at UC Santa Barbara came along, hardly any personality or social psychologists took notice of him, or examined the implications of memory for identity, personality, and social interaction. Klein never got to work with H.M., but his studies of other amnesic patients shed light on the structure of the self as a memory representation (e.g., Klein & Kihlstrom, Journal of Experimental Psychology: General, 1998). Klein was also the first to note that amnesic patients have difficulty thinking of their personal futures, as well as their personal pasts, leading to the speculation that memory is the basis for mental time travel.
One exception was Philip Hilts, a science journalist who made H.M. the centerpiece of his book on memory research: Memory's Ghost: The Nature of Memory and the Strange Tale of Mr. M. (1996). It is to Hilts that we owe H.M.'s description of his situation, moment to moment, day in and day out, as "like awakening from a dream".
After H.M. died, in 2008, Suzanne Corkin, the neuropsychologist who had worked most closely with him over the years (Brenda Milner, a Canadian neuropsychologist who worked with Wilder Penfield, actually did the earliest research on H.M., published with Scoville in 1957; Corkin had been Milner's graduate student), published a sort of double "biography", covering both H.M.'s life and her own research program, entitled Permanent Present Tense: The Unforgettable Life of the Amnesic Patient H.M. (2013). After all the research on H.M.'s memory, this is the first real biography of the man himself -- aside from Memory's Ghost. Corkin reports that H.M.'s lesion also included part of his amygdala -- which, she speculates, may have been responsible for his relatively placid personality. She writes (as quoted by Charles Gross, reviewing Corkin's book for The Nation, 11/04/2013):
Buddhism and other philosophies teach us that much of our suffering comes from our own thinking, particularly when we dwell in the past and in the future.... Meditation is a method for training the mind to have a new relationship with time, knowing only the present.... Dedicated meditators spend years practicing being attentive to the present -- something Henry could not help but do.
When we consider how much of the anxiety and pain of daily life stems from attending to our long-term memories and worrying about and planning for the future, we can appreciate why Henry lived much of his life with relatively little stress.... [A] part of us all can understand how liberating it might be to always experience life as it is right now, in the simplicity of a world bounded by thirty seconds.
Some time thereafter, a kind of counterpoint appeared, in the form of another book, Patient H.M.: A Story of Memory, Madness, and Family Secrets by Luke Dittrich (a precis of sorts appeared in the New York Times Magazine as "The Brain That Couldn't Remember", 08/03/2016; see also the extensive review of the book by Seth Mnookin, a science journalist: "A Book Examines the Curious Case of a Man Whose Memory Was Removed", New York Times Book Review, 09/04/2016). Dittrich, as it happens, is the grandson of William Scoville, the neurosurgeon who performed the surgery on H.M. -- and, to make the story even better, his mother was Corkin's best friend in childhood. Dittrich's book washes a fair amount of family laundry. For example, he notes that his grandfather was a proponent of psychosurgery, and in particular prefrontal lobotomies, in cases of schizophrenia and other psychoses -- a procedure made infamous in Ken Kesey's One Flew Over the Cuckoo's Nest and other books and films. But H.M.'s operation wasn't the usual sort of lobotomy. Nobody thought that H.M. was psychotic, and nobody was trying to control his behavior through psychosurgery. H.M.'s epilepsy apparently arose from his temporal lobe, and the surgery was an act of desperation to try to relieve him of the suffering of intractable seizures. In both the magazine article and the book, and in subsequent interviews on the news media, Dittrich also made a number of charges concerning Scoville's motives in performing the surgery, and Corkin's mishandling of H.M. during his later career as a research subject. By this time, Corkin herself had died, and couldn't defend herself. But MIT, where Corkin had been on the faculty, rebutted the most serious charges, and Corkin's colleagues (myself among them) wrote a strong letter to the Editor of the Times to protest Dittrich's treatment of her research.
So if you're interested in H.M., you should read this book; but you should also take its more spectacular claims with more than a grain of salt; and you should read Corkin's book as well.
Another detailed biography of an amnesic patient is The Perpetual Now: A Story of Amnesia, Memory, and Love, by Michael D. Lemonick (2017). Lemonick tells the story of Lonni Sue Johnson, who, when she was 57 years old, suffered the destruction of her hippocampus due to a viral infection. As a result, she could no longer remember anything that happened to her more than about five minutes after the event. Lemonick poses a good question about memory and identity: "If we have no memories of the experiences that made us, how can we know who we are?".
In laboratory experiments with nonhuman animals, brain tissue is sometimes deliberately destroyed, either surgically or by a relatively large jolt of electricity. For example, Philip Teitelbaum and his colleagues discovered that lesions in the ventromedial portion of the hypothalamus (VMH) cause rats to overeat to the point of obesity (a syndrome known as hyperphagia), while lesions in the lateral portion of the hypothalamus (LH) cause rats to under-eat to the point of starvation or even death (a syndrome known as aphagia).
Findings such as these led to the conclusion that these two portions of the hypothalamus constitute "centers" for the initiation and termination of eating. In Teitelbaum's dual-center theory of eating behavior, the VMH inhibited eating, while the LH excited eating. The two centers were arrayed antagonistically, so that activation of one inhibited the other. Destruction of the VMH released the LH from inhibition, leading to hyperphagia. Destruction of the LH, on the other hand, released the VMH from inhibition, leading to aphagia. The dual-center theory has now been significantly revised. Eating turns out to be more complicated than this, and it's certainly not the case that people with eating disorders, whether obese or anorectic, have lesions in one part or the other of their hypothalamus. The dual-center theory is offered here simply as a classic example of the way we can draw inferences from the results of lesions (and besides, Teitelbaum was one of my teachers in graduate school!).
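The antagonistic wiring posited by the dual-center theory can be illustrated with a toy simulation. This is purely schematic -- the activation values and inhibition weights below are invented for illustration, not physiological estimates:

```python
# Toy sketch of the (now-superseded) dual-center theory of eating:
# the LH excites eating, the VMH inhibits it, and each center inhibits
# the other. All numbers are arbitrary illustration values, not physiology.

def eating_drive(lh_intact=True, vmh_intact=True):
    lh = 1.0 if lh_intact else 0.0       # baseline activity of each center;
    vmh = 1.0 if vmh_intact else 0.0     # a lesion silences the center
    # Mutual inhibition: each center's output is reduced by the other's.
    lh_out = max(0.0, lh - 0.5 * vmh)
    vmh_out = max(0.0, vmh - 0.5 * lh)
    # Net drive to eat: LH output excites eating, VMH output inhibits it.
    return lh_out - vmh_out

print(eating_drive())                    # intact: balanced drive (0.0)
print(eating_drive(vmh_intact=False))    # VMH lesion: LH disinhibited -> hyperphagia (1.0)
print(eating_drive(lh_intact=False))     # LH lesion: VMH disinhibited -> aphagia (-1.0)
```

The point of the sketch is the logic of the inference, not the numbers: destroying either center releases the other from inhibition, reproducing the hyperphagia and aphagia syndromes described above.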
Whether lesions occur accidentally or deliberately, they are, for all intents and purposes, permanent. Temporary lesions can sometimes be produced by:
- cooling a portion of the brain, or by
- applying certain chemicals which disrupt electrical activity (a technique known as spreading depression).
But there are other techniques that do not cause permanent damage.
In electrical stimulation of the brain, a small electrical current is applied to some portion of the brain (or even a single cell) by means of a microelectrode implanted in brain tissue, and we observe the behavior or experience that results from that stimulation.
For example, Walle Nauta (1946) discovered that lesions in some portions of the reticular formation (in the upper brainstem, or midbrain) cause cats to fall permanently asleep, able to be awakened only by very loud sounds. Giuseppe Moruzzi and Horace Magoun (1949) discovered that electrical stimulation of these same regions caused cats to stay permanently awake (so long as the current is on). These observations led investigators to postulate the existence of a reticular activating system, centered on the reticular formation, which modulates levels of arousal and alertness.
As another example, James Olds and
Peter Milner (1954) inserted microelectrodes into various
locations in the brains of rats, and put the animals into an
apparatus, commonly known as a "Skinner box" (which I'll
discuss later in the lectures on Learning), in which
pressing a lever delivered a short burst of electrical
current to that area of the brain -- in particular, a
portion of the limbic system known as the "septal
area". It turned out that the rats would "work"
feverishly to get these pulses, often to the exclusion of
anything else. This led the septal area, and
especially the nucleus accumbens (abbreviated NAc),
to be dubbed the "pleasure center" of the brain. The
NAc and ventral tegmentum (abbreviated VTA for
Ventral Tegmental Area), another limbic-system structure,
are major sources of the neurotransmitter dopamine, which is
also involved in reward. Together, the NAc and VTA
constitute a dopaminergic reward system that is
heavily implicated in substance addiction.
- There are other reward systems, too, mostly centered
on other structures in the limbic system.
- I'll have more to say about addiction and the reward
systems of the brain in the lectures on Motivation.
- The septal area also turns out to be important for understanding psychopathy, a form of mental illness, as discussed in the lectures on Psychopathology and Psychotherapy.
A variant on brain stimulation is transcranial
direct current stimulation, or tDCS, in
which a very small electrical current (about the
equivalent of what is supplied by a standard 9-volt
battery) is applied to electrodes attached to the scalp
(thus, not to the brain directly). A large number of
positive effects have been claimed for tDCS, depending on
where, and for how long, it is applied. But most of
these findings have not been replicated, so the actual
effects of the technique, and any negative side-effects,
are unknown at present. Some vendors are offering
consumer tDCS devices for home use, and some people have
tried the simple expedient of attaching two wires to the
terminals of a 9-v drugstore battery. As they say, caveat emptor.
For interesting reviews of tDCS, see:
- "Transcranial Direct Current Stimulation: Five Important Issues We Aren't Discussing (but Probably Should Be)" by J.C. Horvath et al., Frontiers in Systems Neuroscience (2014), a scholarly critique of tDCS research.
- J.C. Horvath et al., "Evidence That Transcranial Direct Current Stimulation (tDCS) Generates Little-to-No Reliable Neurophysiological Effect Beyond MEP Amplitude Modulation in Healthy Human Subjects: A Systematic Review" (2015), a "meta-analysis" that yielded mostly negative findings.
- "Electrified" by Elif Batuman, New Yorker, 04/06/2015, a more personal, journalistic account.
Transcranial Magnetic Stimulation
Transcranial magnetic stimulation (TMS) is a variant on electrical stimulation, creating the functional equivalent of a temporary, reversible lesion in a discrete portion of brain tissue without requiring surgery to open up the scalp and skull (other techniques for creating temporary, reversible lesions, such as hypothermia and spreading depression, require surgical access to brain tissue). In TMS, a magnetic coil is applied to the scalp, delivering a magnetic pulse (which can approach 2 Tesla, about the strength of the MRI scanners used clinically, though not as strong as the pulses produced by the 4T machine used for research at UC Berkeley). The rapidly changing magnetic field induces an electrical field on the surface of the cortex. This field in turn generates neural activity which is superimposed on, and interferes with, the ongoing electrical activity of nearby portions of the brain. This temporary disruption of cortical activity, then, interferes with the performance of tasks mediated by parts of the brain near the site of application.
For example, TMS applied over a particular region of the occipital lobe interferes with visual imagery, supporting findings from other brain-imaging techniques that striate cortex is involved in visual imagery, as it is in visual perception.
Another method is to record physiological activity when people are engaged in particular forms of mental activity. Most psychophysiology focuses on measures of autonomic nervous system activity, such as measures of heart rate (the electrocardiogram, or EKG), blood pressure (the plethysmograph), and the electrical properties of the skin (the electrodermal response, or EDR, of which a good example is the skin conductance response, or SCR). There are also measures of somatic nervous system activity, such as the electromyogram (EMG). These measures will be familiar from crime dramas that use the polygraph as a lie detector (which, by the way, doesn't work very well).
But psychophysiology also includes
measures of cortical activity, the most famous of which is
the electroencephalogram (EEG). Because
the brain is an electrochemical system, brain activity
generates small electrical currents that can be recorded by
electrodes placed on the scalp. Over the years, a number of
different kinds of EEG activity have been recognized.
- The normal waking EEG is a mix of "alpha" (high-voltage, low frequency) and "beta" (low voltage, high frequency) activity.
- Alpha activity drops out when people fall asleep.
- Certain stages of sleep are characterized by "delta" activity (very high voltage, very low frequency).
- There are other bands associated with other states.
In the event-related potential (ERP) technique, as in conventional EEG, electrodes are placed on the scalp to record the electrical activity of the neural structures underneath. Then, a particular stimulus is presented to the subject, and the response in the EEG is recorded. If you present the stimulus just once, you don't see much: there are lots of neurons, and so there's lots of noise. But in ERP, the brain's response to the same (or very similar) stimulus is recorded over and over again: when all the responses are combined, a particular waveform appears that represents the brain's particular response to that particular kind of event.
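The signal-averaging logic can be sketched numerically: a small, fixed response buried in much larger random noise emerges once enough trials are averaged. The waveform and noise level below are synthetic, for illustration only:

```python
import random

random.seed(1)

# Synthetic ERP demo: a fixed "brain response" buried in random noise
# several times larger than the signal itself.
erp = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, 0.0]   # the true event-related response
n_samples = len(erp)

def one_trial():
    # Each recorded trial = true response + large random noise.
    return [erp[t] + random.gauss(0, 3.0) for t in range(n_samples)]

def average_trials(n_trials):
    # Average the same response over many trials; noise cancels out.
    total = [0.0] * n_samples
    for _ in range(n_trials):
        for t, v in enumerate(one_trial()):
            total[t] += v
    return [x / n_trials for x in total]

def rms_error(estimate):
    # How far the averaged waveform is from the true response.
    return (sum((a - b) ** 2 for a, b in zip(estimate, erp)) / n_samples) ** 0.5

# The error shrinks roughly as 1/sqrt(n_trials):
for n in (1, 10, 100, 1000):
    print(f"{n:5d} trials: RMS error = {rms_error(average_trials(n)):.2f}")
```

A single trial is dominated by noise, but by a few hundred trials the averaged waveform closely tracks the true response -- which is why ERP experiments present the same kind of stimulus dozens or hundreds of times.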
The ERP has several components:
- Those that lie in the first 10 milliseconds or so reflect the activity of the brainstem;
- those that lie in the next 90 milliseconds, up to 100 milliseconds after the stimulus, reflect the activity of sensory-perceptual mechanisms located in the primary sensory projection area corresponding to the modality of the stimulus (seeing, hearing, touch, smell, etc.);
- those that lie beyond 100 milliseconds reflect the activity of cortical association areas.
The characteristics of these components vary with the nature of the subject's mental activity.
- For example, the N100 wave (also known as N1), a negative potential occurring about 100 milliseconds after the stimulus, increases if the stimulus was in the focus of the person's attention, and decreases if it was in the periphery.
- The N200 wave (N2), another negative potential appearing 200 milliseconds after the stimulus, is elicited by events that violate the subject's expectations.
- The P300 wave (P3), a positive potential about 300 milliseconds out, is increased by some unexpected, task-relevant event (such as a change in the category to which the stimulus belongs); it seems to reflect a sort of "updating" of the subject's mental model of the environment.
- And the N400 wave (N4), a negative potential about 400 milliseconds after the stimulus, is increased by semantic incongruity: for example, when the subject hears a nonsensical sentence.
ERPs can be recorded from the scalp as a whole, or they can be collected individually at lots of separate sites. In the latter case, the availability of powerful, high-speed computers permits a kind of brain-imaging: we can see where the ERP changes are the largest (or the smallest); and we can see how the ERP changes move with time. In this way, we can see how the brain shifts from one area to another when processing a stimulus or performing a task.
In brain imaging,
techniques such as PET or fMRI (see below) are employed to
literally watch the activity of various parts of the brain
while subjects perform some mental task. For example,
different parts of the brain "light up" when subjects
examine a visual stimulus or listen to a tune.
Here are some combined MRI and PET
images of the brain of a single subject engaged in different
kinds of thinking, collected by Dr. Hanna Damasio of the
University of Iowa College of Medicine, and published in the
New York Times Magazine (May 7, 2000). Red areas indicate increased brain activation, as measured by blood flow, while areas in purple indicate decreased brain activation.
- In the upper left corner, the subject is thinking pleasant thoughts;
- in the upper right corner, depressing thoughts.
- In the lower left corner, anxious thoughts;
- in the lower right, irritating thoughts.
Brain imaging has progressed quite a bit since those early days,
and using fMRI, coupled with immense computational power
and sophisticated processing algorithms, it is now
possible to identify discrete patterns of neural activity
associated with specific thoughts -- or, at least, specific images. In what I consider to be the most outstanding technical achievement of cognitive neuroscience to date, Jack
Gallant and his group at UC Berkeley took advantage of the
“topographical” organization of the visual system of the brain. That is to say, each part of a visual stimulus projects onto a
discrete portion of the retina of the eye, which in turn
projects onto a discrete portion of primary visual cortex
in the occipital lobe.
Gallant recorded activity in this portion of the
brain while subjects viewed some 1,750 different pictures
of natural objects, and then created a model representing the activity in each voxel (like a pixel on a computer screen, only three-dimensional) of the fMRI image acquired while the subject viewed each individual image. They then
recorded activity in this same area while the subject
viewed 120 novel images, and attempted to identify which
picture the subject was viewing, based solely on his
pattern of brain activity, aggregated over 13 views of each image.
The amazing thing is that they were actually able to do this. Based just on chance, we would expect these predictions to be accurate less than 1% of the time -- that is, 1 chance out of 120. But in fact, their predictions were accurate 92% of the time in one subject, and 72% of the time in another subject. Even when they considered only a single viewing trial, accuracy was far greater than we would expect just by chance -- 51% in the one subject, 32% in the other.
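To make the arithmetic concrete: with 120 candidate images, random guessing identifies the right one with probability 1/120, a bit under 1%. The identification step itself can be sketched as a toy simulation -- all numbers and names here are illustrative, not Gallant's actual model: each candidate image gets a predicted voxel-activity pattern, and the decoder picks the candidate whose prediction correlates best with the observed pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_voxels = 120, 500  # illustrative sizes, not the real dimensions

# Hypothetical model predictions: one voxel-activity pattern per candidate image.
predicted = rng.standard_normal((n_images, n_voxels))

def identify(observed, predicted):
    """Return the index of the candidate whose predicted pattern best
    matches the observed pattern (highest Pearson correlation)."""
    corrs = [np.corrcoef(observed, p)[0, 1] for p in predicted]
    return int(np.argmax(corrs))

# Simulate viewing each image: observed activity = prediction + measurement noise.
correct = 0
for i in range(n_images):
    observed = predicted[i] + rng.standard_normal(n_voxels)
    if identify(observed, predicted) == i:
        correct += 1

accuracy = correct / n_images
chance = 1 / n_images  # what guessing alone would achieve
print(f"accuracy = {accuracy:.2f}, chance = {chance:.4f}")
```

Even with substantial noise, matching against 500 voxels far outperforms the 1-in-120 chance level, which is the logic behind calling the observed 92% and 72% accuracies remarkable.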
In a further
experiment, Gallant and his colleagues ran their program
“backwards”, as it were, and attempted to reconstruct the
pictures themselves, based solely on the subjects’ pattern
of brain activity. In
this slide, the target image is in the right-hand column,
and the reconstructions, based on different algorithms,
are in the others. The
reconstructions are far from perfect – this is
cutting-edge work that pushes available technology to its
very limits! But
they’re pretty good, and support the idea that every
image, memory, and thought is represented in the brain by
a unique pattern of neural activity.
Of course, brains are active even if you're not thinking of anything in particular -- because you're always thinking of something, even if you're thinking about not thinking. This constant background activity of the brain is known as the default mode, and is visible through brain imaging when a person is daydreaming, asleep, or even anesthetized. The default mode itself involves a specific pattern of neural activity, not just randomness, known as the default mode network (DMN). And the default mode is active indeed, consuming as much as 20 times the energy used in conscious mental activities -- which is why the DMN is sometimes referred to as the "dark energy" of the brain. When we engage in some conscious mental activity, such as thinking pleasant or irritating thoughts, the brain shifts from this default mode into some other pattern. Disruptions to the DMN may be implicated in a wide variety of mental illnesses, such as Alzheimer's disease or depression -- and even in thinking errors made by normal people in the ordinary course of everyday living.
For more on the default mode network in the brain, see "The Brain's Dark Energy" by Marcus E. Raichle, one of the pioneers of functional brain imaging, in Scientific American, 03/2010.
Brain imaging techniques are gaining in popularity, especially in human research, because they allow us to study brain-behavior relations in human subjects who have intact brains. But still, some of the clearest evidence for mind-brain relations is based on the lesion studies that comprise classic neuropsychological research.
The fact that the nervous system operates according to certain electrochemical principles has opened up a wide range of new techniques for examining the relationship between brain and mind. Some of these techniques are able to detect lesions in the brain, without need for exploratory surgery or autopsy. Others permit us to watch the actual activity of the brain while the subject performs some mental function.
CT (CAT) Scans. In x-ray computed tomography (otherwise known as CAT scan, or simply CT), x-rays are used to produce images of brain structures. This would seem to be an obvious application of x-ray technique, but there are some subtle problems: (1) radiation can damage brain tissue; (2) brain tissue is soft, and so x-rays pass right through it; and (3) x-rays produce two-dimensional images, and so it is hard to distinguish between overlapping structures (that is, you can see the edges of the structures, but you can't detect the boundary between them). The CT scan uses extremely low doses of x-rays, too weak to do any damage, or to pass through soft tissue. It also takes many two-dimensional images of the brain, each from a different angle. Then a computer program takes these hundreds of individual two-dimensional images and reconstructs a three-dimensional image (this requires a very fast, very powerful computer). CT scans allow us to determine which structures are damaged without doing surgery, or waiting for the patient to die so that an autopsy can be performed.
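The reconstruction step can be illustrated with a deliberately tiny sketch. Clinical CT uses filtered back-projection (or iterative methods) over hundreds of views; this toy uses just two views (row sums and column sums) and unfiltered back-projection, but it shows the core idea: smearing each one-dimensional projection back across the grid and summing localizes a dense spot.

```python
import numpy as np

# Toy "brain slice" with a single dense spot (a stand-in for a lesion).
slice_ = np.zeros((4, 4))
slice_[2, 1] = 1.0

# Two x-ray "views": projections along rows and along columns.
row_proj = slice_.sum(axis=1)   # side view
col_proj = slice_.sum(axis=0)   # top view

# Unfiltered back-projection: smear each projection back across the
# grid and add them up. The spot appears where the smears intersect.
backproj = row_proj[:, None] + col_proj[None, :]

est = tuple(int(i) for i in np.unravel_index(np.argmax(backproj), backproj.shape))
print(est)  # prints (2, 1) -- the location of the dense spot
```

With only two views the reconstruction is blurry (every cell in the spot's row and column picks up some signal), which is why real scanners take many projections from many angles.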
Magnetic Resonance Imaging (MRI). The technique of magnetic-resonance imaging (MRI) is based on the fact that some atoms, including hydrogen atoms, act like tiny magnets: when placed in a magnetic field, they will align themselves along lines of magnetic force. Bombarding these atoms with radio waves will set them spinning, inducing a magnetic field that can be detected by sensitive instruments. In a manner similar to CT, readings from these instruments can be used to reconstruct a three-dimensional image of the brain. However, this image has a much higher resolution than CT, and so can detect much smaller lesions.
MRI is such an important advance in medical technology that Nobel prizes have been awarded on several occasions for work relating to it. The 2003 Nobel Prize for Medicine or Physiology was awarded to Paul C. Lauterbur, a physical chemist at the University of Illinois, and Peter Mansfield, a physicist at the University of Nottingham, in England, for basic research that led to the development of the MRI. Lauterbur published a pioneering paper on 2-dimensional spatial imaging with nuclear magnetic resonance spectroscopy (when NMR was picked up by medicine, the word "nuclear" was dropped for public-relations reasons, so that patients would not think that the technique involved radiation -- which it doesn't). Mansfield later developed a technique for 3-dimensional scanning. Eager young scientists should note that Lauterbur's paper was originally rejected by Nature, although the journal eventually published a revision. And if that's not inspiration enough, Mansfield dropped out of the British school system at age 15, returning to college later; now he's been knighted!
Some controversy ensued because the prize committee chose not to honor the contributions of Raymond Damadian, a physician, inventor, and entrepreneur, who made the initial discovery that cancerous tissue and normal tissue give off different magnetic resonance signals. Damadian also proposed that it be used for scanning tissues inside the body, and made the first working model of a MR scanner (now on display in the Smithsonian National Museum of American History). Damadian subsequently took out expensive full-page advertisements in the New York Times and other publications to assert his priority. But even before Damadian, Vsevolod Kudravcev, an engineer working at the National Institutes of Health, produced a working MRI device in the late 1950s by connecting an NMR to a television set: his supervisor told him to get back to his assigned job, and nothing came of his work. (See "Prize Fight" by Richard Monastersky, Chronicle of Higher Education, 11/07/03).
The Nobel committee, following its tradition, has been silent on its reasons for excluding Damadian. Everyone seems to agree that Lauterbur and Mansfield's work was crucial to developing MRI as a clinically useful technique. Still, no one doubts the importance of Damadian's pioneering work, either, and the Nobel rules make room for up to three recipients of a prize. The decision may just reflect an admittedly unreliable historical judgment. On the other hand, science has its political elements, and there could have been other reasons for denying Damadian the prize. It is possible that he was denied the prize because he is a practicing physician and business entrepreneur rather than an academic scientist. Perhaps it was because he is relentlessly self-promoting (as in his newspaper ads, which are without precedent in Nobel history), famously litigious (he won a settlement of $129 million from General Electric for patent infringement), and rubs people the wrong way. Or perhaps the Nobel committee did not want to give an award for biology, and thus at least indirect legitimacy, to someone who rejects the theory of evolution, the fundamental doctrine of modern biology, believes in a literal reading of the Bible, and has lent his support to creationism.
Positron Emission Tomography (PET). CT and MRI are new ways of doing neuroanatomy: they help us to localize brain damage without resort to surgery or autopsy. And that's important. But we'd also like to be able to do neurophysiology in a new way: to watch the brain in action during some mental operation. A technique that permits this is positron-emission tomography (PET). This technique is based on the fact that brain activity metabolizes glucose, or blood sugar. A harmless radioactive isotope is injected into the bloodstream, which "labels" the glucose. This isotope is unstable, and releases subatomic particles called positrons; the positrons collide with other subatomic particles called electrons, emitting gamma rays that pass through the skull and, again, are detected by sensitive instruments. When a particular portion of the brain is active, it metabolizes glucose, and so that part of the brain emits more gamma rays than other parts. Fast, powerful computers keep track of the gamma-ray emissions, and paint a very pretty picture of the brain in action, with different colors reflecting different levels of activity.
Functional MRI (fMRI). This is a variant on MRI that records brain activity over time. Like PET, it can be used to record the activity of small regions of the brain over relatively small intervals of time. However, the temporal resolution of fMRI is finer than that of PET, permitting investigators to observe the working brain over shorter time scales. Whereas most brain-imaging studies have to "borrow" time on machines intended for clinical use, UC Berkeley has a very powerful fMRI machine dedicated entirely to research purposes.
Diffusion Tensor Imaging (DTI) or Diffusion Spectrum Imaging (DSI) is a variant on fMRI, and complements fMRI in several ways.
Whereas fMRI tracks the flow of blood in the brain, and
indicates activity in the "grey matter", or neurons that
comprise the surface sheet of the cerebral cortex, DTI
tracks the movement of water and indicates the activity of
the "white matter", the long axons that connect various
locations in the brain. While fMRI can reveal distinct brain
centers that are involved in various functions, DTI can
reveal how distinct brain centers link with each other in
systems. This DTI image, looking down on a mammalian brain
from the top, shows three different groups of fibers:
- in blue, connecting structures in the top and bottom of the brain;
- in green, connecting front and back;
- in red, connecting left and right hemispheres.
Here, in an image taken from American Scientist (9-10/2010), is a DSI image of a cat's brain at 10 days of age, and again at 3 months; you can see the dense interconnections characteristic of the mature mammalian brain.
The interconnections between different brain regions are implicated in traumatic brain injury induced by forceful blows to the head, such as those sustained in vehicle accidents, on the football field, and in explosive blast injuries suffered by servicemembers in the Iraq and Afghanistan wars.
- In the case of explosions, the brain damage is caused by the primary shock wave produced by the expansion of gases triggered by the ignition of explosive materials.
- There is also a secondary blast effect -- a wind traveling at supersonic speeds, turning blast debris into projectiles, causing penetrating head wounds.
- And a tertiary blast effect which throws humans, animals, and vehicles around the field, causing concussive blows to the head.
- And quaternary blast effects, including fire, chemical burns, and dust, which can smother humans and other animals (even a temporary loss of oxygen supply can cause brain damage).
- The chief effect of the primary shock wave is to shake the victim's head back and forth.
- One effect is to crush opposite sides of the surface of the brain against the interior of the skull, resulting in an injury known as coup contre coup, causing lesions on opposite poles of the surface of the brain -- for example, the frontal and occipital lobes. This is commonly observed in victims of concussive head injuries.
- But another, more subtle effect is to tear the white-matter tissue connecting different brain regions. This kind of damage appears to be unique to those who have been exposed to explosive blasts.
- This blast-induced neurotrauma may be associated with chronic traumatic encephalopathy (CTE), a degenerative neurological disease also seen in football players, boxers, and others who have suffered repeated concussive blows to the head.
Beginning in 2010, the NIH Human Connectome Project, modeled after the Human Genome Project, has used various histological and imaging techniques to trace the connections between various parts of the brain (so far, researchers have succeeded in tracing all the connections in the nervous system of a worm species with only about 300 neurons). The next step is to perfect their techniques on the 100 million neurons of the mouse brain, before getting to the 100 billion neurons of the human brain. For the present, human connectome researchers have to be content with tracing the connections between larger areas of the brain.
Functional Specialization in the Brainstem
We've already seen examples of functional specialization at the cell and tissue levels:
- There are afferent and efferent neurons, and interneurons;
- There are excitatory and inhibitory neurotransmitters;
- there are afferent and efferent nerves, ganglia, and nuclei.
- There are the sympathetic and parasympathetic branches of the autonomic nervous system.
- In the somatic nervous system, some cranial nerves are afferent in nature, others efferent.
- There are afferent and efferent tracts in the spinal nerves.
- And there are afferent and efferent tracts in the spinal cord itself.
- The cerebellum coordinates sensation and action.
- In the hindbrain, the medulla (also sometimes known as the myelencephalon) handles vegetative functions involving the cardiovascular and respiratory systems, while the pons (also known as the metencephalon) is important for regulating cortical arousal.
- In the midbrain (also known as the mesencephalon), the reticular formation is also important for regulating cortical arousal.
The midbrain is involved in several neurological syndromes:
- In coma, there is a general loss of consciousness, although vegetative functions are normal. The patient is not responsive to stimulation, and shows no signs of emotion. The eyes are generally closed, as if in sleep, but electroencephalogram (EEG) shows that the normal sleep-wake cycle has disappeared. Coma can result from bilateral damage to the thalamus, a structure in the forebrain (see below). But it also results from damage to the reticular formation and other parts of the posterior brainstem.
- The persistent vegetative state, which often follows if the patient does not recover consciousness, is characterized as "wakefulness without consciousness". The patient's eyes may be open at times, and the sleep cycle is normal. But the patient is still unresponsive to stimulation, except perhaps for some reflex functions that may give the appearance of vigilance.
- In the minimally conscious state, tests involving EEG or brain-imaging may show some discriminative responsiveness on the patient's part, suggesting that the patient is conscious to at least some degree. But because certain discriminative responses can be performed unconsciously, these tests are not conclusive. The damage in these syndromes also affects the reticular formation, though apparently not quite as extensively as in coma.
- The locked-in syndrome, made famous by Oliver Sacks' book Awakenings (and the movie based on the book, starring Robin Williams), may also follow a period in coma. The patients are immobile, largely unable to interact with their environment. However, they are able to communicate with others through vertical eye movements and blinking -- and, in fact, the film The Diving Bell and the Butterfly was based on the memoir of a patient with locked-in syndrome, "dictated" through just such a procedure. Here the damage is to the upper part of the anterior brain stem, excluding the reticular formation, but including portions of the pons. The damage extends to the trigeminal nerve (Cranial Nerve V), which innervates the muscles of the lower face, so that the patient cannot speak (and also to other motor centers that innervate the skeletal musculature). However, the damage spares the oculomotor nerve (Cranial Nerve III) and the trochlear nerve (Cranial Nerve IV), permitting voluntary movements around the eyes.
Concussion: Wear Your Helmet!
In the past, a concussion was defined as a traumatic brain injury resulting in a loss of consciousness, but recent attention to sports-related brain injuries has led neurologists to adopt a new definition. The US Centers for Disease Control now defines a concussion as "a complex pathophysiologic process affecting the brain, induced by traumatic biomechanical forces secondary to direct or indirect forces to the head" (quoted in "The Era of Sport Concussion: Evolution of Knowledge, Practice, and the Role of Psychology" by Julie L. Guay et al., American Psychologist, 2016). According to Guay et al., the features of a concussion are:
- a direct blow to the head or body that transmits sudden physical force to the head;
- the rapid onset of neurological symptoms that may evolve (change or grow worse) or spontaneously resolve (improve);
- the disturbance affects brain function (e.g., metabolism), not structure (as might be revealed by a standard MRI scan);
- there is a graded set of symptoms (i.e., varying in severity) which may be prolonged, but typically follow a sequential course of recovery.
The incidence of sports-related concussions varies, depending on how "concussion" is defined. A frequently touted figure of 300,000 per year is based on a narrow definition involving loss of consciousness. But loss of consciousness occurs in only a minority (less than 10-20%) of "mild" traumatic brain injuries (TBIs), so it's no longer considered a diagnostic feature of concussion.
Concussions comprise 8-13% of all high-school
sports-related injuries, and 8% of all college sports
injuries. Most of these occur in (men's) football,
with men's and women's soccer not far behind.
On March 18, 2009, the actress Natasha Richardson, born into a great acting family (she was the daughter of Vanessa Redgrave and Tony Richardson, niece of Lynn Redgrave, and granddaughter of Sir Michael Redgrave; her first acting role, at age 4, was playing a bridesmaid at the wedding of the character played by her mother in The Charge of the Light Brigade, directed by her father), winner of a Tony Award in 1998 for her performance as Sally Bowles in Cabaret (she also played in Patty Hearst and the Handmaid's Tale, among many other plays and films), died of a cerebral hemorrhage following an apparently minor skiing accident at a resort in Canada. During a lesson, she fell on a "beginner's slope" and struck her head (she was not wearing a helmet). She showed no obvious signs of injury, and left the slopes to rest. Later she complained of a headache, was taken to a local hospital, then to a larger hospital in Montreal, and finally to a hospital in New York City. By this time, she had lapsed into a coma, from which she never recovered.
What happened was this. The fall, although apparently minor, ruptured one or more blood vessels on the surface of the brain, and the resulting pool of blood created increasing pressure on her brain, effectively pushing it out the bottom of the skull, and crushing the brainstem structures that maintain consciousness. If the internal bleeding had been discovered in time, the pressure might have been relieved by opening the skull, draining the blood, and repairing the rupture.
All this from a fall that could have
happened to anyone, and which was seen by everyone as
inconsequential at the time. Similar accidents happen all
the time in sports such as football, to riders of bicycles
and motorcycles who fall off their vehicles, and to
victims of automobile accidents who hit their heads
against the steering wheel or windshield.
The discovery of an apparently high incidence of chronic
traumatic encephalopathy (CTE) in boxers,
football players, and other athletes (professional,
collegiate, and scholastic; it's not clear whether soccer
players, with their penchant for "header" shots, might
also be at risk) has drawn attention to the long-term
effects of concussion -- a brain injury, caused by
a blow to the head, which results in headaches, dizziness,
nausea, and blackouts. Often, these resolve
themselves within minutes, but sometimes, the effects of a
concussion can last for weeks or months. Research
indicates that exposure to a number of concussions can
lead to permanent brain damage. Concussion is
different from blunt-force brain trauma, in which
the shock of an impact, like a baseball, is directly
transmitted to the brain. In concussion, the brain itself shakes back and forth, bouncing off the interior walls of the skull, in what is sometimes called a coup contrecoup injury, both at the site of impact and on the opposite side of the brain. This action stretches and deforms the axons of the neurons comprising brain tissue. It can also shear them, causing them
to spill their contents into the intercellular
space. What contents? Well, tau -- the
same protein that is implicated in Alzheimer's Disease.
So buy the best helmet you can
afford. And wear your helmet. And if
you ever fall and hit your head on a hard surface, even if you don't lose consciousness, get yourself checked out.
To learn more, see "Six Things You Should Know About Concussions" by Karen Schrock Simring, Scientific American Mind, 01-02/2016.
Functional Specialization of Subcortical Structures
A famous example is the patient H.M., whose severe epilepsy led to a desperate surgical procedure involving the excision of the medial (interior) portions of his temporal lobe -- an operation that also excised his hippocampus and portions of the limbic system. After the surgery, H.M. displayed a retrograde amnesia for events occurring during the years prior to the surgery. But more important, he displayed an anterograde amnesia for all new experiences. He could not remember current events, and he could not learn new facts. The retrograde amnesia remitted somewhat after the surgery, but the anterograde amnesia persisted until his death in 2008, at the age of 82. H.M. remembered nothing of what had happened to him since the day of his surgery in 1953, when he was about 27 years old. He read the same magazines, and worked on the same puzzles, day after day, with no recognition that he had done them before. He met new people, and participated in an extensive series of experiments to document the extent of his memory loss, but when he met the experimenters and other visitors again he did not recognize them as familiar. He knew nothing about the deaths of his parents, the Vietnam War or the War in Iraq, the assassination of President Kennedy or the election of President Obama.
Work with H.M. and similar patients reveals a brain system important for memory. This circuit, known as the medial temporal-lobe (MTL) memory system, includes the hippocampus and surrounding structures in the medial portion of the temporal lobe. This circuit is not where new memories are stored. But it is important for encoding new memories so that they can be retrieved later. H.M. lacked this circuit, and so he remembered nothing of his past since the surgery.
H.M.'s real name was Henry Molaison. He was born on February 26, 1926, had his operation on September 1, 1953, and died on December 2, 2008. He received a beautiful obituary in the New York Times, which called him a "memorable patient" (12/04/2008). For an insightful portrayal of H.M.'s life, read Memory's Ghost: The Nature of Memory and the Strange Tale of Mr. M (1995) by Philip J. Hilts, and also Permanent Present Tense: The Unforgettable Life of the Amnesic Patient, H.M. (2013) by Suzanne Corkin, a neuropsychologist who worked with H.M. almost from the beginning of his "career" as the world's most famous neurological patient.
A Nobel Prize for the Hippocampus
Memory isn't the only function of the hippocampus.
John O'Keefe, a British-American psychologist, shared the
2014 Nobel Prize in Physiology or Medicine with May-Britt
and Edvard I. Moser, a husband-and-wife team of Norwegian
psychologists (who once worked in O'Keefe's laboratory) for
their discovery that the hippocampus also serves as a kind
of "inner global positioning system" in the brain.
In 1971, O'Keefe discovered neurons in the hippocampus that were always activated whenever a rat was at a particular location in a room. Each location was associated with activation in different neurons. The implication was that these neurons were place cells, and that together they constituted a sort of cognitive map of the animal's environment.
These place cells remain active even after the animal has ceased moving around the environment -- as if the animal were "replaying" the learning experience in its little rat-mind. Wilson and McNaughton (1994) found that place cells which fired together while an animal was exploring an environment also fired together while the animal slept -- as if it were dreaming about the various locations. They also fire in the same temporal sequence in which the locations had been originally learned. Presumably, this reflects the process by which the memory trace is "consolidated" after learning.
In 2005, the Mosers discovered neurons in the entorhinal cortex, adjacent to the hippocampus, which fired at multiple locations, not just a single one; because these multiple locations formed a hexagonal grid, the Mosers named them grid cells. The entire set of grid cells forms a kind of coordinate system that supports navigation through a particular environment.
It is possible that the spatial-navigation function of the hippocampus is related to its memory function more generally. In the first place, as a spatial navigation system, the hippocampus and related structures tell the animal where it is. This cognitive map is itself a form of memory -- it's a representation of a place the animal knows. Moreover, this function may be co-opted by the memory system generally, so that the hippocampus and other structures in the MTL perform an indexing function -- recording where, as it were, particular memories are stored in the cerebral cortex.
For an account of the Mosers' work, see "Where Am I? Where Am I Going?", Scientific American, 01/2016.
Another example comes from the patient S.M., who suffered damage to
the amygdala, another subcortical
structure, but no damage to the hippocampus and other
structures associated with the medial temporal lobes. This
resulted from a rare medical condition, lipoid proteinosis,
that results in the calcification, and eventual
disappearance, of the amygdala but no other brain structure
(something about the neural tissue that makes up this
structure, apparently). S.M. suffers no memory deficit, nor
any other problems in intellectual functioning. But she does
display gross deficits in emotional functioning: a
general loss of emotional responses to events, especially
fear; an inability to recognize facial expressions of
emotion, especially fear, as well as difficulties
distinguishing among negative emotional expressions; and an
inability to produce appropriate facial expressions herself.
Research on S.M. and similar patients suggests that the
amygdala is part of another brain circuit that is important
for regulating our emotional life, particularly fear.
Interestingly, S.M. does experience panic. In an experiment by Wemmie et al. (2013), she inhaled carbon dioxide in amounts sufficient to cause feelings of suffocation (sort of like waterboarding without the waterboard, I suppose). She didn't do it for very long, so no actual harm was done, but the interesting thing is that S.M. experienced a flood of fear. This implies that, however much the amygdala is involved in fear, it is not the only structure involved.
It is often claimed that the amygdala
is the seat of our emotions, but most of the research on the
amygdala involves fear and anger -- components of Cannon's "fight or flight" syndrome. It's not yet clear that the
amygdala plays a similar role in other emotions, such as
joy, surprise, sadness, and disgust. In fact,
affective neuroscientists have begun to trace links between
these emotions and other brain structures.
The Triune Brain
Hindbrain, midbrain, forebrain -- that's one way to organize the
basic anatomy of the brain. Another is in terms of the triune
brain, a model first proposed by Paul MacLean,
a pioneering neuroscientist, in 1970. According to
MacLean, the brain is actually three brains:
- The R-Complex, or "reptilian brain", includes the brain stem and the cerebellum, and is involved in vegetative functions and the autonomic nervous system. Basically, the R-complex reacts to events in the internal and external environment.
- The Limbic System (a term that MacLean himself coined in 1952), or "old mammalian brain", includes the amygdala, hypothalamus, and hippocampus, and is involved in emotions, biological motives, and instinctual behavior.
- The neocortex, or "new brain", the most recent brain to evolve, is involved in "higher" cognitive functions such as reasoning and problem-solving.
MacLean's system has been criticized as simplistic, and indeed it is. But, as indicated by the fact that we still use terms like "limbic system", it also has had some staying power and cultural resonance.
Simplistic as it is, it has the virtue of combining anatomy with function in a way that the old formula of "hindbrain, midbrain, forebrain" doesn't.
Functional Specialization in Cerebral Cortex
Turning from subcortical structures to the cerebral cortex itself, we find that different areas also have specialized functions. Some of these have been revealed by studies of neurological patients who have suffered from strokes. In stroke, there is a blocking of an artery, and a consequent loss of blood supply to one or another portion of the brain. This in turn causes a lack of oxygen (carried by the blood), and the death of brain tissue in the area of the brain served by the artery. This, in turn, causes a loss of function such as paralysis (inability to make voluntary motor movements), tactile anesthesia (inability to feel touch or pain), and aphasia (loss of speech functions).
If we examine a map of the vascularization of the brain, we see that a number of major arteries run up the central fissure separating the frontal and parietal lobes. This finding suggests that these areas are important for movement, touch, and speech. The sensorimotor areas are areas of the frontal and parietal lobes immediately surrounding the central fissure.
Primary Motor and Somatosensory Areas
Viewing the interior portion of the cerebral cortex, "sliced" vertically at the central fissure, the brain opens up to reveal the cerebral commissure at the top, the cortical areas adjacent to the central fissure in the middle, and the fold along the lateral fissure at the bottom.
This portion of the frontal lobe, known as the precentral gyrus (because it is in front of the central fissure) or Brodmann's Area 4, comprises the primary motor area of the brain, which controls voluntary movement of various body parts. There is also a premotor cortex, a kind of secondary motor area (comprising Brodmann's Areas 6 and 8), just anterior to the primary motor area.
Similarly, the corresponding portion of the parietal lobe, known as the postcentral gyrus or Brodmann's Areas 1-3, comprises the primary somatosensory area of the brain, which controls the sensation of touch, heat, and cold in various body parts.
The motor homunculus (from the Latin, freely translated as "little man in the head") represents the regular association between the location of a particular body part and the cortical site that controls its motor activity. The legs and feet are controlled by portions of the frontal lobe tucked inside the longitudinal fissure; the trunk, arms, and hands by the upper surface of the frontal lobe, along the central fissure; the face is there too, right near the temporal lobe; and the vocal apparatus is tucked inside the lateral fissure. Note that the amount of cortex dedicated to each area of the body is proportional to the need of that body part for precise motor control. There is also a "speech area" in the frontal lobe, adjacent to the motor areas controlling the mouth, tongue, throat, and larynx: more about this later.
Similarly, the sensory homunculus represents the regular association between the location of a particular body part and the cortical site that receives tactile sensations from that body part: the feet are near the legs, adjacent to the longitudinal fissure; the hands are near the arms, adjacent to the central fissure, with the trunk in between them; the face is near the temporal lobe, near the lateral fissure; and the internal organs are tucked inside the lateral fissure. Again, there is proportional representation, with greater amounts of cortical tissue devoted to those body parts (like the hands and feet) that make fine tactile discriminations.
Primary Auditory and Visual Areas
In addition, a portion of the temporal lobe, the primary auditory projection area, or A1 ("A-one"), is specialized for auditory function (this area is also known as the superior temporal gyrus, or Heschl's gyrus, and includes Brodmann's areas 41 and 42). A1 is characterized by "tonotopic" organization, meaning that different portions of auditory cortex correspond to different auditory frequencies, or pitches.
Similarly, a portion of the occipital lobe, the primary visual projection area (also known as V1, or striate cortex, corresponding to Brodmann's area 17), is specialized for visual function. V1 is characterized by "retinotopic" organization, meaning that different portions of striate cortex correspond to different portions of the retina of the eye.
Certain visual functions are also performed by the extra-striate cortex of the occipital lobe, including Brodmann's areas 18 and 19.
In fact, the visual area itself is very complex, consisting of several different areas and circuits, each performing different functions. For example, there is one visual area specialized for color, and another specialized for motion.
The remaining portions of the brain were once thought to be largely undifferentiated association areas involved in "putting things together": sensory and sensory-motor integration, learning, memory, thinking, and language. As a general rule, the posterior association area is specialized for perceptual integration, linking the visual, auditory, and tactile centers in the occipital, temporal, and parietal lobes. By the same token, the frontal association area is specialized for problem-solving and strategically organized activity ("executive functions").
The prefrontal cortex (so called because it consists of frontal areas forward of the motor strip) is especially well developed in humans: we devote a great deal of our cortical matter to complex intellectual functions. Prefrontal cortex includes the superior, middle, and inferior frontal gyri, and the superior and inferior frontal sulci separating them.
The prefrontal cortex is commonly thought to be the brain location of executive control functions -- including the control of our emotions. During times of stress, the release of norepinephrine and dopamine during the "fight or flight, tend and befriend" response stimulates the activity of the amygdala, but inhibits the functioning of the prefrontal cortex. So, our emotional states (mediated by the amygdala and other structures) are not just more intense than they would be otherwise; they are also less subject to executive control. This, apparently, is what happens when, under acute stress, we "lose it" or choke under pressure [see "This Is Your Brain in Meltdown" by Amy Arnsten, Carolyn M. Mazure, and Rajita Sinha, Scientific American, 04/2012].
However, the notion of
"association areas" is a little misleading, because it
implies that most of the brain is a general-purpose
information processor. In fact, neuropsychological studies
suggest that there is a lot of specialization within the
"association areas" as well. Consider the wide variety of
neurological syndromes that have psychological consequences:
- amnesia (difficulties in memory, already discussed in cases like Patient H.M.)
- aphasia (language)
- alexia (reading)
- agraphia (writing)
- acalculia (mathematics)
- apraxia (motor action)
- agnosia (naming objects and understanding events).
There are various subtypes within each of these syndromes, each associated with a different site of brain damage, or with a circuit or system involving several different sites.
We see this combination of specialization and integration in the various forms of aphasia, involving loss or impairment of various speech and language functions:
- In Broca's aphasia, also known as expressive aphasia, the patient's speech is nonfluent: speech is slow, labored, inarticulate, and ungrammatical, but the person's speech behavior is sensible. These patients typically have little problem understanding speech or reading, but they may have problems in writing or reading aloud.
- In Wernicke's aphasia, also known as receptive aphasia, the patient's speech is fluent but contaminated with paraphasias -- it is normal from a phonetic and grammatical point of view, but the words are not chosen properly to convey the patient's intended meaning. As a result, the patient's speech is semantically deviant to the point of being meaningless. He or she may also have difficulty in understanding other people's speech, and in writing.
- Broca's (expressive) aphasia is associated with lesions in a site in the lateral frontal lobe known as Broca's area, near the lateral fissure -- and thus near the motor areas controlling the mouth, tongue, and hands.
- Wernicke's (receptive) aphasia is associated with lesions in a site in the lateral temporal lobe (technically, the superior temporal gyrus) known as Wernicke's area, near the auditory projection area as well as the parietal lobe.
Based on evidence such as this, it seems that linguistic communication requires the coordination of several centers. It turns out that Broca's and Wernicke's areas are connected by a bundle of nerve fibers. According to one account, the structure of the person's speech utterance is encoded in Wernicke's area, which also decodes the utterances of others. The encoded utterance is transferred to Broca's area, which organizes vocalization. These "instructions" are then transferred to motor areas which actually produce speech and writing. Speaking a written word recruits different brain areas than speaking a spoken word (Posner et al., 1988; Petersen et al., 1989).
- When a person repeats a spoken word, the word is first processed in primary auditory cortex, the meaning of the response is initially formulated in Wernicke's area, then transferred to Broca's area, which organizes the speech output, and then to primary motor cortex, which executes the speech act itself.
- When a person repeats a visually presented word, the word is first processed in primary visual cortex, the response is then formulated in Wernicke's area, transferred to Broca's area, and then to primary motor cortex.
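These two routes can be summarized as ordered sequences of processing stages. The sketch below is purely schematic -- the stage names follow the text, but the data structure itself is just an illustration, not an anatomical model:

```python
# Schematic of the two word-repetition routes described in the text.
# Each route is an ordered list of cortical processing stages.
routes = {
    "repeat a spoken word": [
        "primary auditory cortex",   # decode the heard word
        "Wernicke's area",           # formulate the meaning of the response
        "Broca's area",              # organize the speech output
        "primary motor cortex",      # execute the speech act itself
    ],
    "repeat a written word": [
        "primary visual cortex",     # decode the printed word
        "Wernicke's area",
        "Broca's area",
        "primary motor cortex",
    ],
}

# The two routes differ only in their first (input) stage;
# the later stages are shared.
assert routes["repeat a spoken word"][1:] == routes["repeat a written word"][1:]
```

The point the sketch makes is the one in the text: the same "downstream" centers are recruited regardless of input modality, while the first stage depends on whether the word arrives by ear or by eye.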
The same point is made by a recent study by Robert Knight's
group at UC Berkeley.
Working with patients being prepared for
neurosurgery, they placed a grid of 64 electrodes directly
onto the surface of the brain, a technique called
“electrocorticography”, and then recorded brain activity
at each of the sites while the subjects performed a simple
linguistic task: they were given a noun, such as “cake”,
and had to produce an associated verb, such as "bake". The
recordings, played back in slow motion, show the sequence of
activation. When the
stimulus is presented, the auditory cortex is activated
first, followed immediately by Wernicke’s area, as the
subject decodes the word and searches for an association;
then Broca’s area is activated to organize the response,
which is finally executed by that part of motor homunculus
that controls the speech apparatus.
If you watch the recordings carefully, though, you'll also see some other things going on. In the first place, Broca's area is also activated when the stimulus is presented, as if the speech apparatus is involved in speech perception as well as speech production. And Wernicke's area is activated when the response is spoken, as if the subject were checking his response. So while we generally think of Broca's area as specialized for speech production and Wernicke's area for speech perception, things are actually a little more complicated than that. But the basic point is that even very simple actions require the coordinated activity of a number of different specialized centers.
Here are some other examples of functional specialization:
- Neurological patients with a syndrome known as prosopagnosia (a term coined by Bodamer in 1947) cannot recognize or identify faces of people who should be familiar to them. They can see perfectly well, and describe the several features of the faces in question. But they simply don't recognize the faces of the spouses and other family members, famous individuals such as the president or movie stars -- or even their own faces! These patients typically have brain damage in Brodmann's area 37, and sometimes in adjacent locations such as Brodmann's areas 18 and 19.
- Brain-imaging studies of neurologically intact subjects, using fMRI, indicate that the fusiform gyrus is activated when these individuals are engaged in a face-recognition task.
Results like these have led some cognitive neuroscientists to label the fusiform gyrus as the fusiform face area (FFA). This move is somewhat controversial, because there is some evidence that the recognition deficits in prosopagnosia are not specific to faces, but can extend to other kinds of objects. In addition, the FFA also is activated by other kinds of recognition tasks. The details of the debate are not important for present purposes: in either case, the fusiform gyrus seems to be specialized, either for face recognition (as the fusiform face area) or for a more general category of recognition of which face recognition is a good example (as a "flexible face area").
A part of the
limbic lobe known as the anterior cingulate gyrus,
or anterior cingulate cortex (ACC),
appears to play an important role in self-regulation.
The precise function of ACC is still subject to debate.
According to one theory, it detects errors in responding, so
that they can be corrected; according to another, it
monitors conflict between responses, so that the conflict
can be resolved. In either case, ACC is involved in executive
functions generally associated with controlled,
deliberate (as opposed to automatic) processing.
A portion of the parietal lobe near the temporoparietal junction (TPJ), where the temporal and parietal lobes meet, plays an important role in attention. For example, a neurological syndrome known as hemispatial neglect often follows damage to this region of the parietal lobe. In hemispatial neglect, the patient seems to ignore that portion of space which is opposite (contralateral) to the patient's lesion. Thus, if asked to bisect a horizontal line, a patient with a lesion in the right TPJ may draw his or her line only about 1/4 of the way in from the right. It is as if s/he doesn't see the left half of the line; but the patient's visual system is intact. It is as if there were a kind of "magnet" that draws the patient's attention away from one portion of space. Neglect is more common in patients with lesions in the right TPJ, creating neglect in the left portion of space (more about this shortly) -- which suggests that it's the right TPJ that is particularly specialized for attention.
Truth be told, attention is a complicated business. As with repeating written and spoken words (discussed above), attention provides an opportunity to show how different parts of the brain, each performing its own specialized function, act together in an integrated fashion.
According to a stage
model of attention proposed by Posner and his
colleagues, "paying attention" to something actually
involves a sequence of activities.
- First, the subject has to alert to the presence of a particular stimulus, which interrupts whatever s/he is currently doing. Brain-imaging studies using PET and fMRI show that this alerting function is performed by areas in the frontal and parietal lobes, as well as the thalamus.
- Then, the subject has to orient to the new stimulus and locate it in space. This orienting function is performed by areas in the superior parietal cortex.
- Alerting and orienting occur more or less automatically. The final stage, executive control, requires more cognitive effort. It is composed of a number of sub-stages:
- The subject must first disengage from the current object of attention.
- Then the subject must move or shift attention from one location (or object) to another.
- Then the subject must engage (or, perhaps, re-engage) with that new object.
- While engaged on the new object, the subject must actively inhibit responses to other, extraneous stimuli.
- These executive control processes are mediated by the anterior cingulate gyrus -- which, you'll remember, is involved in error correction or conflict monitoring.
The Doctrine of Modularity
While it was once thought that the brain was a general-purpose information processor which forms associations between stimuli and responses, we now know that the brain is highly specialized -- a situation characterized by the philosopher Jerry Fodor (1983) as the modularity of mind. The basic idea behind modularity is that different mental functions are served by specific mental "organs" that are dedicated to those particular purposes; these mental organs are, in turn associated with different brain structures or systems -- for example, the medial temporal lobe memory system involved in memory, or the language system that includes Broca's and Wernicke's areas.
This is, of course, the central tenet of the fundamental neuroscientific doctrine of functional specialization. But there's more. According to Fodor, mental modules have a number of properties in common:
- Domain Specificity: each module processes only a certain class of inputs. Fodor suggests that there are modules for processing language, and for processing visual stimulation, and he has suggested that there are many more such modules as well. Each module provides outputs to other modules, or to a central executive.
- Informational Encapsulation: The processes performed by a module are inaccessible to other parts of the mind, so we can know how they operate only by inference, not introspection; encapsulation also means that the operation of a module is uninfluenced by any other part of the mind.
- Hard-wiring: Modules are not assembled from more elementary or primitive processes.
- Innate Specification: Modules are part of the organism's evolutionary and genetic heritage, and are not acquired through experience and learning.
- Automaticity: Modules operate automatically and rapidly, independent of other goal-oriented mental processes.
- Fixed Neural Architecture: Each module is associated with a particular neural structure or system that is dedicated to that module.
Fodor's theory is an extreme view of modularity, and its details are controversial. For example, it may be that some modules are not innate, but rather develop as a result of certain learning experiences. Still, the general principle that different mental functions are served by different "mental organs", which in turn are associated with different brain structures (or systems), is quite widely accepted within psychology, cognitive science, and cognitive neuroscience.
The result is a map of cortical specialization that looks something like the old phrenological head, but with different faculties and, where the old and new lists of faculties happen to overlap, different locations. This figure, which appeared in the New York Times in 2000, shows how much progress had been made in localizing various functions in the brain by that time, based mostly on neuropsychological studies of brain-damaged patients, studies of animals, and neuro-imaging studies (mostly employing PET and fMRI). The comparison with the traditional phrenological head is striking: the phrenologists had the right idea about functional specialization, but they got every single detail wrong.
- The map shows a number of the areas discussed in this course:
- the primary motor cortex in the frontal lobe;
- the primary somatosensory cortex in the parietal lobe;
- the primary visual cortex in the occipital lobe;
- Broca's area in the frontal lobe;
- Wernicke's area in the temporal lobe;
- areas of the frontal and parietal cortex specialized for attention;
- the putative area for face perception in the fusiform gyrus of temporal lobe.
- The map also shows many more specialized brain areas, and even more have been discovered, or at least claimed, since then.
- What the map doesn't show, of course, is the controversy that surrounds some of these claims.
- The fusiform gyrus may contain a "fusiform face area" specialized for the perception, recognition, and identification of faces, or it may contain a "flexible face area" that is involved in recognizing a broader range of objects, including but not limited to faces.
- The area of the temporal lobe devoted to processing the meaning ("semantic priming") of written words can't be dedicated to that task. The reason is that specialized brain structures owe their existence to the evolution of the brain. Writing has only been around for about 5000 years -- not nearly enough time for the brain to evolve structures specialized for written, as opposed to oral, language. However, as may also be the case with the fusiform gyrus, it may well be that this area of the temporal lobe has a specialized function.
- All of which is simply to say that the task of functional localization proceeds as part of normal science, combining discovery and correction.
A New Phrenology?
Functional specialization is so dominant a view within contemporary psychology and neuroscience that it sometimes seems that not a week goes by without someone announcing that he or she has discovered the "locus" of some function or another. This focus on ever more detailed decomposition of mental and brain functions sometimes verges on what William Uttal has called "the new phrenology". Similarly, with a nod to the illuminated pixels that are the product of PET and MRI scans, Steven Hyman, the former director of the National Institute of Mental Health, has called this tendency "false-color phrenology". In his book, The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain (2001), Uttal sets out some of the logical problems with extreme arguments for localization of function, as well as conceptual problems surrounding the evidence for localization. Are Broca's and Wernicke's areas, for example, specialized for speech and language? Or do they simply connect other areas, as yet unknown, that perform these functions? Uttal argues that it is a mistake to think of the brain (or the mind, for that matter) as a set of discrete sub-components, and he reminds us that both the brain and the mind are integrated organs -- especially when it comes to complex mental functions beyond sensation and perception. The various parts of the brain, such as they are, are densely interconnected: brain and mind work as an integrated system, not as a bunch of independent modules.
The Human Connectome
The Doctrine of Modularity represented
an important corrective to the traditional view that the
brain -- or, at least, "association cortex" -- functioned as
a general-purpose information-processing device.
Rather, it appears that some cognitive tasks are performed
by dedicated mental modules that are associated with a fixed
neural architecture -- Wernicke's and Broca's areas for
speech and language, the hippocampus and the medial temporal
lobes for memory, etc. At least since the case of
H.M., something like the Doctrine of Modularity has guided
neuropsychology for decades: If you think about it, it
doesn't make any sense to use a technique like fMRI to look
for task-specific patterns of brain activation unless there
are task-specific modules located in specific areas of the
brain. And the Doctrine of Modularity continues to
generate interesting findings on the neural bases of various mental functions.
At the same time,
neuropsychologists have come to appreciate the Doctrine's
limitations. For example, many forms of mental
illness, such as schizophrenia, depression, anxiety
disorder, and autism do not seem to be associated with
lesions in discrete brain modules, as revealed by patterns
of brain activation (or, for that matter, deactivation) --
as is the case, for example, with such neurological
disorders such as aphasia and amnesia -- or even Alzheimer's
disease. More important, at a theoretical level, it's
clear that these brain modules are not, in fact, as
independent as the Doctrine would seem to suggest. The
conventional view of modularity is that the brain is like a
sort of "Swiss Army knife", which contains a whole bunch of
independent tools bound together in a single package (like
the skull). On the contrary, it's clear that the
various brain modules work together. So, they
have to be connected to each other, so that they can pass
information among themselves -- much like the examples of
speaking spoken and written words discussed earlier.
Having a conversation is going to require modules for speech
perception, syntactic analysis, semantic memory, thinking,
and speech production, all integrated with each other.
Put another way, the
brain may be a collection of specialized modules, but these
modules have to interact with each other, and this
interaction requires that they be connected to each other in
an integrated system. The most recent trend in
neuropsychology and neuroscience has been to focus on the
connections by which different modules pass information from
one to another. As important as this new focus will
be for theory, researchers also hope that it will help
unlock the neural bases of various forms of
psychopathology. It may be, for example, that the
problem in schizophrenia or autism is not with a specific
mental module (or two or three), but rather with the
connections between them. Alzheimer's Disease, with
its plaques and tangles, may well prove to be a problem of
dysfunctional connections. These dysfunctional
connections, if indeed they exist, will not be observable
with conventional fMRI, but may well yield to other
techniques like Diffusion Tensor Imaging (DTI)
or Diffusion Spectrum Imaging (DSI),
variants of MRI which trace the fiber tracts in the white matter
which makes these connections.
- At the moment, mapping the connections between
individual neurons requires slicing brain tissue into
small slivers, each about 100 microns deep, and then
piecing together how they're connected.
- Also important will be new technologies for recording
the activity of individual neurons -- as well as the
computational power (speed and storage capacity) to
store all the information that will be generated. It's
been estimated that the human brain, with its 80-100
billion neurons, generates about 300,000 petabytes of
data in a year. That's 300 billion gigabytes of data.
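That conversion is easy to check with a line of arithmetic, assuming decimal (SI) units, where 1 petabyte = 1,000,000 gigabytes:

```python
# Convert the estimated annual data volume from petabytes to gigabytes.
# Assumes decimal (SI) units: 1 petabyte = 1,000,000 gigabytes.
petabytes_per_year = 300_000
gigabytes_per_year = petabytes_per_year * 1_000_000
print(f"{gigabytes_per_year:,} GB")  # 300,000,000,000 GB -- i.e., 300 billion
```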
When you consider that
the human brain has about 80-100 billion neurons, with more
connections among them than there are stars in the sky, this
is going to be quite a project. So, it's going to
start slowly. Here's one road map to the human
connectome. First, neuroscientists mapped C. elegans, a worm with
so few neurons that they can actually be counted: there
are only 302 of them, making about 7,000
connections. Building on earlier work by others,
Cornelia Bargmann and her colleagues identified networks
in the worm's brain that control particular aspects of
behavior. One, centered on a neuron known
as URX, senses oxygen in the environment; another,
centered on two neurons known as ASH and ADL,
senses toxic and noxious stimuli; another, centered on
RMG, controls social behavior; and so on. The
figure at the right shows all 302 neurons, and about
half of the connections between them; it also identifies
particular neurons that appear to be the "hubs" of
networks controlling various aspects of worm behavior
(from "In Tiny Worm, Unlocking Secrets of the Brain" by
Nicholas Wade, New York Times, 06/21/2011).
- Now neuroscientists are likely to move on to Drosophila, the model organism for generations of geneticists, which has about 135,000 neurons.
- Then to the zebra fish, with about 1 million neurons.
- Then to small mammals, like the mouse.
- And then to the human brain.
In 2013 the Obama
Administration announced a new BRAIN initiative (Brain
Research through Advancing Innovative
Neurotechnologies). A major element in the
BRAIN Initiative is the "Human Connectome Project" (the
allusion to the Human Genome Project is intentional), a
public-private collaboration involving the National
Institutes of Health, the National Science Foundation, and
other federal agencies, focused on mapping the connections in the human brain.
The resulting Brain
Activity Map is intended to show how the brain is
wired up, at all levels of analysis from pathways among
lobes, ganglia, and Brodmann's areas to the links between
individual nerve cells.
For more information on connectomics and the Human Connectome Project, see Connectome by Sebastian Seung (2012).
A New Map of the Brain
-- and a New Guide for Identifying Functional Specializations
For a long time, the
brain was mapped in terms of topographical features such as
the lobes, fissures, sulci, and gyri visible on the surface
of the cortex; and also, based on histological examination,
in terms of Brodmann's areas.
- For example, neuroanatomists might refer to the
"temporal pole" (the anterior portion of the temporal
lobe), or to Brodmann's area #38.
- Alternatively, when the psychological function of some
particular brain area had been identified, they might
refer to something like "Broca's area" (which is linked to
speech production), located in the inferior frontal gyrus,
corresponding to Brodmann's areas 44 and 45.
- The so-called "fusiform face area", ostensibly specialized for recognizing faces, is located in the lateral portion of the fusiform gyrus, on the ventral (lower) surface of the temporal lobe, corresponding to Brodmann's area 37.
And these maps have served us well in identifying various functional specializations in the brain.
More recently, however, a new approach has delivered a revised
map or parcellation of the brain, based on the
simultaneous use of a number of different methods.
Glasser, Van Essen, and their colleagues in the Human
Connectome Project (HCP) took information based on brain
topography (e.g., sulci and gyri), architecture (e.g.,
cortical thickness and relative myelin content), function
(e.g., Broca's area or the "fusiform face area"), and
connectivity (as determined by diffusion tensor imaging).
Around the turn of the 20th century, Brodmann (1909) had identified about 50 areas; toward the end of the century, neuropsychological and brain-imaging studies of functional specialization had brought that total up to about 83. Combining the four methods, Glasser, Van Essen, et al. identified 97 new areas, for a total of 180 distinct areas in each hemisphere. They call this the HCP Multi-Modal Parcellation, Version 1.0 (HCP_MMP1.0) -- implying that, at some point in the future, they'll release Version 2.0, presumably identifying even more areas?
Mapping the brain is one thing, but for psychologists the important question is whether these areas are differentiated with respect to function. If you look, you can see, among other specialized areas, Broca's area in the left frontal lobe, Wernicke's area in the left temporal lobe, and the so-called "fusiform face area" in the temporal lobe. And in fact, Glasser, Van Essen, et al. have already identified a new area, #55b in the frontal lobe, that appears to be active when people are listening to stories.
More work of this type will go on, using standard brain-imaging methodologies to identify, or more precisely delineate, various functional specializations. But it's important to remember that the brain isn't just a kit of tools that more or less operate independently. The brain is an integrated organ, with lots of different parts working together to perform various tasks. For example, here is a depiction of the areas of the brain, including the new area 55b, which are activated (or, in some cases, deactivated) when subjects listen to stories.
Lateralization of Function
The specialization of function extends to the two cerebral hemispheres, right and left, which are separated by the longitudinal fissure and connected by the great cerebral commissure, the corpus callosum. Anatomically, these two hemispheres are not quite identical: on average, the left hemisphere is slightly larger than the right hemisphere.
One aspect of hemispheric specialization is known as contralateral projection, meaning that each hemisphere controls the functions of the opposite side of the body. Thus, the right hemisphere mediates sensorimotor functions on the left part of the body, auditory function of the left ear, and visual function of the left half-field of each eye. The left hemisphere does just the opposite. The two hemispheres communicate with each other, and integrate their functions by means of the corpus callosum, a bundle of nerve fibers connecting them.
"Split Brain" Patients
Contralateral projection is vividly illustrated by so-called "split-brain"
patients. These patients suffer from severe, intractable
epilepsy. In an effort to help control their seizures, their
corpus callosum and other transcortical connections have
been severed -- so that, at least, seizure activity arising
in one hemisphere cannot pass over into the other
hemisphere. This surgery has no negative consequences for
their normal behavior, but careful laboratory experiments by
Sperry, Gazzaniga, Bogen, and their colleagues have revealed
some interesting anomalies (the work by Sperry, Gazzaniga,
and Bogen helped win the Nobel Prize in Physiology or
Medicine for Sperry -- a prize which he shared with David
Hubel and Torsten Wiesel, whose work we will discuss later).
Consider, for example, a
split-brain patient who is asked to fixate on a dot in the
middle of his visual field; then a picture of a horse is
presented in his left visual field.
- When asked what he sees, the patient says "Nothing", because the left visual field projects to the right hemisphere, and the left hemisphere, which controls speech, doesn't know anything about it.
- If asked to draw a picture of a related object with his right hand, he can't -- again, because his right hand is controlled by his left hemisphere, and the left hemisphere doesn't know what the stimulus is.
- But if asked to draw the picture with his left hand, he can -- maybe he draws a saddle -- because his right hemisphere processed the stimulus, and the right hemisphere also controls his left hand.
Similarly, imagine that
the word key is presented in the left visual
field, and the word ring is presented in the right visual field.
- When asked what word he saw, the patient will say "ring", because the right visual field projects to the left hemisphere, which controls speech.
- And if asked to pick out by feeling (not visually) with his right hand, the object associated with the word he saw, the patient will select a key.
- But if asked to pick out by feeling with his left hand, the patient will select a ring.
Observations such as these have
suggested to some theorists that conscious awareness is
localized in the left hemisphere, and that the right
hemisphere operates unconsciously, outside of conscious
awareness. But that's a mistake. The right hemisphere is
conscious, just as the left hemisphere is, but it simply
can't express its conscious awareness verbally, because it
doesn't have the necessary centers for speech and language.
Genesis and Agenesis
of the Corpus Callosum
Kim Peek (1951-2009), a "savant" who inspired Raymond Babbitt, the
autistic character played by Dustin Hoffman in the movie
"Rain Man" (1988), was not himself autistic, but he was born
without a corpus callosum -- an anatomical defect that,
apparently, permitted him to read facing pages of a book at
once, one page with each eye. He also had a number of other
remarkable abilities, such as an ability to recall obscure
facts across a wide range of topics.
This condition, known as agenesis
of the corpus callosum, occurs in about 1 out of 4,000
infants, and is especially prominent (about 7%, which is a
lot more than the 0.025% base rate) in children with fetal alcohol
syndrome. The typical result is a severe delay in the
development of various verbal and motor skills. This
is not surprising, because the corpus callosum is the
principal "highway" connecting the two hemispheres, and a
disruption in its functioning will severely hamper the
coordination among various brain centers.
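For readers who want to check the arithmetic behind those rates, a quick computation (using only the figures given above) shows just how large the disparity is:

```python
# Base rate of agenesis of the corpus callosum: about 1 in 4,000 births.
base_rate = 1 / 4000            # = 0.00025, i.e., 0.025%

# Rate reported among children with fetal alcohol syndrome (from the text).
fas_rate = 0.07                 # 7%

print(f"General population: {base_rate:.3%}")        # 0.025%
print(f"Fetal alcohol syndrome: {fas_rate:.0%}")     # 7%
print(f"Relative risk: about {fas_rate / base_rate:.0f}x")  # about 280x
```

In other words, agenesis of the corpus callosum is roughly 280 times more common among children with fetal alcohol syndrome than in the general population.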
However, it is possible to compensate
for the absence of a corpus callosum. Under ordinary
conditions -- not the laboratory setups used to study
split-brain patients -- naturally occurring eye movements
can bring information from both visual fields into the two
hemispheres. Moreover, there are other pathways that
permit interhemispheric communication, such as the anterior
commissure, or the thalamus and other subcortical structures.
- The development of a large anterior commissure to
compensate for the absent corpus callosum is one example
of the plasticity of the brain -- its ability to reorganize its structure and function in response to injury or experience.
- Another example is the fact that musicians who began to study their instruments at an early age have a much larger corpus callosum than non-musicians. Violinists have to coordinate their right hand, which is doing the bowing, with their left hand, which is doing the fingering. And that coordination is not just mediated by the corpus callosum -- it actually promotes the development of this structure.
Studies of "split-brain" patients, and also of subjects with intact brains, show that there is specialization of function as well as of location. For example, the aphasias arise from lesions in the left hemisphere, indicating that speech and language functions are confined to that portion of the brain. In most people, the left
hemisphere is specialized for:
- linguistic functions (reception and expression);
- sequential analyses;
- mathematical computation; and
- fine motor control (in right-handed individuals).
By contrast, the right
hemisphere is specialized for simple linguistic functions
corresponding to those of the left hemisphere (which is how
split-brain patients can respond to directions with their
right hemispheres), as well as for nonverbal functions such as:
- spatial analyses (perceiving such relations as up-down, front-rear, and left-right); and
- pattern perception (especially in the visual and auditory domains).
These specializations are most
apparent in a selected sample of the population consisting
of right-handed males who have no history of "sinistrality"
(left-handedness) in the family. Specialization is not
necessarily reversed in sinistrals (left-handers) and
females, however (for a discussion of sex differences in
lateralization, see Sex and Cognition by Doreen
Kimura, 1999). If a person is left-handed, the right
hemisphere has better fine motor control. But language is
localized in the left hemisphere in both right- and
left-handers. As a rule, left-handers and females don't show
evidence of strong lateralization, meaning that they enjoy
greater flexibility of functioning in each hemisphere. This
has implications for the recovery of function,
as noted below.
The brain does display some degree of hemispheric specialization -- differences that are sometimes labeled "analytic" (for the left) and "holistic" (for the right). Unfortunately, these differences have been grossly exaggerated in popular psychology. Brain-imaging studies show that both hemispheres are activated when subjects engage in either "analytic" or "holistic" tasks. When we think, we use the whole brain, not just a piece of it.
Ontogeny and Phylogeny of Lateralization
Why should mental function be lateralized?
From a developmental point of view, human cerebral lateralization seems to be intimately connected with language capacity. And there's a genetic contribution to handedness. Left-handedness runs in families, and may be associated with a specific gene variant known as LRRTM1.
With respect to the phylogeny of lateralization, or the development of lateralization across species, something like "handedness" occurs in a wide variety of species -- if you define "handedness" as stable, consistent, individual difference in preference for doing things with one side of the body as opposed to the other. It can be predicted that spatial perception should be favored by the right hemisphere, while communication should favor the left. In some birds, singing is, in fact, lateralized on the left. And in rats, lesions to the right hemisphere diminish exploratory activity. But for non-primates, and especially for non-mammals, evidence of lateralization of function is extremely limited. What about primates, those species that are closest to us on the evolutionary tree?
Among monkeys and the lesser apes, cortical mass is evenly divided between the right and left hemispheres. There doesn't seem to be any anatomical difference between the hemispheres. While there is no evidence of a strong specialization for visual patterns, there is evidence of left-hemisphere superiority in the processing of meaningful sounds.
Among the great apes, such as chimpanzees (our closest evolutionary cousins), the left hemisphere has more cortical mass than the right -- anatomical evidence of an emerging difference between the hemispheres. But there is no good evidence of a species-level paw preference, analogous to human handedness, in any nonhuman species. Individuals may prefer the right paw or the left, but the species divides about 50-50, compared to the roughly 9:1 ratio of right- to left-handers among humans.
Of course, contralateral projection is clearly adaptive: it would be extremely maladaptive if both the left and the right hemisphere controlled the right or left part of the body. Aside from this trivial aspect of lateralization, some theorists have argued that lateralization of function is as old as vertebrate species themselves -- that is to say, about 500 million years old. These theorists have argued that, even very early in vertebrate evolution, the left hemisphere was specialized for controlling well-established patterns of behavior, while the right hemisphere was specialized for detecting and responding to unfamiliar or unexpected events. Human specializations for language (chiefly on the left), a preference for right-handedness (capitalizing on the fine motor control of the left hemisphere that came with speech), and face recognition (chiefly on the right) then built on these primeval specializations. (See "Origins of the Left and Right Brain" by Peter F. MacNeilage et al., Scientific American, July 2009).
With respect to the ontogeny of lateralization, or the development of lateralization within a species, there is evidence of hemispheric specialization from birth. Because the left hemisphere is specialized for language, we might hypothesize that speech perception should be favored by the left hemisphere even in infants; and so it is. Among newborns, even those born prematurely, the left hemisphere is more responsive to speech, while the right hemisphere is more responsive to music. This difference does not increase over the first four years of life, while the child is acquiring language function. Thus, the hemispheric differences precede language ability, and are not caused by the acquisition of language. The human infant is born with a left hemisphere already prepared for eventual language.
This pattern illustrates a contradiction of the principle that ontogeny recapitulates phylogeny -- that the development of a structure or function in the individual mirrors or parallels its evolution across species. Human infants are lateralized from birth, while nonhuman animals aren't well lateralized at all. Thus, there are no orderly parallels between ontogeny and phylogeny.
The Advantage of Lateralization
The fact of two hemispheres makes sense: there is a lot of symmetry and redundancy in the human body. Given the fact that we have two brains, corresponding to the right and left hemispheres, contralateral projection makes sense: if one side of the body is controlled by one side of the brain, that will reduce conflict between them (though it is not clear why projection should be contralateral rather than ipsilateral; this is a clue). Lateralization of speech also reduces conflict: we may have two brains, but we only have one mouth. The fact that there is no duplication of the speech apparatus means that the two brains will not be competing for the same output channel.
Handedness enhances the transfer and
generalization of motor skills. If the right hand is already
good at something, it may be easier for it to become good at
other things. For similar reasons, it makes sense to have
the receptive and expressive functions of language on the
same side of the brain. Finally, we want handedness and
speech on the same side, since both require fine motor control.
But why the left as opposed to the
right? Note that all species that show a bias show superior
development of the left hemisphere. Similarly, there appears
always to have been a human bias toward right-handedness: A
study of Paleolithic cave paintings made in France and Spain
between 10,000 and 35,000 years ago found that 77% of
"negative hands" (created by blowing pigment onto a hand
placed against a cave wall) were of left hands -- suggesting
that prehistoric painters generally held the blowing tube in
their right hands (C. Faurie & M. Raymond, "Handedness
Frequency over more than Ten Thousand Years", Biology
Letters, 09/25/03). Right-handedness predominates,
but we don't have the foggiest idea why this should be so.
There is no obvious advantage of the left over the
right side, per se.
In the final analysis, the question Why the left? illustrates the adaptationist fallacy: the popular assumption that every feature of the human body (and mind) evolved for some purpose. In this Panglossian paradigm (named for Dr. Pangloss, a character in Voltaire's satirical novella Candide, who thought that all was for the best in this best of all possible worlds), whatever exists, exists because it is best for the organism. This is not necessarily the case. There is nothing inherently adaptive about having two arms and two legs. We could easily have had four of each. We have two arms and two legs for one reason, and one reason only: We are descended from fish with four fins. If the fish had eight fins, then we would have had four arms and four legs, or some other combination adding up to eight.
Ultimately, all species are descended from a single common ancestor. That organism happened to be biased in favor of the left hemisphere; therefore, so are we. It's as simple as that.
Then why does left-handedness occur
at all? That's still a mystery.
But handedness does have consequences (aside from the obvious). It turns out that people prefer objects that are located on their dominant sides: Right-handed people tend to prefer things that are on the right, lefties prefer things that are on the left (Casasanto, 2009, 2012). This is a demonstration of what is known as embodied cognition, which is the theory that our experience of the world is grounded in our physical experience of the world -- of our environment and our bodies. Put another way, our thinking is shaped by our bodies. We'll have more to say about "embodiment" in the lectures on Perception and Emotion.
Left Brain, Right Brain; Top Brain, Bottom Brain; Front Brain, Back Brain?
The distinction between the left and right hemispheres has wormed its way into popular culture, but it may be displaced by other dichotomies.
Stephen Kosslyn, a distinguished
cognitive neuroscientist, has proposed that a more important
distinction is between the "top" of the brain and the
"bottom", the two halves being divided by the lateral
fissure -- also known as the Sylvian fissure or the fissure
of Sylvius (Kosslyn & Miller, 2013). The general
idea is that the top part of the brain, including the
parietal lobe and the superior portion of the frontal lobe,
is involved in generating expectations, formulating plans,
and monitoring progress as these plans are being carried
out. The "bottom" part of the brain, including the
temporal and occipital lobes, and the remaining (inferior)
portions of the frontal lobe, organizes sensory signals and
interprets and classifies sensory-perceptual information in
terms of information stored in memory.
Of course, like the two hemispheres, these two halves of the brain work together: the bottom half tells us what some event means, and the top half figures out what to do about it. Still, Kosslyn and Miller argue that there are big individual differences in the balance between top and bottom, generating four basic cognitive "modes" or styles:
- The mover mode reflects the optimal balance
between top and bottom. Movers are good at planning
and execution, and they're also good at monitoring the
consequences of their actions.
- The perceiver mode favors the bottom over the top. Perceivers are good at making sense of what is going on, but not so good at making plans.
- The stimulator mode favors the top over the
bottom. Stimulators are good at planning and execution, but
don't adjust their plans when they don't work out.
- The adaptor mode doesn't use either top or bottom in optimal ways. Adaptors respond to the requirements of the immediate situation, without engaging in much reflection or planning.
Now, of course, it remains to be seen
whether the top-bottom distinction holds up any better than
the left-right distinction did. And the point remains
that most mental activities use the entire brain, requiring
the integrated activity of lots of different modules.
Still, you can see where this is
going: in the not-too-distant future, we can expect to see yet
another distinction proposed, this time between the
"front" and the "back" of the brain, as divided by the
central sulcus -- also known as the fissure of Rolando --
which separates the frontal lobe from the parietal
lobe. Actually, the future is now. Recall that
one of the earliest descriptions of functional specialization
divided "association cortex" into an anterior portion
specialized for thinking and a posterior portion specialized for perception.
So, if you really want to go crazy,
you've got the makings of a giant classificatory scheme,
resulting from crossing two hemispheres with top and bottom,
and front and back, yielding a 2x2x2 = 8-fold classification
of neurological "types". That seems to be where the
Kosslyn-Miller scheme is going. At the same time, we've
learned to be wary of the simple left-right scheme, and
there's no reason to think that this one will fare any
better than the earlier ones. Things are likely to be
more interesting, and complicated, than that.
Localization of Content and the Search for the Engram
There is considerable evidence for localization of function in the brain. Particular psychological functions seem to be served by specific areas of the brain, and when these areas are damaged, their corresponding psychological functions are impaired or lost. But is there evidence for localization of content as well? That is, are specific pieces of knowledge -- your mental picture of your grandmother, for example, or your knowledge about sports cars -- associated with specific bundles of adjacent neurons that make up the memory?
Competing Views of Neural Representation
The easiest answer is that every
memory is represented by a single neuron, or perhaps a small
cluster of neurons, located in a particular part of the
brain, and that person memories are no exception to this
rule. Thus, the nodes in associative-network models of
person memory, like those discussed here, have their neural
counterparts in distinct clusters of neurons. This
was the view taken by Richard Semon, a German biologist who
characterized the engram as the "permanent record"
of knowledge and experience, "written or engraved on the
irritable substance" of neural tissue (Semon 1921, p.
24). For an excellent account of Semon's work and its
relation to modern theories of memory, see Stranger
Behind the Engram: Theories of Memory and the Psychology
of Science (1982), and Forgotten Ideas, Neglected Pioneers:
Richard Semon and the Story of Memory (2001),
both by Daniel Schacter, himself a distinguished
neuropsychologist who has made important contributions to
our understanding of memory and its biological basis.
The Localist View
Although Semon himself was quickly forgotten, early research by Wilder Penfield (1954), a Canadian neurosurgeon, suggested that engrams actually exist. In the process of diagnosing and treating cases of epilepsy, Penfield would stimulate various areas of the brain with a small electrical current delivered through a microelectrode implanted in the brain. This procedure does not hurt, because the cortex itself contains no pain receptors, and patients remained awake while it was performed. Accordingly, Penfield asked patients what they experienced when he stimulated them in various places. Sometimes they reported experiencing specific sensory memories, such as an image of a relative or the sound of someone speaking. This finding was controversial: Penfield had no way to check the accuracy of the memories, and it may be that what he stimulated were better described as "images" than as memories of specific events. In any event, the finding suggested that there were specific neural sites, perhaps clusters of adjacent neurons, representing specific memories in the brain.
However, evidence contradicting Penfield's conclusions was provided by Karl Lashley (1950), a neuroscientist who conducted a "search for the engram", or biological memory trace, for his entire career. Lashley's method was to teach an animal a task, ablate some portion of cerebral cortex, and then observe the effects of the lesion on learned task performance. Thus, if performance was impaired when some portion of the brain was lesioned, Lashley could infer that the learning was represented at that brain site. After 30 years of research, Lashley reported that his efforts had been entirely unsuccessful. Brain lesions disrupted performance, of course. But the amount of disruption was proportional to the amount of cortex destroyed, regardless of the particular location of the lesion.
Lashley's Law of Mass Action
states that any specific memory is part of an extensive
organization of other memories. Therefore, individual
memories are represented by neurons that are distributed
widely across the cortex. It is not possible to isolate
particular memories in particular bundles of neurons, so it
is not possible to destroy memories by specific lesions.
At about the same time, D.O. Hebb, a pioneering neuroscientist, argued that memories were represented by reverberating patterns of neural activity distributed widely over cerebral cortex. Hebb's suggestion was taken up by others, like Karl Pribram, another neuroscientific pioneer, who postulated that memory was represented by a hologram, in which information about the whole object was represented in each of its parts.
Connectionist models are inspired, in part, by both Lashley's Law of Mass action and Hebb's reverberating-network model of memory.
Despite the power of
Lashley's data (not to mention his reputation as the leading
physiological psychologist of his day), Penfield's vision
held some attraction for some neuroscientists, who continued
to insist that individual memories were represented by the
activity of single neurons, or at most small clusters of
neurons, at specific locations in cortex.
- Sherrington (1941) postulated pontifical cells that represent sensory scenes.
- Konorski (1967) postulated gnostic neurons that represented unitary percepts.
- Barlow (1969, 1972) argued on the basis of a principle of "economy of impulses" that the brain should achieve a complete representation of a sensory scene with the fewest number of active neurons possible.
Problems with Penfield's
clinical studies aside, early advances in understanding the
neural basis of perception lent support to the localist view:
- Barlow (1953) had earlier identified specific cells in the frog retina that responded to particular elementary patterns of visual stimulation: contrast between light and dark, moving edges, dimming of light, and convexity (where a dark object appears against a bright field).
- Hubel and Wiesel (1959) won the Nobel Prize for similar studies that identified orientation-specific receptive fields in the visual cortex of the cat.
While these neural
systems responded to the physical properties of the
stimulus, their discovery fed speculation that the meaning
of the stimulus, and other cognitive contents, might
similarly be represented by a localized cluster of neurons.
- Jerome Lettvin (1969) speculated that a mother cell, or rather mother cells, plural, might represent all that subjects knew about their mothers. It was Lettvin who called Barlow's convexity-detector cells "bug perceivers".
- Barlow himself (1972) speculated about a grandmother cell.
- Harris (1980) somewhat facetiously speculated that if we have cells that respond to yellow, and other cells that respond to Volkswagens, we might also have yellow Volkswagen cells.
Nobody, including Lettvin and Barlow
themselves, took any of this all that seriously, and
neuroscientific doctrine has emphasized distributed
representations of the sort envisioned by Lashley and Hebb
-- until recently, that is.
Xu Liu, Susumu Tonegawa, and their
colleagues (2013) may actually have succeeded where Lashley
failed, by looking in a different place -- the hippocampus,
rather than the cerebral cortex. It has been known at
least since H.M. that the hippocampus is critical for
memory. But the hippocampus also serves as a cognitive map
-- a representation of the location of objects in space (the
term was introduced by Tolman on the basis of his work on
latent learning of mazes, which we'll discuss in the
lectures on Learning).
O'Keefe and Nadel (1978) found place cells in the
hippocampus that become activated when the organism
(usually, a rat) visits a particular location. When an
animal learns its way through a maze, a particular pattern
of cells, apparently corresponding to the pathway, is
activated in the hippocampus. And, interestingly, that
same pattern of activation is "replayed" while the rat
sleeps, apparently aiding consolidation of the memory
(Wilson & McNaughton, 1994). Liu et al. allowed rats to
learn a particular environment, and identified the cells in
the hippocampus that represented what they had
learned. The next day, in a different
environment, they reactivated those cells at the same times
as they delivered a foot shock to the animal. On the
third day, they placed the animal in the first environment
(i.e., from Day 1), and observed that the rats froze, a
behavioral manifestation of fear. So, Liu et al. seem
to have discovered Semon's engram -- a specific group of
neurons that represent something that the organism has learned.
On the other hand, the hippocampus is
a highly specialized structure -- specialized for cognitive
mapping. And rats are very specialized creatures --
they're really good at learning and remembering
places. It may well be that engrams for spatial
knowledge are represented locally, as discrete groups of
neurons, while engrams for other kinds of knowledge -- the
articles of the Bill of Rights, for example, or the names of
your immediate family members, or what you did last Thursday
-- may be represented in a more distributed fashion.
A "Halle Berry" Neuron?
A serendipitous finding, ingeniously pursued by a group of investigators at UCLA and Cal Tech, also suggests that there might be something to the idea of a "grandmother neuron" after all (Quian Quiroga, Reddy, Kreiman, Koch, & Fried, Nature, Vol. 435, pp. 1102-1107, June 23, 2005; see also "Brain Cells for Grandmother" by Quian Quiroga, Fried, & Koch, Scientific American, 02/2013).
These investigators worked with eight patients with intractable epilepsy. In order to localize the source of the patients' seizures, they implanted microelectrodes in various portions of the patients' medial temporal lobes (the hippocampus, amygdala, entorhinal cortex, and parahippocampal cortex). Each microelectrode consisted of 8 active leads and a reference lead. They then recorded responses from each lead to visual stimulation -- pictures of people, objects, animals, and landmarks selected on the basis of pre-experimental interviews with the patients.
In one patient, the investigators identified a single unit (i.e., a single active lead of a single electrode, corresponding either to a single neuron or to a very small, dense cluster of neurons), located in the left posterior hippocampus, that responded to a picture of Jennifer Aniston, an actress who starred in a popular television series, Friends. (A response was defined very conservatively as an activity spike of magnitude greater than 5 standard deviations above baseline, consistently occurring within 1 second of stimulus presentation.) That unit did not respond to any other stimuli tested. The investigators quickly located other pictures of Aniston, including pictures of her with Brad Pitt, to whom she was once (and famously) married. The same unit responded to all the pictures of the actress -- except those in which she was pictured with Pitt!
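The response criterion described in the paragraph above -- an activity spike more than 5 standard deviations above baseline, occurring within 1 second of stimulus onset -- amounts to a simple statistical threshold. Here is a minimal sketch of that logic; the firing rates below are invented for illustration and are not taken from the study:

```python
import numpy as np

def responds(baseline_rates, post_stimulus_rates, n_sd=5.0):
    """Return True if the peak post-stimulus firing rate exceeds the
    baseline mean by more than n_sd baseline standard deviations --
    a threshold test in the spirit of the study's conservative
    5-SD-within-1-second criterion."""
    mu = np.mean(baseline_rates)
    sigma = np.std(baseline_rates)
    return np.max(post_stimulus_rates) > mu + n_sd * sigma

# Hypothetical firing rates (spikes/second) for one unit:
baseline = [2.0, 3.0, 2.5, 3.5, 2.0, 3.0]   # pre-stimulus activity
aniston  = [30.0, 25.0]                      # within 1 s of one picture
other    = [3.0, 4.0]                        # within 1 s of a control picture

print(responds(baseline, aniston))   # True: counts as a "response"
print(responds(baseline, other))     # False: no response
```

The point of such a strict threshold is to avoid false positives: ordinary moment-to-moment fluctuation in firing rate will almost never exceed five standard deviations by chance.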
Similarly, a single unit in the right anterior hippocampus of another patient responded consistently and specifically to pictures of another actress, Halle Berry (who won an Academy Award for her starring role in Monster's Ball). Interestingly, this unit also responded to a line-drawing of Berry, to a picture of Berry dressed as Catwoman (for her starring role in the unfortunate film of the same name), and even to the spelling of her name, H-A-L-L-E B-E-R-R-Y (unfortunately, the investigators didn't think of doing this when they were working with the "Jennifer Aniston" patient -- remember, they were flying by the seat of their pants, doing this research under the time constraints of a clinical assessment). The fact that the unit responded to Berry's name, as well as to her picture, and to pictures of Berry in her (in)famous role as Catwoman, suggests that the unit represents the abstract concept of "Halle Berry", not merely some configuration of physical stimuli.
As another example, yet a third patient revealed a multi-unit (i.e., two or more leads of a single electrode, evidently corresponding to a somewhat larger cluster of neurons) in the left anterior hippocampus that responded specifically, if not quite as distinctively, to pictures of the Sydney Opera House. This same unit also responded to the letter string SYDNEY OPERA HOUSE. It also responded to a picture of the Baha'i Temple -- but then again, in preliminary testing this patient had misidentified the Temple as the Opera House! So again, as with the "Halle Berry" neuron, the multi-unit responded to the abstract concept of the Sydney Opera House, not to any particular configuration of physical features.
Across the 8 patients, Quian Quiroga
et al. tested 993 units, 343 single units and 650
multi-units, and found 132 units (14%) that responded to 1
or more test pictures. When they found a responsive unit,
they then tested it with 3 to 8 variants of the test
pictures. A total of 51 of these 132 units yielded evidence
of an invariant representation of people, landmarks,
animals, or food items. In each case, the invariant
representation was abstract, in that the unit responded to
different views of the object, to line drawings as well as
photographs, and to names as well as pictures.
UCB's Jack Gallant and his colleagues have also provided
evidence for a sort of localization of knowledge. I
have already discussed Gallant's astounding research on
visual perception, in which they showed that specific scenes
were associated with specific patterns of brain activation
in the visual cortex -- so specific that they were able to
predict, from the pattern of brain activity, what images
their subjects were viewing. In further research, they
turned their attention to the representation of specific
concepts (Huth et al., Nature 2016). Using
fMRI, they recorded activity over the entire cerebral cortex
(about 50,000 voxels) for two hours while subjects listened
to stories presented on The
Moth Radio Hour series on public
radio. They first analyzed the co-occurrences between
each word used in the stories (over 10,000 of them) and other
words in the English language, to identify a small number of
semantic domains. For example, one domain related to
humans, society, and emotions; another consisted of concrete
terms, and yet another consisted of abstract terms.
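The first analytic step described above -- grouping words into semantic domains on the basis of their co-occurrence statistics -- can be sketched in miniature. The toy co-occurrence counts below are invented; Huth et al.'s actual analysis used a large corpus and far more sophisticated modeling:

```python
import numpy as np

# Toy co-occurrence matrix: rows are words, columns are context words.
# The counts are invented for illustration only.
words = ["mother", "friend", "table", "chair"]
cooc = np.array([
    [10, 9, 1, 0],   # "mother": co-occurs with social context words
    [9, 10, 0, 1],   # "friend"
    [1, 0, 10, 9],   # "table": co-occurs with concrete context words
    [0, 1, 9, 10],   # "chair"
], dtype=float)

# A singular-value decomposition compresses the counts into a few
# latent dimensions -- a stand-in for "semantic domains".
U, S, Vt = np.linalg.svd(cooc)
domains = U[:, :2] * S[:2]   # each word's loading on the top 2 domains

# Words from the same domain end up close together in this space.
def dist(i, j):
    return np.linalg.norm(domains[i] - domains[j])

print(dist(0, 1) < dist(0, 2))  # True: "mother" is nearer "friend" than "table"
```

The idea is the same at scale: words that keep the same company in English end up near one another in the reduced space, and those clusters are what get mapped onto the cortex.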
The image on the left shows, at the top, various semantic
domains. In the middle is a "flattened" view of the
cerebral cortex. At the bottom are the more familiar
lateral and medial views of the cortex. The image on
the right illustrates what the "dictionary in the brain" of
one subject looks like.
- Every word activated a widely distributed area of the
brain, consistent with the Law of Mass Action.
- At the same time, each word had a "center" of high
activation, consistent with the doctrine of localization.
- Semantically related words activated the same general regions of the brain.
- Most surprisingly, there were strong commonalities
across the subjects tested: they weren't identical, but
there were striking similarities.
So maybe there are "grandmother
neurons" after all! Quian Quiroga's research -- which,
remember, was performed in a clinical context and thus may
have lacked some desirable controls -- identified sparse
neural representations of particular people (landmarks,
etc.), in which only a very small number of hippocampal
units is active during stimulus presentation. Huth and
Gallant's research identified similar sparse
representations for particular semantic concepts in the
cerebral cortex.
Of course, this evidence for localization of content contradicts the distributionist assumptions that have guided cognitive neuroscience for 50 years. Further research is obviously required to straighten this out, but maybe there's no contradiction between distributionist and locationist views after all. According to Barlow's (1972) psychophysical linking principle,
Whenever two stimuli can be distinguished reliably... the physiological messages they cause in some single neuron would enable them to be distinguished with equal or greater reliability.
In other words, even in a distributed
memory representation, there has to be some neuron
that responds invariantly to various representations of the
same concept. Neural representations of knowledge may be
distributed widely over the cortex, but these neural nets may
come together in single units. And it makes sense that the
invariant neurons associated with some concept are located
near the invariant neurons associated with semantically related concepts.
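To see how a distributed code and an invariant unit can coexist, consider this hypothetical toy sketch (the unit numbers and threshold are arbitrary): each "view" of a concept activates a different, distributed set of units, yet a single downstream unit fires whenever any of those patterns is sufficiently present:

```python
# Hypothetical sketch of Barlow-style invariance atop a distributed code:
# each "view" of a concept is a different pattern of active units, and a
# single downstream neuron fires whenever any of those patterns appears.
grandmother_views = [
    frozenset({1, 4, 7}),  # photograph
    frozenset({1, 4, 9}),  # line drawing
    frozenset({2, 4, 7}),  # her written name
]

def invariant_unit(active_units, views, threshold=2):
    """Fire if the current pattern overlaps any stored view enough."""
    return any(len(active_units & v) >= threshold for v in views)

print(invariant_unit({1, 4, 9, 12}, grandmother_views))  # a known view: fires
print(invariant_unit({3, 5, 8}, grandmother_views))      # unrelated: silent
```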
Recovery of Function
It has been generally thought that, once a part of the central
nervous system has been damaged through injury, insult, or
disease, the function served by the lesioned area is lost
forever. This is because central nervous system tissue does
not repair itself. However, there are some cases of recovery
of function, and they represent an interesting challenge to
our theories of the brain.
- In the aphagia induced by lesions in the lateral hypothalamus, the animal will likely starve to death unless it is provided with food and water by means of artificial life supports. However, an animal that is kept on artificial life supports may gradually recover lost eating and drinking functions.
- In the aphasia (sorry) induced by lesions in Broca's and Wernicke's areas, the person may, with careful training, regain normal or near-normal speech functions -- especially if the damage occurred early in life (the immature brain has more plasticity than the mature brain), or in left-handed or female individuals (whose hemispheres are, on average, somewhat less rigidly organized than those of right-handed males). For example, the American actress Patricia Neal, who starred in The Fountainhead (1949), suffered a triple cerebral aneurysm in 1965 and lapsed into a coma. When she awakened, she was grossly aphasic (and paralyzed). But with extensive rehabilitation efforts, Neal regained the ability to walk and talk -- so well that she was offered the role of Mrs. Robinson in The Graduate, and played the role of the family matriarch in the TV pilot of The Waltons.
But recovery of function is by no means guaranteed. Christopher Reeve, the actor who starred in the Superman series of films, was in an equestrian accident in 1995 which severed his spinal cord very high up, between the first and second vertebrae, resulting in quadriplegia. Despite an extremely vigorous and expensive rehabilitation regime, he was confined to a wheelchair, and breathed only with the aid of a respirator, until he died in 2004.
In some cases, recovery occurs simply because the damage is incomplete, and enough tissue has been spared that some function can be regained. For example, Gregoire Courtine and his colleagues (2012) paralyzed the hind legs of rats by severing all direct connections from the spinal cord to the hind legs, without completely severing their spinal cords -- simulating the sort of injury that occurs in as many as one-third of paraplegic patients. Initially, the rats were unable to walk -- reasonably enough, as motor signals from the brain could not be transmitted through the affected spinal nerves to the hind-leg muscles. But over an extended period of physical rehabilitation, in which the animals were supported in a harness and then given a food reward for moving themselves forward, they were able to regain a fair amount of control over their hind limbs. It was apparently critical to this process that, in addition to the harness and the food, the animals received injections of serotonin and dopamine in the damaged area, as well as electrical stimulation in the brain, and in the spinal cord itself. Analysis of the spinal-cord tissue revealed an increase in density of nerve fibers running from the brain, down the spinal cord, past the damaged area and out to the legs.
But there are also some more interesting possibilities.
Redundancy in Neural Organization
In other cases, the recovery occurs because of redundancy in neural organization. For example, the lateral hypothalamus may be primarily responsible for regulating eating behavior, and Broca's area responsible for fluent speech, but some other center may also be able to take on this responsibility, if the primary center is damaged. For example, recovery from aphasia is more likely in children, who have immature nervous systems, and in fact there are cases where a very young child sustained damage in the left-hemisphere speech areas, and grew up to have speech localized in the right hemisphere. Recovery is also more likely in left-handers and females, whose brains are less rigidly organized than those of right-handed males. These outcomes suggest that the right hemisphere may also contain speech and language centers that go unused under ordinary circumstances. According to one theory, activity of the redundant module is suppressed by the activity of the primary module; when the primary module is destroyed, the redundant one is disinhibited, and begins to take over the function in question.
Recent advances in scientific knowledge raise yet another
possibility: that recovery of function can be mediated by
the growth of new neural tissue -- a controversial
phenomenon known as neurogenesis. The
traditional doctrine within neuroscience, formulated by
Prof. Pasko Rakic of Yale University, has been that an
organism is born with all the neurons that it will ever
have, and that neural loss is, to all intents and purposes,
permanent -- especially so far as the central nervous system
is concerned. If so, the functions lost through brain
lesions can never be regained.
However, evidence for neurogenesis has steadily accumulated over the years. Prof. Fernando Nottebohm of Rockefeller University found it in some species of adult songbirds, and suggested that neurogenesis was necessary to enable the birds to learn new songs. In addition, early work performed independently by Joseph Altman and by Michael Kaplan found evidence of neurogenesis in mammalian species. More recently, Elizabeth Gould, working first at Rockefeller and then at Princeton University, found evidence of neurogenesis in the hippocampus of rats, marmosets, and macaque monkeys. Her findings have now been confirmed by Rakic himself, and by Prof. Fred Gage of the Salk Institute (Gage is actually a relative of Phineas Gage, a famous 19th-century neurological case). In 1999, Gould reported neurogenesis in the neocortex (not just in subcortical structures like the hippocampus or the olfactory bulb) of macaque monkeys (Science, 286, 548f).
Why has evidence of neurogenesis not been forthcoming until now? One reason is social and political in nature: the doctrine that the organism is born with all the neurons it will ever have, and that neurons can only die, not be born, has been so powerful in neuroscience that most people simply accepted it as true. But, to be fair, it is only recently that new techniques have enabled investigators like Gould to perform really convincing studies demonstrating neurogenesis. In addition, the laboratory animals used in the older studies were kept, as was common at the time, in somewhat deprived living conditions. Gould argues that new neurons are born all the time, but that they survive only when the environment is rich and challenging, providing the organism with things to do and learn. The more complicated the learning task, the more neurogenesis occurs, and the longer the new neurons last.
The debate continues. In 2001, Rakic reported that he could not replicate Gould's finding of neurogenesis in macaque neocortex (Science, 294, 2127-2131). New neurons were formed, but only in the hippocampus and olfactory bulb, not the cerebral cortex. New cells were formed in the cortex, too, but inspection with a special microscope indicated that they were not neurons. Instead, the new cortical cells were of other types, such as glial cells, which are also found in neural tissue. Gould, Rakic claims, mistakenly identified these cells as neurons. Gould, for her part, asserts that she found neurons as well as non-neuronal cells, and suggests that Rakic's technique was not sufficiently sensitive to see what she saw. Stay tuned!
If Gould is proved right, her
research suggests that the brain does actually produce
new neurons to replace those that die naturally during the
lifespan. Moreover, research by Gage suggests that stem-cell
transplantation may make it possible to regenerate
neural tissue. If this technique is going to work, the secret
is not likely to lie in the surgical details of the
transplantation procedure. Rather, the new neurons will
have to be stimulated to grow and make connections with one
another and with pre-existing brain tissue.
For an engaging account of these recent scientific discoveries, see:
- "Rethinking the Brain" by Michael Specter, in the July 23, 2001 issue of The New Yorker magazine;
- a profile of Prof. Elizabeth Gould in the September 2001 issue of Scientific American
- "Saving New Brain Cells" by Tracey J. Shors (one of Gould's collaborators) in Scientific American, March 2009.
This new research raises the
possibility that lost functions can be restored, at least to
some degree, by stimulating the growth of new neurons, and
directing them to sites where they can take over lost
functions. Experimental treatments for Parkinson's Disease
follow this model. It is the prospect of such cures, among
others, that makes research on embryonic stem cells so
exciting. However, this research is at a very early stage,
and it is very controversial. Don't expect major cures for
CNS damage in your lifetime: protect your CNS at all times!
Use that seat belt, wear that bicycle helmet, and don't dive
into shallow or unfamiliar water.
Recovery can also be mediated by the plasticity of the nervous system. It turns out that the functional organization of the nervous system is not fixed for all time, but can change depending on the organism's circumstances. For example, when people become deaf or blind, their corresponding sensory projection areas don't simply atrophy. Instead, it seems that they can be co-opted by other sensory functions.
Plasticity can be observed in classic
experiments by Merzenich, Kaas, and others. Recall the motor
and somatosensory areas of the frontal and parietal lobes,
respectively. These areas show a somatotopic
cortical mapping such that -- to take
the example of the somatosensory area -- the part of the
parietal lobe that processes tactile sensation from the hand
is next to that which processes tactile sensation from the
arm, and the part that processes tactile sensation from the
foot is next to that which processes tactile sensation from
the leg, etc. This one-to-one or point-to-point mapping is
depicted in the motor and somatosensory homunculus, and it
is very detailed, such that the area for the index finger is
adjacent to the area for the middle finger, which in turn is
adjacent to the area for the ring finger, etc.
- If we sever the nerves running between the middle finger and the spinal cord, the corresponding area of the somatosensory cortex will become inactive, because it no longer receives stimulation. But eventually, this area will begin to respond to stimulation from the index and ring fingers.
- Something similar will happen if the middle finger is amputated.
- If we sew adjacent fingers together, so that the middle and ring fingers move together, the corresponding areas of somatosensory cortex will blend together, so that each area will respond to both fingers, not just one.
- If a subject is given extensive practice with one finger, the area of the motor cortex corresponding to that finger will also be enlarged.
- And yes, violin players have larger motor and somatosensory areas in their right hemispheres compared to the left, because they move around the instrument's fingerboard with their left hands. Presumably, a similar asymmetry will be observed in trumpet players (who finger with their right hands) versus French horn players (who finger with their left).
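The remapping described in these examples can be caricatured in a few lines of code. This is a hypothetical toy model, not an account of the real mechanism (which involves competitive synaptic plasticity): each cortical unit simply responds to whichever still-active finger drives it most strongly, so silencing one finger hands its cortical territory over to the neighbors:

```python
# Hypothetical toy model of somatotopic reorganization. Each cortical
# unit receives strong input from its "home" finger and weaker input
# from anatomical neighbors; the weights below are invented.
units = [
    {"index": 1.0, "middle": 0.4, "ring": 0.1},  # index-finger territory
    {"index": 0.5, "middle": 1.0, "ring": 0.4},  # middle-finger territory
    {"index": 0.1, "middle": 0.4, "ring": 1.0},  # ring-finger territory
]

def preferred(unit, active_fingers):
    """The still-active finger whose input drives this unit hardest."""
    return max(active_fingers, key=lambda f: unit[f])

all_fingers = ["index", "middle", "ring"]
print([preferred(u, all_fingers) for u in units])
# "Amputate" the middle finger: its cortical territory is captured
# by the adjacent fingers' inputs.
print([preferred(u, ["index", "ring"]) for u in units])
```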
Here’s a famous example of plasticity in the hippocampus. You’ll remember, from an earlier lecture, that the hippocampus plays an important role in memory. This is especially true with respect to memory for spatial location, and the 2014 Nobel Prize in Physiology or Medicine was given to a group of scientists who showed that the posterior portion of the hippocampus is composed of “place cells” that fire whenever an organism is in a particular place in the environment, and which constitute a kind of “cognitive map”. Now in order to be licensed to drive a taxicab in London, drivers must learn how to navigate among thousands of different locations in the city – what is known in the trade as “The Knowledge”. A brain-imaging study of a group of London taxi drivers showed a redistribution of gray matter in the hippocampus: reduced volume in the anterior portion, and increased volume in the posterior portion, compared to a control group (Maguire et al., 2000). Further, this change was correlated with the subject’s experience as a cab driver. Apparently learning a detailed map of London, and perfecting this “Knowledge” by actually driving on the streets day in and day out, causes structural change in the hippocampus.
Along the same lines, Mechelli et al. (2004) found that English-Italian bilinguals showed increased gray-matter density in the left inferior portion of the parietal lobe, compared to English monolinguals. This area is activated when subjects perform tasks requiring verbal fluency. The earlier the subjects had learned Italian, and the more proficient they were in their second language, the bigger the difference in brain density.
One illustration of neural plasticity
may be phantom limb sensations. Many patients
who have undergone the surgical amputation of a limb
report that they continue to feel sensations, including
pain, emanating from that limb -- even though they know
it's not there anymore. They may also experience the
phantom limb as moving. For a long time, surgeons
believed that phantom limb pain emanated from swollen nerve
endings in the remaining stump of the limb, which they tried
to treat by pruning the stump further away -- even going so
far as to sever the nerves where they joined the spinal
cord. This didn't work, leading the surgeons to
conclude that the pain was "all in the head", reflecting
poor psychological adjustment to the amputation. One
prominent theory, initially proposed by Melzack (1989) and
promoted by Pons et al. (1991) and Ramachandran et al.
(1998; Ramachandran & Blakeslee, 1998) is that phantom
limbs are actually a consequence of neural plasticity.
For example, the region of somatosensory cortex which
originally coded for the amputated limb will no longer
receive somatosensory stimulation from that body part, and
may be taken over by inputs from surrounding cortex; sensations
processed by this area are then referred to the amputated
limb. So, for example, sensations of touch to the face might
be referred to an amputated hand.
Similar experiments have been performed on the auditory area in the temporal cortex, which shows a tonotopic mapping, such that close frequencies are processed by adjacent areas: the area that processes a tone of 500 hertz (cycles per second) is adjacent to the one that processes a tone of 450 hertz. If an organism is given extensive practice in discriminating between two tones, the areas of the auditory cortex that process those tones become larger.
And similarly, the visual area of the
occipital cortex is characterized by retinotopic
mapping, such that adjacent areas of the retina
project to adjacent areas of the occipital cortex. If a
lesion is made in a particular area of the retina, so that
its corresponding visual area no longer receives stimulation
from that area, this area will come to respond to stimulation from adjacent areas of the retina.
Neural plasticity is involved in the acquisition and exercise of new skills, which may involve not just changes in the synaptic connections among existing neurons, but also the generation of entirely new ones. Many examples come from professional, semi-professional, and student musicians (Jancke, 2009):
- 6-year-old children who took instrumental music lessons for 15 months showed structural changes in the motor areas of the brain.
- Right-handed string musicians (e.g., violinists) have larger cortical representations of the left hand (which works the fingerboard) than do controls.
- Pianists show increased volume in the hand-motor areas of both frontal lobes, while violinists show increased volume only in the right hemisphere (which controls the left hand).
- Musicians also have increased gray matter in Broca's
area -- which, apparently, prepares the execution of
finger movements as well as speech.
In addition to a surplus of glial cells, the posterior portion of Einstein's parietal lobes had larger gyri than average, implying a greater density of neurons. This difference has been linked to Einstein's use of mental imagery in thinking -- for example, the "train thought experiment" by which he propounded his theory of relativity. But in this case, we don't know which came first: the neural chicken or the imagistic egg.
Specialization versus Holism
Taken together, the literature on
recovery of function and the Law of Mass Action set limits
on the nature of functional specializations in the brain,
because they imply that every part of the brain is involved
in "higher" mental functions.
An excellent example of this point
comes from a study by UCB's Jack Gallant and his colleagues
(2013). They asked subjects to watch a very long
sequence of video clips, each of which included a living
(cat or plant) or nonliving (clock or building)
object. Some subjects were asked to indicate whether
each clip contained a person or a vehicle. While they were
performing this task, their brain activity was recorded
using fMRI. Using special software, the brain was
divided into a three-dimensional grid, and the activity in
each segment of the grid, known as a voxel, was recorded
every second that the subjects were watching the videos.
- This task engaged the entire brain, from the visual areas of the occipital lobe to the executive functions of the prefrontal cortex.
- The pattern of activity was different, depending on whether the subjects were looking for people or vehicles.
- When other living things, like plants or cats, were present, the pattern of activity resembled that seen when the subjects were looking for people.
- When other nonliving things, like clocks or buildings, were present, the pattern of activity resembled that seen when the subjects were looking for vehicles.
It was as if the entire brain was
being "retuned", depending on the task, to pick up both the
attended category (people or vehicles) and semantically
related objects (plants and cats, or clocks and buildings).
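The logic of this kind of pattern comparison can be sketched as follows. This is a hypothetical toy example, not Gallant's actual analysis -- the templates, noise level, and grid size are invented -- but it shows how one can ask which task condition a new voxel pattern most resembles:

```python
# Hypothetical sketch of voxel-pattern comparison: flatten a 3-D grid of
# voxel activities and ask which task condition a new scan's pattern most
# resembles, via correlation. All numbers are made up for illustration.
import random
from math import sqrt

random.seed(0)
N_VOXELS = 4 * 4 * 4  # toy flattened 3-D grid (real studies: ~50,000 voxels)

people_template = [random.gauss(0, 1) for _ in range(N_VOXELS)]
vehicle_template = [random.gauss(0, 1) for _ in range(N_VOXELS)]

# A new scan recorded while the subject searched for people:
# a noisy copy of the "people" template.
new_scan = [v + random.gauss(0, 0.3) for v in people_template]

def pearson(a, b):
    """Pearson correlation between two flattened voxel patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

r_people = pearson(new_scan, people_template)
r_vehicle = pearson(new_scan, vehicle_template)
print("people" if r_people > r_vehicle else "vehicles")
```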
The full story of cortical function is a blend of specialization and holism. There is no question that particular parts of the brain are specialized for particular mental functions. But the brain is not just a bunch of pieces, each operating independently of the others to perform some narrow function. On the contrary, evidence for the holistic organization of the brain is found in the fact that most functions are performed by "systems" or "circuits" involving several different brain modules. Also, the contents of the knowledge store are distributed widely across the cortex.
In the immature brain, there appears to be considerable equipotentiality, which means that several different areas are able to carry out a particular function. Functional specialization becomes solidified as the brain matures. In cases of severe damage, some degree of plasticity in the brain means that other modules may have the potential for taking over the lost function -- especially in a brain that is immature or not rigidly organized. But each site in the brain appears to be pre-ordained for a particular function. If a module should take on a new function, it may lose the capacity to perform its own specialty, or it may not be able to perform either the new or the old function optimally.
Performance of most mental and behavioral tasks requires the integration and coordination of many different brain modules. In this sense, the brain operates as a whole, not as a collection of independent pieces. But each module makes its own contribution to the functioning of the whole.
Still, the implication of the modularity argument is that the brain is a "mental toolbox" whose various implements -- the various mental modules and their corresponding brain systems -- evolved to help humans meet the demands posed by what evolutionary psychologists call the Environment of Early Adaptation (EEA) -- basically, the East African savannah of the Pleistocene epoch, which began about 1.8 million years ago.
That is all well and good, and one could argue that this mental toolbox has served us well -- and, on the other side of the coin, it seems possible that some of the errors we make in thinking (about which there is more in the Lecture Supplement on Thought and Language) reflect the fact that evolution has not had time to catch up with the fact that we now navigate in a quite different environment than the EEA.
On the other hand, it also seems at
least equally likely, if not more so, that there is more to
mental life than a collection of highly specialized mental
tools. After all, we humans did not confine ourselves to the
East African savannah. Very quickly we began to migrate out
of that environment:
- first, beginning about 170,000 years ago, to other areas of Africa;
- then, beginning about 170,000 years ago, out of Africa to South Asia and Australia;
- then, beginning about 50,000 years ago, into Europe;
- and finally, only some 12- to 15,000 years ago, to North and South America.
For such migrations to take place, we had to be able to draw upon more than a set of highly specialized mental tools, geared to the demands of a specific time and place. We also needed a capacity for general intelligence, enabling us to solve the problems posed by the new environments into which we migrated -- or, to be more blunt about it, adapt our new environments to us. We also needed a capacity for learning, allowing us to acquire new knowledge, and new mental tools, at a much faster pace than biological evolution allows.
It is to this capacity for learning
that we turn next.
Two excellent textbooks on behavioral and
cognitive neuroscience, and on biological psychology
generally, are by professors at UCB:
- Biological Psychology by Rosenzweig, Leiman, & Breedlove (all three were members of the UCB faculty);
- Cognitive Neuroscience by Gazzaniga, Ivry, and Mangun (Ivry is on the faculty at UCB, Gazzaniga is at UC Santa Barbara, and Mangun is at UC Davis).
A fabulous introduction to neuroscience, actually written by a poet, is An Alchemy of Mind: The Marvel and Mystery of the Brain by Diane Ackerman (who is also author of A Natural History of the Senses).
The best source on neuroanatomy is The
Human Brain Coloring Book by UCB's own Prof. Marian
Diamond and her colleagues. By the time you have finished
coloring in the diagrams, you will know all you need to
know about the parts of the brain and where they are.
Link to Lecture 1 ("The Organization of the Body") of Prof. Diamond's famous class, "General Human Anatomy" (Integrative Biology 131).
Link to a trailer for a documentary film about Prof. Diamond: "My Love Affair with the Brain: The Life and Science of Dr. Marian Diamond."
Drawings and images of the brain can also be appreciated as artworks -- a point made clearly by Carl Schoonover in Portraits of the Mind: Visualizing the Brain from Antiquity to the 21st Century (2010). See also "Beauty of the Brain" by Laura Helmuth, Smithsonian, 3/2011.
For an excellent review of the literature on
hemispheric asymmetry, see:
- Hemispheric Asymmetry: What's Right and What's Left, ed. by Joseph Hellige (1993, 2001);
- Right Hand, Left Hand: The Origins of Asymmetry in Brains, Bodies, Atoms, and Cultures by Chris McManus
This page last revised 10/22/2017.