  What about our own species? Recall that a recent study of twins showed that differences in the anatomy of the cortex, particularly the amount of gray matter in different cortical regions, are under genetic control, paralleling differences in intelligence and other psychological traits.70 And demonstrations of the plasticity of the human brain do not rule out substantial genetic organization. One of the most commonly cited examples of plasticity in both humans and monkeys is that the cortex dedicated to an amputated or numbed body part may get reallocated to some other body part. But the fact that the input can change the brain once it is built does not mean that the input molded the brain in the first place. Most amputees experience phantom limbs: vivid, detailed hallucinations of the missing body part. Amazingly, a substantial proportion of people who were born with a limb missing experience these apparitions as well.71 They can describe the anatomy of their phantom limb (for example, how many toes they feel in a nonexistent foot) and may even feel that they are gesturing with their phantom hands during conversation. One girl solved arithmetic problems by counting on her phantom fingers! The psychologist Ronald Melzack, who documented many of these cases, proposed that the brain contains an innate “neuromatrix,” distributed across several cortical and subcortical regions, dedicated to representing the body.

  The impression that human brains are limitlessly plastic has also come from demonstrations that children can sometimes recover from early brain damage. But the existence of cerebral palsy—lifelong difficulties with motor control and speech caused by malformations or early damage in the brain—shows that even the plasticity of a child’s brain has severe limits. The most famous evidence for extreme plasticity in humans had been the ability of some children to grow up relatively normal even with an entire hemisphere surgically removed in infancy.72 But that may be a special case, which arises from the fact that the primate brain is fundamentally a symmetrical organ. The typically human asymmetries—language more on the left, spatial attention and some emotions more on the right—are superimposed on that mostly symmetrical design. It would not be surprising if the hemispheres were genetically programmed with pretty much the same abilities, together with small biases that lead each hemisphere to specialize in some talents while letting others wither. With one hemisphere gone, the remaining one has to put all its capabilities to full use.

  What happens when a child loses a part of the cortex in both hemispheres, so neither hemisphere can take over the job of the missing part in the other? If cortical regions are interchangeable, plastic, and organized by the input, then an intact part of the brain should take over the function of the missing parts. The child may be a bit slower because he is working with less brain tissue, but he should develop a full complement of human faculties. But that is not what seems to happen. Several decades ago, neurologists studied a boy who suffered a temporary loss of oxygen to the brain and lost both the standard language areas in the left hemisphere and their mirror images on the right. Though he was just ten days old when he sustained the damage, he grew into a child with permanent difficulties in speaking and understanding.73

  That case study, like many in pediatric neurology, is not scientifically pure, but recent studies on two other mental faculties echo the point that babies’ brains may be less plastic than many people think. The psychologist Martha Farah and her collaborators recently reported the case of a sixteen-year-old boy who contracted meningitis when he was one day old and suffered damage to the visual cortex and to the bottom of the temporal lobes on both sides of his brain.74 When adults sustain such damage, they lose the ability to recognize faces and also have some trouble recognizing animals, though they often can recognize words, tools, furniture, and other shapes. The boy had exactly this syndrome. Though he grew up with normal verbal intelligence, he was utterly incapable of recognizing faces. He could not even recognize pictures of the cast of his favorite television show, Baywatch, which he had seen for an hour a day for the preceding year and a half. Without the appropriate strips of brain, sixteen years of seeing faces and plenty of available cortex were not enough to give him the basic human ability to recognize other people by sight.

  The neuroscientists Steven Anderson, Hanna and Antonio Damasio, and their colleagues recently tested two young adults who had sustained damage to their ventromedial and orbital prefrontal cortex when they were young children.75 These are the parts of the brain that sit above the eyes and are important for empathy, social skills, and self-management (as we know from Phineas Gage, the railroad worker whose brain was impaled by a tamping iron). Both children recovered from their injuries and grew up with average IQs in stable homes with normal siblings and college-educated parents. If the brain were really homogeneous and plastic, the healthy parts should have been shaped by the normal social environment and taken over the functions of the damaged parts. But that is not what happened with either of the children. One, who had been run over by a car when she was fifteen months old, grew into an intractable child who ignored punishment and lied compulsively. As a teenager she shoplifted, stole from her parents, failed to win friends, showed no empathy or remorse, and was dangerously uninterested in her own baby. The other patient was a young man who had lost similar parts of his brain to a tumor when he was three months old. He too grew up friendless, shiftless, thieving, and hotheaded. Along with their bad behavior, both had trouble thinking through simple moral problems, despite having IQs in the normal range. They could not, for example, say what two people should do if they disagreed on which TV channel to watch, or decide whether a man ought to steal a drug to save his dying wife.

  These cases do more than refute the doctrine of extreme plasticity. They set a challenge for the genetics and neuroscience of the twenty-first century. How does the genome tell a developing brain to differentiate into neural networks that are prepared for such abstract computational problems as recognizing a face or thinking about the interests of other people?

  THE BLANK SLATE has made its last stand, but, as we have seen, its latest scientific fortifications are illusory. The human genome may have a smaller number of genes than biologists had previously estimated, but that only shows that the number of genes in a genome has little to do with the complexity of the organism. Connectionist networks may explain some of the building blocks of cognition, but they are too underpowered to account for thought and language on their own; they must be innately engineered and assembled for the tasks. Neural plasticity is not a magical protean power of the brain but a set of tools that help turn megabytes of genome into terabytes of brain, that make sensory cortex dovetail with its input, and that implement the process called learning.

  Therefore genomics, neural networks, and neural plasticity fit into the picture that has emerged in recent decades of a complex human nature. It is not, of course, a nature that is rigidly programmed, impervious to the input, free of culture, or endowed with the minutiae of every concept and feeling. But it is a nature that is rich enough to take on the demands of seeing, moving, planning, talking, staying alive, making sense of the environment, and negotiating the world of other people.

  The aftermath of the Blank Slate’s last stand is a good time to take stock of the case for the alternative. Here is my summary of the evidence for a complex human nature, some of it reiterating arguments from previous chapters, some of it anticipating arguments in chapters to come.

  Simple logic says there can be no learning without innate mechanisms to do the learning. Those mechanisms must be powerful enough to account for all the kinds of learning that humans accomplish. Learnability theory—the mathematical analysis of how learning can work in principle—tells us there are always an infinite number of generalizations that a learner can draw from a finite set of inputs.76 The sentences heard by a child, for example, can be grounds for repeating them back verbatim, producing any combination of words with the same proportion of nouns to verbs, or analyzing the underlying grammar and producing sentences that conform to it. The sight of someone washing dishes can, with equal logical justification, prompt a learner to try to get dishes clean or to let warm water run over his fingers. A successful learner, then, must be constrained to draw some conclusions from the input and not others. Artificial intelligence reinforces this point. Computers and robots programmed to do humanlike feats are invariably endowed with many complex modules.77

  Evolutionary biology has shown that complex adaptations are ubiquitous in the living world, and that natural selection is capable of evolving them, including complex cognitive and behavioral adaptations.78 The study of the behavior of animals in their natural habitat shows that species differ innately from one another in their drives and abilities, some of them (like celestial navigation and food caching) requiring complicated and specialized neural systems.79 The study of humans from an evolutionary perspective has shown that many psychological faculties (such as our hunger for fatty food, for social status, and for risky sexual liaisons) are better adapted to the evolutionary demands of our ancestral environment than to the actual demands of the current environment.80 Anthropological surveys have shown that hundreds of universals, pertaining to every aspect of experience, cut across the world’s cultures.81

  Cognitive scientists have discovered that distinct kinds of representations and processes are used in different domains of knowledge, such as words and rules for language, the concept of an enduring object for understanding the physical world, and a theory of mind for understanding other people.82 Developmental psychology has shown that these distinct modes of interpreting experience come on line early in life: infants have a basic grasp of objects, numbers, faces, tools, language, and other domains of human cognition.83

  The human genome contains an enormous amount of information, both in the genes and in the noncoding regions, to guide the construction of a complex organism. In a growing number of cases, particular genes can be tied to aspects of cognition, language, and personality.84 When psychological traits vary, much of the variation comes from differences in genes: identical twins are more similar than fraternal twins, and biological siblings are more similar than adoptive siblings, whether reared together or apart.85 A person’s temperament and personality emerge early in life and remain fairly constant throughout the lifespan.86 And both personality and intelligence show few or no effects of children’s particular home environments within their culture: children reared in the same family are similar mainly because of their shared genes.87

  Finally, neuroscience is showing that the brain’s basic architecture develops under genetic control. The importance of learning and plasticity notwithstanding, brain systems show signs of innate specialization and cannot arbitrarily substitute for one another.88

  In these three chapters I have given you a summary of the current scientific case for a complex human nature. The rest of the book is about its implications.

  PART II

  FEAR AND LOATHING

  By the middle of the second half of the twentieth century, the ideals of the social scientists of the first half had enjoyed a well-deserved victory. Eugenics, Social Darwinism, colonial conquest, Dickensian policies toward children, overt expressions of racism and sexism among the educated, and official discrimination against women and minorities had been eradicated, or at least were rapidly fading, from mainstream Western life.

  At the same time, the doctrine of the Blank Slate, which had been blurred with ideals of equality and progress for much of the century, was beginning to show cracks. As the new sciences of human nature began to flourish, it was becoming clear that thinking is a physical process, that people are not psychological clones, that the sexes differ above the neck as well as below it, that the human brain was not exempt from the process of evolution, and that people in all cultures share mental traits that might be illuminated by new ideas in evolutionary biology.

  These developments presented intellectuals with a choice. Cooler heads could have explained that the discoveries were irrelevant to the political ideals of equal opportunity and equal rights, which are moral doctrines on how we ought to treat people rather than scientific hypotheses about what people are like. Certainly it is wrong to enslave, oppress, discriminate against, or kill people regardless of any foreseeable datum or theory that a sane scientist would offer.

  But it was not a time for cool heads. Rather than detach the moral doctrines from the scientific ones, which would ensure that the clock would not be turned back no matter what came out of the lab and field, many intellectuals, including some of the world’s most famous scientists, made every effort to connect the two. The discoveries about human nature were greeted with fear and loathing because they were thought to threaten progressive ideals. All this could be relegated to the history books were it not for the fact that these intellectuals, who once called themselves radicals, are now the establishment, and the dread they sowed about human nature has taken root in modern intellectual life.

  This part of the book is about the politically motivated reactions to the new sciences of human nature. Though the opposition was originally a brainchild of the left, it is becoming common on the right, whose spokespeople are fired up by some of the same moral objections. In Chapter 6 I recount the shenanigans that erupted as a reaction to the new ideas about human nature. In Chapter 7 I show how these reactions came from a moral imperative to uphold the Blank Slate, the Noble Savage, and the Ghost in the Machine.

  Chapter 6

  Political Scientists

  THE FIRST LECTURE I attended as a graduate student at Harvard in 1976 was by the famous computer scientist Joseph Weizenbaum. He was an early contributor to artificial intelligence (AI) and is best remembered for the program Eliza, which fooled people into thinking that the computer was conversing though it was just spouting canned repartee. Weizenbaum had just published Computer Power and Human Reason, a critique of artificial intelligence and computer models of cognition, praised as “the most important computer book of the past decade.” I had misgivings about the book, which was short on argument and long on sanctimony. (For example, he wrote that certain ideas in artificial intelligence, such as a science-fiction proposal for a hybrid of nervous systems and computers, were “simply obscene. These are [applications] whose very contemplation ought to give rise to feelings of disgust in every civilized person…. One must wonder what must have happened to the proposers’ perception of life, hence to their perceptions of themselves as part of the continuum of life, that they can even think of such a thing.”)1 Still, nothing could have prepared me for the performance in store at the Science Center that afternoon.

  Weizenbaum discussed an AI program by the computer scientists Allen Newell and Herbert Simon that relied on analogy: if it knew the solution to one problem, it applied the solution to other problems with a similar logical structure. This, Weizenbaum told us, was really designed to help the Pentagon come up with counterinsurgency strategies in Vietnam. The Vietcong had been said to “move in the jungle as fish move in water.” If the program were fed this information, he said, it could deduce that just as you can drain a pond to expose the fish, you can denude the jungle to expose the Vietcong. Turning to research on speech recognition by computer, he said that the only conceivable reason to study speech perception was to allow the CIA to monitor millions of telephone conversations simultaneously, and he urged the students in the audience to boycott the topic. But, he added, it didn’t really matter if we ignored his advice because he was completely certain—there was not the slightest doubt in his mind—that by the year 2000 we would all be dead. And with that inspiring charge to the younger generation he ended the talk.

  The rumors of our death turned out to be greatly exaggerated, and the other prophecies of the afternoon fared no better. The use of analogy in reasoning, far from being the work of the devil, is today a major research topic in cognitive science and is widely considered a key to what makes us smart. Speech-recognition software is routinely used in telephone information services and comes packaged with home computers, where it has been a godsend for the disabled and for people with repetitive strain injuries. And Weizenbaum’s accusations stand as a reminder of the political paranoia and moral exhibitionism that characterized university life in the 1970s, the era in which the current opposition to the sciences of human nature took shape.

  It was not how I imagined that scholarly discourse would be conducted in the Athens of America, but perhaps I should not have been surprised. Throughout history, battles of opinion have been waged by noisy moralizing, demonizing, hyperbole, and worse. Science was supposed to be a beachhead in which ideas rather than people are attacked and in which verifiable facts are separated from political opinions. But when science began to edge toward the topic of human nature, onlookers reacted differently from how they would to discoveries about, say, the origin of comets or the classification of lizards, and scientists reverted to the moralistic mindset that comes so naturally to our species.

  Research on human nature would be controversial in any era, but the new sciences picked a particularly bad decade in which to attract the spotlight. In the 1970s many intellectuals had become political radicals. Marxism was correct, liberalism was for wimps, and Marx had pronounced that “the ruling ideas of each age have ever been the ideas of its ruling class.” The traditional misgivings about human nature were folded into a hard-left ideology, and scientists who examined the human mind in a biological context were now considered tools of a reactionary establishment. The critics announced they were part of a “radical science movement,” giving us a convenient label for the group.2