A theory has only the alternatives of being wrong or right. A model has a third possibility: it may be right but irrelevant.
IN THE FINAL decade of the eighteenth century, a brilliant Viennese physician named Franz Joseph Gall proposed a radical new theory of the brain. At the time, the human mind was believed to be the seat of the immortal soul, and examining its deeper nature was the domain of philosophers. Immanuel Kant had proclaimed space and time to be the natural and irreducible categories of the mind, basic preconditions of the way it filters and perceives reality. But on the matter of the physical brain, what the three and a half pounds of tissue is made of and how it works, scientists knew next to nothing. Gall himself was totally ignorant of the brain’s hundred billion neurons, hundreds of trillions of interconnections, and more than a thousand kilometers of cabling. Even so, he had made an astonishing and revolutionary discovery—or so he thought.
In his work as a physician, Gall had encountered patients with all sorts of peculiar personalities. Some were notably selfless and kind; others, ruthless and ambitious; and still others were strikingly intelligent and blessed with mathematical or poetic talent. Over more than a decade, Gall tirelessly recorded the characteristic traits of his patients, while quietly amassing data on the sizes and shapes of their heads. He collected hundreds of human and animal skulls, manufactured countless wax molds of brains, and put calipers to the foreheads of friends and pupils. On the basis of his observations, he finally came to the conclusion that the brain, like the body, possesses distinct organs of various kinds.
As Gall argued, “The same mind which sees through the organ of sight, and which smells through the olfactory organ, learns by heart through the organ of memory, and does good through the organ of benevolence.”2 Of course, if he was right and these organs of the brain were real, it should be possible to locate and measure them, and Gall insisted that this was indeed the case. To seek the organs in someone’s head, Gall recommended running the palms over the surface of the scalp, feeling for any unusual bumps or depressions. The idea was that any larger, overdeveloped organ would push the skull outward, causing a protuberance; an underdeveloped organ, on the other hand, would leave an indentation. By seeking the significant bumps and depressions, Gall insisted, one could learn which faculties were overdeveloped or underdeveloped in any individual, and get a quick read of their character.
To make the reading of heads easier, Gall even mapped out the positions of the various organs and listed the twenty-seven faculties with which they were associated. These included parental love, friendly attachment, ambition, and the sense of cunning. Other organs, he claimed, were the seats of ability for music or arithmetic, or for mechanical skills or poetic talent. Using Gall’s map, a skilled phrenologist, as practitioners of the technique soon became known, could assess a personality with fingers and palms in just a few minutes. Gall was invited to lecture at major European universities and to demonstrate his method to kings, queens, and statesmen. (He allegedly annoyed Napoleon by detecting in the contours of his skull a distinct lack of philosophical talent.)
Gall’s only regret was that he did not have more skulls to study. As he wrote in a letter to a colleague, “It would be very agreeable to me, if persons would send me the heads of animals, of which they have observed well the characters; for example, of a dog, who would eat only what he had stolen, or one who could find his master at a great distance….”3
Today, of course, the science of phrenology has been completely discredited. No careful scientific study has ever discovered a legitimate link between personality and the shape of the head, and Gall appears to have been fooling himself in thinking that he had found one. Nevertheless, the Viennese physician did get some things right; in fact, his ideas initiated a way of thinking that takes a central position in scientists’ picture of the brain.
No one before Gall had conceived of the brain as an assembly of distinct modules, each responsible for a different task—speech, vision, emotions, language, and so on. But in the modern neuroscience laboratory, researchers can see these different modules in action. Functional magnetic resonance imaging is a technique that uses magnetic fields and radio waves to probe the pattern of blood flow in the brain, revealing how much oxygen its various parts are using at any moment. This in turn reflects the level of neural activity. With this method, researchers can watch as different regions “light up” on a computer screen as a subject deals with different tasks, such as responding to a verbal command or recognizing a taste. When a subject is given a new telephone number to remember, the hippocampus becomes active, this being the brain region heavily involved in the formation of new memories. Other regions in the brain control hearing and vision, or basic drives such as aggression and hunger. These are not quite the organs Gall had in mind, but they are significant and distinct processing centers within the brain.
Gall was also the first to focus the attention of neuroscientists on the special region of the brain where these centers reside: the thin, gray outer layer known as the cerebral cortex. For centuries, the cortex had been thought of as an unimportant protective layer. In actuality, though never more than a few millimeters thick, this layer contains most of the brain’s precious neurons. The surface of the cortex is smooth in small mammals and other lower organisms, but it becomes highly convoluted and folded in creatures roughly bigger than mice, this being necessary to fit the bigger brains inside the skull. If you could stretch the cortex of the human brain out flat, it would cover the surface of a picnic table. This intricately folded and delicately packed cortex is where higher intelligence resides. It is the part of the brain that lets us speak, make plans, learn calculus, and invent excuses for being late.
In short, the cortex is what makes us distinctively human. And it is indeed organized, as Gall suspected, into something like a set of organs. Of course, the brain is hardly an anarchy of modules working in blind independence of one another. These modules have to communicate in order to coordinate overall brain activity. When speaking, we not only choose the right words and put them together properly, but also access memories and control the timing of our speech and its emotional tone. We may recall a name, whisper it to a friend, and gesture with our hands to some additional effect. These actions involve many of the brain’s functional regions working in effortless combination, with information shuttling rapidly and efficiently between them. All this activity raises an obvious question: what wiring pattern does the brain use to provide this efficiency?
A THOUGHTFUL ARCHITECTURE
A REGION OF the human brain no larger than a marble contains as many neurons as there are people in the United States. In the crudest picture, each neuron is a single cell with a central body from which issue numerous fibers. The shortest of these, known as dendrites, are the neuron’s receiving channels, while the longer fibers, known as axons, are its transmission lines. The axons running away from any neuron eventually link up with the dendrites of other neurons, providing communicating links. Details on the full structure of any neuron would require pages, perhaps a book, but these features are the essentials.
By a long shot, most neurons link up with others nearby, within the same functional region, whether that is the hippocampus or Broca’s area, a part of the brain involved in the production of speech, as the French neurologist Paul Broca discovered in 1861. Some axons run a bit farther and link up with neurons in neighboring brain regions. Most neighboring regions are linked in this way. From a whole-brain perspective, if we think of the various functional regions of the brain as the nodes of a network, these “local” connections sew the brain together into one connected whole, not unlike an ordered network. However, the brain also has a smaller number of truly long-distance axons that link brain regions lying far apart, sometimes even on opposite sides of the brain. Consequently, we have many local links and a few long-distance links, something that begins to sound like the small-world pattern. As researchers have recently found, it is relatively easy to bring this pattern into sharper focus.
At the University of Newcastle in England, psychologist Jack Scannell has spent more than a decade mapping out the connections between different regions of the cerebral cortex. As it happens, Scannell’s studies have focused on the brains of cats and monkeys rather than humans, and yet given the great similarities among all mammalian brains, the results almost certainly apply to human brains as well.4 Earlier research in the cat identified some fifty-five regions of the cerebral cortex, each associated with a distinctive function; in the macaque, the number is sixty-nine. In these brains, there are roughly four to five hundred significant links connecting the different regions, links formed not only by single axons but also by more appreciable streams of many axons running in parallel.
To find out how these links are arranged, Vito Latora of the University of Paris and Massimo Marchiori of the Massachusetts Institute of Technology used Scannell’s maps. Analyzing these networks in the terms set out by Watts and Strogatz, they found the signatures of a strikingly efficient network architecture.5 In the cat brain, for example, the number of degrees of separation turns out to be between only two and three. The number is identical in the macaque brain. At the same time, Latora and Marchiori found that each of these neural networks is highly clustered. In other words, it seems that what is true for good friends is also true for regions of the cerebral cortex: if one brain region has links to two others, then these two other regions are also likely to share a link.
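The two signatures Latora and Marchiori measured, short degrees of separation and high clustering, can be computed for any network. As a minimal sketch, here is how one might do it in Python with the networkx library, using a toy Watts–Strogatz graph of fifty-five nodes as a stand-in for Scannell’s cat cortex map; the wiring parameters are illustrative assumptions, not the real anatomical data:

```python
import networkx as nx

# A toy stand-in for a cortical map: 55 nodes (one per region), each
# linked to its 8 nearest neighbors, with 10% of links rewired into
# long-range shortcuts (assumed parameters, not Scannell's data).
G = nx.connected_watts_strogatz_graph(n=55, k=8, p=0.1, seed=42)

# "Degrees of separation": the average shortest path between regions.
L = nx.average_shortest_path_length(G)

# Clustering: how often two neighbors of a region are themselves linked.
C = nx.average_clustering(G)

# For comparison, a fully rewired (random) graph of the same size and density,
# which loses the clustering while keeping short paths.
R = nx.connected_watts_strogatz_graph(n=55, k=8, p=1.0, seed=42)

print(f"path length {L:.2f}, clustering {C:.2f}")
print(f"random graph clustering {nx.average_clustering(R):.2f}")
```

The small-world signature is the combination: a path length nearly as short as the random graph’s, together with clustering far above it.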
From a functional point of view, these features make obvious biological good sense. If you mistakenly pick up a burning stick, sensory neurons immediately send signals racing toward the brain. These signals trigger a chain reaction of neurons stimulating other neurons, which eventually reaches motor neurons that send signals back to the fingers, vocal cords, and muscles of the tongue and mouth—you drop the stick and cry out in pain. If transmitting the information involved hundreds or thousands of steps between neurons, reflex responses would take far longer than they do. The small-world pattern guarantees that the brain’s diverse functional parts reside only a few steps from one another, binding the entire network into one intimate unit.
Quick and efficient transmission of signals is the simplest and most obvious benefit conferred by the small-world structure. But there is another benefit. In a social network, as Mark Granovetter pointed out, the clustering of links among a group of good friends implies that if a few of them were removed from the network, the others would still remain closely linked. In a clustered network, in other words, the loss of one element will not trigger a dramatic fragmentation of the network into disconnected parts. In the brain, this organization may also play a useful role, for the damage or destruction of one particular region would have little effect on the ability of signals to move among and coordinate the other regions. Patients with damage to Broca’s region, for example, struggle to produce speech, yet they hear perfectly well, do mathematics, and make plans for the future with no difficulty. If damage to this area also severed communications between, say, the visual cortex and the hippocampus, or at least made signals travel long distances to get from one region to the other, then short-term memory of visual information might also be impaired. The small-world architecture seems to prevent this. It not only makes the brain efficient and quick but also gives it the ability to stand up in the face of faults.
The mammalian brain, human or otherwise, is far more than a device for generating efficient reflex responses or hanging together under adversity. The small-world network in your head appears to work magic in many other ways as well.
THE NATURE OF consciousness remains one of the most perplexing of all scientific mysteries. The human brain is a lump of material made of ordinary cells working on chemical and electrical principles. From the point of view of fundamental biology and the laws of physics, it is a perfectly ordinary physical machine, although one of undeniable complexity. But if the brain is merely physical stuff doing ordinary physical things, what is the seat of the seemingly spiritual entity that can perceive itself and say, “I”? Are our emotions and sense of responsibility merely the consequences of the mechanical laws of physics when combined in a setting of sufficient complexity? Or is there some mysterious extra ingredient in the brain’s workings, without which consciousness would be impossible?
Philosophers, psychologists, computer scientists, and neuroscientists are still arguing over these questions. No one can say for sure what consciousness really is, point to the exact neurons from which it issues, or explain how we might create it artificially. And yet neuroscientists have taken impressive steps in exploring the neural activity associated with consciousness, as well as some of the mechanisms by which the conscious brain seems to work.
The amazing power of the brain arises in part from its ability to respond to the external world with an immense repertoire of possible conscious states. Suppose you look through a window and see a person approaching your house. In a fraction of a second, your brain will flip through thousands of different conscious states, each with a slightly different awareness of the person’s ever-changing position. With the addition of emotions, expectations of the future, awareness of sounds, links to memories, and so on, the set of possible conscious states is clearly enormous. Yet your brain at every instant quickly settles into just one of these innumerable states, chosen in delicate correspondence with the external world and your own personal history and condition.
This extreme flexibility is what makes us so complex and gives us great adaptability in the face of a changing world. Equally impressive, however, is the depth of the brain’s conscious organization. To be conscious of someone approaching the house is to have a visual image, to appreciate the aspect of movement, to place the image in the context of a particular window through which it is seen, and to link the image to memories of people or situations possibly connected to this person. The brain binds these diverse aspects and countless others into a single, indivisible mental scene, which loses its meaning if it is broken down into its components. To put it another way, the brain acts as a remarkably well-coordinated unit to produce just one completely integrated conscious response at any instant.
What goes on with the neurons to make all this take place? On the one hand, the sheer number of neurons in the brain may account for the broad range of its possible states. It is less easy, however, to understand what neurons have to do to tie together the different components of a conscious scene. Neuroscientists are probably years away from understanding this in detail, and yet they have uncovered some significant clues. Researchers have learned, for example, that consciousness always involves the activation of neurons from many regions of the brain—it seems to depend on their coherent engagement into one overall pattern. And the mechanism of this engagement, at least in part, is neural synchrony.
In a striking experiment in 1999, for example, neuroscientist Wolf Singer and colleagues from the Max Planck Institute for Brain Research in Frankfurt, Germany, devised a way to present a cat with two distinct series of stripes moving at right angles to one another. The experimenters could adjust the brightness of the two patterns, and in this way control what the cat perceived. If one set of stripes were brighter than the other, the cat would see the two sets as independent features; if the brightness were equal, however, the cat instead would see the stripes melded together as if there were only a single checkerboard pattern moving in a third direction (halfway between the directions of movement of the two sets of stripes).
This clever setup enabled Singer’s team to study how neurons in the cat’s brain respond as the cat goes from seeing the stripes as distinct and unconnected to seeing them bound together into a conscious whole. Slowly altering the brightness, while monitoring the activity of more than a hundred neurons over a wide area in the cat’s visual cortex, they found that when the cat was seeing two distinct sets of stripes, two corresponding sets of neurons were firing. Notably, they were out of synch with one another. However, when the brightness was adjusted so that the cat perceived just one pattern, the two sets of neurons fell into close synchrony.6 The synchronous firing seemed to bind the two distinct features together into one conscious element.
This experiment represents the state of the art in brain research, as the team had the ability to record activity from over a hundred neurons at the same time. At the California Institute of Technology, experimental neuroscientist Gilles Laurent and his colleagues used a similar technique in studying the brains of locusts, and here too they discovered an important role for neural synchronization.7
In the locust, the olfactory antennal lobe is a group of about eight hundred neurons that takes information from the olfactory “smell” receptors and relays it toward higher regions of the brain. When a locust smells an interesting odor, these neurons respond very quickly by firing in a synchronized pattern at about twenty times a second. This is only part of the organizational response, however. Relative to the collective organized firing of the group, each neuron also maintains its own specific timing, just slightly ahead or behind the average. These findings imply that the neurons are storing information not only at the group level, by virtue of their synchrony, but also at the individual level, in their exact timing. So lots of information gets sent upward for further processing.
Again, as in the cat, synchrony seems to be central to the way the network of neurons accomplishes its function. And it seems sensible to wonder if the small-world architecture of the nervous system might not be crucial in allowing this synchrony to take place. Think again of the fireflies and crickets. As Watts and Strogatz discovered, the small-world pattern of links would offer a great benefit to a collection of fireflies trying to synchronize their firing. In neurons, it is indeed beginning to appear that the small-world trick is not only a good idea but one of the most basic prerequisites for the brain’s fundamental functions.
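Watts and Strogatz’s point about synchronization can be illustrated with a toy simulation. The sketch below uses Kuramoto-style oscillators, a standard simplified model of coupled rhythmic units (not the detailed neuron models used in the studies discussed here), with identical natural frequencies and random starting phases, coupled either on an ordered ring lattice or on a small-world rewiring of it; all parameters are assumptions chosen for illustration:

```python
import numpy as np
import networkx as nx

def final_coherence(G, steps=4000, dt=0.05, K=1.5, seed=0):
    """Run identical-frequency Kuramoto oscillators on graph G and
    return the final order parameter r (0 = incoherent, 1 = in sync)."""
    rng = np.random.default_rng(seed)
    n = G.number_of_nodes()
    A = nx.to_numpy_array(G)                # adjacency matrix
    deg = A.sum(axis=1)
    theta = rng.uniform(0, 2 * np.pi, n)    # random initial phases
    for _ in range(steps):
        # each oscillator is pulled toward the phases of its neighbors
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (K / deg) * (A * np.sin(diff)).sum(axis=1)
    return abs(np.exp(1j * theta).mean())

ring = nx.watts_strogatz_graph(60, 6, 0.0, seed=1)              # ordered ring
small = nx.connected_watts_strogatz_graph(60, 6, 0.2, seed=1)   # a few shortcuts

# Average over several random starting conditions.
r_ring = np.mean([final_coherence(ring, seed=s) for s in range(6)])
r_small = np.mean([final_coherence(small, seed=s) for s in range(6)])
print(f"ring lattice r = {r_ring:.2f}, small world r = {r_small:.2f}")
```

On the purely local ring, waves of phase tend to get trapped running around the loop, so the population often fails to pull together; the handful of long-range shortcuts in the small-world version lets distant oscillators feel one another directly, and global synchrony emerges far more readily.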
WOULD THE SMALL-WORLD trick really help a network of neurons to synchronize? Would it offer any other advantages or disadvantages as well? In 1999, Luis Lago-Fernández and colleagues from the Autonomous University of Madrid tried to find out by studying networks of neurons in much the way Watts and Strogatz had studied collections of fireflies. Specifically, they created a virtual model of the locust’s olfactory antennal lobe and put it through its paces to see how it would respond to a stimulus.
No one has ever studied the layout of neurons in the locust closely enough to know its real architecture. So Lago-Fernández and his colleagues tried out several possibilities. To begin with, they wired the neurons together as in a regular or ordered network. To make the simulations realistic, they used detailed models for the behavior of each of eight hundred neurons, models that were developed over half a century by the painstaking efforts of experimental neuroscientists. Using the computer, the team could apply a stimulus to a small fraction of neurons in the network and then monitor the network as the activity spread throughout.
Hitting this network with a triggering impulse—the analog of the locust detecting a significant smell—the team found that this regular architecture offered a decidedly inadequ