Quotes from “The Shallows” by Nicholas Carr

My life, like the lives of most Baby Boomers and Generation Xers, has unfolded like a two-act play. It opened with Analogue Youth and then, after a quick but thorough shuffling of the props, it entered Digital Adulthood.
Virtually all of our neural circuits—whether they’re involved in feeling, seeing, hearing, moving, thinking, learning, perceiving, or remembering—are subject to change.
The experiment, say the scholars, indicates that the more distracted we become, the less able we are to experience the subtlest, most distinctively human forms of empathy, compassion, and other emotions.
The results of the most recent such study were published in Psychological Science at the end of 2008. A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous, and mentally fatiguing, series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were then divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.
The researchers then conducted a similar experiment with another set of people. Rather than taking walks between the rounds of testing, these subjects simply looked at photographs of either calm rural scenes or busy urban ones. The results were the same. The people who looked at pictures of nature scenes were able to exert substantially stronger control over their attention, while those who looked at city scenes showed no improvement in their attentiveness. “In sum,” concluded the researchers, “simple and brief interactions with nature can produce marked increases in cognitive control.” Spending time in the natural world seems to be of “vital importance” to “effective cognitive functioning.”34
In his report on the research, van Nimwegen emphasized that he controlled for variations in the participants’ fundamental cognitive skills. It was the differences in the design of the software that explained the differences in performance and learning. The subjects using the bare-bones software consistently demonstrated “more focus, more direct and economical solutions, better strategies, and better imprinting of knowledge.” The more that people depended on explicit guidance from software programs, the less engaged they were in the task and the less they ended up learning. The findings indicate, van Nimwegen concluded, that as we “externalize” problem solving and other cognitive chores to our computers, we reduce our brain’s ability “to build stable knowledge structures”—schemas, in other words—that can later “be applied in new situations.”29 A polemicist might put it more pointedly: The brighter the software, the dimmer the user.
What gives real memory its richness and its character, not to mention its mystery and fragility, is its contingency. It exists in time, changing as the body changes. Indeed, the very act of recalling a memory appears to restart the entire process of consolidation, including the generation of proteins to form new synaptic terminals.29 Once we bring an explicit long-term memory back into working memory, it becomes a short-term memory again. When we reconsolidate it, it gains a new set of connections—a new context. As Joseph LeDoux explains, “The brain that does the remembering is not the brain that formed the initial memory. In order for the old memory to make sense in the current brain, the memory has to be updated.”30 Biological memory is in a perpetual state of renewal. The memory stored in a computer, by contrast, takes the form of distinct and static bits; you can move the bits from one storage drive to another as many times as you like, and they will always remain precisely as they were.
Peter Suderman, who writes for the American Scene, argues that, with our more or less permanent connections to the Internet, “it’s no longer terribly efficient to use our brains to store information.” Memory, he says, should now function like a simple index, pointing us to places on the Web where we can locate the information we need at the moment we need it: “Why memorize the content of a single book when you could be using your brain to hold a quick guide to an entire library? Rather than memorize information, we now store it digitally and just remember what we stored.” As the Web “teaches us to think like it does,” he says, we’ll end up keeping “rather little deep knowledge” in our own heads.11 Don Tapscott, the technology writer, puts it more bluntly. Now that we can look up anything “with a click on Google,” he says, “memorizing long passages or historical facts” is obsolete. Memorization is “a waste of time.”12
The debate over Google Book Search is illuminating for several reasons. It reveals how far we still have to go to adapt the spirit and letter of copyright law, particularly its fair-use provisions, to the digital age. (The fact that some of the publishing firms that were parties to the lawsuit against Google are also partners in Google Book Search testifies to the murkiness of the current situation.) It also tells us much about Google’s high-flown ideals and the high-handed methods it sometimes uses to pursue them. One observer, the lawyer and technology writer Richard Koman, argued that Google “has become a true believer in its own goodness, a belief which justifies its own set of rules regarding corporate ethics, anti-competition, customer service and its place in society.”38
But the young entrepreneurs knew that they would not be able to live off the largesse of venture capitalists forever. Late in 2000, they came up with a clever plan for running small, textual advertisements alongside their search results—a plan that would require only a modest compromise of their ideals. Rather than selling advertising space for a set price, they decided to auction the space off. It wasn’t an original idea—another search engine, GoTo, was already auctioning ads—but Google gave it a new spin. Whereas GoTo ranked its search ads according to the size of advertisers’ bids—the higher the bid, the more prominent the ad—Google in 2002 added a second criterion. An ad’s placement would be determined not only by the amount of the bid but by the frequency with which people actually clicked on the ad. That innovation ensured that Google’s ads would remain, as the company put it, “relevant” to the topics of searches. Junk ads would automatically be screened from the system. If searchers didn’t find an ad relevant, they wouldn’t click on it, and it would eventually disappear from Google’s site.
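As a rough illustration of the ranking rule described above (a bid weighted by how often searchers actually click), the following Python sketch scores each ad by bid times observed click-through rate. The ad data, field names, and scoring formula are illustrative assumptions, not Google's actual auction mechanics.

```python
# Illustrative sketch: rank ads by bid weighted by observed click-through rate,
# so a high bid that nobody clicks can fall below a lower bid that searchers
# find relevant. The data and scoring rule are assumptions for illustration only.

ads = [
    {"advertiser": "A", "bid": 2.00, "clicks": 5,  "impressions": 1000},
    {"advertiser": "B", "bid": 0.80, "clicks": 90, "impressions": 1000},
    {"advertiser": "C", "bid": 1.50, "clicks": 40, "impressions": 1000},
]

def score(ad):
    ctr = ad["clicks"] / ad["impressions"]  # how often searchers actually click the ad
    return ad["bid"] * ctr                  # placement depends on bid and relevance together

for ad in sorted(ads, key=score, reverse=True):
    print(ad["advertiser"], round(score(ad), 4))
```

In this toy data set, advertiser B outranks A despite bidding far less, because searchers click its ad far more often; that is the relevance filter the passage describes, under which junk ads gradually disappear.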
Thanks to that knack, Google was soon processing most of the millions—and then billions—of Internet searches being conducted every day. The company became fabulously successful, at least as measured by the traffic running through its site. But it faced the same problem that had doomed many dot-coms: it hadn’t been able to figure out how to turn a profit from all that traffic. No one would pay to search the Web, and Page and Brin were averse to injecting advertisements into their search results, fearing it would corrupt Google’s pristine mathematical objectivity. “We expect,” they had written in a scholarly paper early in 1998, “that advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”20
In September they incorporated as Google Inc. They chose the name—a play on googol, the word for the number ten raised to the hundredth power—to highlight their goal of organizing “a seemingly infinite amount of information on the web.”
Page had another insight, again drawing on the citations analogy: not all links are created equal. The authority of any Web page can be gauged by how many incoming links it attracts. A page with a lot of incoming links has more authority than a page with only one or two. The greater the authority of a Web page, the greater the worth of its own outgoing links. The same is true in academia: earning a citation from a paper that has itself been much cited is more valuable than receiving one from a less cited paper. Page’s analogy led him to realize that the relative value of any Web page could be estimated through a mathematical analysis of two factors: the number of incoming links the page attracted and the authority of the sites that were the sources of those links. If you could create a database of all the links on the Web, you would have the raw material to feed into a software algorithm that could evaluate and rank the value of all the pages on the Web. You would also have the makings of the world’s most powerful search engine.
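The two-factor analysis described here (how many links point to a page, and how authoritative the linking pages themselves are) can be sketched as a simple iterative calculation. The Python toy below only illustrates that idea and is not Google's actual algorithm; the damping factor, iteration count, and the tiny example graph are assumptions made for the sake of the example.

```python
# Minimal sketch of link analysis: a page's score depends on how many pages
# link to it and on the scores of those linking pages. This is a toy power
# iteration, not Google's real implementation; the damping factor (0.85) and
# iteration count are conventional assumptions.

def rank_pages(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}       # start with equal scores
    for _ in range(iterations):
        new_score = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue                               # dangling page: nothing to pass on in this toy
            share = damping * score[page] / len(outgoing)  # authority flows along outgoing links
            for target in outgoing:
                if target in new_score:
                    new_score[target] += share
        score = new_score
    return score

# Toy web: page "a" links to "b" and "c"; "b" and "c" both link back to "a".
toy_web = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
print(sorted(rank_pages(toy_web).items(), key=lambda kv: -kv[1]))
```

On the toy graph, page "a" comes out on top because both other pages link to it, and their own scores feed back into its score on each pass.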
Page had an idea that he thought might unlock some of its secrets. He had realized that the links on Web pages are analogous to the citations in academic papers. Both are signifiers of value. When a scholar, in writing an article, makes a reference to a paper published by another scholar, she is vouching for the importance of that other paper. The more citations a paper garners, the more prestige it gains in its field. In the same way, when a person with a Web page links to someone else’s page, she is saying that she thinks the other page is important. The value of any Web page, Page saw, could be gauged by the links coming into it.
In Google’s view, information is a kind of commodity, a utilitarian resource that can, and should, be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can distill their gist, the more productive we become as thinkers. Anything that stands in the way of the speedy collection, dissection, and transmission of data is a threat not only to Google’s business but to the new utopia of cognitive efficiency it aims to construct on the Internet.
We’re not smarter than our parents or our parents’ parents. We’re just smart in different ways. And that influences not only how we see the world but also how we raise and educate our children. This social revolution in how we think about thinking explains why we’ve become ever more adept at working out the problems in the more abstract and visual sections of IQ tests while making little or no progress in expanding our personal knowledge, bolstering our basic academic skills, or improving our ability to communicate complicated ideas clearly. We’re trained, from infancy, to put things into categories, to solve puzzles, to think in terms of symbols in space. Our use of personal computers and the Internet may well be reinforcing some of those mental skills and the corresponding neural circuits by strengthening our visual acuity, particularly our ability to speedily evaluate objects and other stimuli as they appear in the abstract realm of a computer screen. But, as Flynn stresses, that doesn’t mean we have “better brains.” It just means we have different brains.11
A 2007 report from the U.S. Department of Education showed that twelfth-graders’ scores on tests of three different kinds of reading—for performing a task, for gathering information, and for literary experience—fell between 1992 and 2005. Literary reading aptitude suffered the largest decline, dropping twelve percent.3
New intellectual technology.
ON THE EVENING of April 18, 1775, Samuel Johnson accompanied his friends James Boswell and Joshua Reynolds on a visit to Richard Owen Cambridge’s grand villa on the banks of the Thames outside London. They were shown into the library, where Cambridge was waiting to meet them, and after a brief greeting Johnson darted to the shelves and began silently reading the spines of the volumes arrayed there. “Dr. Johnson,” said Cambridge, “it seems odd that one should have such a desire to look at the backs of books.” Johnson, Boswell would later recall, “instantly started from his reverie, wheeled about, and replied, ‘Sir, the reason is very plain. Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.’”55
The Net grants us instant access to a library of information unprecedented in its size and scope, and it makes it easy for us to sort through that library—to find, if not exactly what we were looking for, at least something sufficient for our immediate purposes. What the Net diminishes is Johnson’s primary kind of knowledge: the ability to know, in depth, a subject for ourselves, to construct within our own minds the rich and idiosyncratic set of connections that give rise to a singular intelligence.
THE WEB COMBINES the technology of hypertext with the technology of multimedia to deliver what’s called “hypermedia.” It’s not just words that are served up and electronically linked, but also images, sounds, and moving pictures. Just as the pioneers of hypertext once believed that links would provide a richer learning experience for readers, many educators also assumed that multimedia, or “rich media,” as it’s sometimes called, would deepen comprehension and strengthen learning. The more inputs, the better. But this assumption, long accepted without much evidence, has also been contradicted by research. The division of attention demanded by multimedia further strains our cognitive abilities, diminishing our learning and weakening our understanding. When it comes to supplying the mind with the stuff of thought, more can be less.
In an article published in Science in early 2009, Patricia Greenfield, a prominent developmental psychologist who teaches at UCLA, reviewed more than fifty studies of the effects of different types of media on people’s intelligence and learning ability. She concluded that “every medium develops some cognitive skills at the expense of others.” Our growing use of the Net and other screen-based technologies has led to the “widespread and sophisticated development of visual-spatial skills.” We can, for example, rotate objects in our minds better than we used to be able to. But our “new strengths in visual-spatial intelligence” go hand in hand with a weakening of our capacities for the kind of “deep processing” that underpins “mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.”52 The Net is making us smarter, in other words, only if we define intelligence by the Net’s own standards. If we take a broader and more traditional view of intelligence—if we think about the depth of our thought rather than just its speed—we have to come to a different and considerably darker conclusion.
“Does optimizing for multitasking result in better functioning—that is, creativity, inventiveness, productiveness? The answer is, in more cases than not, no,” says Grafman. “The more you multitask, the less deliberative you become; the less able to think and reason out a problem.” You become, he argues, more likely to rely on conventional ideas and solutions rather than challenging them with original lines of thought.48 David Meyer, a University of Michigan neuroscientist and one of the leading experts on multitasking, makes a similar point. As we gain more experience in rapidly shifting our attention, we may “overcome some of the inefficiencies” inherent in multitasking, he says, “but except in rare circumstances, you can train until you’re blue in the face and you’d never be as good as if you just focused on one thing at a time.”49 What we’re doing when we multitask “is learning to be skillful at a superficial level.”50 The Roman philosopher Seneca may have put it best two thousand years ago: “To be everywhere is to be nowhere.”