The Story of Psychology

—The linguist Benjamin Whorf theorized, in work published in the 1930s and 1940s, that thought is molded by the syntax and vocabulary of one’s native language, and offered cross-cultural evidence to support his point. One of his examples was that the Hopi Indian language does not distinguish, at least not as we do, between past, present, and future (a rare exception to a nearly universal rule). Instead, a Hopi speaker indicates through inflections whether he or she is talking about an event that actually happened, one that is expected to happen, or about such events in general. Whorf and his followers accordingly maintained that the language we use shapes or influences what we see and think.
62

—On the other hand, anthropologists have found that in many other cultures people have fewer color terms than English-speaking people but experience the world no differently. The Dani of New Guinea have only two color terms: mili (dark) and mola (light), but tests of speakers of Dani and other languages that lack many explicit color names have shown that their memory for colors and their ability to judge differences between color samples are much the same as our own. At least when it comes to color, they can think without words.
63

—The studies of children’s thinking, carried out by Piaget and other developmental psychologists, show strong interactions between language and thought. Hierarchical categorization, for one thing, is a powerful cognitive mechanism that enables us to organize and make use of our knowledge; if we are told that an unfamiliar item in an ethnic grocery store is a fruit, says Philip Lieberman, we know at once that it is a plant, edible, and is probably sweet.
64
This inferential capacity is built into the structure of language and acquired in the normal course of development. Studies show that children begin verbal categorization at about eighteen months, and that one of the results is the “naming explosion,” a phenomenon every parent has observed. Thus, says Lieberman, “particular languages do not inherently constrain human thought, because both capacities [language and thought] appear to involve closely related brain mechanisms.”
65

The physical locations of some of those brain mechanisms were pinpointed through the study of aphasia, a speech disorder caused by an injury to or lesion in a specific part of the brain. A lesion in Wernicke’s area, as we saw earlier, results in speech that is relatively fluent and syntactical but often nonsensical; victims either mangle or cannot find the nouns, verbs, and adjectives they want. Howard Gardner, a Harvard cognitive psychologist who has explored aphasia, has given this example, taken from a conversation he had with a patient:

“What kind of work have you done, Mr. Johnson?” I asked.

“We, the kids, all of us, and I, we were working for a long time in the… you know…it’s the kind of space, I mean place rear to the spedwan…”

At this point I interjected, “Excuse me, but I wanted to know what work you have been doing.”

“If you had said that, we had said that, poomer, near the fortunate, forpunate, tamppoo, all around the fourth of martz. Oh, I get all confused,” he replied, looking somewhat puzzled that the stream of language did not appear to satisfy me.
66

In contrast, a person with damage to Broca’s area, though able to understand language, has great difficulty producing any; the speech is fragmented, lacking in grammatical structure, and deficient in modifiers of nouns and verbs.

This much is known at the macro level. Nothing, however, is known about how the neuronal networks within Wernicke’s and Broca’s areas carry out language functions in normal persons; those areas are still “black boxes” to psychologists—mechanisms whose input and output are known but whose internal machinery is a mystery.

But neuroscientists have found a few clues. Analyses of brain function in speech-impaired persons by means of electrode probes during surgery, PET and fMRI scanning, and other methods have shown that linguistic knowledge is located not only in Wernicke’s and Broca’s areas but in many parts of the brain and is assembled when needed. Dr. Antonio Damasio of the University of Iowa College of Medicine is one of many researchers who have concluded that information about any object is widely distributed. If the object is, say, a polystyrene cup (Damasio’s example), its shape will be stored in one place, crushability in another, texture in another, and so on. These connect, by neural networks, to a “convergence zone” and thence to a verbal area where the noun “cup” is stored.
67
This is strikingly similar to the abstract portraits of the semantic memory network we saw earlier in this chapter.

In the past several years, PET and fMRI scans of normal people have identified areas in the brain that are active when specific linguistic processes are going on. But despite a wealth of such information, the data do not tell us how the firing of myriad neurons in those locations becomes a word, a thought, a sentence, or a concept in the mind of the individual. The data provide a more detailed model than was formerly available of where language processes take place in the brain, but cognitive neuroscience has not yet yielded a theory as to how the neural events become language. As Michael Gazzaniga and his co-authors say in
Cognitive Neuroscience
, “The human language system is complex, and much remains to be learned about how the biology of the brain enables the rich speech and language comprehension that characterize our daily lives.”
68

“Much remains”? A memorable understatement.

Reasoning

Some years ago I asked Gordon Bower, a prominent memory researcher, a question about thinking and was taken aback by his testy reply: “I don’t work on ‘thinking’ at all. I don’t know what ‘thinking’ is.” How could the head of Stanford University’s psychology department not work on thinking at all—and not even know what it is? Then, rather grudgingly, Bower added, “I presume it’s the study of reasoning.”

Thinking was traditionally a central theme in psychology, but by the 1970s the proliferation of knowledge in cognitive psychology had made the term unhandy, since it included processes as disparate as momentary short-term memory and protracted problem solving. Psychologists preferred to speak of thought processes in more specific terms: “chunking,” “reasoning,” “retrieval,” “categorization,” “formal operations,” “problem solving,” and scores of others. “Thinking” came to have a narrower and more precise meaning than before: the manipulation of knowledge to achieve a goal. To avoid any misunderstanding, however, many psychologists preferred, like Bower, to use the term “reasoning.”

Although human beings have always viewed reasoning ability as the essence of their humanity, research on it was long a psychological backwater.
69
From the 1930s to the 1950s little work was done on reasoning except for the problem-solving experiments of Karl Duncker and other Gestaltists and the studies by Piaget and his followers of the kinds of thought processes characteristic of children at different stages of intellectual development.

But with the advent of the cognitive revolution, research on reasoning became an active field. The IP (information processing) model enabled psychologists to formulate hypotheses that portrayed, in flow-chart fashion, what went on in various kinds of reasoning, and the computer was a piece of apparatus—the first ever—with which such hypotheses could be tested.

IP theory and the computer were synergistic. A hypothesis about any form of reasoning could be described, in IP terms, as a sequence of specific steps of information processing; the computer could then be programmed to perform an analogous sequence of steps. If the hypothesis was correct, the machine would reach the same conclusion as the reasoning human mind. By the same token, if a reasoning program written for the computer produced the same answer as a human being to a given problem, one could suppose that the program was operating in the same way as, or at least in a similar fashion to, that of the human mind.

How does a computer do such reasoning? Its program contains a routine, or set of instructions, plus a series of subroutines, each of which is used or not used, depending on the results of the previous operations and the information in the program’s memory. A common form of routine is a series of if-then steps: “If the input meets condition 1, then take action 1; if not, take action 2. Compare the result with condition 2 and if the result is [larger, smaller, or whatever], take action 3. Otherwise take action 4… Store resulting conditions 2, 3… and, depending on further results, use these stored items in such-and-such ways.”
70
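A routine of this kind is compact enough to sketch in a few lines of code. The conditions and actions below are hypothetical placeholders chosen purely for illustration, not taken from any actual program of the period:

```python
# A minimal sketch of the if-then routine described above. The
# specific conditions (x > 0, result > 10) and actions are
# hypothetical placeholders.

def run_routine(x):
    memory = []                       # stored results for later steps
    # If the input meets condition 1, take action 1; if not, action 2.
    if x > 0:
        result = x * 2                # action 1
    else:
        result = -x                   # action 2
    memory.append(result)             # store the resulting condition
    # Compare the result with condition 2 and branch again.
    if result > 10:
        result = result - 10          # action 3
    else:
        result = result + 1           # action 4
    memory.append(result)             # stored items can guide later steps
    return result, memory

print(run_routine(7))                 # 7*2 = 14, then 14 - 10 = 4
```

Each branch point consults either the current input or an earlier stored result, which is the essential character of the routines the text describes.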

But when computers carry out such programs, whether in mathematical computing or problem solving, are they actually reasoning? Are they not acting as automata that unthinkingly execute prescribed actions? The question is one for the philosopher. If a computer can, like a knowledgeable human being, prove a theorem, navigate a spacecraft, or determine whether a poem was written by Shakespeare, who is to say that it is a mindless automaton—or that a human being is not one?

In 1950, when only a few primitive computers existed but the theory of computation was being much discussed by mathematicians, information theorists, and others, Alan Turing, a gifted English mathematician, proposed a test, more philosophic than scientific, to determine whether a computer could or could not think. In the test, a computer programmed to solve a certain kind of problem is stationed in one room, a person skilled in that kind of problem is in another room, and in a third room is a judge in telegraphic communication with each. If the judge cannot tell from the dialogue which is the computer and which the person, the computer will pass the test: it thinks.
71
No computer program has yet won hands down, in publicly conducted contests, although some have fooled some of the judges. The validity of the Turing test has been debated, but at the very least it must mean that if a computer seems to think, what it does is as good as thinking.

By the 1960s, most cognitive psychologists, whether or not they agreed that computers really think, regarded computation theory as a conceptual breakthrough; it enabled them for the first time to describe any aspect of cognition, and of reasoning in particular, in detailed and precise IP terms. Moreover, having hypothesized the steps of any such program, they could translate them from words into computer language and try the result on a computer. If it ran successfully, it meant that the mind did indeed reason by means of something like that program. No wonder Herbert Simon said the computer was as important for psychology as the microscope had been for biology; no wonder other enthusiasts said the human mind and the computer were two species of the genus “information-processing system.”
72

The ability to solve problems is one of the most important applications of human reasoning. Most animals solve such problems as finding food, escaping enemies, and making a nest or lair largely by means of innate or partly innate patterns of behavior; human beings solve or attempt to solve most of their problems by means of either learned or original reasoning.

In the mid-1950s, when Simon and Newell undertook to create Logic Theorist, the first program that simulated thinking, they posed a problem to themselves: How do human beings solve problems? Logic Theorist took them a year and a half, but the question occupied them for more than fifteen years. The resulting theory, published in 1972, has been the foundation of work in that field ever since.

Their chief method of working on it, according to Simon’s autobiography, was two-man brainstorming. This involved deductive and inductive reasoning, analogical and metaphoric thinking, and flights of fancy—in short, any kind of reasoning, orderly or disorderly:

From 1955 to the early 1960s, when we met almost daily… [we] worked mostly by conversations together, with the explicit rule that one could talk nonsensically and vaguely, but without criticism unless you intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again.
73

They also did a good deal of laboratory work. Singly and together they recorded and analyzed the steps by which they and others solved puzzles and then wrote out the steps as programs. A favorite puzzle, of which they made extensive use for some years, is a child’s toy known as the Tower of Hanoi. In its simplest form, it consists of three disks of different sizes (with holes in their centers) piled on one of three vertical rods mounted on flat bases. At the outset, the largest disk is on the bottom, the middle-sized one in the middle, the smallest one on top. The problem is to move them one at a time in the fewest possible moves, never putting any disk on top of one smaller than itself, until they are piled in the same order on another rod.

The perfect solution takes seven steps, although with errors leading to dead ends and backtracking to correct them, it can take several times that many. In more advanced versions, the solution requires complex strategies and many moves. A perfect five-disk game takes thirty-one moves, a perfect seven-disk game 127 moves, and so on.
Simon has said, quite seriously, that “the Tower of Hanoi was to cognitive science what fruit flies were to modern genetics—an invaluable standard research setting.”
74
(Sometimes, however, he ascribes this honor to chess.)
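The strategy behind the puzzle is naturally recursive: to move a pile of n disks, move the top n−1 out of the way, move the largest disk, then restack the n−1 on top of it. A short program can enumerate the optimal move sequence and confirm the move counts quoted above (2ⁿ − 1 moves for n disks):

```python
# Recursive solver for the Tower of Hanoi. To move n disks from
# source to target: move n-1 disks to the spare rod, move the
# largest disk, then move the n-1 disks back on top of it.

def hanoi(n, source, spare, target, moves=None):
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, target, spare, moves)   # clear the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, source, target, moves)   # restack on top of it
    return moves

for n in (3, 5, 7):
    print(n, "disks:", len(hanoi(n, "A", "B", "C")), "moves")
# 3 disks take 7 moves, 5 take 31, 7 take 127 -- i.e., 2**n - 1
```

The human solvers Simon and Newell studied, of course, rarely found this optimal sequence on the first try; it was precisely their detours and backtracking that the protocols captured.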

Another laboratory tool used by the team was cryptarithmetic, a type of puzzle in which a simple addition problem is presented in letters instead of numbers. The goal is to figure out what digits the letters stand for. This is one of Simon and Newell’s simpler examples:
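(Their particular example is not reproduced here.) To illustrate the genre, the sketch below brute-forces the best-known puzzle of this kind, SEND + MORE = MONEY, which is a classic of the literature rather than necessarily the example Simon and Newell used; each letter must stand for a distinct digit, with no leading zeros:

```python
# Brute-force cryptarithmetic solver: try every assignment of
# distinct digits to letters until the addition comes out right.

from itertools import permutations

def solve(a, b, total):
    letters = sorted(set(a + b + total))
    if len(letters) > 10:
        return None                      # more letters than digits
    for digits in permutations(range(10), len(letters)):
        table = dict(zip(letters, digits))
        if 0 in (table[a[0]], table[b[0]], table[total[0]]):
            continue                     # no leading zeros allowed
        num = lambda w: int("".join(str(table[c]) for c in w))
        if num(a) + num(b) == num(total):
            return table                 # first (unique) solution
    return None

answer = solve("SEND", "MORE", "MONEY")
print(answer)   # the unique solution gives 9567 + 1085 = 10652
```

A human solver, as Simon and Newell’s protocols showed, works nothing like this exhaustive search; people exploit constraints (the carry into the leftmost column forces M = 1, for instance) to prune the possibilities almost immediately.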
