Through the Language Glass: Why the World Looks Different in Other Languages
Author: Guy Deutscher
It is a fact that many students don’t realize that their linguistics textbooks don’t know that some languages do not have finite complements.
In a language without finite complements, this state of affairs would instead have to be expressed by other means. In the early stages of Akkadian, for example, one would do it along these lines:
Some languages do not have finite complements. Some linguistics textbooks don’t know that. Many students don’t realize their textbooks’ ignorance. This is a fact.
While systematic statistical surveys on subordination have not yet been conducted, impressionistically it seems that languages that have restricted use of complements (or even lack them altogether) are mostly spoken in simple societies. What is more, ancient languages such as Akkadian and Hittite show that this type of “syntactic technology” developed at a period when the societies in question were growing in complexity. Is this just coincidence?
I have argued elsewhere that it is not. Finite complements are a more effective tool for conveying elaborate propositions, especially when less information can be left to the context and more explicitness and accuracy are required. Recall the sequence of events described in the Akkadian legal document quoted earlier. Of course, it is possible to convey the set of propositions described there just as the Akkadian text organizes it, with a simple juxtaposition of clauses: X told Y to do something; Y did something different; X didn’t know that; X proved it in front of the inspectors. But when the dependence between the clauses is not explicitly marked, some ambiguity remains. What exactly did X prove? Did he prove that Y did something different from what he was told? Or did X prove that he didn’t know that Y did something different? The juxtaposition does not make that clear, but the hierarchical structure of finite complements can easily do so.
The language of legal proceedings, with its zealous insistence on accurate, explicit, and context-independent statements, is an extreme example of the type of elaborate communicative patterns that are more likely to arise in a complex society. But it is not the only example. As I mentioned earlier, in a large society of strangers there will be many more occasions where elaborate information has to be conveyed without reliance on shared background and knowledge. Finite complements are better equipped to convey such information than alternative constructions, so it is plausible that finite complements are more likely to emerge under the communicative pressures of a more complex society. Of course, as no statistical surveys about subordination have been conducted yet, speculations about correlations between subordination and the complexity of a society necessarily have to remain on an impressionistic level. But there are signs that things might be changing.
For decades, linguists have elevated the hollow slogan that “all languages are equally complex” to a fundamental tenet of their discipline, zealously suppressing as heresy any suggestion that the complexity of any areas of grammar could reflect aspects of society. As a consequence, relatively little work has been done on the subject. But a flurry of publications from the last couple of years shows that more linguists are now daring to explore such connections.
The results of this research have already revealed some significant statistical correlations. Some of these, such as the tendency of smaller societies to have more complex word structure, may seem surprising at first sight, but look plausible on closer examination. Other connections, such as the greater reliance on subordination in complex societies, still require detailed statistical surveys, but nevertheless seem intuitively convincing. And finally, the relation between the complexity of the sound system and the structure of society awaits a satisfactory explanation. But now that the taboo is lifting and more research is being done, there are undoubtedly more insights in store. So watch this space.
We have come a long way from the Aristotelian view of how nature and culture are reflected in language. Our starting point was that only the labels (or, as Aristotle called them, the “sounds of speech”) are cultural conventions, while everything behind those labels is a reflection of nature. By now culture has emerged as a considerable force whose influence extends far beyond merely bestowing labels on a preordained list of concepts and a preordained system of grammatical rules.
In the second part of the book, we move on to a question that may seem a fairly innocuous corollary to the conclusions of the first part: does our mother tongue influence the way we think? Since the conventions of the culture we were born into affect the way we carve up the world into concepts and the way we organize these concepts into elaborate ideas, it seems only natural to ask whether our culture can affect our thoughts through the linguistic idiosyncrasies it has imposed on us. But while raising the question appears harmless enough in theory, among serious researchers the subject has become a pariah. The following chapter explains why.
*
There has been a lot of brouhaha in the last few years about Pirahã, a language from the Brazilian Amazon, and its alleged lack of subordination. But a few Pirahã subordinate clauses have recently managed to escape from the jungle and telegraph reliable linguists to say that reports of their death have been greatly exaggerated. (See notes for more information.)
In 1924, Edward Sapir, the leading light of American linguistics, was entertaining no illusions about the attitude of outsiders toward his field: “The normal man of intelligence has something of a contempt for linguistic studies, convinced as he is that nothing can well be more useless. Such minor usefulness as he concedes to them is of a purely instrumental nature. French is worth studying because there are French books which are worth reading. Greek is worth studying—if it is—because a few plays and a few passages of verse, written in that curious and extinct vernacular, have still the power to disturb our hearts—if indeed they have. For the rest, there are excellent translations. . . . But when Achilles has bewailed the death of his beloved Patroclus and Clytaemnestra has done her worst, what are we to do with the Greek aorists that are left on our hands? There is a traditional mode of procedure which arranges them into patterns. It is called grammar. The man who is in charge of grammar and is called a grammarian is regarded by all plain men as a frigid and dehumanized pedant.”
In Sapir’s own eyes, however, nothing could be further from the truth. What he and his colleagues were doing did not remotely resemble the pedantic sifting of subjunctives from aorists, moldy ablatives from rusty instrumentals. Linguists were making dramatic, even worldview-changing discoveries. A vast unexplored terrain was being opened up, the languages of the American Indians, and what was revealed there had the power to turn on its head millennia of wisdom about the natural ways of organizing thoughts and ideas. For the Indians expressed themselves in unimaginably strange ways and thus demonstrated that many aspects of familiar languages, which had previously been assumed to be simply natural and universal, were in fact merely accidental traits of European tongues. The close study of Navajo, Nootka, Paiute, and a panorama of other native languages catapulted Sapir and his colleagues to vertiginous heights, from where they could now gaze down on the languages of the Old World like people who see their home patch from the air for the first time and suddenly recognize it as just one little spot in a vast and varied landscape. The experience was exhilarating. Sapir described it as the liberation from “what fetters the mind and benumbs the spirit . . . the dogged acceptance of absolutes.” And his student at Yale, Benjamin Lee Whorf, enthused: “We shall no longer be able to see a few recent dialects of the Indo-European family . . . as the apex of the evolution of the human mind. They, and our own thought processes with them, can no longer be envisioned as spanning the gamut of reason and knowledge but only as one constellation in a galactic expanse.”
It was difficult not to get carried away by the view. Sapir and Whorf became convinced that the profound differences between languages must have consequences that go far beyond mere grammatical organization and must be related to profound divergence in modes of thought. And so in this heady atmosphere of discovery, a daring idea about the power of language shot to prominence: the claim that our mother tongue determines the way we think and perceive the world. The idea itself was not new—it had been lying around in a raw state for more than a century—but it was distilled in the 1930s into a powerful concoction that then intoxicated a whole generation. Sapir branded this idea the principle of “linguistic relativity,” equating it with nothing less than Einstein’s world-shaking theory. The observer’s perceptions of the world—so ran Sapir’s emendation of Einstein—depend not only on his inertial frame of reference but also on his mother tongue.
The following pages tell the story of linguistic relativity—a history of an idea in disgrace. For as loftily as it had once soared, so precipitously did the theory then crash, when it transpired that Sapir and especially his student Whorf had attributed far-fetched cognitive consequences to what were in fact mere differences in grammatical organization. Today, any mention of linguistic relativity will make most linguists shift uneasily in their chairs, and “Whorfianism” has largely become an intellectual tax haven for mystical philosophers, fantasists, and postmodern charlatans.
Why then should one bother telling the story of a disgraced idea? The reason is not (just) to be smug with hindsight and show how even very clever people can sometimes be silly. Although there is undeniable pleasure in such an exercise, the real reason for exposing the sins of the past is this: although Whorf’s wild claims were largely bogus, I will try to convince you later that the notion that language can influence thoughts should not be dismissed out of hand. But if I am to make a plausible case that some aspects of the underlying idea are worth salvaging and that language may after all function as a lens through which we perceive the world, then this salvaging mission must steer clear of previous errors. It is only by understanding where linguistic relativity went astray that we can set out in a different direction.
The idea of linguistic relativity did not emerge in the twentieth century entirely out of the blue. In fact, what happened at Yale—the overreaction of those dazzled by a breathtaking linguistic landscape—was a close rerun of an episode from the early 1800s, during the high noon of German Romanticism.
The prevailing prejudice toward the study of non-European languages that Edward Sapir gently mocked in 1924 was nothing to poke fun at a century earlier. It was simply accepted wisdom—not just for the “ordinary man of intelligence” but also among philologists themselves—that the only languages worthy of serious study were Latin and Greek. The Semitic languages Hebrew and Aramaic were occasionally thrown into the bargain because of their theological significance, and Sanskrit was grudgingly gaining acceptance into the club of classical worthies, but only because it was so similar to Greek and Latin. But even the modern languages of Europe were still widely viewed as merely degenerate forms of the classical languages. Needless to say, the languages of illiterate tribes, without great works of literature or any other redeeming features, were seen as devoid of any interest, primitive jargons just as worthless as the primitive peoples who spoke them.
It was not that scholars at the time were unconcerned about the question of what is common to all languages. In fact, from the seventeenth century onward, the writing of learned treatises on “universal grammar” was very much in vogue. But the universe of these universal grammars was rather limited. Around 1720, for instance, John Henley published in London a series of grammars called The Compleat Linguist; or, An Universal Grammar of All the Considerable Tongues in Being. All the considerable tongues in being amounted to nine: Latin, Greek, Italian, Spanish, French, Hebrew, Chaldee (Aramaic), Syriac (a later dialect of Aramaic), and Arabic. This exclusive universe offered a somewhat distorted perspective, for—as we know today—the variations among European languages pale in significance compared with the otherness of more exotic tongues. Just imagine what misleading ideas one would get on “universal religion” or on “universal food” if one limited one’s universe to the stretch between the Mediterranean and the North Sea. One would travel in the different European countries and be impressed by the great divergences between them: the architecture of the churches is entirely different, the bread and cheese do not taste at all the same. But if one never ventured to places farther afield, where there were no churches, cheese, or bread, one would never realize that these intra-European differences are ultimately minor variations in essentially the same religion and the same culinary culture.