In fact, there are other specific restrictions, which are much more modern. So take what are called “formal languages,” say . . . arithmetic, or programming systems, or whatever. They're kind of like natural language, but they're so recent and so self-conscious that we know that they're not really much like the biological object, human language.
Notice how they're not. Take Merge [the basic computational principle of all natural languages]. Just as a matter of pure logic, if you take two things, call them X and Y, and you make the set of X and Y ({X, Y}), there are two possibilities. One is that X is distinct from Y; the other is that they're not distinct. If everything is constructed by Merge, the only way for X to be not distinct from Y is for one to be inside the other. So let's say that X is inside Y. Well, if X is inside Y and you merge them, you get the set {X, Y} where Y already contains X: if Y = [. . . X . . .], then Internal Merge(X, Y) = {X, Y} = {X, [. . . X . . .]}. That's a transformation. So in fact, the two kinds of Merge that are possible are taking two things and putting them together, or taking one thing, taking a piece of it, and sticking it at the edge. That's the displacement [or movement] property of natural language, which is found all over the place. I had always thought [until recently] that displacement was a kind of strange imperfection of language, compared with Merge or concatenate; but that is just a mistake. As internal Merge, it just comes automatically, unless you block it. That's why language uses that device for all sorts of things; it comes ‘for free.’ Assuming so, then you can ask the question, “How are these two kinds of Merge employed?” And here you look at the semantic interface; that's the natural one. There are huge differences. External Merge is used, basically, to give you argument structure. Internal Merge is basically used to give you discourse-related information, like focus, topic, new information, all that kind of stuff that relates to the discourse situation.[C] Well, that's not perfect, but it's close enough so that it's probably true; and if we could figure it out, or understand it well enough, we would find that it is perfect.
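To make the set-theoretic point concrete, here is a minimal sketch in Python – a toy encoding of my own, not anything from the text – in which syntactic objects are atoms or nested frozensets, so that External and Internal Merge are literally one operation applied in two circumstances:

```python
def merge(x, y):
    """Merge is just binary set formation: Merge(X, Y) = {X, Y}."""
    return frozenset([x, y])

def contains(y, x):
    """True if X occurs somewhere inside Y, i.e. Y = [. . . X . . .]."""
    if y == x:
        return True
    return isinstance(y, frozenset) and any(contains(part, x) for part in y)

the, book = "the", "book"

# External Merge: X and Y are distinct objects.
dp = merge(the, book)        # {the, book}

# Internal Merge: X is already inside Y, so the very same operation
# yields {X, [. . . X . . .]} -- displacement, with no extra machinery.
assert contains(dp, book)
moved = merge(book, dp)      # {book, {the, book}}
print(moved)
```

Nothing in the sketch distinguishes the two cases except whether X was already inside Y, which is the sense in which displacement comes ‘for free.’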
Suppose [now] that you're inventing a formal language. It has no discourse-related properties. So you just use external Merge. You put a constraint on systems – in effect, not to use internal Merge. And then you get, effectively, just argument structure. Now, it's interesting that if these systems give us scopal properties, they do it in particular ways, which happen to be rather similar to natural language. So if you're teaching, say, quantificational logic to undergraduates, the easiest way to do it is to use standard quantification theory – you put the variables on the outside and use parentheses, and so on and so forth. Well, we know perfectly well that there are other ways of doing it – logic without variables, as has been known since Curry (1930; Curry & Feys 1958).
And it has all the right properties. But it's extremely hard to teach. You can learn it, after you've learned it in the ordinary notation. I don't think anyone's tried – and I think it would be extremely hard – to do it the other way, to teach the Curry system and then end up showing that you could also do it in this other way. But why? They're logically equivalent, after all. I suspect that the reason is that the standard way has many of the properties of natural language. In natural language, you do use edge properties for scope; and you do it through internal Merge. Formal languages don't have internal Merge; but they have got to have something that is going to be interpreted as scope. So you use the same device you do in natural language: you put it on the outside with the restricted variables, and so on.
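As a rough illustration of the contrast – my own toy example, not from the text – the same quantified statement can be written in the familiar style, with an explicit bound variable and the quantifier marking scope on the outside, or in a Curry-style variable-free form where the predicate is assembled purely by composition:

```python
# Standard quantificational style: the bound variable x is explicit,
# and scope is marked by the quantifier sitting "on the outside."
def forall(domain, p):
    return all(p(x) for x in domain)

# Variable-free (combinator) style: B f g = f composed with g, so
# predicates are assembled with no variable ever written down.
def B(f, g):
    return lambda x: f(g(x))

is_even = lambda n: n % 2 == 0
double = lambda n: 2 * n

# "For every n in the domain, double(n) is even" -- the composed
# predicate below names no variable at all.
print(forall(range(10), B(is_even, double)))   # True
```

The two renderings are logically equivalent; the suggestion in the passage is that the first is easier to teach because, like natural language, it marks scope at the edge.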
These are things that just flow from having a system with Merge inside you; and probably the same is true of music, and lots of other things. We got this capacity that came along and gives us extraordinary options for planning, interpretation and thought, and so on and so forth. And it just starts feeding into everything else. You get this massive cultural revolution, which is quite striking, probably about sixty or seventy thousand years ago. Everywhere where humans are, it's essentially the same. Now, maybe in Australia they don't have arithmetic; Warlpiri, for example, does not. But they have intricate kinship systems which, as Ken Hale pointed out, have a lot of the properties of mathematical systems. Merge just seems to be in the mind, working on interesting formal problems: you don't have arithmetic, so you have complicated kinship systems.
JM:
That suggests that at least the possibility of constructing natural sciences – that that came too with Merge.
NC:
It did, it starts right away. Right at this period you start finding it – and here we have fossil evidence and archaeological evidence of the recording of natural events, such as the lunar cycles, and things like that. People begin to notice what is going on in the world and to try to interpret it. And then it enters into ceremonies, and the like. It went on that way for a long time.
What we call science [that is, natural science with explicit, formal theories and the assumption that what they describe should be taken seriously, or thought of as ‘real’] is extremely recent, and very narrow.
Galileo had a hell of a time trying to convince his funders – the aristocrats – that there was any point in studying something like a ball rolling down a frictionless inclined plane. “Who cares about that? There is all sorts of interesting stuff going on in the world. What do you have to say about flowers growing? That would be interesting; tell me about that.” Galileo the scientist had nothing to say about flowers growing. Instead, he had to try to convince his funders that there was some point in studying an experiment that he couldn't even carry out – half of the experiments that Galileo described were thought experiments; he describes them as if he had carried them out, but it was later shown that he couldn't . . . The idea of not looking at the world as too complicated, of trying to narrow it down to some artificial piece of the world that you could actually investigate in depth and maybe even learn some principles about it that would help you understand other things [what we might think of as pure science, science that aims at basic structures, without regard to applications] – that's a huge step in the sciences and, in fact, it was only very recently taken. Galileo convinced some people that there were these laws that you just had to memorize. But in his time they were still used as calculating devices; they provided ways of building things, and the like. It really wasn't until the twentieth century that theoretical physics became recognized as a legitimate domain in itself. For example, Boltzmann tried all his life to convince people to take atoms and molecules seriously, not just think of them as calculating devices; and he didn't succeed. Even great scientists, such as, say, Poincaré – one of the twentieth century's greatest scientists – just laughed at it. [Those who laughed] were very much under Machian [Ernst Mach's] influence: if you can't see it, touch it . . . [you can't take it seriously]; so you just have a way of calculating. Boltzmann actually committed suicide – in part, apparently, because of his inability to get anyone to take him seriously. By a horrible irony, he did it in 1906, the year after Einstein's Brownian motion paper came out, and everyone began to take it seriously. And it goes on.
I've been interested in the history of chemistry. Into the 1920s, when I was born – so it isn't that far back – leading scientists would have just ridiculed the idea of taking any of this seriously, including Nobel prizewinning chemists. They thought of [atoms and other such ‘devices’] as ways of calculating the results of experiments. Atoms couldn't be taken seriously, because they didn't have a physical explanation – which, at the time, they didn't. Well, it turned out that the physics of the time was seriously inadequate; physics had to be radically revised before it could be unified and merged with an essentially unchanged chemistry.
But even well after that, even beyond Pauling, chemistry is still for many mostly a descriptive subject. Take a look at a graduate text in theoretical chemistry. It doesn't really try to present it as a unified subject; you get different kinds of theoretical models for different kinds of situations. If you look at the articles in the technical journals, such as, say, Science or Nature, most of them are pretty descriptive; they pick around the edges of a topic, or something like that. And if you get outside the hard-core natural sciences, the idea that you should actually construct artificial situations in an effort to understand the world – well, that is considered either exotic or crazy. Take linguistics. If you want to get a grant, what you say is “I want to do corpus linguistics” – collect a huge mass of data and throw a computer at it, and maybe something will happen. That was given up in the hard sciences centuries ago. Galileo had no doubt about the need for focus and idealization when constructing a theory.[C]
Further, [in] talking about the capacity to do science [in our very recently practiced form, you have to keep in mind that] it's not just very recent, it's very limited. Physicists, for example, don't go commit suicide over the fact that they can't find maybe 90 percent of what they think the universe is composed of [dark matter and dark energy]. In . . . [a recent] issue of Science, they report the failure of the most sophisticated technology yet developed, which they hoped would find [some of] the particles they think constitute dark matter. That's, say, 90 percent of the universe that they failed to find; so we're still in the dark about 90 percent of the matter in the universe. Well, that's regarded as a scientific problem in physics, not as the end of the field. In linguistics, if you're studying Warlpiri or something and you can't understand 50 percent of the data, it's taken to mean that you don't know what you're talking about.
How can you understand a very complex object? If you can understand some piece of it, it's amazing. And it's the same pretty much across the board. The one animal communication system that seems to have the kind of complexity or intricacy where you might think you could learn something about it from [what we know about] natural languages is that of bees. They have an extremely intricate communication system and, as you obviously know, there is no evolutionary connection to human beings. But it's interesting to look at bee signs. It's very confusing. It turns out there are hundreds of species of bees – honey bees, stingless bees, etc. The communication systems are scattered among them – some of them have them, some don't; some have different amounts; some use displays, some use flapping . . . But all the species seem to make out about as well. So it's kind of hard to see what the selectional advantage [of the bee communication system] is. And there's almost nothing known about its fundamental nature. The evolution of it is complicated; it's barely studied – there are [only] a few papers. Even the basic neurophysiology of it is extremely obscure. I was reading some of the most recent reviews of bee science. There are very good descriptive studies – all sorts of crazy things are reported. But you can't really work out the basic neurophysiology, and the evolution is almost beyond investigation, even though it's a perfect subject – hundreds of species, short gestation period, you can do any experiment you like, and so on and so forth. On the other hand, if you compare the literature on the evolution of bee communication to the literature on the evolution of human language, it's ridiculous. On the evolution of human language there's a library; on the evolution of bee communication, there are a few scattered textbooks and technical papers. And it's a far easier topic. The evolution of human language has got to be one of the hardest topics to study. Yet somehow we feel that we have got to understand it, or we can't go further. It's a highly irrational approach to inquiry.[C]
2 On a formal theory of language and its accommodation to biology; the distinctive nature of human concepts

JM:
Let me pursue some of these points you have been making by asking you a different question. You, in your work in the 1950s, effectively made the study of language into a mathematical, formal science – not mathematical, of course, in the way Markov systems are mathematical, but clearly a formal science that has made very considerable progress. Some of the marks of that progress have been – for the last few years, for example – the successive elimination of all sorts of artifacts of earlier theories, such as deep structure, surface structure, and the like. Further, recent theories have shown a remarkable ability to solve problems of both descriptive and explanatory adequacy. There is a considerable increase in degree of simplification. And there also seems to be some progress toward biology – not necessarily biology as typically understood by philosophers and by many others, as a selectional evolutionary story about the gradual introduction of a complex structure, but biology as understood by people like Stuart Kauffman (1993) and D'Arcy Thompson (1917/1942/1992). I wonder if you would comment on the extent to which that kind of mathematical approach has progressed.[C]
NC:
Ever since this business began in the early fifties – two or three students, Eric Lenneberg, me, Morris Halle, apparently nobody else – the topic we were interested in was, how could you work this into biology? The idea was so exotic, no one else talked about it. Part of the reason was that ethology was just . . .
 
JM:
Excuse me; was that [putting the theory of language into biology] a motivation from the beginning?
NC:
Absolutely: we were starting to read ethology, Lorenz, Tinbergen, comparative psychology; that stuff was just becoming known in the United States. The US tradition was strictly descriptive behaviorism. German and Dutch comparative zoologists were just becoming available; actually, a lot was in German. We were interested, and it looked like this was where linguistics ought to go. The idea was so exotic that practically no one talked about it, except the few of us. But it was the beginning of Eric Lenneberg's work; that's really where all this started.
The problem was that as soon as you tried to look at language carefully, you'd see that practically nothing was known. You have to remember that it was assumed by most linguists at the time that pretty much everything in the field was known. A common topic when linguistics graduate students talked to one another was: what are we going to do when there's a phonemic analysis for every language? This is obviously a terminating process. You could maybe do a morphological analysis, but that is terminating too. And it was also assumed that languages are so varied that you're never going to find anything general. In fact, one of the few departures from that was found in Prague-style distinctive features: the distinctive features might be universal, so perhaps much more is universal. If language were biologically based, it would have to be. But as soon as we began to try to formulate the universal rules that were presupposed by such a view, it instantly became obvious that we didn't know anything. As soon as we tried to give the first definitions of words – what does a word mean? etc. – it didn't take us more than five minutes of introspection to realize that the Oxford English Dictionary wasn't telling us anything. So it became immediately obvious that we were starting from zero. The first big question was to find out something about what was going on. And that put things backwards from the question of how we were going to answer the biological questions.
Now, the fundamental biological question is: what are the properties of this language system that are specific to it? How is it different from walking, say – what specific properties make a system a linguistic one? But you can't answer that question until you know something about what the system is. Then – with attempts to say what the system is – come the tensions between descriptive and explanatory adequacy. The descriptive pressure – the attempt to provide a description of all possible natural languages – made it [the system] look very complex and varied; but the obvious fact about acquisition is that it has all got to be basically the same. So we were caught in that tension.
Just recently I started reading the records of some of the conferences back in the sixties and seventies. The participants were mostly rising young biologists, a few neurophysiologists, some linguists, a few others. And these kinds of questions kept arising – someone would say, well, what are the specific properties of this system that make it unlike other systems? And all we could do was list a complicated set of principles which are so different [from each other] and so complex that there is no conceivable way that they could have evolved: it was just out of the question.
Furthermore, beyond the comparative question, there is another question lurking, which is right at the edge for biology currently – it is the one that Kauffman is interested in. That question is, why do biological systems have these properties – why these properties, and not other properties? It was recognized to be a problem back around Darwin's time. Thomas Huxley recognized it – that there are going to be a lot of different kinds of life forms, including human ones; maybe nature just somehow allows human types and some other types – maybe nature imposes constraints on possible life forms. This has remained a fringe issue in biology: it has to be true, but it's hard to study. [Alan] Turing (1992), for example, devoted a large part of his life to his work on morphogenesis. It is some of the main work he did – not just that on the nature of computation – and it was an effort to show that if you ever managed to understand anything really critical about biology, you'd belong to the chemistry or physics department. There are some loose ends that the history department – that is, selectional views of evolution – just happens to have. Even natural selection – it is perfectly well understood, it's obvious from the logic of it – even natural selection alone cannot do anything; it has to work within some kind of prescribed channel of physical and chemical possibilities, and that has to be a restrictive channel. You can't have any biological success unless only certain kinds of things can happen, and not others. Well, by now this is sort of understood for primitive things. Nobody thinks, for instance, that it is natural selection that determines that cells divide into spheres and not cubes in mitosis [cell division]; there are physical reasons for that. Or take, say, the use of polyhedra as construction materials – whether it's the shells of viruses, or bee honeycombs. The physical reasons for that are understood, so you don't need selectional reasons. The question is, how far does it go?
The basic questions of what is specific to language really have to do with issues that go beyond those of explanatory adequacy [that is, with dealing with Plato's Problem, or explaining the poverty of the stimulus facts for language acquisition]. So if you could achieve explanatory adequacy – if you could say, “Here's Universal Grammar [UG], feed experience into it, and you get an I-language” – that's a start in the biology of language, but it's only a start.[C] The next step would be, well, why does UG have the properties that it has? That's the basic question. Well, one possibility is just one thing after another – a set of historical accidents, asteroids hitting the earth, or whatever. In that case, it's essentially unexplainable; it is not rooted in nature, but in accident and history. But there is another possibility, which is not unreasonable, given what we know about human evolution. It seems that the language system developed quite suddenly. If so, a long process of historical accident is ruled out, and we can begin to look for an explanation elsewhere – perhaps, as Turing thought, in chemistry or physics.
The standard image in evolutionary biology – the reason why biologists think that finding something perfect doesn't make any sense – is that you're looking at things over a long period of evolutionary history. And there are, of course, lots of instances of what François Jacob calls “bricolage,” or tinkering; at any particular point, nature does the best it can with what is at hand. You get paths in evolution that get stuck at some point and have to go on from there; they can't start over and go somewhere else. And so you do end up with what look like very complicated things that you might have done better if you had had a chance to engineer them from the start. That may be because we don't understand them. Maybe Turing was right; maybe they become this way because they have to. But at least it makes some sense to have that image if you have a long evolutionary development. On the other hand, if something happened pretty fast, it doesn't make any sense to take that image seriously.
For a while, it did not seem as if the evolution of language could have happened very quickly. The only approach that seemed to make any sense of language was that UG [or the biological endowment we have that allows us to acquire a language] is a pretty intricate system with highly specific principles that had no analogue anywhere else in the world. And that leads to the end of any discussion of the central problems of the biology of language – what's specific to it, how did it get there? The reason for that was the tie between the theory – between the format for linguistic theory – and the problem of acquisition. Everyone's picture – mine too – was that UG gives something like a format for possible grammars and some sort of technique for choosing the better of them, given some data. But for that to work, the format has to be highly restrictive. You can't leave a lot of options open and, to make it highly restrictive, it seems as though it has to be highly articulated and very complex. So you're stuck with a highly articulated and highly specific theory of Universal Grammar, basically for acquisition reasons. Well, along comes the Principles and Parameters (P&P) approach; it took shape around the early eighties. It doesn't solve the problem [of saying what is distinctive to language and how it got there], but it eliminates the main conceptual barrier to solving it. The big point about the P&P approach is that it dissociates the format for grammar from acquisition. Acquisition, according to this approach, is just going to be a matter of picking up (probably) lexical properties, and undoubtedly lexical properties are picked up from experience; so here was another way in which acquisition is dissociated from the format.
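A schematic contrast – a toy sketch of my own under obvious simplifications, not a model from the text – may help fix the two pictures of acquisition being distinguished here: a search over candidate grammars scored by an evaluation measure, versus the setting of a finite vector of parameter switches:

```python
# Earlier picture: UG supplies a format for possible grammars plus an
# evaluation measure; the learner searches for the best-valued grammar
# compatible with the data. Grammars here are just finite sets of strings.
def acquire_by_evaluation(grammars, data, value):
    compatible = [g for g in grammars if all(d in g for d in data)]
    return min(compatible, key=value)

g1, g2 = {"ab", "ba"}, {"ab"}
print(acquire_by_evaluation([g1, g2], ["ab"], value=len))    # {'ab'}

# P&P picture: the principles are fixed; acquisition reduces to setting
# a finite vector of parameter switches from the data (lexical learning
# omitted). The parameter name below is purely hypothetical.
def acquire_by_parameters(parameters, data):
    return {name: test(data) for name, test in parameters.items()}

parameters = {"head-initial": lambda data: "VO order" in data}
print(acquire_by_parameters(parameters, ["VO order"]))       # {'head-initial': True}
```

On the first picture, the format must be restrictive enough that the search is feasible; on the second, the space of options is finite from the start, which is the dissociation the passage describes.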
Well, if all of that is dissociated from the principles part of UG, then there is no longer any conceptual reason why they have to be extremely intricate and specific. So you can begin to raise the question, well, have we just been wrong about their complexity and high level of articulation? Can we show that they really are simple? That's where the Minimalist Program begins. We can ask the question which was always lurking but that we could not handle, because of the need to solve the acquisition problem. With the dissociation of acquisition from the structure of language – primarily through the choice of parameters – we can at least address these questions. After the early 1980s, just about every class I taught I started by saying, “Let's see if language is perfect.” We'd try to see if it was perfect, and it didn't work; we'd end up with another kind of complexity. And in fact [pursuing that issue] didn't get very far until about the early 1990s; then, at that point, things started to come together. We began to see how you could take the latest [theoretical understanding of the] technology and develop a fundamental explanation of it, and so on. One of the things – oddly enough – that was the last to be noticed, around 2000, was that displacement [movement] is necessary. That looked like the biggest problem – why displacement? The right answer – that it's just internal Merge – strikes you in the face once you look at it in the right way.
JM:
Didn't the story use to be that it was there to meet interface conditions – constraints on the core language system that are imposed by the systems with which language must ‘communicate’?
NC:
Well, it turns out that it does meet interface conditions; but that's there anyhow. There have to be interface conditions; [the question we could now answer was] the biggest problem – why use displacement to meet them? Why not use indices, or something? Every system [has to] meet those conditions, but does it with different technology. Well now, thinking it through, it turns out that transformational grammar is the optimal method for meeting those conditions, because it's there for free.
JM:
. . . when thought of as internal and external Merge . . .
NC:
Yes, that comes for free, unless you stipulate that one of them doesn't happen.
JM:
OK, and this helps make sense of why Merge – thus recursion in the form we employ it in language (and probably mathematics) – is available to human beings alone.[C] Is this all that is needed to make sense of what is distinctive about human language, then, that we have Merge? I can assume, on at least some grounds, that other species have conceptual capacities . . .
NC:
But see, that's questionable. On the sensory-motor [interface] side, it's probably true. There might be some adaptations for language, but not very much. Take, say, the bones of the middle ear. They happen to be beautifully designed for interpreting language, but apparently they got to the ear from the reptilian jaw by some mechanical process of skull expansion that happened, say, 60 million years ago. So that is something that just happened. The articulatory-motor apparatus is somewhat different from other primates', but most of the properties of the articulatory system are found elsewhere, and if monkeys or apes had the human capacity for language, they could have used whatever sensory-motor systems they have for externalization, much as native human signers do. Furthermore, it seems to have been available for hominids in our line for hundreds of thousands of years before it was used for language. So it doesn't seem as if there were any particular innovations there.
