The Science of Language
Noam Chomsky
JM:
So you think that with the possible exception of the head and polysynthesis parameter, they're all going to have to be shifted over to the PHON mapping?
NC:
Well, but take the head parameter – it looks like the most solid of the macroparameters (reinterpreted, if Kayne is right, in terms of options for raising), although it's not really solid because while there are languages like English and Japanese where it works, a lot of languages mix them up and one thing works for noun phrases and something else with verb phrases, and so on – but even that, that is a linearization parameter, and linearization is probably in the externalization system. There's no reason why internal computation should involve linearization; that seems to be related to a property of the sensory-motor system, which has to deal with sequencing through time. So it could be that that too is an externalization parameter. The same is true of polysynthesis, Mark Baker's core parameter. It has to do with whether a sentence's arguments – subject, object, and so on – are internal to the syntactic structure, or are marked in the syntactic structure just like markers, kind of like pronouns, and they hang around on the outside. But that is also a kind of linearization problem.
So it may turn out that there aren't any [computation-]internal parameters; it's just one fixed system.
JM:
What happens then to parameter setting?
NC:
That's the problem of language acquisition, and a lot of it happens extremely early . . .
JM:
As Jacques Mehler's work indicated . . .
NC:
All the phonetic stuff, a lot is going on before the child even speaks.
JM:
Familiarization with the native tongue . . .
NC:
It's known that Japanese kids lose the R/L distinction before they can even speak. So some kind of stuff is going on there that is fine-tuning the sensory apparatus. The sensory apparatus does get fine-tuned very early, in other areas too.
JM:
So in principle, it's possible that you don't have to set (‘learn’) any parameters – that it all happens automatically and at an extremely early age, even before the child speaks.
NC:
Certainly no kid is conscious of what is going on in his or her head. And then you get a three- or four-year-old child who is speaking the language mostly of their peers.
JM:
OK. How does early and automatic acquisition fit with the kind of data that Charles Yang comes up with, data that suggest that when children ‘grow’ a language, they go through a stage at around two and a half where they exhibit a kind of parameter-setting experimentation: their minds ‘try out’ computational patterns available in other languages that are extinguished as they develop a pattern characteristic of, say, English . . .
NC:
There is interaction, but it's not so obvious that the feedback makes much difference, because most of the interaction is with children.
I don't know about you, but my dialect is [that of] a little corner of Philadelphia where I grew up, not my parents’ – which is totally different. What about you?
JM:
I grew up speaking both English and Tamil.
NC:
How's that?
JM:
I was born in the southern part of India.
NC:
Did your parents know Tamil?
JM:
My father did; he learned it by squatting with kids on the floors of their schools.
NC:
Did they speak Tamil at home?
JM:
No, they didn't. But some of my friends spoke Tamil.
NC:
So you picked up Tamil from your friends, from other kids. That's normal. No one knows why, but children almost always pick up the language of their peers. And they're not getting any feedback – certainly not teaching. The parents may be trying hard to teach you something, but all they do is teach you artificialities [irregularities]. So it looks like a tuning problem. It works in other things too. There are styles of walking. If you go to Finland – Carol and I noticed as soon as we were there – they just walk differently. These older women carrying shopping bags racing down the streets; we could barely keep up. It's just the way they walk. People just pick that up.
I remember once when Carol and I were walking down the streets in Wellfleet [Massachusetts] one summer and Howard Zinn was walking in front of us, and right next to him was his son, Jeff Zinn. And the two of them had exactly the same posture. Children just pick these things up. If people really studied things like styles of walking, I'm sure that they'd find something like dialect variation. Think about it: you can identify somebody who grew up in England just by mannerisms.
JM:
Assuming so, then: what gets put into the lexicon in the way of phonological features?
NC:
Well, as we both agree, a lot of what ends up in the lexicon comes from inside. Nobody's conscious of it, nor can be conscious of it. It's not in the dictionary . . .
JM:
We hope it's accessible to some theory, surely.
NC:
It has to be; there has to be some kind of theory about it, if you're going to understand it at all. As far as I know, we can't go much beyond the seventeenth century on this. It looks like they found a considerable amount of what we can be aware of. [But of course, that has nothing to do with what scientific theory can reveal.]
So it [that is, the question of what ends up in the lexicon] is a topic, but it's not going to be investigated until people understand that the externalist story [about language and its sounds and meanings] just doesn't get anywhere. Until people understand that it's a problem, it'll just not be investigated.[C]
Some of the stuff that is coming out in the literature is just mind-boggling. Do you look at Mind and Language?
JM:
Yes . . .
NC:
The last issue has an article – I never thought that I would see this – you know this crazy theory of Michael Dummett's, that people don't know their own language, etc.? This guy is defending it.
JM:
Terje Lohndal [a graduate student in linguistics at the University of Maryland] – he and Hiroki Narita [a linguistics graduate student at Harvard] – wrote a response to it. I think it's good; I don't know if it will be published. I hope so. [See Lohndal & Narita 2009.]
Is there anything you want to add about design?
NC:
Well, the main thing is, we've got to find another term, because it's just too misleading. And it's true for biology altogether. In biology, people aren't usually misled by it, even though the connotations are there. Well, maybe some of them are misled. So if you read, for example, Marc Hauser's book on the evolution of communication – which is a very good book, and he's one of the most sophisticated people working in biology – well, if you read through the chapters, there's almost nothing about evolution there. The chapters are discussions of how perfectly adapted organisms are to their ecological niche. A bat can pick out a random mosquito far away and go right after it. And that shows that animals fit their ecological niche. The assumption [in the background] is, of course, that that's because of natural selection; that they evolved [to fit their niche]. [But the book] doesn't say anything about evolution [– about how it took place in these specific cases]. [So far as the discussion of the book is concerned,] a creationist could accept it: God designed bats to be able to catch mosquitoes. But that move is very fast. To try to demonstrate anything about evolution is extremely hard. Richard Lewontin has a paper coming out on this, about how difficult it is – just on the basis of population genetics – to establish what it would take for natural selection actually to have worked. The way it looks, it seems to be a really remote possibility.
JM:
Jerry Fodor is against selection too . . .
NC:
But he's against it for other reasons concerned with intentionality, and that kind of stuff about what something is for. His instincts are right, but I think that's the wrong line to take. You don't ask whether a polar bear is white for surviving or for mating, or something like that. It just is, and because it fits the environment, it survives. That's why people like Philip Kitcher and others go after him.
Do you think that there's anything else to say about design?
JM:
No, although I'm sure that discussion of the topic will not end there.
9
Universal Grammar and simplicity
 
JM:
OK, now I'd like to get clear about the current status of Universal Grammar (UG). When you begin to focus in the account of acquisition on the notion of biological development, it seems to throw into the study of language a lot more – or at least different – issues than had been anticipated before. There are not only the questions of the structure of the particular faculty that we happen to have, and whatever kinds of states it can assume, but also the study of how that particular faculty developed . . .
NC:
How it evolved? Or how it develops in the individual? Genetically, or developmentally?
 
JM:
Well, certainly genetically in the sense of how it came about biologically, but also the notion of development in a particular individual, where you have to take into account – as you make very clear in your recent work – the contributions of this third factor that you have been emphasizing. I wonder if that doesn't bring into question the nature of modularity [of language] – it's an issue that used to be discussed with a set of assumptions that amounted to thinking that one could look at a particular part of the brain and ignore the rest of it.
NC:
I never believed that. Way back about fifty years ago, when we were starting to talk about it, I don't think anyone assumed that that had to be true. Eric Lenneberg was interested in – we were all interested in – whatever is known about localization, which does tell us something about what the faculty is. But if it was distributed all over the brain, so be it . . .
JM:
It's not so much the matter of localization that is of interest to me, but rather the matter of what you have to take into account in producing an account of development. And that seems to have grown in recent years.
NC:
Well, the third factor was always in the background. It's just that it was out of reach. And the reason it was out of reach, as I tried to explain in the LSA paper (2005a), was that as long as the concept of Universal Grammar, or linguistic theory, is understood as a format and an evaluation procedure, then you're almost compelled to assume it is highly language-specific and very highly articulated and restricted, or else you can't deal with the acquisition problem. That makes it almost impossible to understand how it could follow any general principles. It's not like a logical contradiction, but the two efforts are tending in opposite directions. If you're trying to get Universal Grammar to be articulated and restricted enough so that an evaluation procedure will only have to look at a few examples, given data, because that's all that's permitted, then it's going to be very specific to language, and there aren't going to be general principles at work. It really wasn't until the principles and parameters conception came along that you could see a way in which the two could be divorced. If there's anything that's right about that, then the format for grammar is completely divorced from acquisition; acquisition will only be a matter of parameter setting. That leaves lots of questions open about what the parameters are; but it means that whatever is left are the properties of language. There is no conceptual reason any more why they have to be highly articulated and very specific and restricted. A conceptual barrier has been removed to the attempt to see if the third factor actually does something. It took a long time before you could get anywhere with that.
JM:
But as the properties of language become more and more focused on Merge and, say, parameters, the issue of development in the particular individual seems to be becoming more and more difficult, because it seems to involve appeals to other kinds of scientific enterprise that linguists have never in fact touched on before. And I wonder if you think that the study of linguistics is going to have to encompass those other areas.
NC:
To the extent that notions such as efficient computation play a role in determining how the language develops in an individual, that ought to be a general biological, or maybe even a general physical, phenomenon. So if you get any evidence for it from some other domain, well and good. That's why when Hauser and Fitch and I were writing (Hauser, Chomsky & Fitch 2002), we mentioned optimal foraging strategies. It's why in recent papers I've mentioned things like Christopher Cherniak's work [on non-biological innateness (2005) and on brain wiring (Cherniak, Mikhtarzada, Rodriguez-Esteban & Changizi 2004)], which is suggestive. You're pretty sure that that kind of result will show up in biology all over the place, but it's not much studied in biology. You can see the reasons.
The intuition that biologists have is basically Jacob's, that simplicity is the last thing you'd look for in a biological organism, which makes some sense if you have a long evolutionary history with lots of accidents, and this and that happens. Then you're going to get a lot of jerry-rigging; and it appears, at least superficially, that when you look at an animal, it's going to be jerry-rigged. So it's tinkering, as Jacob says. And maybe that's true, and maybe it isn't – maybe it looks true because you don't understand enough. When you don't understand anything, it looks like a pile of gears, levers, and so on. If you understood enough, maybe you'd find there's more to it. But at least the logic makes some sense. On the other hand, the logic wouldn't hold if language is a case of pretty sudden emergence. And that's what the archeological evidence seems to suggest. You have a time span that's pretty narrow.
JM:
To press the point about simplicity for a moment: you've shown, remarkably, that there's a very considerable degree of simplicity in the faculty itself – in what might be taken to be distinctively linguistic aspects of the faculty of language. Would you expect that kind of simplicity in whatever third-factor contributions are going to be required to make sense of the growth of language in a child?
NC:
To the extent that they're real, then yes – to the extent that they contribute to growth. So how does a child get to know the subjacency condition [which restricts movement of a constituent to crossing a single bounding node]? Well, to the extent that that follows from some principle of efficient computation, it'll just come about in the same way as cell division comes about in terms of spheres. It won't be because it's genetically determined, or because of experience; it's because that's the way the world works.
JM:
What do you say to someone who objects that the cost of introducing so much simplicity into the faculty of language is having, in the long run, to deal with other factors outside the faculty that contribute to the growth of language – and, in part at least, pushing into another area whatever global considerations might be relevant not only to language itself but to its use?
NC:
I don't understand why that should be considered a cost; it's a benefit.
JM:
OK; for the linguist interested in producing a good theory, that's plausible.
NC:
In the first place, the question of cost and benefit doesn't arise; it's either true or it isn't. If it is true – to the extent that it's true – it's a source of gratification that carries the study of language to a higher level. Sooner or later, we expect it to be integrated with the whole of science – maybe in ways that haven't been envisioned. So maybe it'll be integrated with the study of insect navigation some day; if so, it's all to the good.
JM:
Inclusiveness: is it still around?[C]
NC:
Yes; it's a natural principle of economy, I think. Plainly, to the extent that language is a system in which the computation just involves rearrangement of what you've already got, it's simpler than if the system adds new things. If it adds new things, it's only specific to language. Therefore, it's more complex; therefore, you don't want it, unless you can prove that it's there. At least, the burden of proof is on assuming you need to add new things. So inclusiveness is basically the null hypothesis. It says language is just what the world determines, given the initial fact that you're going to have a recursive procedure. If you're going to have a recursive procedure, the best possible system would be one in which everything else follows from optimal computation – we're very far from showing that, but insofar as you can show that anything works that way, that's a success. What you're showing here is a property of language that does not have to be attributed to genetic endowment. It's just like the discovery that polyhedra are the construction materials. That means you don't have to look for the genetic coding that tells you why animals such as bees are going to build nests in the form of polyhedra; it's just the way they're going to do it.
JM:
Inclusiveness used to depend to a large extent upon the lexicon as the source of the kind of ‘information’ to be taken into account in a computation; does the lexicon still have the important role that it used to have?
NC:
Unless there's something more primitive than the lexicon. The lexicon is a complicated notion; you're fudging lots of issues. What about compound nouns, and idioms, and what kinds of constructive procedures go on in developing the lexicon – the kind of thing that Kenneth Hale was playing with? So ‘lexicon’ is kind of a cover for a big mass of problems. But if there's one aspect of language that is unavoidable, it's that in any language, there's some assembly of the possible properties of the language – features, which just means linguistic properties. So there's some process of assembly of the features and, then, no more access to the features, except for what has already been assembled. That seems like an overwhelmingly and massively supported property of language, and an extremely natural one from the point of view of computation, or use. So you're going to have to have some kind of lexicon, but what it will be, what its internal structure will be, how morphology fits into it, how compounding fits in, where idioms come in – all of those problems are still sitting there.
JM:
Merge – the basic computational principle: how far down does it go?
NC:
Whatever the lexical atoms are, they have to be put together, and the easiest way for them to be put together is for some process to just form the object that consists of them. That's Merge. If you need more than that, then OK, there's more – and anything more will be specific to language.
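[Merge, as characterized here – an operation that just forms the object consisting of its two inputs, nothing more – is standardly formalized as binary set formation: Merge(X, Y) = {X, Y}. The following sketch is purely illustrative; the encoding of lexical atoms as strings and of syntactic objects as sets is an expository assumption, not anything in the text.]

```python
# Illustrative sketch: Merge as binary set formation, Merge(X, Y) = {X, Y}.
# Lexical atoms are represented as strings; syntactic objects as frozensets
# (a hypothetical encoding chosen only so that objects can nest).

def merge(x, y):
    """Form the object that consists of exactly x and y - nothing is added."""
    return frozenset([x, y])

# Recursive application yields hierarchical structure with no linear order:
vp = merge("read", "books")   # the object {read, books}
tp = merge("will", vp)        # the object {will, {read, books}}

# Because the result is an unordered set, Merge imposes no sequencing:
assert merge("a", "b") == merge("b", "a")
print(tp)
```

[Using unordered sets rather than ordered pairs reflects the earlier point that linearization belongs to externalization, not to the internal computation; and since the output contains nothing beyond its inputs, the sketch also respects the inclusiveness condition discussed above.]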
JM:
So in principle, von Humboldt might have been right, that the lexicon is not this – I think his term was “completed, inert mass” . . .
NC:
. . . but something created . . .
JM:
. . . something created and put together. But if it's put together, is it put together on an occasion, or is there some sort of storage involved?
NC:
It's got to be storage. We can make up new words, but it's peripheral to the language [system's core computational operations].[C]
As for Humboldt, in fact, I think that when he was talking about the energeia and the lexicon, he was actually referring to usage. In fact, almost all the time, when he talks about infinite use of finite means, he doesn't mean what we mean – infinite generation – he means use; so, it's part of your life.
JM:
But he did recognize that use depended rather heavily upon systems that underlie it, and that effectively supported and provided the opportunity for the use to . . .
NC:
. . . that's where it fades off into obscurity. I think now that the way that I and others have quoted him has been a bit misleading, in that it sounds as if he's a precursor of generative grammar, where perhaps instead he's really a precursor of the study of language use as being unbounded, creative, and so on – in a sense, coming straight out of the Cartesian tradition, because that's what Descartes was talking about. But the whole idea that you can somehow distinguish an internal competence that is already infinite from the use of it is a very hard notion to grasp. In fact, maybe the person who came closest to it that I've found is neither Humboldt nor Descartes, but [A.W.] Schlegel, in those strange remarks that he made about poetry [see Chomsky 1966/2002/2009]. But it was kind of groping around in an area that there was no way of understanding, because the whole idea of a recursive infinity just didn't exist.
JM:
But didn't Humboldt distinguish . . . he did have a distinction between what he called the Form of language and its character, and that seems to track something like a distinction between competence and use . . .
NC:
It's hard to know what he meant by it. When you read through it, you can see it was just groping through a maze that you can't make any sense of until you at least distinguish, somehow, competence from performance. And that requires having the notion of a recursive procedure and an internal capacity that is ‘there’ and already infinite, and can be used in all the sorts of ways he was talking about. Until you at least begin to make those distinctions, you can't do much except grope in the wilderness.
JM:
But that idea was around, as you've pointed out.
John Mikhail pointed it out in Hume; it was around in the seventeenth and eighteenth centuries . . .
NC:
. . . something was around. What Hume says, and what John noticed, is that you have an infinite number of responsibilities and duties, so there has to be some procedure that determines them; there has to be some kind of system. But notice again that it's a system of usage – it determines usage. It's not that there's a class of duties characterized in a finite manner in your brain. It's true it has to be that; but that wasn't what he was talking about. You could say it's around in Euclid, in some sense. The idea of a finite axiom system sort of incorporates the idea; but it was never clearly articulated.
