The Science of Language
Noam Chomsky

3
Representation and computation
 
JM:
Continuing in the same vein, your understanding of computation seems to differ from the philosophically favored notion where it is understood as tied in with a representational theory of mind. Computation there is understood to be something like the operations of a problem-solving device that operates over symbols understood in traditional (not your) semantic terms, in terms of relationships of items inside the head that represent things outside in the world.
NC:
The term “representation” is used in a kind of technical sense in the philosophical literature which I think basically comes back to the theory of ideas. You know there's something out there and the impression of it becomes an idea, and then there's a relation – so, say, in Jerry Fodor's representational theory of mind – there's a causal relation between the cat over there and the concept CAT in your language of thought. And Kripke, Putnam, and Burge have a picture roughly like that.
 
JM:
Well, it's more than just causal – I mean, for Fodor, it really is a semantic relationship . . .
NC:
Yes, but it is causal [in that something ‘out there’ causes the formation of an internal representation which is your ‘idea of’ what causes it]. I mean, that's how you get the connection. There is some causal relation, and then, yes, it sets up the semantic relation of reference. And there is a factual question as to whether any of that happens. Obviously there's some causal relation between what's outside in the world and what's in our head. But it does not follow that there's a symbol–object relationship, [something like the reverse of the causal one]. And the big problem with that approach is – what's the object? Well, here we're back to studying lexical concepts and it was pretty clear by the seventeenth and eighteenth centuries that there wasn't going to be a relation like that, even for the simplest concepts. We just individuate things in different ways.
Locke's discussion of personal identity is a famous example of how we just don't individuate things that way; [we, or rather, our minds, produce the concept PERSON]. That goes back to Aristotle and form and matter, but then it's very much extended in the seventeenth century; and then it kind of dropped. As far as I know, after Hume it virtually disappears from the literature. And now – these days – we're back to a kind of neo-scholastic picture of word–thing relations. That's why you have books called Word and Object [by W.V.O. Quine] and that sort of thing. But there's no reason to believe that that relation exists. So yes, the representational theories of mind are bound to a concept of representation that has historical origins but has no particular merits as far as I know.
JM:
I asked in part because, when you read works of people like Georges Rey, he seems to assume that when Turing speaks of computation, he was essentially committed to something like a representational account.
NC:
I don't see where that comes from – I don't see any evidence for that in Turing. That's the way Turing is interpreted by Rey, by Fodor, and by others. But I don't see any textual basis for that. In fact, I don't think Turing even thought about the problem. Nothing in what I've read, at least. You can add that if you like to Turing; but it's not there. Now Georges Rey in particular has carried out a very intensive search of the literature to find uses of the word ‘representation’ in my work and elsewhere, and consistently misinterprets them, in my opinion [see Rey's contribution and Chomsky's reply in Hornstein & Antony (2003)]. If you look at the literature on cognitive science and neurology and so on and so forth, people are constantly talking about internal representations. But they don't mean that there's a connection between what's inside and some mind-independent entity. The term “internal representation” just means that something's inside. And when you add this philosophical tradition to it, yes, you get funny conclusions – in fact, pointless ones. But if we learned anything from graduate school when we were reading the late Wittgenstein, it's that that's a traditional philosophical error. If you want to understand how a cognitive neuroscientist or a linguist is using the word “representation,” you've got to look at how they're using it, not add a philosophical tradition to it. [To return to an earlier point,] take phonetic representation – which is the standard, the traditional linguistic term from which all the others come. Nobody thinks that an element in a syllable in IPA [International Phonetic Alphabet] picks out a mind-independent entity in the world. If it's called a phonetic representation, that's just to say that there's something going on in the head.[C]
4
More on human concepts
 
JM:
We had spoken earlier about the distinctiveness of human concepts, and I'd like to get a bit clearer about what that amounts to. I take it that, at least in part, it has to do with the fact that human beings, when they use their concepts – unlike many animals – do not in fact use them in circumstances in which there is some sort of direct application of the concept to immediate circumstances or situations.
NC:
Well, as far as anyone knows – maybe we don't know enough about other animals – what has been described in the animal literature is that every action (local, or whatever) is connected by what Descartes would have called a machine to either an internal state or an external event that is triggering it. You can have just an internal state – so the animal emits a particular cry [or other form of behavior] ‘saying’ something like “It's me” or “I'm here,” or a threat: something like “Keep away from me,” or maybe a mating cry. [You find this] all the way down to insects. Or else there is a reaction to some sort of external event; you get a chicken that's looking up and sees something that we interpret as “There's a bird of prey” – even though no one knows what the chicken is doing. It appears that everything is like that, to the extent – as mentioned before – that Randy Gallistel (1990), in his review introduction to a volume on animal communication, suggests that for every animal down to insects, whatever internal representation there is, it is one-to-one associated with an organism-independent external event, or internal event. That's plainly not true of human language. So if [what he claims] is in any way near to being true of animals, there is a very sharp divide there.
 
JM:
That's a sharp divide with regard to what might be called the “use” or application of relevant types of concepts, but I take it that it's got to be more than that . . .
NC:
Well, it's their natures. Whatever the nature of HOUSE, or LONDON, ARISTOTLE, or WATER is – whatever their internal representation is – it's just not connected to mind-independent external events, or to internal states. It's basically a version of Descartes's point, which seems accurate enough.
JM:
OK, so it's not connected to the use of the concepts, nor is it connected . . .
NC:
Or the thought. Is it something about their nature, or something about their use? Their use depends on their nature. We use HOUSE differently from how we use BOOK; that's because there's something different about HOUSE and BOOK. So I don't see how one can make a useful distinction . . .
JM:
There's a very considerable mismatch, in any case, between whatever features human concepts have and whatever types of things and properties in the world that might or might not be ‘out there’ – even though we might use some of these concepts to apply to those things . . .
NC:
Yes, in fact the relation seems to me to be in some respects similar to the sound side of language[, as I mentioned before]. There's an internal representation, æ, but there's no human-independent physical event that æ is associated with. It can come out in all sorts of ways . . .
JM:
So for concepts it follows, I take it, that only a creature with a similar kind of mind can in fact comprehend what a human being is saying when he or she says something and expresses the concepts that that person has . . .
NC:
So when you teach a dog commands, it's reacting to something, but not your concepts . . .
JM:
OK, good. I'd like to question you then in a bit more detail about what might be thought of as relevant types of theories that one might explore with regard to concepts. Does it make sense to say that there are such things as atomic concepts? I'm not suggesting that they have to be atomic in the way that Jerry Fodor thinks they must be – because of course for him they're semantically defined over a class of identical properties . . .
NC:
External . . .
JM:
External properties, yes.
NC:
I just don't see how that is going to work, because I don't see any way to individuate them mind-independently. But I don't see any alternative to assuming that there are atomic ones. Either they're all atomic, in which case there are atomic ones, or there is some way of combining them. I don't really have any idea of what an alternative would be. If they exist, there are atomic ones. It seems a point of logic.
JM:
I wonder if the view that there must be atomic concepts doesn't have about the same status as something like Newton's assumption that there have to be corpuscles because that's just the way we think . . .
NC:
That's correct . . . there have to be corpuscles. It's just that Newton had the wrong ones. Every form of physics assumes that there are some things that are elementary, even if it's strings. The things that the world is made up of, including our internal natures, our minds – either those things are composite, or they're not. If they're not composite, they're atomic. So there are corpuscles.
JM:
Is there work in linguistics now being done that's at least getting closer to becoming clearer about what the nature of those atomic entities is?
NC:
Yes, but the work that is being done – and it's interesting work – is almost entirely on relational concepts. There's a huge literature on telic verbs, etc. – on things that are related to syntax. How do events play a role, how about agents, states . . .? Davidsonian kind of stuff. But it's relational. The concerns of philosophers working on philosophy of language and of linguists working on semantics are almost complementary. Nobody in linguistics works on the meaning of WATER, TREE, HOUSE, and so on; they work on LOAD, FILL, and BEGIN – mostly verbal concepts.
JM:
The contributions of some philosophers working in formal semantics can be seen – as you've pointed out in other places – as a contribution to syntax.
NC:
For example, Davidsonian-type work . . .
JM:
Exactly . . .
NC:
whatever one thinks of it, it is a contribution to the syntax of the meaning side of language. But contrary to the view of some Davidsonians and others, it's completely internal, so far as I can see. You can tie it to truth conditions, or rather truth-indications, of some kind; it enters into deciding whether statements are true. But so do a million other things.[C]
5
Reflections on the study of language
 
JM:
You used to draw a distinction between the language faculty narrowly conceived and the language faculty more broadly conceived, where it might include some performance systems. Is that distinction, understood in that way, still plausible?
NC:
We're assuming – it's not a certainty – but we're basically adopting the Aristotelian framework that there's sound and meaning and something connecting them. So just starting with that as a crude approximation, there is a sensory-motor system for externalization and there is a conceptual system that involves thought and action, and these are, at least in part, language-independent – internal, but language-independent.
The broad faculty of language includes those and whatever interconnects them. And then the narrow faculty of language is whatever interconnects them. Whatever interconnects them is what we call syntax, ‘semantics’ [in the above sense, not the usual one], phonology, morphology . . ., and the assumption is that the faculty narrowly conceived yields the infinite variety of expressions that provide information which is used by the two interfaces. Beyond that, the sensory-motor system – which is the easier one to study, and probably the peripheral one (in fact, it's pretty much external to language) – does what it does. And when we look at the conceptual system, we're looking at human action, which is much too complicated a topic to study. You can try to pick pieces out of it in the way Galileo hoped to with inclined planes, and maybe we'll come up with something, with luck. But no matter what you do, that's still going to connect it with the way people refer to things, talk about the world, ask questions and – more or less in [John] Austin style – perform speech acts, which is going to be extremely hard to get anywhere with. If you want, it's pragmatics, as it's understood in the traditional framework [that distinguishes syntax, semantics, and pragmatics].[1]
 
All of these conceptual distinctions just last. Very interesting questions arise as to just where the boundaries are. As soon as you begin to get into the real way it works in detail, I think there's persuasive – never conclusive, but very persuasive – evidence that the connecting system really is based on some merge-like operation, so that it's compositional to the core. It's building up pieces and then transferring them over to the interfaces and interpreting. So everything is compositional, or cyclic in linguistic terms. Then what you would expect from a well-functioning system is that there are constraints on memory load, which means that when you send something over the interface, you process it and forget about it; you don't have to re-process it. Then you go on to the next stage, and you don't re-process that. Well, that seems to work pretty well and to give lots of good empirical results.
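[The picture sketched here – binary combination, with completed chunks handed off to the interfaces and never revisited – can be made concrete in a small toy program. The Python sketch below is purely illustrative: the choice of “phase heads,” the tuple representation, and the trigger for transfer are assumptions made for the example, not claims about any particular minimalist analysis.]

```python
# A toy sketch of a merge-like, cyclic derivation with transfer.
# Illustrative only: the "phase heads" and the set-like representation
# are assumptions for the example, not linguistic claims.

def merge(x, y):
    """Binary Merge: combine two objects into a new structured unit."""
    return (x, y)

def derive(items, phase_heads=("v", "C")):
    """Build structure bottom-up; whenever a phase head is merged,
    transfer the completed chunk to the interfaces and stop
    re-processing it (modeling the memory-load constraint)."""
    transferred = []              # chunks already sent to the interfaces
    structure = items[0]
    for item in items[1:]:
        structure = merge(item, structure)
        if item in phase_heads:   # cyclic transfer: hand off and forget
            transferred.append(structure)
            structure = f"<transferred at {item}>"  # opaque to later steps
    return structure, transferred

# A crude derivation of "C [ he v [ ate apples ] ]":
final, chunks = derive(["apples", "ate", "v", "he", "C"])
print(final)    # outermost structure; transferred chunks are opaque
print(chunks)   # the pieces the interfaces received, in order
```

[The placeholder makes the memory-load point: once a chunk has gone over the interface, later computation need not look inside it again.]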
But there is a problem. The problem is that there are global properties. So, for example, on the sound side, prosodic properties are global. Whether the intonation of the sentence is going to rise or fall at the end depends on the complementizer with which it begins. So if it's going to be a question that begins with, say, “who” or “what,” that's going to determine a lot about the whole prosody of the sentence. And for this and other reasons it's a global property; it's not built up piece by piece. Similarly, on the semantic side, things like variable binding or Condition C of binding theory are plainly global. Well, what does that mean? One thing it may mean is that these systems – like, say, prosody and binding theory – which we have thought of as being narrow syntax, could be outside the language faculty entirely. We're not given the architecture in advance. And we know that, somehow, there's a homunculus out there who's using the entire sound and entire meaning – that's the way we think and talk. It could be that that point where all the information is going to be gathered, that is where the global properties apply. And some of these global properties are situation-related: what you decide to do depends on what you know you're talking about, what background information you're using, etc. But that's available to the homunculus; it's not going to be in the language faculty. The language faculty is kind of like the digestive system: it grinds away and produces stuff that we use. So we don't really know what the boundaries are. But you might discover them. You might discover them in ways like these.[C]
In fact, we might discover that the whole idea of an interface is wrong. Take, say, the sound side, which is easier to think about because we have some information about it. It's universally assumed – this goes back to the beginning of the subject – that the internal language constructs some kind of narrow phonetic representation which is then interpreted by the sensory-motor system; it's said in different ways, but it always comes down to this. Well, it's not a logical necessity. It could be that in the course of generating the sound side of an utterance, you send pieces over to the sensory-motor system long before you send other pieces over. So there won't be a phonetic interface. You can make up a system that works like that, and we don't know that language doesn't. It's just taken for granted that it doesn't because the simplest assumption is that there's one interface. But the fact that it's the first thing that comes to mind doesn't make it true. So it could be that our conception of the architecture is just like a first guess. It is not necessarily wrong, but most first guesses are. Take a look at the history of the advanced sciences. No matter how well established they are, they almost always turned out to be wrong.
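[The contrast drawn here – a single phonetic interface versus piecemeal hand-off with no complete phonetic representation – can likewise be put as two toy architectures. Again, the Python below is only a sketch under stated assumptions; the function names and the word-sized “pieces” are invented for the illustration.]

```python
# Two toy architectures for externalization; all names are illustrative.

def batch_externalize(pieces, sensorimotor):
    """The standard assumption: assemble one complete phonetic
    representation, then hand it over in a single step."""
    phonetic_representation = " ".join(pieces)
    sensorimotor(phonetic_representation)

def streaming_externalize(pieces, sensorimotor):
    """The alternative: send each piece over as it is generated, so no
    complete phonetic representation ever exists as an interface level."""
    for piece in pieces:
        sensorimotor(piece)

pieces = ["who", "did", "you", "see"]
batch_externalize(pieces, print)      # one hand-off at a single interface
streaming_externalize(pieces, print)  # piecemeal hand-offs, no single interface
```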
JM:
True, but their construction has often been guided by the intuition that simplicity of structure is crucial; and you get [at least partial] success when you follow that particular lead.
NC:
No one knows why, but that has been a guiding intuition. In fact, that's sort of the core of the Galilean conception of science. That's what guided me. And in biology, that's what guided people like Turing in his effort to place the study of biology in the physics and chemistry departments.
JM:
History, free action, and accident mess things up and are beyond the scope of natural science. Did you think when you began all this that linguistics might become more and more like a physical science?
NC:
I'm kind of torn. I mean, I did believe what I was taught [by Zellig Harris and my other instructors]; a nicely brought-up Jewish boy does. But it made less and less sense. By the late forties I was working kind of on my own and thinking maybe it – the idea that the study of language is a natural science – was a personal problem. It wasn't until the early 1950s that I began to think that the personal problem made some sense; and I began to talk about it. So it was kind of a difficult process to go through. And then, of course[, I had a long way to go]. For years, when I thought I was doing generative grammar, I was actually taking stuff over from traditional grammar.
[1] Chomsky's point concerning pragmatics seems to be that it is very unlikely to be a naturalistic science (at least, as it is currently understood), even though one might find systematic aspects of the ways in which people use language. See Appendix VI.
 
