Efforts to turn Fregean semantics into a semantics for natural languages must confront other problems too. One must perform some sleight of hand to deal with the fact that fictional terms such as “Pegasus” and descriptions such as “the average Irishman” and “the square circle” make perfectly good sense to ourselves and to others when we speak and engage in conversation, even
though the ‘world’ seems to lack average Irishmen and a winged horse named Pegasus. And square circles are particularly daunting. No one has difficulty using or understanding
“The square circle horrifies geometricians,” even though square circles are ‘impossible objects.’ And then there is the problem of vagueness: “bald” (as in “Harry is bald”
) does not have a determinate denotation/extension. And so on. I will not catalogue or discuss efforts to deal with these issues. No matter what restrictions, qualifications, and oddments of theoretical machinery are introduced, no general theory is on the horizon and, far more fundamentally, nothing provides a serious answer to (nor plausibly can speak to) what appears to be a fact, that reference is a matter of human use of terms and sentences, and that use appears to be free, and not written into natural events in a way that allows us to construct a theory. While Frege's views about how to proceed in doing semantics make some sense of the practices of mathematicians, he was right to say that they have limited application to natural languages or – more correctly – to the use of natural languages. And since the semantics he offers for mathematics depends essentially on the cooperation of groups of mathematicians in how they use their terms, his ‘semantic theory’ is not, and cannot be, a naturalistic science of meaning. The internalist demands nothing less than that.
But what about Fodor's attempt to construct an externalist (albeit partially nativist) theory of meaning for natural languages, a theory that purports to have senses that determine their denotations/referents, only in a ‘naturalistic’ way? Fodor, recall, has a view of what he calls “concepts” and semantic relationships to the world that according to him is based on causal principles and purports to be a naturalistic theory of natural language meaning. It focuses on his view that meanings must be public (similarity in concepts apparently will not do), and his assumption that in order for them to be public, meanings must be identified with their ‘wide contents,’ where these are taken to be properties of things ‘out there.’ This does not rule out entirely a contribution of the mind: the concepts that natural languages express are claimed to have a mental component too. Intuitively, Fodor holds that a concept is in part a mental entity consisting of a “mode of
presentation” (a psychological/mental version of a Fregean sense) and a denotation, with the latter serving as the meaning of a natural language's term. And these concepts are claimed to be the topics of naturalistic theorizing. To explain: things ‘out there’ via the operations of causal impingements on the human sensory system bring about the acquisition of an internal representation in the form of a mode of presentation (MOP), which is what I have been construing as a lexically represented concept. So far, there is little to dispute; of course internal MOPs (or in the terminology above, the ways in which internal systems configure experience and understanding) develop or grow because of ‘input,’ and the input is informational in a good technical sense: the
probability that a child/organism will develop a DOG MOP from some impingements with doggish characteristics D is greater than from impingements with cattish ‘shapes’ C, where it is more probable that it will acquire a CAT MOP. What counts as doggish characteristics? Fodor is not entirely clear about the matter, but on a reasonable interpretation, being doggish depends not on dogs, but on the nature of the internal MOP-production system and what it demands for specific ‘triggering’ inputs. Fodor assumes as much by saying that concepts such as DOG and all the others that figure in common sense are “appearance properties.”[2]
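To make the ‘informational’ point concrete, here is a minimal sketch in Python. The probabilities are invented purely for illustration (nothing in Fodor or in the text supplies numbers), and the measure used here, ordinary pointwise mutual information, is just one natural way to cash out the phrase: an input is informative about MOP acquisition to the extent that it shifts the probability of acquisition away from its base rate.

from math import log2

# Hypothetical, made-up probabilities of a child acquiring a DOG MOP
# given different kinds of input; they only illustrate the asymmetry at issue.
p_dog_mop_given_doggish = 0.70   # impingements with doggish characteristics D
p_dog_mop_given_cattish = 0.05   # impingements with cattish 'shapes' C
p_dog_mop_baseline = 0.10        # acquisition rate absent any relevant trigger

def pointwise_information(p_given_input, p_baseline):
    """Bits of information the input carries about acquisition of the MOP:
    positive if the input raises the probability, negative if it lowers it."""
    return log2(p_given_input / p_baseline)

print(pointwise_information(p_dog_mop_given_doggish, p_dog_mop_baseline))  # about +2.8 bits
print(pointwise_information(p_dog_mop_given_cattish, p_dog_mop_baseline))  # -1.0 bits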
The problem lies instead in believing, as Fodor seems to, that the mode of presentation in turn stands in a semantic-denotational relation to the thing(s) that serve as the distal cause of the relevant impingements. In effect, he claims that a MOP m caused by some distal entity or property stands in a denotational relationship to that entity or property, and represents the causing thing(s)/properties ‘out there,’ and that this denotation is somehow determined by the nature of the MOP and the human mind. In his (1998) Concepts, one finds very little on how this determination is supposed to take place: he says only that we have the kinds of minds that “generalize to” a specific denotation-external property. In his (2008) LOT 2, he tries to expand on that mysterious capacity of the mind by introducing the idea that the mind is so assembled that some specific MOP ‘falls into’ the right denotation/external content. The picture he draws is at best very speculative. Assuming MOP m is associated with the term “m,” these external properties/things constitute the meaning of the term “m” (a syntactic entity) that is linked to the MOP. In this way, the linguistic term is ‘about’ its denotation.
The story
Fodor tells about determining denotation is not only implausible, but it is completely unnecessary. It is driven by externalist intuitions that have no merit and are undermined by Fodor's own assumption that the properties that figure in how to
understand the roles of MOPs are what he calls “appearance” properties. To cut a longish story short (told in McGilvray
2002a, 2010), the idea that some kind of ‘informational’ causal
relationship is involved in the acquisition of a concept is plausible; an infant is more likely to acquire the concept DOG in the presence of dogs than geese. But there is no obvious reason to think that there is a naturalistically responsible semantic reflex of this causal relationship, one that links in a denotational relationship a mode of presentation and its associated term to the distal ‘things’ that at some
time in the child's past had a significant role in configuring the input to the child's mind that yielded the mode of presentation. Both during and after acquisition, these things are
semantically relevant only in that some relevant form of ‘trigger’ is required to institute a MOP. Reference by use of a mode of presentation can occur only once the mode of presentation is in place and a person so uses it, but the ways it is applied/used by one or more speaking individuals could, and often do, have nothing at all to do with whatever distal entities played a role in precipitating the mode of presentation's acquisition.
Reference is rather free – although when using a concept such as PERSON or MOUSE, and claiming literal (not metaphorical) truth for the expression employed, a person with his or her conceptual resources ‘makes’ a person or a mouse into
something that has psychic continuity.[3]
It also happens that the causal triggering relationship is likely to be a lot looser than Fodor's externalist-motivated picture would suggest: dog-pictures and dog-toys are probably equally likely to suffice in specific cases; and there are, of course, less probable triggers that have no obviously doggish characteristics at all, such as dog-stories and dog-poems that could nevertheless trigger the relevant MOP by portraying in some discourse a creature with some relevant characteristics. Generally, since it is the internal system that sets the agenda for what counts as the needed patterns and other characteristics of the impingements, the ‘real’ natures of distal causes matter little. So long as the internal systems are provided with at least some information of the sort they need to come into operation (and it is they that determine what the right input is), that is what counts. So surely the resultant MOP, not some external causing thing or property, is the best place to look for the ‘content’ of a word or concept. That is the one relatively fixed factor. It is so by virtue of the fact that our minds are biologically much the same, sharing not only cognitive capacities, but interests and other factors that are likely to make certain features relevant or important for the kinds of creatures we are.
Fodor can have external contents of a sort, of course. But unfortunately for the prospects of a naturalistic theory of denotation, the only ones he can have with natural languages and their applications are those provided by the actions of individuals who refer,
using whatever MOPs they employ: our minds constitute how the things of the commonsense world are, and how they appear to be. Or to look at the difficulties with Fodor's externalist efforts from another direction, even if one believed with him that a distal cause of the acquisition of a MOP somehow constituted the external content of the term associated with the MOP, an ‘external content’ introduced in this way would
be irrelevant to how a person uses the MOP, and the way the person uses a MOP would be the deciding factor, even though that offers no prospects for constructing a genuine naturalistic theory of denotation. Surely, then, if one is really interested in offering a science of meaning, one should focus one's efforts as a natural scientist on the mechanisms and ‘outputs’ of the relevant systems in the head, those that yield the modes of presentation expressed in natural languages. There is reason to think that the mechanisms and outputs are near universal across the species (see the main text discussion), given appropriate stimulation. Given that, and given the failures of externalist approaches when they pretend to be contributions to naturalistic theory rather than sociological observations about what people, in doing certain things, often refer to by their actions, that is the only plausible place to look.
There are other difficulties with Fodor's view that arise because he demands that linguistic syntax be mapped to the language of thought. Chomsky outlines the difficulties with this in the main text (see also Chomsky
2000), but I do not discuss them here.
Returning finally to a basic theme of this subsection: it seems that even elementary natural language concepts pose issues for the externalist that cannot be overcome. Our concepts of cities (London), states (France), towns . . . invite thinking of these ‘entities’ in both abstract and concrete terms. Remembering that the issue is what a science of meanings/semantics would look like on externalist terms, we would have to populate a world for a science with some extraordinary-looking entities. There is no reason to, given the internalist alternative. A science of linguistically expressed meanings is a science of our natively given
concepts.
VI.3 What is wrong with semantic externalism: second pass
 
There are other versions of externalism. A popular version of meaning externalism is found in the work of
Wilfrid Sellars and David Lewis and some of their progeny. I aim to show that their views of reference and
compositional rules fail to do what they need to. Whether they and their progeny were reaching for a naturalistic theory of meaning or not is irrelevant. They do not even come close to satisfying the demands they place on their own accounts of ‘public’ meanings.
Wilfrid Sellars's and David Lewis's approach to language cannot be called naturalistic; neither believes that language is a natural system in the head or ‘out there,’ for that matter. No doubt
Sellars's connectionist progeny would disagree; I speak to their claims below. Lewis's and Sellars's views of linguistically expressed meaning might perhaps be thought of as sociolinguistic or game-theoretic views of language, although that is a stretch: they make no effort to engage in careful statistical sampling, or the like, and if the
discussion below is correct, they would have been unhappy if they had. But to the extent that it is scientific-looking at all, theirs is an attempt at something like sociolinguistics or game theory. Representative examples of their efforts are found in Sellars's “Some Reflections on Language Games” (
1951) and David Lewis's “Languages and Language” (1975). Their basic ideas (and the assumptions that go with them) have proven very influential, appearing in the work of McDowell, Brandom, Churchland, and many others. So far as I can tell, what I have to say about Sellars and Lewis applies, with little change, to the others.
Both attempt theories of linguistic meaning that amount to accounts of language use in populations of users. Both their accounts assume that speakers learn certain patterns of behavior and behave in accord with them. These, they believe, are the “rules” of
language. These rules are basically patterns of inference,
inferences that are thought to yield reliable understanding of the world. Specifically, their views focus on what Sellars called linguistic “practices” and Lewis, “
conventions.” Both notions presuppose linguistic regularities in use.[4]
They suppose that communities of speakers exhibit regular practices and conventions, and that these presumed patterned forms of linguistic behavior constitute the basic principles of semantic (and perhaps even syntactic) compositionality. Given the considerations discussed above, it is not clear why they believe that their approach is likely to lead to some set of inferential patterns genuinely shared across a
population. Wittgenstein (
1953) – a good observer of language use (and enemy of those who would make a theory of language use) – made the point some time ago. People do not play a single game when using language; they do all sorts of things with it. And there does not appear to be a single most fundamental game such as describing the
environment or “telling the truth,” one on which all others are parasitic; this was another of Wittgenstein's points. There is at least one other reason to be worried too. Treating language as a social phenomenon and construing language's rules and principles as if they amounted to constraints on actions produced in public and subject to others’ critical scrutiny focuses on at most 2 to 3 percent of language use. Most language use takes place in the head, and there is no obvious social constraint on how it is used there. So taking either the game-theoretic or the social science route to provide insight into what languages and their uses ‘are’ seems to demand a considerable leap of faith, not a reasonable assessment of a hypothesis's chances for success. To compound the problem, Lewis and
Sellars seem to depend on a combination of these two dubious strategies. They not only choose to pay sole attention to the possibly 2 to 3 percent of
language use that is externalized, but they also assume that people play a single most fundamental ‘game,’ one of constructing a theory of the world, with – presumably – a much smaller segment of that 2 to 3 percent.[5]
They focus on cases where individuals are likely to be careful in what they say and how they express themselves – and not for reasons of politesse or fear of punishment, but because they are construed as constraining themselves to be truth-tellers or protoscientists of a sort. Perhaps a few academics devote a fair amount of their 2 to 3 percent of externalized linguistic behavior to doing this, but I doubt that many other individuals do. Nevertheless, ignoring these rather daunting hurdles and – in the interests of charity – proceeding to take their project seriously, what can be said about its prospects for offering a semantic theory for a natural language? For this purpose, we can look at what they have to say about rules.
Rather than discussing Sellars and Lewis separately, I look primarily at Sellars's views. Lewis and Sellars differ to an extent in specifics (admiration for possible worlds and possible world semantics in Lewis's case, for example), but not in their basic assumptions and strategies. In discussing their views, I ignore their more technical work and focus on fundamental claims. Sellars was introduced to the discussion earlier in a different but related connection, his
adoption of a behaviorist view of language acquisition and the picture of language that goes along with it. What is said there is relevant to the current issue because Sellars, like many others (either explicitly or implicitly), takes behaviorist-connectionist training to be the way to provide individuals in a population with the practices in the use of language that they must learn and respect in order to “master” a language. He treats language's rules and principles as practices that tell individuals how to act, how to produce the kinds of linguistic behaviors that he takes linguistic statements/sentences to be. Specifically,
Sellars treats linguistic behaviors as regular and regulated forms of activity across a population of individuals, activities that respect epistemic and other norms of a linguistic community, where the norms are epistemic in nature, uniform, and induced by training. They concern, among other things, what to say and believe, how to reason, and how to act, given certain sensory inputs. In effect, Sellars treats a linguistic community as a group of individuals who share epistemic practices by virtue of being trained to conform to these practices. Making practices out to be epistemic norms as he does, he also treats the languages of which these practices are the rules as constituting theories of the world. Natural languages, then, come to be seen
(wrongly) as uniform theories of the world shared across a population, and speakers as proto-scientists, although apparently rather poor ones, given Sellars's views of what we are given when trained to speak: commonsense ‘folk’ theory, not a (formally speaking) much simpler particle physics and biology. Sellars's ‘semantic theory’ amounts, then, to the idea that a natural language serves as a public and uniform theory of the world, and that the rules of language are those that are conceived to be reasonable guides to getting around in this world. Lewis's approach is similar.
To clarify: there is nothing wrong with ‘folk theories,’ when understood as parts of the commonsense understanding of the world. Folk biology focuses on organisms and plants, folk physics on movement and effort (and apparently underlies Descartes's contact mechanics – and Galileo's, Huygens's, Newton's, Leibniz's, etc.), and so on. They no doubt reflect patterns of belief and action found not in just certain linguistic populations, but across the species. Plausibly, they have the characters they do because of the natures of the commonsense concepts with which we are endowed, concepts that are expressed in our use of natural language. But these ‘theories’ are not learned, they are innate in the way our concepts are. They by no means offer or underlie the compositional rules/principles of our natural languages. And they are hopeless as naturalistic sciences.
To cut then to the chase, what Sellars has in mind by practices and Lewis by conventions have nothing to do with the kinds of rules and principles that do constitute the syntactic–semantic combinatory principles of natural languages. As Chomsky (
1980/2005) points out in his criticism of Lewis's version of what amounts to the Sellars–Lewis view, “conventions” (only slightly different from Sellars's practices) not only say nothing about why “The candidates wanted each other to win” has about the same meaning as “Each candidate wanted the other to win,” but are silent on why “The candidates wanted me to vote for each other” makes no sense at all. To take another example, drawn from Pietroski (2002), conventions/practices make no sense of why “The senator called the millionaire from Texas” can be understood as saying that the call was from Texas or that the millionaire was from Texas, but not that the senator was from Texas. Any serious theory of
language's ‘rules’ must be able to speak to these elementary syntactic–semantic points, and endless numbers of others. Chomsky's linguistic principles describe and explain these facts and a wide range of others concerning what sentences can and cannot mean. Simple examples like these and the lack of answers from Sellars and Lewis and their followers on these matters (and the absence of theoretical and objective descriptions of the structure of the relevant examples and all others) indicate that trying to make sense of what a language is and how it is related to its understanding by looking to behavioral practices and conventions and by relying on unreasonable assumptions about what we ‘do’ with language is a
failed strategy. It is not a failure of effort: no effort along the lines of what they have in mind, no attempt to modify their views of conventions or practices in some way, or the like, can help. The approach is fundamentally flawed not only when treated as an attempt at natural science, but even as just a reasonably accurate description of what people do with language and the concepts they employ.
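To illustrate the structural point in a simple way, here is a sketch of my own (not Pietroski's or Chomsky's; the bracketings are the familiar ones the examples presuppose): the two available readings of the Pietroski sentence correspond to two different attachment sites for the prepositional phrase, and in neither structure does “the senator from Texas” form a constituent; the missing reading would require the phrase to attach to the non-adjacent subject across the verb phrase, which the structure-building principles do not permit.

# Constituency sketches for "The senator called the millionaire from Texas".
# Nested tuples stand in for phrase-structure trees; purely illustrative.

PARSE_PP_MODIFIES_CALL = (
    ("the senator",),
    (("called", ("the millionaire",)), ("from Texas",)),  # PP attached to the verb phrase
)

PARSE_PP_MODIFIES_MILLIONAIRE = (
    ("the senator",),
    ("called", (("the millionaire",), ("from Texas",))),  # PP attached inside the object
)

def leaves(tree):
    """Collect the words of a (sub)tree in order."""
    if isinstance(tree, str):
        return [tree]
    words = []
    for child in tree:
        words.extend(leaves(child))
    return words

def constituents(tree):
    """Yield the word string of every constituent (every tuple node)."""
    if isinstance(tree, tuple):
        yield " ".join(leaves(tree))
        for child in tree:
            yield from constituents(child)

for parse in (PARSE_PP_MODIFIES_CALL, PARSE_PP_MODIFIES_MILLIONAIRE):
    assert "the senator from Texas" not in set(constituents(parse))
print("Neither available parse has 'the senator from Texas' as a constituent.")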
I do not claim that there are no practices at all. Lexical items’ sound–meaning associations can be seen as practices of a sort, but they are irrelevant: these associations are not rules of use in anything like the required sense. There are some examples of ‘rules of use.’ In English-speaking communities, for example, there is a practice of sorts of saying “hello” when greeting someone. But there is also “hi,” “good to see/meet you,” “howdy,” “hey there,” and several others, including a current favorite in some groups, “dude.” However, there are very few such cases, and the many variations make them not just dubious, but useless as rules. Variations are clearly welcomed; flexibility is even encouraged, proving rewarding for the “spirit” (Kant's term) that it displays. None of these cases can do the job they need to do. For the purpose of constructing a general theory of meaning and use, they appear trivial, and they are. So apparently,
Sellars and Lewis do not ignore the facts entirely, for there are some linguistic practices of a sort. But those practices are irrelevant to constructing not just a science of meaning, but even a reasonably plausible description of phenomena that would support their approaches to language and its meanings. There are no regularities of the sort that they need. That is because they are looking in the wrong place. They should look inside the head, and abandon their focus on language use as well.
What if it turned out that they were at least somewhere near the mark: could a view of natural language use that did observe regularities provide some kind of science of concepts and meanings? Generally, natural languages are nothing like naturalistic theories of the world, with their invented concepts and practitioners who try to be careful in how they employ the symbols of their theories. Natural languages serve entirely different
purposes that not only do not demand determinate uses of the sort found in the practices of mathematicians and natural scientists (the uses that Fregean approaches depend on) but rather support flexibility in use. Natural language
concepts allow considerable scope for freedom in use, and people routinely exercise that freedom, getting satisfaction from doing so. So it is hopeless to even begin to look for a semantic theory for natural languages that presupposes regimentation in use. There is, to be sure, some kind of relationship between what languages offer humans and the ways they understand the world. For the concepts and perspectives natural languages offer allow us to develop ways to understand the world (the commonsense one, at least) and ourselves, and much else
besides. And no doubt this contributes to making language meaningful to us; it makes it useful for solving practical problems. But there is nothing in this fact for a theory – a science – of linguistic meaning.
As for the connectionist's claim to turn a Sellars–Lewis view of language and its embodiment in the brain into what purports to be a natural science (cf. Morris, Cotterell & Elman
2000), consider Chomsky's criticism of the much-hailed success of a recent connectionist form of Sellars's
behaviorist-connectionist account of language learning. By way of background, the connectionists have learned some lessons since Sellars's time. Unlike Sellars (and Lewis), they have in recent years come to devote effort to trying to show that their training procedures operating on what they take to be
computer models of ‘plastic’ neural nets (“simple recurrent networks” or SRNs in Elman's case) can yield behavioral analogues of Chomsky's linguistic principles. It is not obvious why. Their efforts are puzzling in the way the Sellarsian and Lewisian efforts were, but also for another reason. In choosing what to train SRNs to produce in the way of outputs, they choose behaviors that conform to some rule statement or another that has appeared in the work in the Chomskyan tradition. They devote considerable time and experimental resources to trying to get a computer model of a plastic neural net (more realistically, very many of them going through massive training sessions in various ‘epochs’ of training, sometimes with the best performers subjected to other epochs in an attempt to simulate a [naïve: see
Appendix II] view of evolution, and so on) after a long process of training to duplicate in its/their outputs some range of a set of ‘sentences’ (thought of here as sets of binary code, not as internal expressions) chosen from a linguistic corpus and thought to represent behavior that accords with the chosen rule.
statements. They refuse to treat the rule/principle they focus on at a given time as a rule/principle of derivation/computation of a natural ‘organ,’ one that does not itself produce linguistic behavior but that provides anyone who has such a system with the means to derive any of the infinite number of expressions that their I-languages make possible. They seem to think that the
facts of language acquisition and creative language use must be wrong, and while they do take Chomskyan rules/principles into account in the superficial way indicated, their concern is to attempt to show that neural nets can be trained to produce behaviors that they believe indicate that the net has ‘learned’ the rule/principle. One way to measure how successful they have been in their efforts is found in Elman's (
2001) claim that he got a neural net to deal with the common phenomenon of nested dependencies in natural
languages. An example of nesting is center-embedded clauses in sentences; dependencies include subject-verb number agreement. They are important, for dependencies are closely related to linguistic structures and to constraints on them; they play a central role in syntax-semantics. As for Elman's claim to be successful, Chomsky remarks (in comments in personal correspondence that also appear in the third (2009) edition of
Cartesian Linguistics): “No matter how much computer power and statistics . . . [connectionists] throw at the task [of language acquisition], it always comes out . . . wrong. Take [Jeff] Elman's . . . paper[s][6] . . . on learning nested dependencies. Two problems: (1) the method works just as well on crossing dependencies, so doesn't bear on why language near universally has nested but not crossing dependencies. (2) His program works up to depth two, but fails totally on depth three. So it's about as interesting as a theory of arithmetical knowledge that handles the ability to add 2+2 but has to be completely revised for 2+3 (and so on indefinitely).” Details aside, the point is clear. Those convinced that
language is a learned form of behavior and that its rules can be thought of as learned social practices, conventions, induced habits, etc. that people conform to because they are somehow socially constrained, are out of touch with the facts. They are so because they begin with assumptions about language and its learning that have nothing to do with natural languages and their acquisition and use, refuse to employ standard natural science methodology in their investigation, and so offer ‘theories’ of language and its learning that have little to do with what languages are and how they are used.
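For readers who want the nesting/crossing contrast in Chomsky's remark made explicit, here is a small sketch of my own (it is not Elman's model or anyone's published code): nested dependencies of the center-embedding kind can be checked with nothing more than a stack, at any depth, whereas crossing dependencies defeat that simple mechanism; a training method that handles both ‘just as well’ therefore says nothing about why natural languages near-universally show the nested pattern.

# Toy illustration of nested vs. crossing dependencies. Lowercase letters are
# 'subjects,' uppercase letters their 'verbs'; a dependency links a letter to
# its matching case partner.
#   nested:   a b c C B A   (like center embedding: last opened, first closed)
#   crossing: a b c A B C   (dependencies cross one another)

def respects_nesting(string: str) -> bool:
    """True if every 'verb' closes the most recently opened 'subject',
    i.e. the dependencies are nested and so checkable with a stack."""
    stack = []
    for symbol in string:
        if symbol.islower():
            stack.append(symbol)
        elif not stack or stack.pop() != symbol.lower():
            return False
    return not stack

print(respects_nesting("abBA"))    # True: nested, depth two
print(respects_nesting("abcCBA"))  # True: nested, depth three, same rule
print(respects_nesting("abcABC"))  # False: crossing dependencies defeat the stack

The point of the contrast is Chomsky's: a rule of this kind works at any depth for the nested case, so a learner that succeeds only up to depth two, or that succeeds equally on the crossing pattern, has not acquired the rule at all.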
