Here is one way to conceive of terms such as CONCRETE and ANIMATE. Because they contribute to what Chomsky calls a “perspective” available to “other systems” at the semantic interface SEM, one can think of them in any given case as a contribution to one of a potentially infinite number of “ways of understanding” – understanding where language contributes. Sentences (“expressions”) express these ways; that is, sentences in the technical sense offer in structured form at SEMs the semantic features of the lexical items of which they are composed. One can think of semantic feature terms as something like adverbial descriptions of how a person can think or conceive of ‘the world’ (including presumably a fictional or discourse or story or abstract world, however minimal) as presented by other systems in the head (cf. Chomsky 1995a: 20). The precise way in which semantic features do this is by no means clear; that is for a theory to decide, although I make some suggestions in Appendix XII. And there is danger inherent in saying that these features offer ways in which persons can understand, for persons do not figure in natural sciences; the way(s) the features ‘work’ at a semantic interface by providing ‘information’ to other systems is presumably unconscious; and “understand” is by no means a well-defined theoretical term. For the moment, though, it suffices: the relevant features surely have something to do with how people comprehend and think – with “understand” taken here as a general term for all such cases. Edging a bit into the realm of theory, then, perhaps we can think of the contribution of lexical items' (LIs') semantic features as having something to do with the way in which they configure other systems – or perhaps provide instructions to them, offering the semantic information the features constitute.
A caveat: it is a mistake to think of these features as properties of things ‘out there,’ rather in the way that Fodor (2008), speaking of features, does. They might appear to have that role in the case of a sentence that a person uses to refer to something, at least where this sentence is held-true of the thing(s) to which that person refers. But referring and holding-true are both acts that a person performs, not by any means something that semantic features ‘do.’ Further, while sentences used to refer and hold-true may have a prominent place in the thoughts of those who would like to maintain that this use is both dominant and paradigmatic, it is neither. And emphasis upon truth-telling distracts attention from the far more prevalent uses of language in thought and imagination, speculation and self-berating, and the like; from the primary point that ‘telling the truth’ is at best one of many ways in which a semantic feature can contribute to understanding; and from the fact that where semantic features do contribute in any of these ways or others, their contribution is constitutive of a way of understanding, and thereby possibly of ‘experience’ (cf. Chomsky 1966/2002/2009).
Continuing, what is the status of current terms such as ABSTRACT? One can think of these as provisional theoretical terms. Think of them as descriptive in the way indicated: not describing things in the world, but describing ways to understand. In addition, conceive of them as rather like dispositional terms in that – like the dispositional term “soluble” said of salt – they do not themselves offer explanations of how semantic features ‘work,’ but describe by a discernible or noticeable ‘result’: salt dissolves when placed in water, and ABSTRACT yields an understanding of an ‘object’ as abstract. When hearing “George is writing a book on hydrodynamics and it will break your bookshelf,” you ‘see’ George's book as at first abstract, and then concrete. (Presumably, your LI for book ‘contains’ both ABSTRACT and CONCRETE.) Looking at terms such as ABSTRACT in this way, one can take them to be provisional theoretical terms that, as part of a naturalistic theoretical effort, have something like the status of dispositional terms: terms that can be replaced, in the vocabulary of an advanced science of semantic features, by terms that describe what a semantic feature is and – with the aid of the theory – explain how features ‘work,’ how they and lexical items are acquired, and the like.
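By way of illustration only (and not as a claim about mechanism), the dispositional reading can be pictured with a small Python sketch in which a lexical item is a bundle of feature labels and the occasion of use selects which feature is brought to bear; LexicalItem, understand_as, and the feature strings are all invented for the example.

```python
# A toy picture only: a lexical item as a package of semantic-feature labels.
# The 'selection' step below stands in for whatever the "other systems"
# beyond SEM actually do; it is not a theoretical proposal.

from dataclasses import dataclass


@dataclass(frozen=True)
class LexicalItem:
    """A morphological root paired with its package of semantic features."""
    root: str
    features: frozenset


BOOK = LexicalItem("book", frozenset({"ABSTRACT", "CONCRETE", "ARTIFACT"}))


def understand_as(li: LexicalItem, occasion: str) -> str:
    """Return the feature a given occasion of use foregrounds."""
    if occasion == "writing it" and "ABSTRACT" in li.features:
        return "ABSTRACT"          # 'George is writing a book on hydrodynamics'
    if occasion == "it breaking a shelf" and "CONCRETE" in li.features:
        return "CONCRETE"          # '. . . and it will break your bookshelf'
    return "UNDERSPECIFIED"


print(understand_as(BOOK, "writing it"))           # ABSTRACT
print(understand_as(BOOK, "it breaking a shelf"))  # CONCRETE
```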
Color science offers an analogy of sorts. When we (as we say) “see a green patch” (or better, I suspect, to capture the adverbial character: “sense greenly patchly,” awful though that sounds), the greenness is actually contributed by our visual system/mind in response (usually) to photon impingements on our arrayed retinal cones. Our minds, through the operations of the visual system, configure shape-, location-, and color-sensation in colored ways, a specific green being one of those ways. Any specific green and its contribution to ‘color experience’ are captured in a complex of theoretical terms – hue, brightness, and saturation – where these are scalars in a theory of the operations of the visual system, and the theory shows how and why specific arrays of firing rates of retinal cones, subjected to various forms of calculation, yield triples with a specific set of HBS values for a specific ‘point’ in a retinotopic ‘visual space.’ A specific set of values of hue, brightness, and saturation describes and explains ‘how a person sees in a colored way’ on a specific occasion. The analogy to semantic features should be clear, but it is a limited one. For one thing, language, unlike vision, often – perhaps in the great majority of cases – operates ‘offline’: it does not depend heavily, as vision does, on stimulation from other systems in the head or on ‘signals’ from the environment. Vision can go offline too: it seems to do so in the case of imagination and dreams, but presumably in such cases this is due to internal stimulation, and the degree to which it can go offline is nothing like what one can, and routinely does, find with language. Another disanalogy: language is what Chomsky calls a “knowledge” system; vision is not. LIs store semantic and phonological information that – especially in the latter case – configures how we understand ourselves, our actions, and our world(s), not just a specific form of fully internal ‘sensory content.’
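The formal shape of the color analogy – a triple of scalars describing how one sees colorwise at a point – can be suggested with a deliberately crude sketch. Real color science does not compute hue, brightness, and saturation from cone firing rates this way; treating normalized (L, M, S) responses as if they were RGB values is an assumption made purely for illustration.

```python
# Crude stand-in only: the point is just that a triple of scalar values can
# serve as a description of 'seeing in a colored way' at a retinotopic point.

import colorsys


def toy_hbs_from_cones(l_rate: float, m_rate: float, s_rate: float):
    """Map normalized long-, medium-, and short-wavelength cone firing rates
    (each in [0, 1]) to a toy (hue, saturation, brightness) triple.
    Treating (L, M, S) as (R, G, B) is an illustrative simplification."""
    return colorsys.rgb_to_hsv(l_rate, m_rate, s_rate)


# A 'greenish' pattern of stimulation: medium-wavelength cones dominate.
print(toy_hbs_from_cones(0.25, 0.80, 0.30))
```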
Notice that a view of this sort avoids any need to somehow link ‘words’ to semantic values, properties, elements in a language of thought, or anything else of the sort. Because of that, there is an immediate advantage: there is no need to add to a theory of linguistically expressed meanings an account of what meanings are, no need for a theory to explain how the link to elements in a LOT takes place, and no need to tie or “lock” (Fodor's term) elements in a LOT to properties ‘out there.’ Issues of acquisition are located where they belong, in an account of semantic features – of where they ‘come from’ and how they come to be assembled. Because they are located there, it becomes much easier to understand how linguistically expressed concepts could come to be so readily acquired and accessible to anyone with the right ‘equipment’ in their heads: assuming that the features are universal, and so too the assembly mechanism, the fact that a few clues usually suffice to yield a reasonable grasp of a concept that one has not needed before is much easier to explain. Further, one gets what Chomsky takes to be an advantage: a parallel between the way(s) in which naturalistic theories of language deal with phonological and phonetic features and the ways in which they ‘work.’ The parallel is useful in indicating to convinced externalists that dropping the myths of referentialism and representationalism does not make it impossible for humans to communicate.
There are serious issues to resolve on the way to a theory of lexical semantic features and how they are acquired and do their work. One is whether the concepts that play a role at SEM are “underspecified,” allowing for ‘filling out’ by other systems. Or are they perhaps over-specified, so that some pruning is required at SEM? Another, related issue is whether there will prove to be a need to assign some uniquely identifying feature to a specific concept's feature set in order to distinguish that set from the set for another concept. If such a feature were required, one could ask why it alone could not serve to individuate a concept. A third is whether, during the derivation of a sentential expression, one can allow for insertion (or deletion) of features. Chomsky seems to think so in (2000: 175 f.), where he notes that LIs such as who and nobody yield restricted quantifier constructions at SEM, and that other LIs such as chase and persuade seem to demand a form of lexical composition in which a causal action element (for persuade, “cause to intend”) and a resultative state (x intends . . .) are composed in the course of a derivation.[4]
Nevertheless, he remarks (2000: 175) that for “simple words” it is plausible to speak of their features simply being ‘transported’ intact to SEMs. Reading “simple words” as something like candidates for morphological lexical stems of open-class words (not formal elements, such as formal versions of “of,” “to,” plus TNS . . .), the point seems to be that what might be called “morphological roots” such as house and real (each meaning the relevant cluster of semantic features, represented by HOUSE and REAL) are neither composed nor decomposed during the course of a derivation/sentential computation. I assume so in what follows. A view of sentential derivation that essentially builds this idea into a picture of the morphological and syntactic operations that ‘take place’ in the course of a derivation is available in Borer (2005). In part to keep lexical semantic roots together as “packages” of semantic features, I adopt her picture (see McGilvray 2010).
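The contrast just described can be pictured schematically: a “simple word” such as house arrives at SEM as a single intact feature package, while a causal verb such as persuade might be required to appear there as a composed structure pairing a causal element with a resultative state. The nested-tuple notation and the particular features below are invented for illustration; they are not anyone's actual proposal.

```python
# Invented toy notation: a frozenset for an intact root package, nested
# tuples for structure composed in the course of a derivation.

HOUSE = frozenset({"ARTIFACT", "DWELLING", "CONCRETE"})  # transported intact


def persuade_at_sem(agent: str, patient: str, complement: str):
    """'x persuades y to Z' rendered as cause-to-intend: CAUSE(x, INTEND(y, Z))."""
    return ("CAUSE", agent, ("INTEND", patient, complement))


print(sorted(HOUSE))
print(persuade_at_sem("Mary", "John", "to leave"))
# ('CAUSE', 'Mary', ('INTEND', 'John', 'to leave'))
```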
There is a good reason to adopt this picture, assuming it is not incompatible with the facts, so far as they are known. The reason lies in an argument that Fodor (1998) employed to reject the view that stereotypes could serve the purposes of meaning compositionality in the construction of sentences. For example, while most people have stereotypes for MALE (as used of humans) and for BUTLER, combining these stereotypes is very unlikely to yield a stereotypical male butler as the meaning of the two put together. More generally, Fodor argues against all ‘decomposed’ accounts of concepts, with the exception of necessary and sufficient conditions – but, if there were such things, they would serve the relevant purposes, so far as he is concerned, only because they are assumed to determine their denotations, which for Fodor (as “contents” of concepts, in his understanding of them) are ‘atomic.’ It is these that are supposed to do the work of compositionality. There is, however, a much simpler alternative account that remains entirely within syntax and morphology (inside the core of the language faculty) and does not require moving to properties of things ‘out there.’ It consists merely of pointing to the fact that the conceptual packages associated with morphological stems remain intact until they reach SEM. That is all the ‘atomicity’ that is required. Taking this route places on morphology and syntax the burden of describing and explaining how and why a package comes to be nominalized, verbalized, or made into an adjective, why and how a specific nominal comes to be assigned a role as agent, and so on.
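A minimal sketch of the alternative being proposed: structure is built over whole lexical items, and each root's feature package arrives at SEM exactly as it was listed, so no blended ‘male-butler stereotype’ is ever computed. Root, Phrase, merge, and the feature labels are invented here for illustration; merge is just pair-formation, not a serious rendering of syntactic Merge.

```python
# Toy sketch: combination never opens the feature packages; the packages
# visible at the interface are exactly those listed in the lexicon.

from dataclasses import dataclass
from typing import Union


@dataclass(frozen=True)
class Root:
    form: str
    features: frozenset  # the root's semantic-feature package, kept whole


@dataclass(frozen=True)
class Phrase:
    left: "Node"
    right: "Node"


Node = Union[Root, Phrase]

MALE = Root("male", frozenset({"HUMAN", "MALE"}))
BUTLER = Root("butler", frozenset({"HUMAN", "SERVANT", "HOUSEHOLD"}))


def merge(a: Node, b: Node) -> Phrase:
    """Combine two syntactic objects; neither one's features are altered."""
    return Phrase(a, b)


def packages_at_sem(node: Node) -> list:
    """Collect the intact feature packages visible at the interface."""
    if isinstance(node, Root):
        return [node.features]
    return packages_at_sem(node.left) + packages_at_sem(node.right)


print(packages_at_sem(merge(MALE, BUTLER)))
# Two intact packages; no blended 'male butler' stereotype is ever formed.
```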
The results of computation will be grammatical in some intuitive sense, although there is no guarantee that they will be readily interpretable. That is, however, a harmless result: we humans use what we can, and overproduction actually aids the interests of the creative aspect of language use. Some views of causal verbs such as persuade and build might demand that what appears at SEM is a syntactically joined expression that includes (say) CAUSE and INTEND (for persuade) at one or more SEMs; while this requires treating causal verbs as complex, it is a harmless result too. In fact, it has the advantage of making it apparent that some analytic truths are underwritten by syntax, without recourse to pragmatic or like considerations. Fodor and Lepore (2002 and elsewhere) have objections to this kind of move, but those objections are, I think, avoidable.
As for over- or underspecification: that will have to await a fuller account of the theory than can be offered now. There are, however, some considerations that argue in favor of overspecification. Take metaphorical interpretation, a very common phenomenon in natural language use. A plausible account of metaphor holds that in interpretation one employs a form of pruning that applies one or a few semantic features of an LI to characterize something else. To take a simple example, consider the sentence John is being a pig said in a context in which seven-year-old John is at a table eating pizza. To characterize John as a pig is to take – likely in this case – the feature GREEDY from PIG and apply it to JOHN. Pruning of semantic features, where necessary, can perhaps be made the responsibility of ‘other systems’ on the other side of SEM and, ultimately, of the user and the interpreter (or perhaps just ‘the user,’ abandoning the work of other systems, for they would have to be far too context-sensitive to yield the kinds of results that a computational theory could possibly capture). For a procedure like this to make sense at all, one needs at SEM at least a rich set of features. Arguably, much the same holds for what are often called literal interpretations, where – again – there may be a need for pruning.
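The pruning idea can be pictured with another small sketch: all of the features of PIG are available at SEM, and interpretation keeps only those that the occasion foregrounds. The feature labels and the selection step are invented; nothing here models the ‘other systems’ or the user.

```python
# Toy sketch of 'pruning' in metaphorical interpretation: select from the
# predicate's full feature package only what the occasion of use foregrounds.

PIG = frozenset({"ANIMAL", "SNOUTED", "GREEDY", "MESSY"})


def prune_for_metaphor(predicate_features: frozenset, foregrounded: frozenset) -> frozenset:
    """Keep only the foregrounded features; the rest of the package is set
    aside for this use, not deleted from the lexical item."""
    return predicate_features & foregrounded


# 'John is being a pig', said of a seven-year-old attacking a pizza:
print(prune_for_metaphor(PIG, frozenset({"GREEDY"})))  # frozenset({'GREEDY'})
```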
There are several efforts to say what the features are, with varying success. A notable one is found in Pustejovsky (1995). There are, I think, some problems with the computational framework that Pustejovsky adopts (see McGilvray 2001), but the feature descriptors – thought of as contributions to a theory of concepts – are quite extensive, and many are illuminating. Nevertheless, it is likely that there is a long way to go – assuming, at least, that linguistically expressed concepts are specified sufficiently fully to distinguish any one from any other. As for explanatory adequacy – a solution to Plato's Problem for lexical concept acquisition, for example – one must seek that too. By way of a start, one can look to evidence gathered by Kathy Hirsh-Pasek and Roberta Golinkoff (1996), Lila Gleitman (e.g., Gleitman & Fisher 2005), and others concerning the course of development of concepts and lexical items in children, including – in the case of Hirsh-Pasek and Golinkoff – pre-linguistic infants. The issue of concept acquisition is of course distinct, in part, from the issue of lexical acquisition. For it is obvious that children have (or are able quickly to develop) at least some versions of PERSON and of action concepts such as GIVE, EAT, and so on, plus TREE, WASH, and TRUCK, at a very early age. They appear to understand many things said in their presence before they are able to articulate, and they clearly have an extremely early capacity to discriminate at least some things from others. Perhaps they do not have BELIEF and KNOW before they can articulate such concepts in language; perhaps, in effect, one needs at least some capacity to articulate before being able to develop such concepts. These are open issues. It is clear, however, that children do develop some – remarkably many – concepts quickly, and that some of these seem already to have at least some of the characteristics that are characteristic of our (adult) conceptual schemes. Thus, as with the concept PERSON, the child's concept DONKEY must have a feature amounting to something like “psychic continuity.” As the responses of Chomsky's grandchildren to a story discussed in the main text reveal, the story's donkey turned into a stone remains a donkey – and the same donkey – even though it now has the appearance of a stone. This also indicates that the feature ‘has psychic continuity’ must not only be innate, but that there must be some mental ‘mechanism’ that includes this feature in a child's concepts of humans, donkeys, and no doubt other creatures.