The Science of Language
Noam Chomsky
Appendix IX: Simplicity
Seeking simplicity (elegance, austerity, beauty, optimality . . .) for one's theory of a system or phenomenon, it has often been pointed out, is a crucial aspect of scientific investigation and has a prominent place in natural science methodology. Some aspects of it are discussed elsewhere in this volume: an insistence on seeking ‘atoms’ or what Newton called “corpuscles,” Galileo's focus on inclined planes and not on how plants grow, and Goodman's nominalism and constructive systems, as well as his effort to find a completely general conception of simplicity. Simplicity of several sorts (theory-general, computational, optimization, efficiency) is exhibited – remarkably, given earlier work and the complications that were a part of the ‘format’ picture of grammars found until the introduction of the principles and parameters framework – in Chomsky's Minimalist Program conception of the language faculty. This conception, as indicated, suggests that linguistic structure and possible variants in it amount to Merge and the developmental constraints built into parameters, where these could be based on the genome or third factor contributions. It also suggests that the human language system is a perfect (or as close to perfect as possible) solution to the problem of linking sounds and meanings over an unbounded range – or at the least, putting together complexes of concepts that we can think of as thoughts, or perhaps as language's contribution to thoughts. If the minimalist approach continues to make progress, we can with some confidence say that the language faculty appears to be an oddity among biological systems as they are usually conceived. They are usually kludges: “bricolage,” in François Jacob's phrase (cf. Marcus 2008). They are seen to be the result of the accidents of history, environment, and adventitious events: they are functioning systems that come out of millennia of gradual change, as conceived in the usual selectional story about evolution. However, the language faculty appears to be more like a physical system, one that exhibits elegance and simplicity – for example, atomic structure and the structured table of elements that it underwrites.
Getting this kind of result might have been a desideratum in Chomsky's early efforts to construct a theory of language, but it could not have been more than a dream at the time. The focus of early work (e.g., Aspects of the Theory of Syntax) was to find a theory of language that would be descriptively adequate – that is, provide a way to describe (with a theory/grammar) any possible natural language – while also answering the question of how a child could acquire a given natural language in a short time, given minimal input which is often corrupt, and without any recourse to training or ‘negative evidence.’ The acquisition issue – called in more recent work “Plato's Problem” because it was the problem that confronted Plato in his Meno – was seen as the task of providing an explanatorily adequate theory. Taking a solution to the acquisition problem as the criterion of explanatory adequacy might seem odd, but it is plausible: if a theory shows how an arbitrary child can acquire an arbitrary language under the relevant poverty of the stimulus conditions, we can be reasonably confident that the theory tracks the nature of the relevant system and the means by which it grows in the organism. Unfortunately, though, early efforts to meet descriptive adequacy (produce a theory of language with the resources that make it able to describe any of the thousands of natural languages, not to mention the indefinitely large number of I-languages) conflicted with meeting explanatory adequacy. If we all had a single language and its structure were simple so that we could understand how it developed quickly in the human species, and if our theory of it and of how it develops in an individual within the relevant time constraints were fully adequate, we would have a theory that meets both requirements. However, this counterfactual has nothing to do with the facts.
It was thought at the time that the only route available to the theoretician was to conceive of the child as being endowed with something like a format for a possible language (certain conditions on structure, levels of representation, and possible computations) and a relative optimization routine. The child, endowed with a format for a possible language and given input from his or her speech community, would automatically apply this routine so that the rules of his or her language faculty somehow converged on those that contribute to speech behaviors in the relevant community. The format would specify ways of ‘chunking’ linguistic data (word, phrase) and tying it together (rule, general linguistically relevant computational principles such as what was called “the principle of the cycle” . . .), and the routine would yield a measure of simplicity in terms of, say, the number of rules needed to encompass the data. This routine, one that is internal to the system as conceived by the theory, yields a way of speaking of how one grammar is better than another, neighboring one: ‘better’ is cashed out in terms of a relative simplicity measure. Chomsky called this routine an “evaluation” procedure. The child's mind is conceived to have some devoted (language-specific) relative optimization principle available, one that within the relevant time period comes up with the (relatively) best theory (grammar) of the rather thin data set that the mind is offered. It was an intuitively obvious way to conceive of acquisition at the time, for – among other things – it did appear to yield answers and was at least more computationally tractable than what was offered in structural linguistics, whose alternatives could not even explain how the child managed to get anything like a morpheme out of the data. But the space of choices remained far too large; the approach was theoretically implementable, but completely infeasible in practice. It clearly suffers in comparison to the principles and parameters approach that replaced it. Moreover, it blocked progress by making it very difficult to conceive of how the specification of a format, and of UG as conceived this way, could have developed in the species. UG – thought of as that which is provided in the way of language-specific genetic information – would have to be rich and complex, and it was difficult to see how something both devoted to language and rich and complex could have developed in the human species.
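To make the idea of an evaluation procedure concrete, here is a minimal sketch in Python – an illustration only, not Chomsky's formalism or any worked-out measure from the period. The grammar representation (lists of rewrite rules) and the cost function (total symbols used to state the rules) are invented for the example; the procedure simply ranks candidate grammars assumed to cover the same data and returns the simplest.

    # A toy "evaluation procedure": rank candidate grammars that cover the
    # same data by a crude simplicity measure (total symbols across rules).
    # This is an illustration of the general idea only; the grammars and the
    # cost function are made up for the example.

    def simplicity_cost(grammar):
        """Lower is simpler: count the symbols needed to state each rule."""
        return sum(1 + len(rhs) for _, rhs in grammar)  # 1 for the left-hand side

    def evaluate(candidates):
        """Select the candidate grammar with the lowest cost (ties arbitrary)."""
        return min(candidates, key=simplicity_cost)

    # Two hypothetical grammars, both assumed to cover the same small corpus:
    g1 = [("S", ["NP", "VP"]), ("NP", ["D", "N"]), ("VP", ["V", "NP"])]
    g2 = [("S", ["NP", "VP"]), ("NP", ["D", "N"]), ("NP", ["D", "A", "N"]),
          ("VP", ["V", "NP"]), ("VP", ["V"])]

    print(evaluate([g1, g2]))  # returns g1, the grammar stated with fewer symbols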
To solve the acquisition problem and meet the condition on explanatory adequacy (understood as solving Plato's Problem), it is far better to have a theory that provides something like very few universal, invariant principles, plus language-universal acquisition algorithms that automatically return the ‘right’ language/grammar, given a set of data. That would be a selection procedure, a procedure that yielded a single solution without weighing alternatives. It – or a reasonably close approximation – became a prospect with the introduction in the late 1970s and early 1980s of the principles and parameters view of the language faculty. Intuitively, the child is provided through UG at birth with a set of principles – grammatical universals or rules common to all languages. Among these principles are some that allow for options. The options are parameters. The parameters – conceived originally as options ‘internal’ to a principle – can be ‘set’ with minimal experience (or at least, with the amount of experience actually afforded children in the relevant developmental window). (See Appendix VIII on parameters and their role.) Setting them one way as opposed to another would determine one class of possible natural languages as opposed to another. This was real progress on the road to meeting explanatory adequacy. Moreover, with the Minimalist Program's growing acceptance of the idea that Merge is all that one needs in the way of an exceptionless principle, and the further suggestion that parameters might even amount to general constraints on development constituted by – and set by – the non-biological factors included in what Chomsky calls “third factor” contributions to language growth and the shape that a language takes, the burden placed on the language-specific instruction set included in the human genome becomes less and less. Maybe the only genetically specified language-specific contribution is Merge. If this were the case, it would be much easier to understand how language came to be introduced into the species at a single stroke. It would also make it easy to understand how and why language acquisition is as quick and automatic as it appears, while allowing for different courses of development. And it would allow linguists such as Chomsky to begin to raise and provide tentative answers to questions such as what is biologically crucial to language. We would begin to have answers to ‘why things are the way they are.’
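By way of contrast with the evaluation procedure sketched above, parameter setting can be pictured – again only as a hedged toy sketch, not as any actual acquisition model – as flipping a small number of switches directly from the data, with no weighing of whole candidate grammars against one another. The ‘head-initial’ switch, the input format, and the threshold below are all invented for illustration.

    # Toy sketch of a "selection procedure": a binary parameter set directly
    # from input, with no comparison of alternative grammars. The parameter
    # name, input format, and threshold are invented for the example.

    def set_head_parameter(utterances):
        """Return True (head-initial) if verbs tend to precede their objects."""
        head_initial = sum(1 for first, _ in utterances if first == "V")
        return head_initial > len(utterances) / 2

    # Minimal made-up input: verb-object order as heard by the child.
    english_like = [("V", "O"), ("V", "O"), ("V", "O")]
    japanese_like = [("O", "V"), ("O", "V"), ("O", "V")]

    print(set_head_parameter(english_like))   # True  -> head-initial setting
    print(set_head_parameter(japanese_like))  # False -> head-final setting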
With this in mind, where Chomsky speaks of biolinguistics (a term first introduced by Massimo Piattelli-Palmarini in 1974 as the title for a joint MIT-Royaumont Foundation conference, held in Paris), perhaps we should speak instead of “biophysical linguistics” or perhaps “bio-compu-physical linguistics,” so that it becomes clear that the set of possible natural languages and I-languages depends not just on genetic coding but also on other factors – all, though, conceived of as somehow built into nature and the ways in which it permits development/growth. And if UG is thought of as what we are provided by biology alone (i.e., genomic specification), perhaps UG becomes nothing but the specification for Merge.
Interestingly, the principles and parameters framework seems to allow us to abandon the theory-internal conception of simplicity that played such an important role in early efforts. If the child's mind knows what the switches or options are, relative optimization over simplicity plays no role. You can think of the language acquisition matter as solved (at least for narrow syntax) and turn to other explanatory matters. That is no doubt part of the reason why in a recent paper Chomsky speaks of minimalism as going “beyond explanation” – part of the reason, not the whole, for third factor considerations appear to begin to allow answers to questions concerning why principle X holds rather than alternatives Y, Z . . . Explanation in the sense of solving Plato's Problem remains crucial, of course, but with parameters, solving Plato's Problem need no longer be the single, central goal of linguistic explanation.
Theory-general conceptions of simplicity continue, as they have for centuries, to guide the scientist's (not a child's mind's) construction of theories of various sorts in various domains, of course, including the linguist's theories of linguistic phenomena. And in the theory-general domain, it is hardly likely that nature has provided the scientist with an automatic selection device that does the job of coming up with a good theory of whatever phenomena the scientist aims to describe and explain. We do seem to have something, though; we have what Descartes called “the light of nature,” and what Chomsky calls the “science-forming capacity.” It is a gift given to humans alone, so far as we know, although not a gift to be attributed to God, as Descartes suggested. It is somehow written into our bio-physical-computational natures. Because of this capacity, we can exercise what Peirce called “abduction” and what contemporary philosophers are fond of calling “inference to the best explanation.” It is very unlike other kinds of inference; it is more like good guessing. Probably some internal operation that seeks simplicity of some sort or sorts is a part of it. In any case, with it and other mental contributors, we miraculously but typically manage to converge on what counts as the better/improved description or explanation for some set of phenomena.
Appendix X: Hume on the missing shade of blue and related matters
Getting a better way to look at Hume's missing shade of blue problem requires abandoning Hume's very strong empiricist principles. His color problem, and the more general issue of novel experience and novel judgment, can only be dealt with satisfactorily by appealing (as Chomsky suggests) to theories of the internal systems that yield the kinds of ways in which we can cognize, and the limits that those systems set. It is clear that Hume was aware of the general issue; as indicated, he recognized that the mind can (and does) understand and make judgments about novel moral circumstances. It is also clear that he recognized that the limits of our capacities to understand and experience must be set by internally (and – while he did not like the fact – innately) determined ‘instincts.’ Further, he thought that understanding how these instincts work lay outside human powers. But on that, he was quite obviously wrong. Through the computational sciences of the mind, they are now coming to be understood. As Chomsky emphasizes elsewhere in our discussions, one of the aims of cognitive science is coming to understand the natures of these cognitive instincts.
Note that modern theories of color and other visual artifacts (borders, shading, depth . . .) assume that these are products of internal machinery that both yields them and, by having specific ranges and domains, sets limits on what can be sensed by the human visual system. (Bees can and do respond to photon input in the ultraviolet energy range with what we must assume are some kinds of internal visual representation, presumably something like colors. We, however, cannot respond to, nor produce [‘represent’], colors as a result of this kind of stimulation.)[1] While these systems are not recursive in the way that the language system is and do not yield discrete infinities of output, it is still plausible to speak of the color system of the human mind ‘generating’ colors and other visual artifacts by relying on algorithms that have no ‘gaps’ of the sort Hume pointed to, either in their ranges or in their output domains. They have no problems with novel input and pose no puzzles about how a novel output could be produced. Hume's specific puzzle supposes novel output without novel input, of course, but it is not at all clear how he could even pose his puzzle in actual cases. One reason, as we now know, is that the human visual system is capable of producing between 7.5 and 10 million different colors – that is, of yielding that many discriminable combinations of hue, brightness, and saturation. What, then, would count as a Humean unique color? How would it be specified or individuated without a well-developed theory of what the human visual system can produce? How would one decide whether a person's system was or was not producing that specific color when presented with an array of closely matched stimuli? How does one take into account fatigue, color blindness, accommodation, etc.? Hume would now be able to answer these questions, but unfortunately for his empiricist views and his reluctance to believe that one can investigate mental instincts, he could answer them – and pose them in a reasonable way – only because a plausible theory of the internal instinct that yields colors is now in place. Given the theory and its success, one can even ask how seriously we should take philosophical thought experiments like Hume's. Surely at this stage, the existing theory counts as a better guide to which questions about what we can see and experience it is reasonable to continue to raise.
Hume's insight that our cognitive efforts are largely a matter of instinct now has a great deal of evidence in its favor. Existing theories of vision and language indicate that he was on the right track in pointing to instinct as the source of our mental operations – pointing, that is, to the automatically developing biophysical machinery that makes discrimination in various modalities, and judgment, possible. The point generalizes: we can engage in science at all only because we can rely on some kind of ‘instinct’ that offers us what Peirce labeled “abduction.” Chomsky makes this point in the main text.
There are other lessons in this detour into color. An obvious one is that the internalist approach – approaching the issue of what the human cognitive system can or cannot do, in this and likely other cases, by looking inside the head and constructing a theory of how a system operates and develops – is supported by points like these. Another is that theory, with the abstraction and simplification that is characteristic of and likely necessary to theory-construction, trumps lists and compilations of ‘raw data.’ Data can be understood – and, as the color case indicates, really only gathered – when there is a theory in place that is making progress. Further, as with language, so too with color: commonsense views of both not only can be, but are in fact, misleading. If you want to know what a language or a color is – or rather, want to have an objective and theoretically viable conception of a language or a color – look to how the best extant theories individuate them. In the case of a language, look to how to specify an I-language; in the case of color, look to triples of hue, brightness, and saturation – or rather, to the internalist theory that makes these the dimensions along which colors vary.
[1] No one should assume that there is a one-to-one matching of color experiences and spectral inputs. In fact, one of the most convincing reasons to construct a theory of the internal operations of the visual system is that they seem to modify and ‘add’ a great deal to the input, to the extent that one should abandon the idea that vision somehow accurately ‘represents’ input and its distal cause/source. For example, three ‘monochromatic’ (single-wavelength) light sources of any of a range of different wavelengths (so long as they provide light within the normal input range of each of the three human cone systems) can be varied in their intensity and combined to produce the experience of any spectral color. Three fixed wavelengths, any color. If so, at some risk of confusion, we can say that colors are in the head. They are, because they are ‘produced’ there. This commits one to something like a ‘projectivist’ view according to which what and how we see owes a great deal to what the mind contributes to our vision. Chomsky's view of the roles of human concepts and the language faculty is a version of projectivism. The ways in which we experience are due to a large extent to the ways in which we can experience, and these depend essentially on what our various internal systems provide in the way of ‘content,’ much of which must be innate.
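The footnote's claim about three fixed wavelengths can be given a small numerical illustration. The cone-sensitivity figures below are invented for the example, not measured values; the sketch only shows that, on a standard trichromatic picture, matching a target light reduces to solving a three-equation linear system for the intensities of the three primaries.

    # Illustrative trichromatic matching with made-up numbers (Python + NumPy).
    # Each column holds hypothetical L, M, S cone responses to one primary at
    # unit intensity; matching a target means reproducing its cone responses.
    import numpy as np

    primaries = np.array([
        [0.90, 0.40, 0.02],   # L-cone response to primaries 1, 2, 3
        [0.30, 0.80, 0.05],   # M-cone response
        [0.01, 0.10, 0.90],   # S-cone response
    ])

    target = np.array([0.50, 0.45, 0.30])  # cone responses to some target light

    intensities = np.linalg.solve(primaries, target)
    print(intensities)  # intensities of the three primaries that match the target
    # A negative entry would mean the match requires adding that primary to the
    # target side instead - the standard wrinkle in real color-matching setups.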
 