Having rejected empiricist views of concepts because they have nothing to recommend them, and having dismissed the externalist misinterpretations of concepts found in Fodor's view, let us turn to Chomsky's claim that human concepts are somehow unique or different from those found with other organisms. Is he correct? Lacking a reasonably well-developed theory of concepts/MOPs, one must look elsewhere for reasons to hold this. First, is there a case for human concepts being unique? Intuition does suggest differences between human concepts and what we can reasonably say about animals. It seems unlikely that a chimp has the concepts WATER, RIVER, and HIGHWAY, for example, at least in the forms we do. Perhaps a chimp can be trained to respond in some way taken to meet some criterion of success established by some experimenter or another to cases of water, rivers, and highways, but it does not follow either that the chimp has what we have, or that it acquires these concepts in the way we do – if the chimp has them at all. Moreover, on other grounds, it is very unlikely that chimps have or can ever develop NUMBER, GOD, or even RIDGE or VALE. So there is a prima facie case for at least some distinctive human concepts.
That case is considerably improved by what Gallistel (1990) and several others interested in the topic of animal communication say about the natures of animal concepts – or at least, about their use and, by implication, about their natures. They are, so far as anyone can tell, referential in a way that human concepts (at least, those expressed in our natural languages, which is for our purposes all of them) are not. They seem to be ‘tied’ to dealing with an organism's environment. Assuming so, in what does the difference consist? Exploring this issue gives us an opportunity to think about how one might construct a good theory of MOPs or internal concepts.
One possibility, mentioned in the discussion, is that our linguistically expressed concepts differ from those available to other creatures in use or application. Perhaps, then, we have concepts identical to those available to chimps and bonobos, to the extent that there is overlap – for we need not suppose that we have exactly what they have, or vice versa. The difference on this line of argument lies rather in the fact that chimps and bonobos do not have language, and so they do not have at least some of the capacities we have, because our language system can run ‘offline’ – essential for speculation and wondering about what might happen if something having nothing to do with current circumstances were to take place. On this view, through the flexibility of use to which its resources can be put, language allows us to ‘entertain’ complex (sententially expressed) concepts out of context, where chimps and bonobos are constrained to apply concepts in context – and not, obviously, concepts that are structured as a result of linguistic composition. As the discussion indicates, Chomsky rejects this explanation. If there are differences, the differences are in the natures of the concepts, not the uses to which they are put.
Our uses of linguistically expressed concepts do, of course, provide evidence for or against differences in concepts. For example, one reason for thinking that our concepts differ from those available to other creatures is that ours provide support for the multiple uses to which they are put, including metaphor – which seems to require the capacity to ‘take apart’ concepts and apply only some discourse-relevant parts. Another reason lies in animal concept use: if Gallistel and others are correct, it is very plausible that whatever an ape is employing when it employs some analogue of our concept HOUSE, it employs something that is directly mobilized by some one or some group of features that the ape's relevant sensory system(s) yield. The concept's features and character will be of its nature devoted to yielding quick recognition and performance. It will lack not only features added to a concept in the course of a linguistic computation/derivation (for apes do not have language), but also non-sensory abstract features such as SHELTER, ARTIFACT, and SUITABLE MATERIALS that – as Aristotle and many others have noted – regularly figure in our concept HOUSE. I return to that, for it is compelling. First, though, I need to address a potentially misleading form of animal–human conceptual comparison.
It appears that at least some of our concepts differ from those available to animals in their internal structures. An interesting case is presented in PERSUADE and other causal verbs, verbs whose associated concepts have plausibly been held to provide support for one variety of entailment, yielding analytic truths that are presumably unavailable to apes. If John persuades Mary to go to the movies, then if it is true that he does persuade her to do this, she at that point intends to go. Whether she does eventually do so is another matter. It is not obvious, however, that this point gives us a persuasive way to describe the differences between an ape's concepts and ours. According to a plausible and much-discussed hypothesis (for a contrary position, see Fodor and Lepore 2002), entailments like this (assuming that “John persuades Mary” is taken to be true) depend on structure induced by the syntactic compositional operations of the language faculty. If that were the case, PERSUADE would turn out to amount to CAUSE to INTEND. And if so, our linguistically expressed PERSUADE would not be an ‘atomic’ concept, as are HOUSE, GROUSE, and RIDGE. Rather, it would have the character that it does because of the morphosyntactic operations of the language faculty. This suggests that if there is to be a comparison of the ‘natures’ of animal concepts with ours, it is best to discount the contributions of morphology and syntax to concepts as they appear at language's semantic interface, where morphosyntax has contributed both internal and sentential structure. This point is illustrated, I think, in some of Paul Pietroski's recent work on matters of semantic structure and its interpretation.
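The entailment pattern at issue can be put schematically. The following is only an illustrative rendering of the CAUSE-to-INTEND decomposition just discussed, not a formalism the text itself adopts:

```latex
% Schematic decomposition of PERSUADE (illustrative notation only):
% x persuades y to P  ~  x causes y to intend to P
\mathrm{PERSUADE}(x,\, y,\, P) \;\approx\;
  \mathrm{CAUSE}\bigl(x,\, \mathrm{INTEND}(y,\, P)\bigr)
```

On this rendering, if “John persuades Mary to go” is true, “Mary intends to go” follows analytically; “Mary eventually goes” does not.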
Could it be after all that the difference between our concepts and those available to animals – especially apes, including chimps and bonobos – is entirely due to contributions of the language faculty? Paul Pietroski (2008) develops a version of this option, although not – I argue – one that addresses apparent differences in the natures of ‘atomic’ concepts. He suggests that differences lie in the ‘adicity’ requirements of the language faculty at its semantic interface. The adicity of a concept is the number of arguments it takes: RUN seems to require one argument (“John ran”), so it has adicity −1 (it needs an argument with value +1 to ‘satisfy’ it); GIVE might seem to require three arguments, and if it does, it has adicity −3.
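One way to picture the bookkeeping (a schematic sketch in the plus/minus notation above, not Pietroski's own formalism): a predicate of adicity −n is ‘satisfied’ when combined with n arguments, each contributing +1, so a complete sentence comes out at zero:

```latex
% Adicity arithmetic (illustrative only)
\underbrace{\textit{John}}_{+1}\;\underbrace{\text{ran}}_{-1}
  \;\Rightarrow\; 0
\qquad
\underbrace{\textit{John}}_{+1}\;\underbrace{\text{gave}}_{-3}\;
\underbrace{\textit{Mary}}_{+1}\;\underbrace{\textit{a book}}_{+1}
  \;\Rightarrow\; 0
```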
Specifically, Pietroski adopts a variation of Donald Davidson's idea that the semantics of sentences should be expressed in terms of a conjunction of monadic predicates, that is, predicates with adicity of −1, and no other. In Pietroski's terms (avoiding all but the most primitive logical notation for the benefit of the general reader unfamiliar with it), “John buttered the toast” amounts to: there is an [event] e, BUTTERING(e) [read this as “e is a buttering”]; there is an x, AGENT(x), CALLED-JOHN(x); there is a y, THEME [patient] (y), TOAST(y). According to this account, “buttered,” which appears to have adicity −2 (to require two arguments), is coerced to have adicity −1 (to require a single argument), and “John,” which appears to have adicity +1 (to be an argument with a value +1 that ‘satisfies’ a predicate with adicity −1), is coerced to something like the form “called John.” (This is reasonable on independent grounds, such as cases where one says that there are several Johns at the party.) In effect, then, the name “John” when placed in the language faculty's computational system gets −1 instead.
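In slightly more compact notation, the rendering just described comes to the following – using the monadic predicates given above; Pietroski's actual formalism differs in its details, so this is only an approximation:

```latex
% "John buttered the toast" as a conjunction of monadic predicates
\exists e\,\exists x\,\exists y\,[\,\mathrm{BUTTERING}(e)
  \,\land\, \mathrm{AGENT}(x) \,\land\, \mathrm{CALLED\text{-}JOHN}(x)
  \,\land\, \mathrm{THEME}(y) \,\land\, \mathrm{TOAST}(y)\,]

% An adverb adds a conjunct, so entailments of the sort noted in the
% next paragraph fall out by dropping conjuncts:
\exists e\,[\,\mathrm{BUTTERING}(e) \land \mathrm{QUICK}(e)
  \land \ldots\,]
  \;\models\;
\exists e\,[\,\mathrm{BUTTERING}(e) \land \ldots\,]
```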
There are several advantages to this “neo-Davidsonian” approach. One is that it seems to coordinate with at least some of the aims of Chomsky's Minimalist Program and its view of linguistic computation/derivation. Another is that it offers a very appealing account of some entailments that follow from overall SEM structure (or perhaps the structure of some form of representation on the ‘other side’ of SEM, in some interpretational system or another): from “John buttered the toast quickly,” it follows that he buttered. But as noted, it does not seem to address the prima facie difference in the natures of concepts noted above. It is unlikely that the difference between our linguistically expressed BUTTER and some chimp's BUTTER-like concept (assuming that there is such) consists solely in a difference in adicity. On Pietroski's view, BUTTER on application or use – that is, for humans, as it appears at the semantic interface as understood by Pietroski – has the adicity −1, for by hypothesis, that is what it is assigned by the operations of the language faculty that lead up to it. Because apes lack language and the resources it provides, however, there is no reason to say this of the adicity-in-application of BUTTER for an ape, whatever that might be. The difference in question, then, appears to be due solely to the operations of the morphosyntactic system that determines the adicity of a concept that – as in this case – is assigned the status of verb and given one argument place to suit Pietroski's view of interpretation and what it underwrites. And because that account relies essentially on the fact that we have language and apes do not, it does not speak to the issue of whether a chimp has what we have when we have the ‘atomic’ concept BUTTER. Generally speaking, Pietroski's discussion of differences between animal and human concepts focuses on adicity alone, and does not really touch the issue of what a concept ‘is’ – of what its ‘intrinsic content’ or inner nature is, and how to capture this nature. It steers around the issue of what concepts are – perhaps to be investigated by looking at what they amount to in pre-computation ‘atomic’ form, where they might be described as a cluster of semantic features that as a package represent the ‘meaning’ contribution of a person's lexical item. It focuses instead on concepts as they appear (are constituted at? are ‘called upon’? are ‘fetched’?) at the language faculty's semantic interface. Because of this, it loses an opportunity to look for what counts as differences in concepts at the ‘atomic’ level – in the way a human's lexical conceptual store might differ from an ape's. And also because of this, it raises doubts, I believe, about whether Pietroski or anyone else is warranted in assuming that our concepts are in fact (adicity and other contributions of morphology and syntax aside) identical or even similar to those available to other primates. There is, of course, a difference between us and apes. That is not in question: they do not have the computational system of language, and Merge and linguistic formal features in particular. However, that difference does not address the issue in question here.
If looking to differences in use and to contributions of morphology and syntax does not speak to the matter, and the language faculty imposes no obvious processing-specific requirements on the intrinsic features of the concepts it can take into account, another place to look for a way to describe and explain the prima facie differences is a distinctively human conceptual acquisition device. Might humans have such a device, procedure, or system? Associative stories of the sort offered by empiricists over the ages (for a contemporary version, see Prinz 2002) are little help; they amount to an endorsement of a generalized learning procedure that neither speaks to poverty of the stimulus observations (infants with complex concepts, among other facts) nor offers a specific proposal concerning a mechanism – crucial, if one is to offer a theory at all. Their stories about generalized learning procedures are not made precise, nor – where efforts of a sort are made – are they relevant. Pointing at connectionist learning stories does not help unless there is real reason to think that is the way human minds actually work, which infant concepts acquired with virtually no ‘input,’ among other things, deny. And so it appears that their explanations of human–animal differences (bigger, more complex brains, more powerful hypothesis formation and testing procedures, ‘scaled up’ operations, communal training procedures, etc.) are just forms of handwaving.
What, however, about appealing to a concept-acquisition mechanism that depends on a procedure that there is good reason to think only humans have? Specifically, could there be a concept-acquisition device that employs Merge, or at least some version of it? This seems promising. On independent grounds, Merge is unique to humans. However, the suggestion faces barriers. For one thing, it challenges an assumption basic to Chomsky's minimalist reading of evolution; on that reading, our human concepts must be in place before Merge and language's computational system are introduced. If this seems to rule out an appeal to Merge, there is a possible variant: perhaps the concepts in place at the introduction of Merge are those shared to an extent with some other primates, and the introduction of Merge not only provided for the construction of new and distinctively human ones, but also allowed for modifications in existing ones. That again looks promising, but it has other problems. Merge in its usual external and internal forms introduces hierarchies (unless there is another explanation for them), movement, and the like. There is no obvious need for these in dealing with concepts themselves, grammatically complex concepts such as PERSUADE aside. Perhaps there is a need for a distinction between the core features of a concept and more peripheral ones. Perhaps, for example, PERSON and DONKEY will have something like PSYCHIC CONTINUITY among their core features, but need not have BIPEDAL or QUADRUPEDAL. However, that does not appear to be a difference in hierarchy. It might even be an artifact of the way(s) in which the word “person” is used in majority environments, which would be irrelevant to a Merge-based account. Pair Merge, on the other hand – or something like it that provides for a form of adjunction – could provide aid here. By abandoning hierarchical structure and movement/copying, it has promise, assuming it could operate over features and allow for something that looks rather like concatenation of features to produce distinctive clusters, perhaps expandable to allow for additional experience. However, it has problems too. For one thing, if it yields something like adjunction (e.g., “the big bad ugly mean nasty . . . guy”), it depends on a single-element ‘host’ (here, “guy”) to which adjoined elements are attached, and it is not at all clear what that single element could be: lexical phonological elements will not do, and if there are ‘central’ features, they must by hypothesis allow for complexity. For another, it is more descriptive than explanatory: it does not help make sense of how concepts seem to develop automatically in ways that are (for core features at least) uniform across the human population, yielding conceptual packages that appear to be virtually ‘designed’ for the uses to which they can be put. And finally, it is hard to see why a procedure like the one discussed would be unavailable to animals (which also appear to have innate concepts, however different they might be), so the appeal to the human-uniqueness of the combinatory procedure fails to make sense of why human concepts are unique. That suggests that looking to uniquely human conceptual-package acquisition mechanisms to make sense of why human concepts are different is the wrong strategy. Unless there is some naturalistically based combinatory procedure that is demonstrably unique to humans other than Merge – which at the moment does not look plausible – perhaps we should look elsewhere.