Where the Conflict Really Lies: Science, Religion, and Naturalism

Author: Alvin Plantinga



Consider an analogy. I am playing catch with my granddaughter, and in a vainglorious attempt to show off, I throw the ball too hard; it whistles over her head and shatters a neighbor’s window. It is clear that the ball breaks the window by virtue of its mass, velocity, hardness, size, and the like. If it had been much less massive, been traveling at a lower rate of speed, had been as soft as a bunch of feathers, it would not have broken the window. If you ask “Why did the window shatter upon being hit by the ball?” the correct answer will involve the ball’s having those properties (and of course also involve the window’s having a certain degree of brittleness, tensile strength, and the like). As it happens, the ball was a birthday present; but it does not break the window by virtue of being a birthday present, or being purchased at Sears and Roebuck, or costing $5.00. Examples of this sort, clearly enough, can be multiplied endlessly; but examples of other kinds also abound. Sam has the right to fire the city manager by virtue of his being mayor, not by virtue of his being nice to his wife. Aquinas was a great philosopher by virtue of his acumen and insight and prodigious industry, not by virtue of his being called “the Dumb Ox.”[28]

Going back to materialism and the content of belief, then, it is by virtue of the NP properties of a belief B, not by virtue of its content, that the belief causes the behavior it does cause. Among B’s NP properties are such properties as that of involving many neurons working in concert: as we learn from current science, these neurons send a signal through effector nerves to the relevant muscles, causing those muscles to contract and thereby causing behavior. It is by virtue of these NP properties that it causes those muscles to contract. If the belief had had the same NP properties but different content, it would have had the same effect on behavior.

Objection: you claim that

(1) If the belief B had had the same NP properties but different content, it still would have had the same causal effects with respect to behavior;

 

but it couldn’t have had the same NP properties but different content. (1) is not merely counterfactual; it’s counterpossible. If the property of having C as content supervenes on neurophysiological properties, then (given strong supervenience) there will be a neurophysiological property equivalent to C in the broadly logical sense; hence it won’t be so much as possible that the antecedent of (1) hold. Given the usual semantics for counterfactuals, the conclusion to be drawn is that (1) is true, all right, but so is any counterfactual with the same antecedent, including, for example,

(2) if B had had the same content but different neurophysiological properties, B would not have had the same causal effects with respect to behavior.

 

Right. But is the usual semantics for counterfactuals correct? This is hardly the place to address that particular (and large) can of worms, but in fact (so I think) it isn’t. It is true that if 2 had been greater than 3, then 3 would have been less than 2; it is not true that if 2 had been greater than 3, then 3 would have been greater than 2. It is not true that if 2 had been greater than 3, then the moon would have been made of green cheese. Even given that God is necessarily omniscient, it isn’t true that if God had not been omniscient, he would have known that he doesn’t exist. If I proved Gödel wrong, logicians everywhere would be astonished; it is false that if I proved Gödel wrong, logicians would yawn in boredom.

Furthermore, philosophers regularly and quite properly use counterpossibles in arguing for their views. Consider the philosophical view that what I really am is a member of a series of momentary person stages. One argues against this view by pointing to the truth of

(3) if this were true, I wouldn’t be responsible for anything that happened more than a moment ago (a new legal defense strategy?)

 

Even though the view in question is noncontingent—necessarily true or necessarily false—you take that counterpossible to be true and its mate

(4) if this were true, I would be responsible for much that happened more than a moment ago

 

false. A dualist might claim that if materialism were true, the content of one’s beliefs wouldn’t enter the causal chain leading to behavior; a materialist might claim that if (interactive) dualism were true, an immaterial substance would (implausibly) cause effects in the hard, heavy, massy material world. One of these counterfactuals has an impossible antecedent; both, however, are properly used in the dispute between materialists and dualists.

The truth of (1) gives us some reason to think that B doesn’t cause that action A by virtue of its content. As I say, however, this isn’t the place to look into the difficult matter of figuring out how to reason with counterpossibles; that would take us far afield. But we can also address our question directly: is it by virtue of its content that B causes A? I should think the answer, clearly, is that it is not. It is by virtue of its neurophysiological properties that B causes A; it is by virtue of those properties that B sends a signal along the relevant nerves to the relevant muscles, causing them to contract, and thus causing A. It isn’t by virtue of its having that particular content C that it causes what it does cause.

So once again: suppose N&E were true. Then materialism would be true in either its reductive or its nonreductive form. In either case, the underlying neurology is adaptive, and determines belief content. But in either case it doesn’t matter to the adaptiveness of the behavior (or of the neurology that causes that behavior) whether the content determined by that neurology is true.[29]

VI THE REMAINING PREMISES
 

Now we’re ready for the next step: the naturalist who sees that P(R/N&E) is low has a defeater for R, and for the proposition that his own cognitive faculties are reliable. A defeater for a belief B I hold—at any rate this kind of defeater—is another belief B* I come to hold which is such that, given that I hold B*, I can no longer rationally hold B.[30] For example, I look into a field and see what I take to be a sheep. You come along, identify yourself as the owner of the field, and tell me that there aren’t any sheep in that field, and that what I see is really a dog that’s indistinguishable from a sheep at this distance. Then I give up the belief that what I see is a sheep. Another example: on the basis of what the guidebook says I form the belief that the University of Aberdeen was established in 1695. You, the university’s public relations director, tell me the embarrassing truth: this guidebook is notorious for giving the wrong date for the foundation of the University. (Actually it was established in 1495.) My new belief that the University was established in 1495 is a defeater for my old belief. In the same way, if I accept naturalism and see that P(R/N&E) is low, then I have a defeater for R; I can no longer rationally believe that my cognitive faculties are reliable.

So the second premise of the argument:

(2) Anyone who accepts (believes) N&E and sees that P(R/N&E) is low has a defeater for R.

 

It isn’t that someone who believed N&E wouldn’t have enough evidence for R to believe it rationally. The fact is I don’t need evidence for R. That’s a good thing, because it isn’t possible to acquire evidence for R, at least if I have any doubts about it. For suppose I think up some argument for R, and on the basis of this argument come to believe that R is indeed true. Clearly this is not a sensible procedure; to become convinced of R on the basis of that argument, I must of course believe the premises of the argument, and also believe that if those premises are true, then so is the conclusion. If I do that, however, I am already assuming R to be true, at least for the faculties or processes that produce in me belief in the premises of the argument, and the belief that if the premises are true, so is the conclusion. My accepting any argument for R, or any evidence for it, would clearly presuppose my believing R; any such procedure would therefore be viciously circular.

So the belief that my cognitive faculties are reliable is one for which I don’t need evidence or argument—that is, I don’t need evidence or argument in order to be rational in believing it. I can be fully and entirely rational in believing this even though I have no evidence or argument for it at all. This is a belief such that it is rational to hold it in the basic way, that is, not on the basis of argument or evidence from other things I believe. But that doesn’t mean it isn’t possible to acquire a defeater for it. Even if a belief is properly basic, it can still be defeated. In the above example about the sheep in the field, my original belief, we may suppose, was basic, and properly so; I still acquired a defeater for it.

Here we can reuse an example from chapter 6 to show the same thing. You and I are driving through southern Wisconsin; I see what looks like a fine barn and form the belief now that’s a fine barn! Furthermore, I hold that belief in the basic way; I don’t accept it on the basis of evidence from other propositions I believe. You then tell me that the whole area is full of barn facades (indistinguishable, from the highway, from real barns) erected by the local inhabitants in a dubious effort to make themselves look more prosperous. If I believe you, I then have a defeater for my belief that what I saw was a fine barn, even though I was rational in holding the defeated belief in the basic way. It is therefore perfectly possible to acquire a defeater for a belief B even when it is rational to hold B in the basic way.

And this is what happens when I believe N&E, and come to see that P(R/N&E) is low: I acquire a defeater for R. I can then no longer rationally accept R; I must be agnostic about it, or believe its denial. Consider an analogy. Suppose there is a drug—call it XX—that destroys cognitive reliability. I know that 95 percent of those who ingest XX become cognitively unreliable within two hours of ingesting it; they then believe more false propositions than true. Suppose further that I come to believe both that I ingested XX a couple of hours ago and that P(R/I ingested XX a couple of hours ago) is low; taken together, these two beliefs give me a defeater for my initial belief that my cognitive faculties are reliable.[31]
Furthermore, I can’t appeal to any of my other beliefs to show or argue that my cognitive faculties are still reliable. For example, I can’t appeal to my belief that my cognitive faculties have always been reliable in the past or seem to me to be reliable now; any such other belief is now just as suspect or compromised as R is. Any such other belief B is a product of my cognitive faculties: but then in recognizing this and having a defeater for R, I also have a defeater for B.

Objection: why should we think that premise (2) is true? Some propositions of that form are true, but some aren’t. I believe that I’ve ingested XX, and that the probability that I am reliable, given that I’ve ingested XX, is low; this gives me a defeater for the proposition that I am reliable. But I also believe that the probability that I live in Michigan, given that the earth revolves around the sun, is low, and I believe that the earth revolves around the sun; this does not give me a defeater for my belief that I live in Michigan. Why think the case of N&E and R is more like the first than like the second?[32]

Reply: Right: not every proposition of that form is true. This one is, however. What’s at issue, I think, is the question of what else I believe (more exactly, what else is such that I believe it and can legitimately conditionalize on it in this context). If the only thing I knew, relevant to

(a) my living in Michigan,

 

is that this is unlikely given that

(b) the earth revolves around the sun,

 

then my belief that (b) and that (a) is unlikely on (b) would give me a defeater for (a). But of course I know a lot more: for example, that I live in Grand Rapids, which is in Michigan. I quite properly conditionalize not just on (b), but on much else, on some of which (a) has a probability of 1. But now think about N&E and R. We agree that P(R/N&E) is low. Do I know something else X, in addition to N&E, such that (a) I can properly conditionalize on X, and (b) P(R/N&E&X) is high? This is the conditionalization problem, which I address briefly on page 346.
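The role that conditionalizing on further beliefs plays here can be made vivid with a toy calculation. The numbers below are purely hypothetical illustrations (the text gives no figures); the point is only the structure: a proposition can be improbable on one thing I believe yet have probability 1 on the totality of what I believe.

```python
def conditional(p_joint, p_given):
    """Conditional probability: P(A | B) = P(A & B) / P(B)."""
    return p_joint / p_given

# (b) "the earth revolves around the sun" is something I am certain of,
# and only a small (hypothetical) fraction of people live in Michigan,
# so the probability of (a) "I live in Michigan" on (b) alone is low.
p_b = 1.0
p_a_and_b = 0.003            # hypothetical value of P(a & b)
low = conditional(p_a_and_b, p_b)
print(low)                   # low: 0.003

# But I also believe X: "I live in Grand Rapids, which is in Michigan."
# Everyone in Grand Rapids lives in Michigan, so P(a | b & X) = 1.
p_b_and_x = 0.0002           # hypothetical value of P(b & X)
p_a_and_b_and_x = p_b_and_x  # (a) is guaranteed given X
high = conditional(p_a_and_b_and_x, p_b_and_x)
print(high)                  # 1.0
```

The naturalist’s predicament, on the argument, is that no analogous X is available for R: any candidate X is itself a deliverance of the very faculties whose reliability is in question.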

This brings us to the third premise:

(3) Anyone who has a defeater for R has a defeater for any other belief she thinks she has, including N&E itself.

 

(3) is pretty obvious. If you have a defeater for R, you will also have a defeater for any belief you take to be produced by your cognitive faculties, any belief that is a deliverance of your cognitive faculties. But all of your beliefs, as I’m sure you have discovered, are produced by your cognitive faculties. Therefore you have a defeater for any belief you have.

Still, even if you realize you have a defeater for every belief you hold, you are unlikely to give up all or perhaps even any of your beliefs. It may be that you can’t really reject R in the heat and press of day-to-day activities, for example, when you are playing poker with your friends, or building a house, or climbing a cliff. You can’t think dismissive Humean thoughts about, say, induction when clinging unroped (you’re free-soloing) to a rock face five hundred feet up the East Buttress of El Capitan. (You won’t find yourself saying, “Well, naturally I can’t help believing that if my foot slips I’ll hurtle down to the ground and smash into those rocks, but [fleeting, sardonic, self-deprecatory smile] I also know that I have a defeater for this belief and hence shouldn’t take it seriously.”) But in the calm and reflective atmosphere of your study, you see that this is in fact the case. Of course you also see that the very reflections that lead you to this position are also no more acceptable than their denials; you have a universal defeater for whatever it is you find yourself believing. This is a really crushing skepticism, and it is this skepticism to which the naturalist is committed.
