Where the Conflict Really Lies: Science, Religion, and Naturalism, by Alvin Plantinga

That would be crazy in the same way as the above two arguments: but of course the FTA doesn’t at all proceed in that fashion.

Clearly not all arguments with an OSE lurking in the neighborhood are bad arguments. For example, it might be important for some medical procedure to know whether or not I am sometimes awake at 3:00 A.M.; I can observe that I am awake at 3:00 A.M. and infer that I am sometimes awake then, but I can’t observe that I am not awake then. Still, there is nothing wrong with the argument

I observe that I am awake and it is 3:00 A.M.

 

Therefore

I am sometimes awake at 3:00 A.M.

 

Arguments involving OSEs aren’t always bad arguments; why think the FTA is? Elliott Sober and others sometimes start by comparing the FTA with arguments that involve an OSE and do seem to be seriously defective.17 A particularly popular argument has been proposed by Arthur Eddington.18 Suppose you are netting fish in a lake. Your net is pretty coarse-grained: it will only catch fish that are more than ten inches long. You then note that all the fish you catch are more than ten inches long. You consider two hypotheses: H1, according to which all the fish in the lake are more than ten inches long, and H2, according to which only half of the fish are more than ten inches long. The probability of your observation—that is, that all the fish you’ve caught are more than ten inches long—is greater on H1 than on H2; but you would certainly be rash if you thought this really confirmed H1 over H2, or that it gave you a good reason to endorse H1. Given your net, you’d wind up with fish more than ten inches long no matter what the proportion of such fish in the lake. This argument for the superiority of H1 to H2 is pretty clearly fishy, and defective just because it involves an OSE.
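
One way to make the defect explicit (this is a gloss on the standard diagnosis, not part of Eddington’s or Sober’s own wording) is to note that once the net’s selection effect is itself taken into account, the two hypotheses assign the observation the very same probability. Writing O_catch for the observation that every netted fish is more than ten inches long, and given that anything is caught at all,

\[
P(O_{\text{catch}} \mid H1,\ \text{net}) \;=\; P(O_{\text{catch}} \mid H2,\ \text{net}) \;=\; 1;
\]

conditional on the selection procedure, the likelihood ratio is 1, so the catch favors neither hypothesis.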

But is the FTA like this argument? Opponents of the anthropic objection propose that the FTA is more like the firing squad argument.19 Here the scenario goes as follows: I am convicted of high treason and sentenced to be shot. I am put up against the wall; eight sharpshooters fifteen feet away take aim and shoot, each shooting eight times. Oddly enough, they all miss; I emerge from this experience shaken, but unscathed. I then compare two hypotheses: H3, that the sharpshooters intentionally missed, and H4, that the sharpshooters intended to shoot me. I note that my evidence, that I am unharmed, is much more probable on H3 than it is on H4, and I conclude that H3 is to be preferred to H4. Here we also have an OSE; it would not be possible (we may suppose) for me to observe that I had been fatally shot. But my argument for the superiority of H3 to H4 certainly looks entirely right and proper. So our question is: which of these two arguments is the FTA more like?

I say it’s much more like the firing squad argument. Let’s suppose there are (or could be) many universes (see pp. 210ff); use “alpha” as a name of the universe we find ourselves in, the universe composed of everything that is spatiotemporally related to us. What we observe is

O: Alpha is fine-tuned.

 

We have the following two hypotheses:

D: Alpha has been designed by some powerful and intelligent being,

 

and

C: Alpha has come to be by way of some chance process that does not involve an intelligent designer.

 

We note that O is more likely on D than on C; we then conclude that with respect to this evidence, D is to be preferred to C.
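
Put a bit more formally (a sketch on the usual Bayesian rendering; the argument itself needs only the bare comparison of likelihoods), the claim is that

\[
P(O \mid D) \;>\; P(O \mid C),
\]

so that, by Bayes’s theorem,

\[
\frac{P(D \mid O)}{P(C \mid O)} \;=\; \frac{P(O \mid D)}{P(O \mid C)} \cdot \frac{P(D)}{P(C)} \;>\; \frac{P(D)}{P(C)}:
\]

the evidence O raises the ratio of D to C above whatever prior ratio one started with.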

Granted: we could not have existed if alpha had not been fine-tuned; hence we could not have observed that alpha is not fine-tuned; but how is that so much as relevant? The problem with the fishing argument is that I am arguing for a particular proportion of fish more than ten inches long by examining my sample, which, given my means of choosing it, is bound to contain only members that support the hypothesis in question. But in the fine-tuning case, I am certainly not trying to arrive at an estimate of the proportion of fine-tuned universes among universes generally. If I were, my procedure would certainly be fallacious; but that’s not at all what I am doing. Instead, I am getting some information about alpha (never mind that I couldn’t have got information about any other universe, if there are other universes); and then I reason about alpha, concluding that D is to be preferred to C. There seems to be no problem there. Return to Eddington’s fishing example, and suppose my net is bound to capture exactly one fish, one that is ten inches long. I then compare two hypotheses:

H1: this fish had parents that were about ten inches long,

 

and

H2: this fish had parents that were about one inch long.

 

My observing that the fish is ten inches long is much more probable on H1 than on H2; H1 is therefore to be preferred to H2 (with respect to this observation). This argument seems perfectly proper; the fact that I couldn’t have caught a fish of a different size seems wholly irrelevant. The same goes for the fine-tuning argument.

B. Is the Relevant Probability Space Normalizable?
 

Lydia McGrew, Timothy McGrew, and Eric Vestrup propose a formal objection to the FTA; there is, they claim, no coherent way to state the argument.20 Why not? We are talking about various parameters—the strength of the gravitational force, the weak and strong nuclear forces, the rate of expansion of the universe—that can take on various values. But it looks as if there are no logical limits to the values these parameters could take on:

In each case, the field of possible values for the parameters appears to be an interval of real numbers unbounded at least in the upward direction. There is no logical restriction on the strength of the strong nuclear force, the speed of light, or the other parameters in the upward direction. We can represent their possible values as the values of a real variable in the half-open interval [0, ∞).21

 

Now there are several parameters involved here, and in principle we must consider various sets of assignments of values to the whole ensemble of parameters; the idea, of course, is that some of these sets are life-permitting but others are not. In the interests of simplicity, however, we can pretend there is just one parameter, which is apparently fine-tuned. The thought is that it could (by chance) have assumed any positive value (its value could have been any positive real number you please); but it does in fact assume a value in a small life-permitting range. That it does so is much more likely on theism than on chance; hence this fine-tuning is evidence, of one degree of strength or another, for theism.
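
To fix ideas with the crudest possible model (an assumption made only for illustration: the parameter is confined to a large but bounded interval [0, N], over which chance spreads probability uniformly), the likelihood of fine-tuning on chance would be

\[
P(\text{the parameter falls in a life-permitting range of width } \epsilon \mid C) \;=\; \frac{\epsilon}{N},
\]

which shrinks toward zero as N grows. The objection below targets precisely the step of letting that interval become infinite.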

Well then, what is the objection?

The critical point is that the Euclidean measure function described above is not normalizable. If we assume every value of every variable to be as likely as every other—more precisely, if we assume that, for each variable, every small interval of radius ε on R has the same measure as every other—there is no way to “add up” the regions of R so as to make them sum to one.22

 

But, they go on to say,

Probabilities make sense only if the sum of the logically possible disjoint alternatives adds up to one—if there is, to put the point more colloquially, some sense that attaches to the idea that the various possibilities can be put together to make up 100 percent of the probability space. But if we carve an infinite space up into equal finite-sized regions, we have infinitely many of them; and if we try to assign them each some fixed positive probability, however small, the sum of these is infinite.23

 

By way of illustration, consider flying donkeys. For each natural number n, it is logically possible, I suppose, that there be exactly n flying donkeys. Now suppose we think that for any numbers n and m, it is as likely (apart from evidence) that there be n flying donkeys as m; in order to avoid unseemly discrimination, therefore, we want to assign the same probability to each proposition of the form “there are exactly n flying donkeys.” Call this non-discrimination: each proposition is to get the same probability. Suppose we also assume (as McGrew et al. apparently do, although they don’t mention it) countable additivity: the idea that for a countable set of mutually exclusive alternatives, the probability of any disjunction of the alternatives is equal to the sum of the probabilities of the disjuncts. Then obviously we can’t assign the same non-zero probability to each of these propositions, there being infinitely many of them; if we did, their sum would be infinite, rather than one. On the other hand, if we assign a probability of zero to each, then, while we honor non-discrimination and countable additivity, the probability space in question isn’t normalizable; given countable additivity, the (infinite) sum of the probabilities assigned to those propositions is zero, not one.
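
The arithmetic is just this: if each of the countably many propositions gets the same probability c, then countable additivity gives

\[
\sum_{n=0}^{\infty} c \;=\; \infty \quad \text{if } c > 0, \qquad\qquad \sum_{n=0}^{\infty} 0 \;=\; 0,
\]

and neither sum is 1, so normalizability fails either way.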

The point is that we can’t have all of non-discrimination, countable additivity, and normalizability when assigning probabilities to these propositions. We can have countable additivity and normalizability if we are willing to violate non-discrimination: we could assign probabilities in accord with some series that sums to one (for example, a probability of 1/2 to the first proposition, 1/4 to the second, 1/8 to the third, and so on). We can have non-discrimination and countable additivity if we are willing to forgo normalizability; for example, we could assign each a probability of zero, and their countable sum also a probability of zero (not an attractive possibility). We can have non-discrimination and normalizability if we are willing to fiddle with additivity: for example, we could assign each proposition zero probability but assign their infinite disjunction a probability of one. What we can’t have is all three.
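
The discriminating assignment just mentioned does sum to one, as the familiar geometric series shows:

\[
\sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1.
\]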

In the same way, if we consider any particular physical parameter—the velocity of light, for example—and if we respect non-discrimination by holding that for any natural numbers n and m, the velocity of light is as likely to be within one mile per second of n as of m, then we can’t respect both countable additivity and normalizability. This means, according to McGrew et al., that the fine-tuning arguments involve a fundamental incoherence. For suppose that in order for life to be permitted, the velocity of light must be within a mile or two per second of its actual value: one couldn’t properly erect a fine-tuning argument on that fact by arguing that it is much more probable that the velocity of light fall within that narrow range on theism than on chance. That is because if we respect non-discrimination and countable additivity, then the relevant probability measure isn’t normalizable. There is no (logical) upper limit on the velocity of light; hence its velocity in any units could be any positive real number. Hence the interval within which its velocity could fall is infinite. Any way of dividing up that interval into equal subintervals will result in infinitely many subintervals. But then there is no way of assigning probabilities to those subintervals in such a way that the sum (given countable additivity) of the probabilities assigned is equal to 1: if any nonzero probability is assigned to each, the sum of those probabilities will be infinite, but if a probability of zero is assigned to each, the sum of the probabilities will be zero. A genuine probability measure, however, must be additive and normalizable. Hence no FTA involves a genuine probability measure; therefore FTAs are incoherent.

So say McGrew et al. I think we can see, however, that their objection is clearly defective: it proves too much. Imagine the night sky displaying the words: “I am the Lord God, and I created the universe.” These words, a heavenly sign, as it were, are visible from any part of the globe at night; upon investigation they appear to be a cosmic structure with dimensions one light year by twenty light years, about forty light years distant from us. Following Collins, Swinburne, and others, one might offer an argument for theism based on this phenomenon: it is much more likely that there be such a phenomenon given theism than given chance.

But not if the McGrew et al. objection is a good one. For think about the parameters involved here—confine consideration to the length of the structure. Not just any length will be “message-permitting.” Holding its distance constant, if the structure is too short, it won’t be visible to us. But the same goes if it is too long—for example, if it is so long that we can see only a minute and uninterpretable portion of one of the letters. Therefore there is a “message-permitting” band such that the length of this structure must fall within that band for it to function as a message. What are the logical constraints on the length of this structure? None; for any number n, it is logically possible that this structure be n light years long. (You might object that our universe is, or is at least at present thought to be, finite in extent; that, however, is a contingent rather than a logically necessary fact.) But if the structure can be any length whatever, this parameter, like those involved in the FTA, can fall anywhere in an infinite interval. This means that if we honor non-discrimination, the relevant probability measure isn’t normalizable: we can’t assign the same positive probability to each proposition of the form “the message is n light years long” in such a way that these probabilities sum to 1. Hence the McGrew claim implies that a design argument based on the existence of this message can’t be coherently stated. But surely it can be; the fact is it would be powerfully persuasive. The objection is too strong in that it eliminates arguments that are clearly successful.
