
examination of the frequency of that allele at that locus, and for that population. In other words, what is being assessed is the frequency of genetic variation at a particular spot in the DNA in each population.
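
To make this concrete, here is a minimal sketch in Python, using invented genotype data, of what ``allele frequency at a locus, for a population'' amounts to: the share of observed alleles at one DNA position that are a given variant, tallied separately for each population.

from collections import Counter

# Hypothetical alleles observed at one locus; each person contributes two.
# The allele names echo the example in the text (H, I, J, K).
populations = {
    "population_1": ["H", "H", "I", "J", "H", "I", "J", "H"],
    "population_2": ["H", "I", "K", "K", "I", "H", "K", "I"],
}

for name, alleles in populations.items():
    counts = Counter(alleles)
    total = len(alleles)
    # Frequency of each allele at this locus, within this population.
    freqs = {allele: round(n / total, 2) for allele, n in counts.items()}
    print(name, freqs)

Note that the two invented populations share alleles H and I, and differ only in whether the rarer third allele is J or K, which anticipates the point made next.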

Occasionally, these researchers find a locus where one of the populations being observed and measured has, for example (let's call them), alleles H, I, and J, and another population has alleles H, I, and K. For example, we know that there are alleles that are found primarily among sub-populations of Native American Indians. When comparing a group of North American Indians with a group of Finnish people, one might find a single allele that was present in some Indians but in no Finns (or at such a low frequency in the Finns that it is rarely, if ever, seen). However, it is important to note and reiterate again and again that this does not mean that all Native American Indians, even in this sub-population, will have that allele. (This is a major and important point that is made in sets of statements about race from UNESCO and the American Anthropological Association.) Indeed, it is inevitable that some will have a different set of alleles, and that many of them will be the same alleles as some of the Finns. Also, if comparing North American Indians from Arizona with North American Caucasians from Arizona, we would probably find a low level of the ``Indian allele'' in the so-called Caucasians, because there has been ``interbreeding.'' This leads to the next point.

It is possible to make arbitrary groupings of populations (geographic, linguistic, self-identified by faith, identified by others by physiognomy, etc.) and still find statistically significant allelic variations between those groupings. For example, we could simply pick all the people in Chicago, and all in Los Angeles, and find statistically significant differences in allele frequency at some loci. Of course, at many loci, even most loci, we would not find statistically significant differences. When researchers claim to be able to assign people to groups based on allele frequency at a certain number of loci, they have chosen loci that show differences between the groups they are trying to distinguish. The work of Devlin and Risch (1992a, b), Evett et al. (1993, 1996), and others suggests that only about 10 percent of sites in the DNA are ``useful'' for making distinctions. This means that at the other 90 percent of the sites, the allele frequencies do not vary between groups such as ``Afro-Caribbean people in England'' and ``Scottish people in England.'' But even if we cannot find a single site where allele frequency matches some phenotype that we are trying to identify (for forensic purposes, we should be reminded), it does not follow that several sites (four, six, seven) will not be effective, for the purposes of aiding the FBI, Scotland Yard, or the criminal justice systems around the globe in making highly probabilistic statements about suspects, and the likely ethnic, racial, or cultural populations from which they can be identified, statistically.
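
The logic of choosing discriminating loci can be sketched in a few lines of Python. The city names and allele counts below are invented, and the chi-square screen merely stands in for whatever tests a given study actually used; the point is only that scanning many loci and keeping the significant ones will turn up some ``useful'' sites even for arbitrary groupings.

from scipy.stats import chi2_contingency

# Hypothetical counts of two alleles (A vs. a) at several loci, tallied
# for two arbitrary groupings of people.
loci = {
    "locus_1": {"Chicago": (180, 220), "Los Angeles": (240, 160)},
    "locus_2": {"Chicago": (200, 200), "Los Angeles": (205, 195)},
    "locus_3": {"Chicago": (150, 250), "Los Angeles": (210, 190)},
}

discriminating = []
for locus, counts in loci.items():
    table = [counts["Chicago"], counts["Los Angeles"]]
    chi2, p, dof, expected = chi2_contingency(table)
    if p < 0.05:
        # These are the loci a researcher intent on distinguishing
        # the two groupings would "choose".
        discriminating.append((locus, round(p, 4)))

print(discriminating)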

In the July 8, 1995 issue of the New Scientist, an article entitled ``Genes in black and white'' makes some extraordinary claims about what it is possible to learn about socially defined categories of race from reviewing information gathered using new molecular genetic technology. In 1993, a British forensic scientist published what is perhaps the first DNA test explicitly acknowledged to provide ``intelligence information'' along ``ethnic'' lines for ``investigators of unsolved crimes.'' Ian Evett, of the Home Office's forensic science laboratory in Birmingham, and his colleagues in the Metropolitan Police, claimed that their DNA test can distinguish between ``Caucasians'' and ``Afro-Caribbeans'' in nearly 85 percent of the cases.


Evett's work, published in the Journal of the Forensic Science Society, draws on apparent genetic differences in three sections of human DNA. Like most stretches of human DNA used for forensic typing, each of these three regions differs widely from person to person, irrespective of race. But by looking at all three, say the researchers, it is possible to estimate the probability that someone belongs to a particular racial group. The implications of this for determining, for legal purposes, who is and who is not ``officially'' a member of some racial or ethnic category are profound.
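
The underlying arithmetic is ordinary Bayesian reasoning, and can be shown in a short, purely hypothetical Python sketch. The allele frequencies below are invented, and the assumptions of independence across loci and equal prior odds are mine, not Evett's; the sketch illustrates the style of inference, not the published procedure.

import math

# Hypothetical frequencies of the observed alleles at three loci, in each
# of two reference populations (all numbers invented for illustration).
freqs_group_a = [0.30, 0.25, 0.40]
freqs_group_b = [0.10, 0.60, 0.15]

# Assuming independence across loci, the likelihood of the whole profile
# under each hypothesis is the product of the per-locus frequencies.
likelihood_a = math.prod(freqs_group_a)
likelihood_b = math.prod(freqs_group_b)

# With equal prior odds, Bayes' rule reduces to a simple ratio.
posterior_a = likelihood_a / (likelihood_a + likelihood_b)
print(f"P(group A | profile) = {posterior_a:.2f}")

No single locus is decisive here; it is the product over several loci that pushes the posterior toward one group, which is exactly the structure of the claims reported above.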

Two years after the publication of the UNESCO statement purportedly burying the concept of ``race'' for the purposes of scientific inquiry and analysis, and during the same time period that the American Anthropological Association was deliberating and generating a parallel statement, an article appeared in the American Journal of Human Genetics, authored by Ian Evett and his associates, summarized thus:

Before the introduction of a four-locus multiplex short-tandem-repeat (STR) system into casework, an extensive series of tests were carried out to determine robust procedures for assessing the evidential value of a match between crime and suspect samples. Twelve databases were analyzed from the three main ethnic groups encountered in casework in the United Kingdom: Caucasians, Afro-Caribbeans, and Asians from the Indian subcontinent. Independence tests resulted in a number of significant results, and the impact that these might have on forensic casework was investigated. It is demonstrated that previously published methods provide a similar procedure for correcting allele frequencies, and that this leads to conservative casework estimates of evidential value. (Evett et al., 1996, p. 398)
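
Why ``correcting allele frequencies'' yields conservative estimates can be seen in a toy calculation. The sketch below uses one textbook-style correction, adding the alleles observed in the case to the database counts before estimating frequency; it is offered only as an illustration of the general idea, not as the specific correction applied by Evett et al.

def corrected_frequency(count, total, extra_copies=2):
    """Estimate an allele's frequency after adding the case alleles to
    the database counts, which can only make it look more common."""
    return (count + extra_copies) / (total + extra_copies)

# Hypothetical database: the matching allele was seen 3 times in 400.
count, total = 3, 400
naive = count / total                             # 0.0075
conservative = corrected_frequency(count, total)  # 5/402, about 0.0124

# For a single matching allele, the likelihood ratio scales like
# 1/frequency, so the correction weakens the reported evidence.
print(f"naive LR ~ {1 / naive:.0f}")
print(f"conservative LR ~ {1 / conservative:.0f}")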

These new technologies have some not-so-hidden potential to be used for a variety of forensic purposes in the development and ``authentication'' of typologies of human ethnicity and race. A contemporary update of the old idea of deciding upon ``degree of whiteness'' or ``degree of Indianness'' is possibly upon us, anew, with the aid of molecular genetics. The Congress of the United States passed the Allotment Act of 1887, denying land rights to those Native Americans who were ``less than half-blood.'' The US government still requires American Indians to produce ``Certificates with Degree of Indian Blood'' in order to qualify for a number of entitlements, including being able to have one's art so labeled. The Indian Arts and Crafts Act of 1990 made it a crime to identify oneself as a Native American when selling artwork without federal certification authorizing one to make the legitimate claim that one was, indeed, an authentic (``one-quarter blood'' even in the 1990s) American Indian.

As noted above, it is not art, but law and forensics, that ultimately will impel the genetic technologies to be employed on behalf of attempts to identify who is ``authentically'' in one category or another. Geneticists in Ottawa, Canada, have been trying to set up a system ``to distinguish between `Caucasian Americans' and `Native Americans' on the basis of a variable DNA region used in DNA fingerprinting'' (New Scientist, 1995, p. 37).


In 1989, Virginia became the first state to pass legislation requiring all convicted felons (not just sex offenders) to provide blood samples for use in a state DNA database. In the next three years, several states followed Virginia's lead, and in 1993, the FBI initiated a national DNA databank to link the DNA profiles of convicts across state jurisdictions. The Omnibus Crime Control Act of 1994 included a provision for coordinating DNA databank systems nationwide. Soon thereafter, the Department of Justice awarded nearly nine million dollars to state and city agencies to improve their DNA testing capacities and to encourage uniform standards (Butterfield, 1996). As a direct result, all fifty states have adopted laws requiring ``specified offenders to provide blood samples for forensic DNA testing'' (Nelkin and Andrews, 1999).

For practical purposes, the issue of authenticating persons' membership in a group (racial/ethnic/cultural) can be brought to the level of DNA analysis. The efficacy of testing and screening for genetic disorders in risk populations that are ethnically and racially designated poses a related set of vexing concerns for the ``separation'' of the biological and cultural taxonomies of race.

In New York City, Mayor Giuliani has been an advocate of the use of DNA testing of those arrested by the police. He has been joined by others, who have convinced Attorney General Janet Reno that she should appoint a commission to bring back recommendations on this matter. A report is due in 2000, but a preliminary draft has already concluded that such data collection would pass constitutional muster. Critics have pointed out that whom the police stop and arrest is not a neutral matter, but a heavily politically biased one, and, in particular, a ``racially'' biased one. Indeed, the American Civil Liberties Union has filed a lawsuit to stop the police from targeting primarily African Americans.

The technology to use ``SNiPs on chips'' to group, identify, categorize, and marginalize is with us, but it is still at a relatively early stage. The Department of Energy awarded a contract to IBM in early 1998 to produce a chip that can hold more than eight times the information of current chips and permit analysis at more than ten times the speed now possible. That technology is due to become operative in the year 2000.

ACKNOWLEDGMENT

I thank William H. Schneider for references to the German literature.

16

Structures of Knowledge

Richard E. Lee and Immanuel Wallerstein

The ``Two Cultures'': the Long-term Trend

The trend to secularize authoritative knowledge began in the Western world at the end of the Middle Ages. The resulting intellectual and institutional structures of knowledge production have been constitutive of, and constituted by, the modern world-system. These structures have been fundamental to the operations of this world-system, alongside the transformed relations of production and distribution (which together form a core-periphery axial division of labor) and the reorganized structures of sovereign states within an interstate system (which seek to monopolize collective decision-making and legitimate coercion).

Previously, knowledge in the Western world had been thought to reside in two realms, the earthbound and the heavenly, each constituted in different ways. About each, however, one could have knowledge of both the true and the good.

To secularize knowledge meant to exclude theological judgments and authority from the search for knowledge about nature, but nature was now considered to behave by the same set of rules on earth and in the universe beyond. In effect, this separated knowledge about a heavenly world that was extra-natural and was the domain of theology, from knowledge about the natural/human world which was to be the domain of secular specialists. Had the secularization stopped there, it would merely have underpinned increased autonomy for natural philosophers. This did not, however, satisfy those oriented to systematic observation and empirical verification of knowledge about the physical world. They insisted on separating knowledge of what was true from knowledge of what was good, because they claimed that the latter was not realizable with scientific methods. This radical separation of two kinds of natural knowledge represented a departure from conceptions of knowledge hitherto espoused in any part of the world. There has consequently been a continuing intellectual debate in the modern world as to whether there could be any links whatsoever between what were now considered two quite different forms of knowledge.

The study of natural things was said to be only the study of the true, and it was progressively privileged over the arts or humanities, wherein one pursued the study of the good and of the beautiful. Proceeding from the knowing subject of René Descartes, the two domains came to be grounded in the asserted dualism of (non-human) nature and humans, of matter and mind. Descartes was motivated by the search for truth as an activity of the rational mind during a time marked by religious conflict, not by a consideration for values and accumulated learning. This set his project against rhetoric and the priority of willing the good over knowing the truth. Francis Bacon was also looking for a way of producing valid knowledge without either depending on received authority or falling prey to personal bias. Half a century later, Newton synthesized Bacon's empirical and experimental approach and method of induction, capable of producing natural laws, with Descartes's project based on rationalism and a deductive method.

This program reached its logical conclusion and social triumph over the course of the nineteenth century, which saw the proclamation of a doctrine of total determinism by Pierre Simon de Laplace, who considered himself a faithful disciple of Newton. The defining characteristic of modern science, as it came to be conceived and practiced, was that the world was one in which the discovery of universal laws explained change and permitted accurate prediction (and postdiction). Natural processes were linear and theoretical explanations were time-reversible. Laplace's demon, as an outsider theoretically perceiving the
