The Bell Curve: Intelligence and Class Structure in American Life
Authors: Richard J. Herrnstein, Charles A. Murray
BLACK DROPOUT RATES.
The high black dropout rates from college are also easier to understand in the light of the figure above. Typically, the black dropout rate from universities in the last decade has run at about twice the white rate.[37]
This was also true of the NLSY. Of all those who ever entered a four-year institution, 63 percent of whites had gotten a bachelor’s degree by 1990 (when the youngest reached 26) compared to only 34 percent of blacks. But the discrepancy is not mysterious. The first and dominant explanation of higher black dropout rates is cognitive ability. Controlling for age and IQ, the black and white dropout rates converge. Given the average IQ of those who entered four-year institutions (about 110), the expected probability that a youth entering a four-year college would graduate was 59 percent for blacks and 61 percent for whites, a trivial difference.[38]
But whereas cognitive ability explains most of the difference in dropout rates, it may not explain everything. In particular, the NLSY data reflect the overall experience of blacks and whites, ignoring the experience at specific colleges as we described it earlier. Let us consider MIT, for which dropout rates by race have also been reported. In 1985, the average SAT-Math score for a black male accepted at MIT was 659, a score that put him above the 90th percentile of all students taking the SAT but below the 25th centile of all students at MIT.[39]
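The two percentile figures describe the same score measured against two different reference groups. As a rough illustration of that arithmetic, the sketch below assumes approximately normal score distributions; the means and standard deviations are hypothetical stand-ins, not figures from the text, chosen only to reproduce the pattern of a score that is high nationally but low among campus peers.

```python
# Illustrative only: distribution parameters below are hypothetical,
# chosen to mimic the pattern described in the text.
from math import erf, sqrt

def normal_percentile(score, mean, sd):
    """Percentile of `score` within a normal(mean, sd) distribution."""
    return 100 * 0.5 * (1 + erf((score - mean) / (sd * sqrt(2))))

# One score, two reference groups.
national = normal_percentile(659, mean=500, sd=100)  # all SAT takers (assumed)
campus = normal_percentile(659, mean=720, sd=60)     # campus peers (assumed)

print(f"vs. all test takers: {national:.0f}th percentile")  # well above the 90th
print(f"vs. campus peers:    {campus:.0f}th percentile")    # well below the 25th
```

The point of the sketch is only that a percentile has no meaning apart from its reference distribution: the identical score lands near the top of one pool and near the bottom of another.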
The dropout rate for black students at MIT in the mid-1980s was 24 percent, compared to 14 percent for whites.[40]
Even if the average MIT black freshman in 1985 could indeed do the work there in some objective sense, getting discouraged about one’s capacity to compete in an environment may be another cost of affirmative action, a phenomenon that has been described anecdotally by a number of observers, black and white alike.[41]
The other vantage point to take into account is the view of the public toward minority and white college graduates. The college degree—what it is and where you got it—packs a lot of information in today’s America, not just as a credential that employers evaluate in hiring but as a broad social signal. One may lament this (people ought to be judged on their own merits, not by where they went to school), but it also has a positive side. Historically, that little sentence, “I have a [solid degree] from [a well-regarded university],” jolted you loose from any number of stereotypes that the person you encountered might have had of you. The reason it did so was that a well-regarded college had a certain set of standards, and its graduates presumably met those standards. No matter what one’s view is of “credentialing” in theory, the greatest beneficiaries of credentialing are those who are subject to negative stereotypes. One of the great losses of preferential affirmative action has been to dilute the effects of the university credential for some minorities. Today the same degree from the same university is perceived differently if you have a black face or a white one. This is not a misguided prejudice that will be changed if only people are given more accurate information about how affirmative action really works. On the contrary, more accurate information about how affirmative action really works confirms such perceptions.
This unhappy reality is unnecessary. There is no reason that minority graduates from any given college have to be any different from white college graduates in their ability or accomplishments. Restoring the value of the credential is easy: Use uniform procedures for selecting, grading, and granting degrees to undergraduates. Some difference in the cognitive distributions among college graduates would still remain, because even if individual schools were to treat applicants and students without regard to race, we could expect some cognitive difference in the national distributions of graduates (since a group with disproportionately fewer high-scoring students would probably gravitate to less competitive schools; they would graduate, but nonetheless have lower mean ability). But within schools, the group differences could be as close to zero as the institution chooses to get. America’s universities are instead perpetuating in the ranks of their graduates the same gap in cognitive ability that separates blacks and Latinos from whites in the general population. As we saw in the data on law and medical schools, there is no reason to think that the gap shrinks as people move further up the educational ladder, and some reason to think it continues to grow.
Some will argue the gap in ability is an acceptable price to pay for the other good things that are supposed to be accomplished by aggressive affirmative action. Our judgment, in contrast, is that in trying to build a society where ethnicity no longer matters in the important events in life, it is crucially important that society’s prestigious labels have the same or as close to the same meaning as possible for different ethnic groups. In the case of one of these key labels—the educational degree—policymakers, aided and abetted by the universities, have prevented this from happening.
We will trace some of the consequences in the next chapter, when we turn to affirmative action in the workplace and present at more length our assessment of how the double standard embedded in affirmative action affects society. For now, we will observe only that the seeds of the consequences in the workplace and beyond are sown in colleges and universities. To anticipate our larger conclusion, affirmative action as it is being practiced is a grave error.
We urge that affirmative action in the universities be radically modified, returning to the original conception. Universities should cast a wide net in seeking applicants, making special efforts to seek talent wherever it lives—in the black South Bronx, Latino Los Angeles, and white Appalachia alike. In the case of two candidates who are fairly closely matched otherwise, universities should give the nod to the applicant from the disadvantaged background. This original sense of affirmative action seems to us to have been not only reasonable and fair but wise.
What does “closely matched” mean in terms of test scores? We have no firm rules, but as a guideline, admissions officers might aim for an admissions policy such that no identifiable group (such as a racial minority) has a mean that is more than half a standard deviation below the rest of the student body.[42]

This guideline is by no means demanding. In effect, it asks only that the average minority student be at the 30th centile of the white distribution. Perhaps experience would prove that this is not closely matched enough. But at least let us move toward that standard and see how it works. The present situation, with black students averaging well over a full standard deviation below the white mean, sometimes approaching two standard deviations, is so far out of line with any plausible rationale that universities today cannot publish the data on their admitted students and hope to persuade the public (or specialists in education) that their policies are reasonable.
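The percentile figures in this passage follow directly from the normal curve. A minimal sketch of that arithmetic, assuming normally distributed scores with equal spread in both groups:

```python
# Where the average member of a lower-scoring group falls within the
# higher-scoring group's distribution, for a given gap between group
# means measured in standard deviations.
from math import erf, sqrt

def percentile_of_group_mean(gap_in_sd):
    # Standard normal CDF evaluated at -gap_in_sd
    return 100 * 0.5 * (1 + erf(-gap_in_sd / sqrt(2)))

print(round(percentile_of_group_mean(0.5)))  # proposed guideline: ~31st percentile
print(round(percentile_of_group_mean(1.0)))  # one-SD gap: ~16th percentile
print(round(percentile_of_group_mean(2.0)))  # two-SD gap: ~2nd percentile
```

A half-standard-deviation gap thus places the average member of the lower-scoring group at roughly the 30th centile of the other group, as the text states; gaps of one to two standard deviations push that figure far lower.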
Would an end to aggressive affirmative action mean that minorities who can profit from a genuine college education will find the door of opportunity closed to them? There is no reason to think so. On the contrary, we urge that people examine more closely an ignored, brief era in American university life—from the mid-1950s to the mid-1960s. Simultaneously, the civil rights movement was gaining momentum, white upper-middle-class America was having its consciousness raised on the subject of racial discrimination, and color-blindness was actively taken as the ideal. At many colleges during that era, applicants were forbidden to enclose a photograph and instructed to avoid any information in the essay that might help identify their race or religion. Whether admissions committees were truly innocent of this information is another question, but the intent was clear, and so was the result: Racial differences in qualifications during that time were minor, or so it appeared to both of us at the time.
What were campus race relations like then? What were the attitudes of the black students toward achievement? What was the performance of black students relative to the predictions that might have been made based on their high school performance? What were the dropout rates of blacks relative to whites in the same institution? What were the subsequent careers of black students from that era? How do black students from that era, looking back, assess the pluses and minuses of the current state of affairs versus their experience?
We must put such topics as questions because that era has been ignored. We suggest this possibility: American universities once approached the ideal in their handling of race on the campus, and there is no reason why they could not do so again.
Fewer blacks would be at Berkeley or Yale if there were no affirmative action. But admitting half as many black students to Yale does not mean that the rejected ones will not go to college; it just means that they will not go to Yale. For some individuals who are not chosen, this will be a loss, for others a blessing, but it is a far different choice from “college” versus “no college.” It is not even clear how much the goals of diversity would be adversely affected for the system as a whole. If affirmative action in its present form were ended, the schools at the very top would have smaller numbers of blacks and some other minorities on their campuses, but many other schools in the next echelons would add those students, even as they lost some of their former students to schools further down the line. And at every level of school, the gap in cognitive ability between minorities and whites would shrink.
Ending affirmative action as it is currently practiced will surely have other effects. Affirmative action does in fact bring a significant number of minority students onto campuses who would not otherwise be there. Perhaps the overall percentage of some minorities who attend college would drop. But their white counterparts at the same level of ability and similar socioeconomic background are not in college now. To what extent is a society fair when people of similar ability and background are treated as differently as they are now? In 1964, the answer would have been unambiguous: Such a society is manifestly unfair. The logic was right then, and right now.
Employers want to hire the best workers; employment tests are one of the best and cheapest selection tools at their disposal. Since affirmative action began in the early 1960s, and especially since a landmark decision by the Supreme Court in 1971 (Griggs v. Duke Power Co.), employers have been tightly constrained in the use they may make of tests. The most common solution is for employers to use them but to hire enough protected minorities to protect themselves from prosecution and lawsuits under the job discrimination rules.
The rules that constrain employers were developed by Congress and the Supreme Court based on the assumptions that tests of general cognitive ability are not a good way of picking employees, that the best tests are ones that measure specific job skills, that tests are biased against blacks and other minorities, and that all groups have equal distributions of cognitive ability. These assumptions are empirically incorrect. Paradoxically, job hiring and promotion procedures that are truly fair and unbiased will produce the racial disparities that public policy tries to prevent.
Have the job discrimination regulations worked? The scholarly consensus is that they had some impact, on some kinds of jobs, in some settings, during the 1960s and into the 1970s, but have not had the decisive impact that is commonly asserted in political rhetoric. It also appears, however, that since the early 1960s blacks have been overrepresented in white-collar and professional occupations relative to the number of candidates in the IQ range from which these jobs are usually filled, suggesting that the effects of affirmative action policy may be greater than usually thought.
The successes of affirmative action have been much more extensively studied than the costs. One of the most understudied areas of this topic is job performance. The scattered data suggest that aggressive affirmative action does produce large racial discrepancies in job performance in a given workplace. It is time that this important area be explored systematically.
In coming to grips with policy, a few hard truths have to be accepted. First, there are no good ways to implement current job discrimination law without incurring costs in economic efficiency and fairness to both employers and employees. Second, after controlling for IQ, it is hard to demonstrate that the United States still suffers from a major problem of racial discrimination in occupations and pay.
As we did for affirmative action in higher education, we present the case for returning to the original conception of affirmative action. This means scrapping the existing edifice of job discrimination law. We think the benefits to productivity and to fairness of ending the antidiscrimination laws are substantial. But our larger reason is that this nation does not have the option of ethnic balkanization.
Affirmative action in the workplace arose at the same time that it did in the universities but with important differences. One difference is that in the workplace, the government and the courts have been the main activists, forcing businesses into a variety of involuntary practices, whereas universities and colleges largely create their own policies regarding student selection. Affirmative action policies in the workplace have been more a matter of evolution than of coherent policymaking. (Appendix 7 traces this evolution.) Universities and colleges occasionally run afoul of affirmative action laws in their hiring and promotion decisions, but in student admissions they are usually far ahead of what has been legally required of them.
A second important difference is that, unlike on campus, almost everyone in the workplace has a personal stake and can see what is going on. In colleges, the applicant who does not get in because he was displaced by an affirmative action admission never knows exactly why he was rejected. In many workplaces, individuals can identify others who are hired, fired, and promoted under the aegis of affirmative action, and they tend to have strong opinions about the merits of each case. In many workplaces, affirmative action decisions regarding a few people can affect the daily life of tens or hundreds of people who work with them and under them. College and university admission decisions have less obvious immediate effects. These may be some of the reasons that few, if any, points of friction in American society have been rubbed so raw as where affirmative action operates in the workplace. The topic inflames relations between white elites (who generally favor the policies) and white workers (many of whom feel victimized by them), between ethnic groups, between the sexes, and between many citizens and their government.