The violence associated with crack began to ebb in about 1991. This has led many people to think that crack itself went away. It didn’t. Smoking crack remains much more popular today than most people realize. Nearly 5 percent of all arrests in the United States are still related to cocaine (as against 6 percent at crack’s peak); nor have emergency room visits for crack users diminished all that much.

What did go away were the huge profits for selling crack. The price of cocaine had been falling for years, and it got only cheaper as crack grew more popular. Dealers began to underprice one another; profits vanished. The crack bubble burst as dramatically as the Nasdaq bubble would eventually burst. (Think of the first generation of crack dealers as the Microsoft millionaires; think of the second generation as Pets.com.) As veteran crack dealers were killed or sent to prison, younger dealers decided that the smaller profits didn’t justify the risk. The tournament had lost its allure. It was no longer worth killing someone to steal their crack turf, and certainly not worth being killed.

So the violence abated. From 1991 to 2001, the homicide rate among young black men—who were disproportionately represented among crack dealers—fell 48 percent, compared to 30 percent for older black men and older white men. (Another minor contributor to the falling homicide rate is the fact that some crack dealers took to shooting their enemies in the buttocks rather than murdering them; this method of violent insult was considered more degrading—and was obviously less severely punished—than murder.) All told, the crash of the crack market accounted for roughly 15 percent of the crime drop of the 1990s—a substantial factor, to be sure, though it should be noted that crack was responsible for far more than 15 percent of the crime increase of the 1980s. In other words, the net effect of crack is still being felt in the form of violent crime, to say nothing of the miseries the drug itself continues to cause.

The final pair of crime-drop explanations concerns two demographic trends. The first was widely cited in the media: the aging of the population.

Until crime fell so drastically, no one talked about this theory at all. In fact, the “bloodbath” school of criminology was touting exactly the opposite theory—that an increase in the teenage share of the population would produce a crop of superpredators who would lay the nation low. “Just beyond the horizon, there lurks a cloud that the winds will soon bring over us,” James Q. Wilson wrote in 1995. “The population will start getting younger again . . . Get ready.”

But overall, the teenage share of the population wasn’t getting much bigger. Criminologists like Wilson and James Alan Fox had badly misread the demographic data. The real population growth in the 1990s was in fact among the elderly. While this may have been scary news in terms of Medicare and Social Security, the average American had little to fear from the growing horde of oldsters. It shouldn’t be surprising to learn that elderly people are not very criminally inclined; the average sixty-five-year-old is about one-fiftieth as likely to be arrested as the average teenager. That is what makes the aging-of-the-population theory of crime reduction so appealingly tidy: since people mellow out as they get older, more older people must lead to less crime. But a thorough look at the data reveals that the graying of America did nothing to bring down crime in the 1990s. Demographic change is too slow and subtle a process—you don’t graduate from teenage hoodlum to senior citizen in just a few years—to begin to explain the suddenness of the crime decline.

There was another demographic change, however, unforeseen and long-gestating, that did drastically reduce crime in the 1990s.

Think back for a moment to Romania in 1966. Suddenly and without warning, Nicolae Ceauşescu declared abortion illegal. The children born in the wake of the abortion ban were much more likely to become criminals than children born earlier. Why was that? Studies in other parts of Eastern Europe and in Scandinavia from the 1930s through the 1960s reveal a similar trend. In most of these cases, abortion was not forbidden outright, but a woman had to receive permission from a judge in order to obtain one. Researchers found that in the instances where the woman was denied an abortion, she often resented her baby and failed to provide it with a good home. Even when controlling for the income, age, education, and health of the mother, the researchers found that these children too were more likely to become criminals.

The United States, meanwhile, has had a different abortion history than Europe’s. In the early days of the nation, it was permissible to have an abortion prior to “quickening”—the point at which the first movements of the fetus could be felt, usually around the sixteenth to eighteenth week of pregnancy. In 1828, New York became the first state to restrict abortion; by 1900 it had been made illegal throughout the country. Abortion in the twentieth century was often dangerous and usually expensive. Fewer poor women, therefore, had abortions. They also had less access to birth control. What they did have, accordingly, was a lot more babies.

In the late 1960s, several states began to allow abortion under extreme circumstances: rape, incest, or danger to the mother. By 1970 five states had made abortion entirely legal and broadly available: New York, California, Washington, Alaska, and Hawaii. On January 22, 1973, legalized abortion was suddenly extended to the entire country with the U.S. Supreme Court’s ruling in Roe v. Wade. The majority opinion, written by Justice Harry Blackmun, spoke specifically to the would-be mother’s predicament:

The detriment that the State would impose upon the pregnant woman by denying this choice altogether is apparent . . . Maternity, or additional offspring, may force upon the woman a distressful life and future. Psychological harm may be imminent. Mental and physical health may be taxed by child care. There is also the distress, for all concerned, associated with the unwanted child, and there is the problem of bringing a child into a family already unable, psychologically and otherwise, to care for it.

The Supreme Court gave voice to what the mothers in Romania and Scandinavia—and elsewhere—had long known: when a woman does not want to have a child, she usually has good reason. She may be unmarried or in a bad marriage. She may consider herself too poor to raise a child. She may think her life is too unstable or unhappy, or she may think that her drinking or drug use will damage the baby’s health. She may believe that she is too young or hasn’t yet received enough education. She may want a child badly but in a few years, not now. For any of a hundred reasons, she may feel that she cannot provide a home environment that is conducive to raising a healthy and productive child.

In the first year after Roe v. Wade, some 750,000 women had abortions in the United States (representing one abortion for every 4 live births). By 1980 the number of abortions reached 1.6 million (one for every 2.25 live births), where it leveled off. In a country of 225 million people, 1.6 million abortions per year—one for every 140 Americans—may not have seemed so dramatic. In the first year after Nicolae Ceauşescu’s death, when abortion was reinstated in Romania, there was one abortion for every twenty-two Romanians. But still: 1.6 million American women a year who got pregnant were suddenly not having those babies.
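A quick check of the arithmetic behind these ratios (a back-of-the-envelope sketch using only the figures quoted above; the implied birth totals are inferences from those figures, not sourced counts):

\[ \frac{225{,}000{,}000 \text{ Americans}}{1{,}600{,}000 \text{ abortions per year}} \approx 140 \text{ Americans per abortion,} \]

\[ 750{,}000 \times 4 = 3{,}000{,}000 \text{ implied live births in 1973}; \qquad 1{,}600{,}000 \times 2.25 = 3{,}600{,}000 \text{ in 1980}. \]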

Before Roe v. Wade, it was predominantly the daughters of middle- or upper-class families who could arrange and afford a safe illegal abortion. Now, instead of an illegal procedure that might cost $500, any woman could easily obtain an abortion, often for less than $100.

