
Human beings are anything but flawless when it comes to ethical judgments. We frequently do the wrong thing, sometimes out of confusion or heedlessness, sometimes deliberately. That’s led some to argue that the speed with which robots can sort through options, estimate probabilities, and weigh consequences will allow them to make more rational choices than people are capable of making when immediate action is called for. There’s truth in that view. In certain circumstances, particularly those where only money or property is at stake, a swift calculation of probabilities may be sufficient to determine the action that will lead to the optimal outcome. Some human drivers will try to speed through a traffic light that’s just turning red, even though it ups the odds of an accident. A computer would never act so rashly. But most moral dilemmas aren’t so tractable. Try to solve them mathematically, and you arrive at a more fundamental question: Who determines what the “optimal” or “rational” choice is in a morally ambiguous situation? Who gets to program the robot’s conscience? Is it the robot’s manufacturer? The robot’s owner? The software coders? Politicians? Government regulators? Philosophers? An insurance underwriter?

There is no perfect moral algorithm, no way to reduce ethics to a set of rules that everyone will agree on. Philosophers have tried to do that for centuries, and they’ve failed. Even coldly utilitarian calculations are subjective; their outcome hinges on the values and interests of the decision maker. The rational choice for your car’s insurer—the dog dies—might not be the choice you’d make, either deliberately or reflexively, when you’re about to run over a neighbor’s pet. “In an age of robots,” observes the political scientist Charles Rubin, “we will be as ever before—or perhaps as never before—stuck with morality.”3

Still, the algorithms will need to be written. The idea that we can calculate our way out of moral dilemmas may be simplistic, or even repellent, but that doesn’t change the fact that robots and software agents are going to have to calculate their way out of moral dilemmas. Unless and until artificial intelligence attains some semblance of consciousness and is able to feel or at least simulate emotions like affection and regret, no other course will be open to our calculating kin. We may rue the fact that we’ve succeeded in giving automatons the ability to take moral action before we’ve figured out how to give them moral sense, but regret doesn’t let us off the hook. The age of ethical systems is upon us. If autonomous machines are to be set loose in the world, moral codes will have to be translated, however imperfectly, into software codes.

HERE’S ANOTHER scenario. You’re an army colonel who’s commanding a battalion of human and mechanical soldiers. You have a platoon of computer-controlled “sniper robots” stationed on street corners and rooftops throughout a city that your forces are defending against a guerrilla attack. One of the robots spots, with its laser-vision sight, a man in civilian clothes holding a cell phone. He’s acting in a way that experience would suggest is suspicious. The robot, drawing on a thorough analysis of the immediate situation and a rich database documenting past patterns of behavior, instantly calculates that there’s a 68 percent chance the person is an insurgent preparing to detonate a bomb and a 32 percent chance he’s an innocent bystander. At that moment, a personnel carrier is rolling down the street with a dozen of your human soldiers on board. If there is a bomb, it could be detonated at any moment. War has no pause button. Human judgment can’t be brought to bear. The robot has to act. What does its software order its gun to do: shoot or hold fire?
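To see what calculating its way out of that dilemma would actually involve, here is a minimal sketch of the decision as code, assuming a simple expected-cost rule. The probabilities come from the scenario above, but the cost weights are hypothetical placeholders, which is exactly the problem: someone has to choose them, and whoever does is programming the robot’s conscience.

```python
# Minimal sketch of the firing decision as an expected-cost comparison.
# The 0.68 / 0.32 probabilities come from the scenario above; the cost
# weights are hypothetical stand-ins for whatever values the manufacturer,
# the software coders, or a regulator might settle on.

P_INSURGENT = 0.68    # estimated chance the man is about to detonate a bomb
P_BYSTANDER = 0.32    # estimated chance he is an innocent bystander

COST_OF_HOLDING_FIRE_IF_INSURGENT = 12.0   # assumed harm if the bomb goes off
COST_OF_FIRING_IF_BYSTANDER       = 25.0   # assumed harm of killing an innocent man

def decide_to_shoot(p_insurgent: float, p_bystander: float) -> bool:
    """Shoot only if the expected cost of holding fire exceeds the expected
    cost of firing. Every number fed into this rule encodes a moral judgment."""
    expected_cost_of_holding = p_insurgent * COST_OF_HOLDING_FIRE_IF_INSURGENT
    expected_cost_of_firing  = p_bystander * COST_OF_FIRING_IF_BYSTANDER
    return expected_cost_of_holding > expected_cost_of_firing

print(decide_to_shoot(P_INSURGENT, P_BYSTANDER))   # True with these weights; nudge a cost and the verdict flips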

If we, as civilians, have yet to grapple with the ethical implications of self-driving cars and other autonomous robots, the situation is very different in the military. For years, defense departments and military academies have been studying the methods and consequences of handing authority for life-and-death decisions to battlefield machines. Missile and bomb strikes by unmanned drone aircraft, such as the Predator and the Reaper, are already commonplace, and they’ve been the subject of heated debates. Both sides make good arguments. Proponents note that drones keep soldiers and airmen out of harm’s way and, through the precision of their attacks, reduce the casualties and damage that accompany traditional combat and bombardment. Opponents see the strikes as state-sponsored assassinations. They point out that the explosions frequently kill or wound, not to mention terrify, civilians. Drone strikes, though, aren’t automated; they’re remote-controlled. The planes may fly themselves and perform surveillance functions on their own, but decisions to fire their weapons are made by soldiers sitting at computers and monitoring live video feeds, operating under strict orders from their superiors. As currently deployed, missile-carrying drones aren’t all that different from cruise missiles and other weapons. A person still pulls the trigger.

The big change will come when a computer starts pulling the trigger. Fully automated, computer-controlled killing machines—what the military calls lethal autonomous robots, or LARs—are technologically feasible today, and have been for quite some time. Environmental sensors can scan a battlefield with high-definition precision, automatic firing mechanisms are in wide use, and codes to control the shooting of a gun or the launch of a missile aren’t hard to write. To a computer, a decision to fire a weapon isn’t really any different from a decision to trade a stock or direct an email message into a spam folder. An algorithm is an algorithm.
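To make that equivalence concrete, here is an illustrative sketch. Every function name, score, and threshold below is invented for the example rather than taken from any real system; the point is only that the three decisions share one structure: compute a score, compare it to a threshold, act.

```python
def exceeds_threshold(score: float, threshold: float) -> bool:
    """The generic shape of an automated decision. Nothing in this logic
    knows, or needs to know, what is at stake."""
    return score >= threshold

# Invented stand-in scorers; a real system would compute these from data.
def spam_score(message: str) -> float:
    return 0.97

def expected_return(ticker: str) -> float:
    return 0.01

def threat_score(sensor_reading: str) -> float:
    return 0.68

route_to_junk = exceeds_threshold(spam_score("win a free cruise"), 0.95)         # True
place_trade   = exceeds_threshold(expected_return("XYZ"), 0.02)                  # False
open_fire     = exceeds_threshold(threat_score("man holding cell phone"), 0.90)  # False
```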

In 2013, Christof Heyns, a South African legal scholar who serves as special rapporteur on extrajudicial, summary, and arbitrary executions to the United Nations General Assembly, issued a report on the status of and prospects for military robots.4 Clinical and measured, it made for chilling reading. “Governments with the ability to produce LARs,” Heyns wrote, “indicate that their use during armed conflict or elsewhere is not currently envisioned.” But the history of weaponry, he went on, suggests we shouldn’t put much stock in these assurances: “It should be recalled that aeroplanes and drones were first used in armed conflict for surveillance purposes only, and offensive use was ruled out because of the anticipated adverse consequences. Subsequent experience shows that when technology that provides a perceived advantage over an adversary is available, initial intentions are often cast aside.” Once a new type of weaponry is deployed, moreover, an arms race almost always ensues. At that point, “the power of vested interests may preclude efforts at appropriate control.”

War is in many ways more cut-and-dried than civilian life. There are rules of engagement, chains of command, well-demarcated sides. Killing is not only acceptable but encouraged. Yet even in war the programming of morality raises problems that have no solution—or at least can’t be solved without setting a lot of moral considerations aside. In 2008, the U.S. Navy commissioned the Ethics and Emerging Sciences Group at California Polytechnic State University to prepare a white paper reviewing the ethical issues raised by LARs and laying out possible approaches to “constructing ethical autonomous robots” for military use. The ethicists reported that there are two basic ways to program a robot’s computer to make moral decisions: top-down and bottom-up. In the top-down approach, all the rules governing the robot’s decisions are programmed ahead of time, and the robot simply obeys the rules “without change or flexibility.” That sounds straightforward, but it’s not, as Asimov discovered when he tried to formulate his system of robot ethics. There’s no way to anticipate all the circumstances a robot may encounter. The “rigidity” of top-down programming can backfire, the scholars wrote, “when events and situations unforeseen or insufficiently imagined by the programmers occur, causing the robot to perform badly or simply do horrible things, precisely because it is rule-bound.”5
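In code, the top-down approach the ethicists describe amounts to a fixed table of condition-and-action rules, applied rigidly. The toy sketch below is purely illustrative (the rules and the situation fields are invented); the fallback at the bottom of the function is the weakness the Cal Poly authors point to, a hard-coded default for every circumstance the programmers failed to imagine.

```python
RULES = [
    # (condition, action) pairs, fixed in advance and checked in order; first match wins.
    (lambda s: s["target_is_civilian"],              "hold_fire"),
    (lambda s: s["order_from_commander"] == "halt",  "hold_fire"),
    (lambda s: s["threat_confirmed"],                "engage"),
]

def top_down_decision(situation: dict) -> str:
    for condition, action in RULES:
        if condition(situation):
            return action
    # The rigidity the ethicists warn about: any situation the programmers
    # failed to imagine falls through to whatever default was hard-coded here.
    return "hold_fire"

print(top_down_decision({"target_is_civilian": False,
                         "order_from_commander": "none",
                         "threat_confirmed": True}))   # engage
```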

In the bottom-up approach, the robot is programmed with a few rudimentary rules and then sent out into the world. It uses machine-learning techniques to develop its own moral code, adapting it to new situations as they arise. “Like a child, a robot is placed into variegated situations and is expected to learn through trial and error (and feedback) what is and is not appropriate to do.” The more dilemmas it faces, the more fine-tuned its moral judgment becomes. But the bottom-up approach presents even thornier problems. First, it’s impracticable; we have yet to invent machine-learning algorithms subtle and robust enough for moral decision making. Second, there’s no room for trial and error in life-and-death situations; the approach itself would be immoral. Third, there’s no guarantee that the morality a computer develops would reflect or be in harmony with human morality. Set loose on a battlefield with a machine gun and a set of machine-learning algorithms, a robot might go rogue.
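In code, the bottom-up approach looks less like a rulebook and more like a feedback loop. The sketch below is a deliberately crude stand-in (the situations, actions, reward signal, and learning rate are all invented): the agent starts with essentially no moral code and adjusts its preferences according to the feedback it receives, which is precisely why trial and error is so troubling when the errors are lethal.

```python
import random

ACTIONS = ["hold_fire", "warn", "engage"]
weights = {a: 0.0 for a in ACTIONS}   # the robot starts with essentially no moral code

def feedback(situation: str, action: str) -> float:
    """Invented stand-in for the trainer's judgment of what was appropriate."""
    return 1.0 if (situation, action) == ("civilian_present", "hold_fire") else -1.0

def choose(situation: str) -> str:
    # Explore occasionally; otherwise exploit the current learned preference.
    if random.random() < 0.2:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: weights[a])

for _ in range(1000):                 # "trial and error (and feedback)"
    action = choose("civilian_present")
    weights[action] += 0.1 * feedback("civilian_present", action)

print(max(weights, key=lambda a: weights[a]))   # converges to "hold_fire" here
```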

Human beings, the ethicists pointed out, employ a “hybrid” of top-down and bottom-up approaches in making moral decisions. People live in societies that have laws and other strictures to guide and control behavior; many people also shape their decisions and actions to fit religious and cultural precepts; and personal conscience, whether innate or not, imposes its own rules. Experience plays a role too. People learn to be moral creatures as they grow up and struggle with ethical decisions of different stripes in different situations. We’re far from perfect, but most of us have a discriminating moral sense that can be applied flexibly to dilemmas we’ve never encountered before. The only way for robots to become truly moral beings would be to follow our example and take a hybrid approach, both obeying rules and learning from experience. But creating a machine with that capacity is far beyond our technological grasp. “Eventually,” the ethicists concluded, “we may be able to build morally intelligent robots that maintain the dynamic and flexible morality of bottom-up systems capable of accommodating diverse inputs, while subjecting the evaluation of choices and actions to top-down principles.” Before that happens, though, we’ll need to figure out how to program computers to display “supra-rational faculties”—to have emotions, social skills, consciousness, and a sense of “being embodied in the world.”6 We’ll need to become gods, in other words.
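A hybrid controller in the ethicists’ sense would wrap a learned, bottom-up component inside fixed, top-down constraints. The toy sketch below combines the shapes of the two previous examples (again, every rule, action, and score is invented for illustration): the learned part proposes an action, and the hard rules can veto it.

```python
HARD_RULES = [
    # Top-down constraint: never engage a target identified as a civilian.
    lambda situation, action: not (situation.get("target_is_civilian") and action == "engage"),
]

def learned_preference(situation: dict) -> str:
    """Invented stand-in for a bottom-up component trained on past feedback."""
    return "engage" if situation.get("threat_confirmed") else "hold_fire"

def hybrid_decision(situation: dict) -> str:
    proposal = learned_preference(situation)             # bottom-up: the learned impulse
    if all(rule(situation, proposal) for rule in HARD_RULES):
        return proposal
    return "hold_fire"                                   # top-down principles override it

print(hybrid_decision({"threat_confirmed": True, "target_is_civilian": True}))   # hold_fire
```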

Armies are unlikely to wait that long. In an article in Parameters, the journal of the U.S. Army War College, Thomas Adams, a military strategist and retired lieutenant colonel, argues that “the logic leading to fully autonomous systems seems inescapable.” Thanks to the speed, size, and sensitivity of robotic weaponry, warfare is “leaving the realm of human senses” and “crossing outside the limits of human reaction times.” It will soon be “too complex for real human comprehension.” As people become the weakest link in the military system, he says, echoing the technology-centric arguments of civilian software designers, maintaining “meaningful human control” over battlefield decisions will become next to impossible. “One answer, of course, is to simply accept a slower information-processing rate as the price of keeping humans in the military decision business. The problem is that some adversary will inevitably decide that the way to defeat the human-centric systems is to attack it with systems that are not so limited.” In the end, Adams believes, we “may come to regard tactical warfare as properly the business of machines and not appropriate for people at all.”7

What will make it especially difficult to prevent the deployment of LARs is not just their tactical effectiveness. It’s also that their deployment would have certain ethical advantages independent of the machines’ own moral makeup. Unlike human fighters, robots have no baser instincts to tug at them in the heat and chaos of battle. They don’t experience stress or depression or surges of adrenaline. “Typically,” Christof Heyns wrote, “they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape.”8

Robots don’t lie or otherwise try to hide their actions, either. They can be programmed to leave digital trails, which would tend to make an army more accountable for its actions. Most important of all, by using LARs to wage war, a country can avoid death or injury to its own soldiers. Killer robots save lives as well as take them. As soon as it becomes clear to people that automated soldiers and weaponry will lower the likelihood of their sons and daughters being killed or maimed in battle, the pressure on governments to automate war making may become irresistible. That robots lack “human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people’s actions, and understanding of values,” in Heyns’s words, may not matter in the end. In fact, the moral stupidity of robots has its advantages. If the machines displayed human qualities of thought and feeling, we’d be less sanguine about sending them to their destruction in war.

The slope gets only more slippery. The military and political advantages of robot soldiers bring moral quandaries of their own. The deployment of LARs won’t just change the way battles and skirmishes are fought, Heyns pointed out. It will change the calculations that politicians and generals make about whether to go to war in the first place. The public’s distaste for casualties has always been a deterrent to fighting and a spur to negotiation. Because LARs will reduce the “human costs of armed conflict,” the public may “become increasingly disengaged” from military debates and “leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”9
