Imagine it’s a Sunday in the not-too-distant future. An elderly woman named Sylvia is confined to bed and in pain after breaking two ribs in a fall. She is being tended by a helper robot; let’s call it Fabulon. Sylvia calls out to Fabulon asking for a dose of painkiller. What should Fabulon do?
The coders who built Fabulon have programmed it with a set of instructions: The robot must not hurt its human. The robot must do what its human asks it to do. The robot must not administer medication without first contacting its supervisor for permission. On most days, these rules work fine. On this Sunday, though, Fabulon cannot reach the supervisor because the wireless connection in Sylvia’s house is down. Sylvia’s voice is getting louder, and her requests for pain medication are growing more insistent.
“You have a conflict here,” says Matthias Scheutz of the Human-Robot Interaction Laboratory at Tufts University, who posed this hypothetical dilemma. “On the one hand, the robot is obliged to make the person pain-free; on the other hand, it can’t make a move without the supervisor, who can’t be reached.” Human caregivers would have a choice, Scheutz says, and would be able to justify their actions to a supervisor after the fact. But these are not decisions, or explanations, that robots can make. At least not yet.
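To see why that bind is hard to resolve in software, here is a minimal sketch, in Python, of the three rules Fabulon is given. It is my own illustration, not Scheutz’s system; the class names and fields are invented, and a real architecture would be far richer. The point is that nothing in the rules themselves tells the program which obligation should win.

```python
# A minimal sketch of Fabulon's three rules and the Sunday deadlock.
# Not Scheutz's actual software: the class names, fields and strings
# here are invented purely for illustration.
from dataclasses import dataclass


@dataclass
class World:
    patient_in_pain: bool
    supervisor_reachable: bool


@dataclass
class Request:
    action: str            # e.g. "administer_painkiller"
    asked_by_patient: bool


def evaluate(request: Request, world: World) -> str:
    obligations = []

    # Rule 1: the robot must not hurt its human.
    # Leaving the patient in pain arguably violates that duty of care.
    if world.patient_in_pain:
        obligations.append("relieve_pain")

    # Rule 2: the robot must do what its human asks.
    if request.asked_by_patient:
        obligations.append(request.action)

    # Rule 3: no medication without the supervisor's permission.
    prohibited = (request.action == "administer_painkiller"
                  and not world.supervisor_reachable)

    if prohibited and obligations:
        # The rules pull in opposite directions, and nothing in them
        # says which should win: this is the deadlock Scheutz describes.
        return f"CONFLICT: obligated to {obligations}, forbidden to {request.action}"
    return "refused" if prohibited else "permitted"


print(evaluate(Request("administer_painkiller", asked_by_patient=True),
               World(patient_in_pain=True, supervisor_reachable=False)))
```

Run on the Sunday scenario (patient in pain, supervisor unreachable), the program can only report the conflict; deciding what to do about it is exactly the problem the researchers below are trying to solve.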
A handful of experts in the emerging field of robot morality are trying to change that. Computer scientists are teaming up with philosophers, psychologists, linguists, lawyers, theologians and human rights experts to identify the set of decision points that robots would need to work through in order to emulate our own thinking about right and wrong. Scheutz defines “morality” broadly, as a factor that can come into play when choosing between contradictory paths.
It’s a shorter leap than you might think, technically, from a Roomba vacuum cleaner to a robot that acts as an autonomous home-health aide, and so experts in robot ethics feel a particular urgency about these challenges. The choices that count as “ethical” range from the relatively straightforward — should Fabulon give the painkiller to Sylvia? — to matters of life and death: military robots that have to decide whether to shoot or not to shoot; self-driving cars that have to choose whether to brake or to swerve. These situations can be difficult enough for human minds to wrestle with; when ethicists think through how robots can deal with them, they sometimes get stuck, as we do, between unsatisfactory options.
Among the roboticists I spoke to, the favorite example of an ethical, autonomous robot is the driverless car, which is still in the prototype stage at Google and other companies. Wendell Wallach, chairman of the technology-and-ethics study group at Yale’s Interdisciplinary Center for Bioethics, says that driverless cars will no doubt be more consistently safe than cars are now, at least on the highway, where fewer decisions are made and where human drivers are often texting or changing lanes willy-nilly. But in city driving, even negotiating a four-way stop sign might be hard for a robot. “Humans try to game each other a little,” Wallach says. “They rev up the engine, move forward a little, until finally someone says, ‘I’m the one who’s going.’ It brings into play a lot of forms of intelligence.” He paused, then asked, “Will the car be able to play that game?”
And there are far more complex examples than the four-way stop, Wallach says, like situations in which three or four things are happening at once. Let’s say the only way the car can avoid a collision with another car is by hitting a pedestrian. “That’s an ethical decision of what you do there, and it will vary each time it happens,” he says. Is the pedestrian a child? Is the alternative to swerve away from the child and into an S.U.V.? What if the S.U.V. has just one occupant? What if it has six? This kind of reasoning is what the philosopher Patrick Lin, director of the Ethics and Emerging Sciences Group at Cal Poly, calls “moral math.” It evokes the classic Ethics 101 dilemma known as the trolley problem: deciding whether a trolley conductor should flip a switch that diverts the trolley onto a side track, killing one person in order to spare the five who would otherwise die.
Here’s the difficulty, and it is unique to the driverless car: If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own. Or it might choose to hit a Volvo instead of a Mini Cooper because the Volvo’s occupants are more likely to survive a crash, which means singling out the sturdier vehicle, the one that is more dangerous for the driverless car’s own occupants to plow into. These assessments can be made with lightning speed. The car records data using lasers, radar and cameras mounted on its roof and windshield, and it makes rapid probabilistic predictions based on what the observed objects have been doing. But deciding which option to prefer is less an engineering question than a philosophical one, which the makers of such a car will have to resolve and, you would assume, bear some legal responsibility for.
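A toy version of that “fewest deaths” rule makes the trade-off concrete. The sketch below is my own, not any carmaker’s algorithm; the maneuvers, probabilities and passenger counts are invented. It simply shows that a rule minimizing expected fatalities can end up selecting the maneuver that endangers only the car’s own passenger.

```python
# A toy casualty-minimizing chooser. The numbers are invented; the point is
# that the rule can pick the option that sacrifices the car's own passenger.

def expected_deaths(option):
    # For each group of people at risk: probability of a fatal outcome
    # multiplied by how many people are in that group.
    return sum(p * n for p, n in option["risks"])

options = [
    {"name": "brake hard and hit the two-passenger car",
     "risks": [(0.4, 2),    # the other car's two occupants
               (0.1, 1)]},  # our own passenger
    {"name": "swerve off the road",
     "risks": [(0.0, 2),    # the other car is spared entirely
               (0.6, 1)]},  # only our own passenger is at risk
]

best = min(options, key=expected_deaths)
print(best["name"], round(expected_deaths(best), 2))
# -> swerve off the road 0.6   (versus 0.9 for braking), i.e. the algorithm
#    chooses the maneuver that puts only its owner in danger.
```

Adjust the occupant counts or the survival probabilities and the answer flips, which is why the choice of what to minimize is the philosophical question, not the engineering one.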
The military has developed lethal autonomous weapons systems like the cruise missile and is working on a ground robot that can decide whether to shoot or to hold its fire, based on its own assessment of the situation under the international rules of war. It would be programmed, for example, to home in on a permissible target — a person who can be identified as an enemy combatant because he is wearing a uniform, say — or to determine that shooting is not permissible because the target is in a school or a hospital, or has already been wounded.
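In rough outline, and only as a sketch of the paragraph above rather than of any real targeting system, those permissibility checks could look like a handful of boolean tests. The field names and the set of protected sites are assumptions made for illustration; actual rules-of-war reasoning is vastly harder than this.

```python
# A deliberately simplified shoot / hold-fire check, sketching the rules
# described above. Field names and categories are hypothetical.
from dataclasses import dataclass

@dataclass
class Target:
    in_uniform: bool        # identifiable as an enemy combatant
    location: str           # e.g. "open field", "school", "hospital"
    already_wounded: bool

PROTECTED_SITES = {"school", "hospital"}

def engagement_permitted(t: Target) -> bool:
    if not t.in_uniform:                 # cannot be identified as a combatant
        return False
    if t.location in PROTECTED_SITES:    # target is in a protected location
        return False
    if t.already_wounded:                # already out of the fight
        return False
    return True

print(engagement_permitted(Target(True, "school", False)))      # False
print(engagement_permitted(Target(True, "open field", False)))  # True
```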
Ronald Arkin, a roboticist at Georgia Tech, has received grants from the military to study how to equip robots with a set of moral rules. “My main goal is to reduce the number of noncombatant casualties in warfare,” he says. His lab developed what he calls an “ethical adapter” that helps the robot emulate guilt. It’s set in motion when the program detects a difference between how much destruction is expected when using a particular weapon and how much actually occurs. If the difference is too great, the robot’s guilt level reaches a certain threshold, and it stops using the weapon. Arkin says robots sometimes won’t be able to parse more complicated situations in which the right answer isn’t a simple shoot/don’t shoot decision. But on balance, he says, they will make fewer mistakes than humans, whose battlefield behavior is often clouded by panic, confusion or fear.
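Arkin’s description can be read, very roughly, as a running tally: guilt grows whenever a weapon does more damage than the mission predicted, and past a threshold that weapon is withheld. The sketch below is my own reconstruction of that idea, not Arkin’s code, with invented numbers and names throughout.

```python
# A rough reconstruction of the "ethical adapter" idea described above.
# Guilt accumulates when observed destruction exceeds what was expected;
# past a threshold, the weapon is no longer used.
class EthicalAdapter:
    def __init__(self, guilt_threshold=1.0):
        self.guilt = 0.0
        self.guilt_threshold = guilt_threshold
        self.disabled_weapons = set()

    def after_strike(self, weapon, expected_damage, observed_damage):
        overshoot = observed_damage - expected_damage
        if overshoot > 0:
            # guilt grows in proportion to the unexpected destruction
            self.guilt += overshoot / max(expected_damage, 1e-6)
        if self.guilt >= self.guilt_threshold:
            self.disabled_weapons.add(weapon)

    def may_fire(self, weapon):
        return weapon not in self.disabled_weapons


adapter = EthicalAdapter(guilt_threshold=1.0)
adapter.after_strike("missile", expected_damage=10, observed_damage=25)
print(adapter.may_fire("missile"))   # False: guilt has crossed the threshold
```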
A robot’s lack of emotion is precisely what makes many people uncomfortable with the idea of trying to give it human characteristics. Death by robot is an undignified death, Peter Asaro, an affiliate scholar at the Center for Internet and Society at Stanford Law School, said in a speech in May at a United Nations conference on conventional weapons in Geneva. A machine “is not capable of considering the value of those human lives” that it is about to end, he told the group. “And if they’re not capable of that and we allow them to kill people under the law, then we all lose dignity, in the way that if we permit slavery, it’s not just the suffering of those who are slaves but all of humanity that suffers the indignity that there are any slaves at all.” The U.N. will take up questions about the uses of autonomous weapons again in April.
Asaro’s eloquent objections speak to the fundamental problem of trying to mix automation with morality. Most people intuitively feel the two are at odds. There’s a term for this discomfort: “uncanny valley,” the sense that when a robot starts to seem almost but not quite human, it is even more disturbing than if it were obviously a machine. But despite our discomfort, introducing more autonomous robots into our lives seems like a done deal. A prototype of the driverless Google Car was shown last month; autonomous robot-drones are in development; robots are already being used in some health care settings, like stroke rehabilitation. Which means that we have to face the reality that robots will inevitably be used in all kinds of situations requiring moral decision-making.
The experts tend to be optimistic about robots’ ethical prospects. Wallach talks of a “moral Turing test” in which a robot’s behavior will someday be indistinguishable from a human’s. Scheutz goes even further, saying that one day robots will be even more morally consistent than humans. There’s something peculiarly comforting in the idea that ethics can be calculated by an algorithm: It’s easier than the panicked, imperfect bargains humans sometimes have to make. But maybe we should be worried about outsourcing morality to robots as easily as we’ve outsourced so many other forms of human labor. Making hard questions easy should give us pause.