Designing a Moral Machine

Back around the turn of the millennium, Susan Anderson was puzzling over a problem in ethics. Is there a way to rank competing moral obligations? The University of Connecticut philosophy professor posed the problem to her computer scientist spouse, Michael Anderson, figuring his algorithmic expertise might help.

At the time, he was reading about the making of the film 2001: A Space Odyssey, in which spaceship computer HAL 9000 tries to murder its human crewmates. “I realized that it was 2001,” he recalls, “and that capabilities like HAL’s were close.” If artificial intelligence were to be pursued responsibly, he reckoned, it would also need to be able to solve moral dilemmas.

In the 16 years since, that conviction has become mainstream. Artificial intelligence now permeates everything from health care to warfare, and could soon make life-and-death decisions for self-driving cars. “Intelligent machines are absorbing the responsibilities we used to have, which is a terrible burden,” explains ethicist Patrick Lin of California Polytechnic State University. “For us to trust them to act on their own, it’s important that these machines are designed with ethical decision-making in mind.”

The Andersons have devoted their careers to that challenge, deploying the first ethically programmed robot in 2010. Admittedly, their robot is considerably less autonomous than HAL 9000. The toddler-size humanoid machine was conceived with just one task in mind: to ensure that homebound elders take their medications. According to Susan, this responsibility is ethically fraught, as the robot must balance conflicting duties, weighing the patient’s health against respect for personal autonomy. To teach it, Michael created machine-learning algorithms that let ethicists plug in examples of ethically appropriate behavior; from those examples, the robot’s computer derives a general principle that guides its activity in real life. Now they’ve taken another step forward.
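Before turning to that next step, it may help to sketch what “deriving a general principle from examples” could look like in code. The following is only a rough illustration, not the Andersons’ actual system: it assumes each candidate action in a case is rated by an ethicist on a handful of prima facie duties (benefit, non-harm, and respect for autonomy are stand-ins), and it searches for duty weights that reproduce the ethicist’s judgments.

```python
# Illustrative sketch only; not the Andersons' actual system.
# Assumption: an ethicist rates each candidate action on a few duties
# (from -2, strongly violates, to 2, strongly satisfies) and marks which
# action is right. The learner looks for duty weights under which the
# ethicist-preferred action always scores highest.

from dataclasses import dataclass

DUTIES = ["benefit", "non_harm", "autonomy"]  # assumed duty set


@dataclass
class Option:
    name: str
    scores: dict  # duty -> ethicist's rating in [-2, 2]


@dataclass
class Case:
    options: list   # candidate actions, e.g. "defer" vs. "notify_doctor"
    preferred: str  # the action the ethicist judges to be right


def learn_principle(cases, passes=50, step=0.1):
    """Perceptron-style search for duty weights that rank each
    ethicist-preferred action above the alternatives."""
    weights = {d: 0.0 for d in DUTIES}

    def score(opt):
        return sum(weights[d] * opt.scores[d] for d in DUTIES)

    for _ in range(passes):
        for case in cases:
            best = max(case.options, key=score)
            if best.name != case.preferred:
                chosen = next(o for o in case.options if o.name == case.preferred)
                # Nudge the weights toward the ethicist's judgment.
                for d in DUTIES:
                    weights[d] += step * (chosen.scores[d] - best.scores[d])
    return weights


cases = [
    # A patient refuses a routine, non-critical dose. Deferring respects
    # autonomy; escalating overrides it for little benefit, so the
    # ethicist marks "defer" as the right action.
    Case(
        options=[
            Option("notify_doctor", {"benefit": 1, "non_harm": 0, "autonomy": -2}),
            Option("defer", {"benefit": 0, "non_harm": 0, "autonomy": 2}),
        ],
        preferred="defer",
    ),
    # A patient refuses a dose whose omission could cause real harm.
    # Here the ethicist judges that notifying the doctor is right, even
    # though it overrides the patient's stated preference.
    Case(
        options=[
            Option("defer", {"benefit": 0, "non_harm": -2, "autonomy": 2}),
            Option("notify_doctor", {"benefit": 2, "non_harm": 2, "autonomy": -1}),
        ],
        preferred="notify_doctor",
    ),
]

principle = learn_principle(cases)
print(principle)  # non-harm ends up weighted most heavily
```

Run on these two toy cases, the learned weights favor non-harm over the other duties, a miniature version of a principle like “respect the patient’s refusal unless real harm is likely.”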

“The study of ethics goes back to Plato and Aristotle, and there’s a lot of wisdom there,” Susan observes. To tap into that reserve, the Andersons built an interface for ethicists to train AIs through a sequence of prompts, like a philosophy professor having a dialogue with her students.
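The interface itself isn’t reproduced here, so the sketch below is only a guess at its general shape: a question-and-answer session that asks an ethicist to describe a case, rate each candidate action against the same assumed duties, and name the right choice, yielding training cases for a learner like the one above. The prompts and case format are invented for illustration.

```python
# Purely illustrative; the Andersons' real interface is not shown here.
# The prompts, duty names, and case format below are assumptions.

DUTIES = ["benefit", "non_harm", "autonomy"]


def rate_option(label):
    """Ask the ethicist to rate one candidate action on each duty."""
    print(f"Rate option {label} from -2 (violates) to 2 (satisfies):")
    return {duty: int(input(f"  {duty}: ")) for duty in DUTIES}


def run_dialogue():
    """Walk the ethicist through cases, one question at a time."""
    cases = []
    while True:
        description = input("\nDescribe a case (press Enter to finish): ").strip()
        if not description:
            break
        ratings = {label: rate_option(label) for label in ("A", "B")}
        preferred = input("Which option is ethically preferable, A or B? ").strip().upper()
        cases.append({"case": description, "ratings": ratings, "preferred": preferred})
    return cases


if __name__ == "__main__":
    collected = run_dialogue()
    print(f"Recorded {len(collected)} cases for the learning algorithm.")
```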

The Andersons are no longer alone, and theirs is not the only approach. Recently, Georgia Institute of Technology computer scientist Mark Riedl has taken a radically different tack, teaching AIs to learn human morals by reading stories. From his perspective, the global corpus of literature has far more to say about ethics than the philosophical canon alone, and advanced AIs can tap into that wisdom. For the past couple of years, he’s been developing such a system, which he calls Quixote, after the novel by Cervantes.

Riedl sees a deep precedent for his approach. Children learn from stories, which serve as “proxy experiences,” helping to teach them how to behave appropriately. Given that AIs don’t have the luxury of childhood, he believes stories could be used to “quickly bootstrap a robot to a point where we feel comfortable about it understanding our social conventions.”