The race to fill our roads with driverless cars has hit the home stretch. But as this new technology gets closer to the finish line, we need to pump the brakes. Otherwise, we run the risk of careening into an ethical brick wall.
How should we program a car to make the same split-second, life-and-death decisions human drivers make every day? And how do we decide which judgment calls are the right ones? Is your life, or the life of a loved one, worth more than that of a pedestrian with a criminal past? Are children’s lives more important than those of the elderly or the obese? Answering these questions will be imperative as we move closer to a world in which intelligent machines hold immense control over our daily lives.
Ford set the pace in the automation race last week when it announced that it would develop a fleet of autonomous vehicles for a commercial ride-hailing service by 2021. Days later, Uber unveiled a partnership with Volvo to co-develop self-driving SUVs and said it would begin testing driverless taxis in Pittsburgh, Pennsylvania, within a few weeks.
Ford and Uber are joined by Google, which has been testing its self-driving vehicles on California roads since at least 2010 and struck a deal with Fiat Chrysler earlier this year to build autonomous minivans. In March, GM acquired the startup Cruise Automation, months after announcing a partnership with the ride-sharing service Lyft. Meanwhile, Volkswagen, Nissan, Audi, Lexus, Tesla, BMW, and Toyota are all reportedly testing autonomous vehicles of their own.
Together, these companies are vying to usher the auto industry into a future that until recently seemed more like science fiction than fact. Beyond following the rules of the road, driving a car demands a stream of complex decisions based on assumptions about, and reactions to, unforeseeable events. Computers are good at following rules, but this level of pattern recognition was long considered so difficult to automate that the economists Frank Levy and Richard Murnane, in their 2004 book The New Division of Labor, argued it would be nearly impossible to achieve. Technological advances in the years since have proven Levy and Murnane wrong, and now we face questions about how this technology will respond to the ethical and moral quandaries that arise on the road every day.
Proponents of driverless technology argue that autonomous vehicles will help reduce pollution, eliminate accidents caused by drunk and distracted drivers, improve traffic flow and fuel efficiency, and give the elderly and disabled better access to transportation. But automation raises major concerns that go beyond practical considerations and require us to ask deep questions about our values and how we make decisions.
Researchers at MIT addressed one such question in a study published in Science in June. “When it becomes possible to program decision-making based on moral principles into machines,” they asked, “will self-interest or the public good predominate?”
To answer it, the researchers added a twist to the classic trolley problem, a thought experiment that asks participants whether they would divert a runaway trolley away from five people in its path, killing an innocent bystander on the side track, or do nothing, sparing the bystander and letting the five die. In the MIT scenario, the trolley became a self-driving car, and participants were asked whether they favored cars programmed to save the most lives or to protect their own life or that of a loved one.
The results were mixed. In the abstract, people chose to save the most lives possible. But when they imagined themselves in the car, participants opted to protect their own interests.
While the study’s results will have implications for government regulation and consumer preferences, Michael Clamann, a senior research scientist at Duke University’s Humans and Autonomy Lab, says the researchers have ignored a major point. “A similar problem in manufacturing would be unacceptable,” he says. “In the past, we designed systems so they don’t have to make life or death choices. That we are having to make these decisions doesn’t make sense.”
What does make sense, Clamann insists, is stepping back and taking a look at the bigger picture. “Why is the car driving so fast it can’t stop? Why does it not know to slow down when near a school bus?” he asks. “You don’t just swap human for computer. We have to look at how we change the system.”
How should the system be changed to accommodate autonomous vehicles? Clamann argues for designing roads that keep pedestrians away from cars and for developing more connected cities in which self-driving cars can better communicate with one another and their surroundings. Others, like Jerry Kaplan, a fellow at the Stanford Center for Legal Informatics, advocate reserving certain lanes and roads for autonomous vehicles at peak hours. And government agencies like the Michigan Department of Transportation have offered a few low-tech solutions, calling on states to maintain the quality and uniformity of pavement markings.
The move toward autonomous vehicles raises difficult questions about the role of humans amid exponential technological progress. In the face of these dilemmas, one thing is clear: in the race to automate the world, we need to pause. We need to take the time to think about how humans will be affected by the rapid transformation of our everyday experiences, and how we can put sustainable values at the center of everything we do. From there, we can make more nuanced, thoughtful decisions about the role and behavior of machines and how they can best serve humanity.