

Will your driverless car be willing to kill you to save the lives of others?

Survey reveals the moral dilemma of programming autonomous vehicles: should they hit pedestrians, or swerve and risk the lives of their occupants?


A driverless Volkswagen E-Golf in Wolfsburg, Germany. Photograph: Julian Stratenschulte/dpa picture alliance/Alamy


Ian Sample, Science editor (@iansample)

Thursday 23 June 2016

There’s a chance it could bring the mood down. Having chosen your shiny new driverless car, only one question remains on the order form: should your spangly, futuristic vehicle be willing to kill you?


To buyers more accustomed to talking models and colours, the query might sound untoward. But for manufacturers of autonomous vehicles (AVs), the dilemma it poses is real. If a driverless car is about to hit a pedestrian, should it swerve and risk killing its occupants?


The scenario dates back to 1960s philosophy and is clearly an extreme one. The odds of an AV facing such a black and white situation, and being aware of the fact in time to act, are extremely low. Yet when millions of driverless cars take to the roads, small odds can build up to a daily occurrence.


In a raft of surveys published in the journal Science on Thursday, researchers in the US and France set out to canvass opinion on how driverless cars should behave in no-win situations. Rather than clarifying the moral code that vehicles should be programmed with, the surveys highlight bumps in the road for the coming AV revolution.


In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plough into and kill 10 pedestrians. They agreed, too, that it was moral for AVs to be programmed in this way: it minimised deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.


But then came the first sign of trouble. When people were asked whether they would buy a car controlled by such a moral algorithm, their enthusiasm cooled. Those surveyed said they would much rather purchase a car programmed to protect themselves instead of pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.


The conflict, if real, could undermine the safety benefits of driverless cars, the authors say. “Would you really want to be among the minority shouldering the duties of safety, when everyone else is free-riding, so to speak, on your equitability and acting selfishly? The consequence here is that everyone believes that AVs should operate one way, but because of their decisions, they operate in a less moral, less safe way,” said Azim Shariff at the University of Oregon, who conducted the surveys with Iyad Rahwan at MIT, and Jean-Francois Bonnefon at the Institute for Advanced Study in Toulouse.

Further surveys pointed to even more trouble ahead. Before the study, the researchers suspected that the best way to make sure driverless cars are as safe as possible was through government regulation. But the surveys found most people objected to the idea. If regulations forced manufacturers to install moral algorithms that minimised deaths on the road, the majority of people said they would buy unregulated cars instead.




The finding highlights another way in which human decision making could undermine future road safety. Driverless cars could make the roads safer and greener. But only if humans allow them to. “Having government regulation might turn off a large chunk of the population from buying these AVs, which would maintain more human drivers, and thus more accidents caused by human error,” said Shariff. “By trying to make a safer traffic system through regulation, you end up creating a less safe one, because people entirely opt out of the AVs altogether.”


A driverless car will rarely find itself in a situation where the only options are for one group or another to be killed outright. More common will be scenarios where different actions inflict more or less harm on the groups. Even here, driverless cars will have to make tough decisions about who may be harmed most, potentially taking into account the young versus the old, and those who are following traffic rules versus those who are violating them. To foster public discussion, the scientists have created a website where people can create their own robotic ethical dilemmas.
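To make the kind of trade-off described above concrete, here is a minimal, purely illustrative sketch of how a "minimise expected harm" rule could be expressed in code. It is not drawn from the Science study or from any manufacturer's software; the group categories, weights and the choose_action helper are hypothetical assumptions for illustration only.

```python
# Purely illustrative sketch of a "minimise expected harm" rule.
# The actions, weights and scenarios below are hypothetical assumptions,
# not taken from the Science study or any real vehicle's software.

from dataclasses import dataclass

@dataclass
class Outcome:
    """People expected to be harmed if the car takes a given action."""
    action: str
    occupants_harmed: int
    pedestrians_harmed: int

def expected_harm(outcome: Outcome,
                  occupant_weight: float = 1.0,
                  pedestrian_weight: float = 1.0) -> float:
    """Weighted count of people harmed. Equal weights give the purely
    utilitarian 'minimise total deaths' rule discussed in the article."""
    return (occupant_weight * outcome.occupants_harmed
            + pedestrian_weight * outcome.pedestrians_harmed)

def choose_action(outcomes: list[Outcome], **weights) -> Outcome:
    """Pick the action whose predicted outcome carries the least weighted harm."""
    return min(outcomes, key=lambda o: expected_harm(o, **weights))

if __name__ == "__main__":
    # The extreme survey scenario: stay the course and hit ten pedestrians,
    # or swerve and sacrifice the single occupant.
    scenarios = [
        Outcome("stay_course", occupants_harmed=0, pedestrians_harmed=10),
        Outcome("swerve", occupants_harmed=1, pedestrians_harmed=0),
    ]
    # Equal weights: the utilitarian car swerves.
    print(choose_action(scenarios).action)                          # -> swerve
    # A strongly self-protective weighting, like the cars buyers said they
    # would prefer to purchase: the car stays the course.
    print(choose_action(scenarios, occupant_weight=20.0).action)    # -> stay_course
```

The only difference between the two calls is the weight placed on the occupant, which is precisely the gap the surveys exposed: people endorse equal weights in the abstract but prefer the self-protective weighting in the car they would actually buy.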


“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” the authors write in the journal. “As we are about to endow millions of vehicles with autonomy, a serious consideration of algorithmic morality has never been more urgent.”



Alan Winfield at the Bristol Robotics Laboratory said the paper exposed what we might have expected: that self-interest weighs more heavily than the greater good. As he puts it: “Driverless cars that might sacrifice the few to save the many are great, as long as you’re not one of the few.”

Both transparency and regulation are necessary for driverless cars to be a success, Winfield adds. “Without transparency you cannot regulate, and without regulation, driverless cars are unlikely to be trusted. Think of passenger airplanes. The reason we trust them is because it’s a highly regulated industry with an amazing safety record, and robust, transparent processes of air accident investigation when things do go wrong. There’s a strong case for a driverless car equivalent to the Civil Aviation Authority, with a driverless car accident investigation branch,” he said. “Without this, it’s hard to see how the technology will win public trust.”

https://www.theguardian.com/science/2016/jun/23/will-your-driverless-car-be-willing-to-kill-you-to-save-the-lives-of-others


