Self-Driving Cars Can Be Programmed To Make Moral Decisions

MedicalResearch.com Interview with:
Leon Sütfeld
The Institute of Cognitive Science
University of Osnabrück 

MedicalResearch.com: What is the background for this study? What are the main findings?

Response: Self-driving cars, and especially future fully autonomous cars, pose a number of ethical challenges. One of these is making the “right” decision in a so-called dilemma situation, in which a collision is unavoidable (or highly probable) but a choice can be made among several possible collisions. Our study assesses the behavior of human participants in such dilemma situations and evaluates algorithmic models that are trained on this data to make predictions.

Our main findings are that in a controlled virtual reality environment, the decisions of humans are fairly consistent and can be well described by simple value-of-life models.

MedicalResearch.com: What should clinicians and patients take away from your report?

Response: Our study shows that algorithmic ethical decision making with value-of-life models is generally possible and should therefore be considered for use in self-driving cars. These models have the added advantage of being quite transparent, and their decision-making process is comprehensible to humans. At the same time, they are in principle equipped to deal with uncertainties and with different probabilities of injury or fatality. This distinguishes the general approach of probabilistic models from systems based on categorical rules, such as “a human's well-being must always be favored over an animal's well-being.”
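To illustrate the distinction drawn above, the following is a minimal sketch of how a probabilistic value-of-life model might weigh collision options, in contrast to a categorical rule. All names, categories, and weights here are hypothetical assumptions for illustration, not the study's fitted parameters.

```python
# Hypothetical sketch of a value-of-life model. The categories and
# weights are illustrative assumptions, not values from the study.

from dataclasses import dataclass

# Assumed relative value-of-life weights per obstacle category.
VALUE_OF_LIFE = {"adult": 1.0, "child": 1.2, "dog": 0.3, "inanimate": 0.0}

@dataclass
class Option:
    """One possible collision in a dilemma situation."""
    target: str     # obstacle category
    p_harm: float   # estimated probability of injury or fatality

def expected_harm(option: Option) -> float:
    # Probabilistic scoring: weight the value of life by the
    # probability that the collision actually causes harm.
    return VALUE_OF_LIFE[option.target] * option.p_harm

def choose(options):
    # Pick the collision with the lowest expected harm.
    return min(options, key=expected_harm)

# A certain collision with a dog versus a small risk of injuring an
# adult: the probabilistic model can prefer the small risk to the
# human, whereas a categorical "humans over animals" rule cannot.
dilemma = [Option("dog", 1.0), Option("adult", 0.1)]
print(choose(dilemma).target)  # -> adult
```

The point of the sketch is that a single expected-harm score lets the model trade off outcome severity against likelihood, which a purely categorical rule cannot do.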

MedicalResearch.com: What recommendations do you have for future research as a result of this study?

Response: We do not see this study as delivering a final solution to the problem, but rather as a baseline for future research into methods for assessing and modeling human moral behavior in this kind of situation and potentially others. There are quite a few important aspects of such dilemma situations that we intentionally left out of this study in order not to overload the experimental design: for example, the number of people involved, the question of self-sacrifice, and considerations of fairness (e.g., when swerving onto the sidewalk).

MedicalResearch.com: Is there anything else you would like to add?

Response: We believe that a public debate on the matter of ethics for autonomous vehicles is very desirable, since this is the first time that algorithmic ethical decision making will make its way into our daily lives. Important points of discussion would include, for example, to what degree and in which cases categorical rules make sense, and where probabilistic systems might be necessary to reach reasonable decisions. A further question is whose moral judgment or behavior self-driving cars should follow. Do we want them to adhere to some “population mean” of moral values, or do we want a rather small group of experts to make that call? And can a person even be an “expert” in ethics, if our moral code is mostly based on a form of intuition? We thus also encourage the media and politicians to take part in this debate.

MedicalResearch.com: Thank you for your contribution to the MedicalResearch.com community.

Citation:

Leon R. Sütfeld, Richard Gast, Peter König, Gordon Pipa. Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure. Frontiers in Behavioral Neuroscience, 2017; 11 DOI: 10.3389/fnbeh.2017.00122

Note: Content is not intended as medical advice. Please consult your health care provider regarding your specific medical condition and questions.


Last Updated on July 7, 2017 by Marie Benz MD FAAD