At some point in the nearer-than-might-be-comfortable future, an autonomous vehicle (AV) will find itself in a situation where something has gone wrong, and it has two options:
either it can make a maneuver that will keep its passenger safe while putting a pedestrian at risk, or it can make a different maneuver that will keep the pedestrian safe while putting its passenger at risk.
What an AV does in situations like these will depend on how it’s been programmed: in other words, what ethical choice its software tells it to make.
To understand how users feel about AVs making ethical decisions, Jean-Francois Bonnefon of CNRS in France, Azim Shariff of the University of Oregon, and Iyad Rahwan of the MIT Media Lab conducted a series of online surveys. The surveys posed questions about AVs in ethical quandaries, and asked how the ethical decisions an AV is programmed to make would influence users' perceptions of the vehicles.
In total, the researchers surveyed nearly 2,000 people across six online studies, all of which led to this conclusion:
“Although people tend to agree that everyone would be better off if AVs were utilitarian (in the sense of minimizing the number of casualties on the road), these same people have a personal incentive to ride in AVs that will protect them at all costs. Accordingly, if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so.”