Drivers Prefer Autonomous Cars That Don't Kill Them - InformationWeek. The coming autonomous cars are creating legal and moral dilemmas that must be resolved before we drive them into the future.

You may remember the sci-fi movie I, Robot, in which a "utilitarian moral doctrine" is described. In practice, the only way an autonomous device, be it a car or a robot, can make decisions is by following a utilitarian model. In the movie, a robot saves a drowning police officer and leaves a child to drown in her car, having compared the relative utilitarian merits of saving each.

In reality, decisions about life and death need to be informed by human emotion and analysis. At least, in a future most of us would want to live in, humans will still inhabit a moral universe that is enhanced, not controlled, by technology.

So, before you get into that autonomous vehicle, ask the builder what kind of moral model the car has been programmed to follow. Will it kill you to save any human in another car? Who gets to choose?
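The utilitarian calculus the movie dramatizes can be reduced to a crude comparison: rescue whoever has the better estimated odds of survival. The sketch below is purely illustrative, not anything a real vehicle runs; the names and probabilities are the hypothetical figures quoted in the film's dialogue.

```python
# A crude, purely illustrative sketch of a utilitarian rescue decision:
# pick whichever person has the highest estimated survival probability.
# Names and numbers are hypothetical, echoing the movie's dialogue.

def choose_rescue(candidates):
    """Return the candidate with the highest estimated survival chance."""
    return max(candidates, key=lambda c: c["survival_probability"])

candidates = [
    {"name": "police officer", "survival_probability": 0.45},
    {"name": "child", "survival_probability": 0.11},
]

print(choose_rescue(candidates)["name"])  # → police officer
```

The point of the example is how little the model sees: a single number per person, with no room for the human judgment the article argues should stay in the loop.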