What should a driverless car do when presented with a life-or-death scenario? Should it save its own driver by swerving into another car to avoid an accident, or spare the driver of the other vehicle by swerving away from it? That decision would have to be programmed into the car, and it raises serious moral dilemmas.
Driverless cars are one of the promises the future holds for us. A few years ago, Tesla drivers woke up to an update that allowed their cars to drive on their own under the supervision of the usual driver. Driverless cars have a whole host of applications that could make our lives easier than ever before. They will probably even change the way we think about work, most obviously for those who currently drive for a living (taxi drivers, truckers, bus drivers), but also for the individual commuting to work.
If you no longer need to focus on driving on your way to work, what is to keep your employer from demanding that you use that time to get work done, especially if widespread city-wide wireless internet is in our future? Driverless cars may also widen the gap between the rich and the poor, especially at the onset of their use. But while these are all important considerations, we must not overlook the morality and mortality of these cars. In short, we need to recognize that these cars will have to choose whom to save and whom to kill if the situation demands it.
This is far from the first piece written about driverless cars and their need for a programmed kill decision, but much of the attention surrounding the issue has died down, and I would like to keep the conversation going. This is especially needed because these cars are apparently already making these sorts of decisions. The conversation essentially boils down to a moral dilemma that was argued for decades before the driverless car existed: the "Trolley Problems," which pose the choice between saving passengers or pedestrians based on the number of potential victims.
But do we have to hold the machines we create to the same standards of morality that we hold ourselves to? I think that would be much harder to attempt, and more problematic for us than for the car and its kill decision. Philosophers, activists, and lawmakers have been trying to define moral compasses for humanity since the beginning of written history, yet we still have no concrete agreement on what those morals are. Holding a machine to our standards of morality is just as problematic as holding ourselves to them in any uniform way, because we have not yet figured out exactly what those standards are. The Trolley Problem is still debated by professors, students, and philosophers everywhere today.
Thus, I think we need to create a moral code for machines that is separate from humanity's. It can be based on the tentatively agreed-upon ideals of human morality, but for the reasons above, it could never be fully parallel to it. My position, then, is that these machines should always privilege saving the greatest number of humans possible, even at the expense of their own passengers. This is exactly where the hard division between humanity and machine has to stay. The passenger and owner of the vehicle may feel entitled to have the car protect them at all costs, rather than the two or more other people involved in a hypothetical accident. But that is an individual human's perspective on a problem that I think should be decided entirely by the machine.
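To make the rule concrete, here is a minimal sketch of the "save the greatest number" policy described above. The scenario model, function names, and numbers are purely illustrative assumptions for this article, not any manufacturer's actual implementation.

```python
# Hypothetical sketch of a "minimize total harm" decision rule.
# All names and the scenario model are illustrative assumptions.

def choose_action(outcomes):
    """Pick the maneuver whose predicted outcome harms the fewest people.

    `outcomes` maps each possible maneuver to the number of people
    predicted to be harmed if that maneuver is taken.
    """
    # The passenger counts the same as anyone else here: the rule
    # deliberately ignores who is inside the vehicle.
    return min(outcomes, key=outcomes.get)

# Example: staying on course endangers three pedestrians, while
# swerving endangers only the single passenger.
scenario = {"stay_course": 3, "swerve": 1}
print(choose_action(scenario))  # swerve
```

The key design choice is in the comment: because the count is anonymous, the car's owner gets no special weight, which is precisely the division between human and machine morality argued for above.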
If you are interested in viewing some of these situations and judging what a machine should do in them, be sure to check out MIT's Moral Machine online. It is a very interesting and thought-provoking experiment that also contributes to MIT's research.
If you liked this article, follow us on Twitter @themerklenews and make sure to subscribe to our newsletter to receive the latest bitcoin, cryptocurrency, and technology news.