For a particular class, I had to write a paper titled "Something that would be considered morally impermissible in the 22nd century". Immediately, I thought of self-driving cars. Although it wasn't a particularly well written paper, it does make a good case for self-driving cars.
As of this writing, there are approximately 30,000 deaths due to cars in the United States each year. To put that number in perspective, one is more likely to be in a fatal vehicular accident than to be a victim of homicide, overdose on heroin, or be injured by an intentional fire. Even worse, vehicles cause more deaths per year than those three causes combined. These deaths aren't the fault of mechanical error, animal intervention, or even dangerous weather conditions: they are caused by humans. Approximately 94% of traffic accidents are primarily caused by human error. This raises the question: how much safer can an algorithmic, non-self-aware autonomous vehicle be compared to a human?
Because self-driving cars are in their infancy (Carnegie Mellon's first recorded self-driving technology dates back to 1984), it would seem almost unfair to compare such a computer to a human driver. Yet although humans had a century's head start, self-driving car projects have already proven to be better drivers. Google's well-known self-driving car project (and their PR department) can attest to such a claim:
"We just got rear-ended again yesterday while stopped at a stoplight in Mountain View. That's two incidents just in the last week where a driver rear-ended us while we were completely stopped at a light! So that brings the tally to 13 minor fender-benders in more than 1.8 million miles of autonomous and manual driving — and still, not once was the self-driving car the cause of the accident." (Jacquelyn Miller, Google spokeswoman)
Google is not the only company interested; Tesla's Autopilot software is expected to be fully autonomous (and commercially available) come 2017. Along with Google and Tesla, BMW, Mercedes-Benz, and Ford have publicly claimed to be working on self-driving capabilities. There is no dispute: self-driving cars will be radically safer than human drivers, and they will be available soon. Assuming standard S-curve technology adoption, a significant drop in price, and some Luddite-esque opposition, mainstream adoption should take fifty years at most. Now, what about a hundred years?
Within a hundred years, human-driven cars should be illegal; this is not a moral dilemma, this is a "saving over 3,000 lives a day" solution. Every time a person gets in a car, chooses to drive, and causes an accident, that person chose to risk causing that accident. It would come as no surprise if people in the 22nd century looked at 21st-century driving as "barbaric". Looking strictly at the statistics and the logical arguments, driving in the 22nd century would not only be morally impermissible, it would be illegal.