The morality of AI when it comes to mortality

Robots will inevitably be making more life-and-death decisions, from an autonomous vehicle's split-second maneuvers to a military drone determining whether a vehicle is a valid target. These decisions are challenging enough for a human. How will we program robots to make moral choices we can all live with? This brings me back to my undergrad Ethics in Society class and the impassioned debates over blowing up the fat man stuck in the mouth of the cave. The real question is: are you programming Kantian robots or utilitarian ones? Is moral relativism dependent on your operating system?
