Sounds ominous, doesn’t it? In fact, it makes perfect sense. It is a classic case of choosing between two evils: should a self-driving car plow into a group of school kids, or ram into a wall and potentially kill the driver? Would you buy a car that could potentially kill you?
This is reminiscent of the Trolley Problem, another gnawing ethical dilemma.
With the world increasingly entangled with AI and its social implications, the ethical dilemma of algorithmic morality is something I think we urgently need to address. We are probably still grossly under-equipped to have this conversation, given the general lack of awareness beyond what Hollywood blockbusters impart upon us. The conversation will be messy and yield no conclusive results, but if it prompts us to think, that’s already a start.