How we attribute value to lives shapes the philosophical dilemmas encountered in policy making and in AI innovation. For instance, such reasoning bears on how self-driving cars should behave when facing real-life dilemmas, such as an unavoidable crash into either one person or a group of people. This paper examines the issue through the Rocks Case, a conflict of interest in which all the relevant parties are strangers and we can save either one person or a group of five. Two courses of action will be discussed: 1) “Ought to Save the Many” (OSM) and 2) “Permitted to Save the Few” (PSF). The position...