Imagine this scenario: You buy a self-driving car, say, like Google’s (http://en.wikipedia.org/wiki/Google_driverless_car), and you’re driving along at 65mph. The brakes fail. The sensors tell the car that if it continues to drive forward, 5 people in the crosswalk will die. If it cuts the wheel in either direction, it can avoid them but will certainly kill another person walking on the sidewalk, nowhere in the current path of the car.
How do you program the car? Do you tell it: don’t take action, let what happens, happen? Or do you tell it: take an action and surely kill someone, but save more?
Does it matter who the people are?
What do you do?
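For concreteness, the two possible instructions to the car could be caricatured as two decision policies. This is only an illustrative sketch in Python, assuming a toy model of the scenario; the class and function names are invented for this example and are not part of any real vehicle’s software:

```python
# Hypothetical sketch: two ways a car might be told to choose.
# "Outcome" and both policy functions are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str   # e.g. "stay_course" or "swerve"
    deaths: int   # people killed if this action is taken

def consequentialist_choice(outcomes):
    """Take whichever action kills the fewest people."""
    return min(outcomes, key=lambda o: o.deaths).action

def categorical_choice(outcomes):
    """Refuse to actively redirect harm: never swerve into a bystander."""
    for o in outcomes:
        if o.action == "stay_course":
            return o.action
    return outcomes[0].action

# The scenario above: stay the course and 5 die, or swerve and 1 dies.
scenario = [Outcome("stay_course", 5), Outcome("swerve", 1)]
print(consequentialist_choice(scenario))  # swerve
print(categorical_choice(scenario))       # stay_course
```

The point of the sketch is that the programmer cannot dodge the question: whichever function ships in the car is a moral stance, written down in advance.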
Answer 1: Act, and Kill Fewer

If we kill the one person so five can live, I point you to Pol Pot, who said in a public radio address on May 10, 1978, “Exterminate the 50 million Vietnamese… and purify the masses of the [Cambodian] people.” That is, he thought it better to kill every Vietnamese person; even if Cambodians died in the attempt, there were more Cambodians, so they, the ‘superior’ people, would survive.
Compare this to the story told by Rav Nosson Tzvi Finkel, the late Rosh Yeshiva of the Mir Yeshiva in Jerusalem, Israel, who said the message of the Holocaust was that when one blanket was available to cover five people in the freezing cold, the blanket was shared: “It was during this defining moment that we learned the power of the human spirit, because we pushed the blanket to five others.”
. . . but what if we’re forced to make the choice? We can’t choose to kill, because doing so means using our value judgments to decide who should live and who should die, and we have seen what happens when we allow people to make such value judgments, well . . .
Answer 2: Don’t Act, but Kill More
Better not to act, you say? Kill the five? How can you kill five over one? How do you know which to kill? Does it make a difference to you if the one person who would live is on the verge of curing cancer and will save millions? Does it make a difference to you if one of the five is a mass murderer condemned to death anyway? Should it make a difference to the programming of the automated car (assuming such programming is possible, and why not?)?
What if you’re watching this out-of-control car and you can push an old blind man into its path? This will save the other five.
What if he has 1 month to live?
What if your act would save not 1, but 500 people? Would you be a hero or a murderer? Would you still say, don’t act?
Does it make a difference if you got the car into the situation to begin with (e.g. excess speeding, failure to maintain the brakes) or were merely an onlooker when the moment of decision arrived?
Suppose the car is programmed with patient information: there are currently no organ donors for five patients who will die. The driver of the car is healthy. Can he, who got himself into this ethical dilemma in the first place, be smashed into a brick wall, killing him but providing organs to the five people he would otherwise have killed, or who would otherwise have died?
If you say to the above, “Sure, we can punish the guy who created the someone-is-going-to-die situation by having him be the person who dies and, on top of that, by using his organs to save five others,” then let’s change another variable: why can’t we just have the car drive him to the hospital, where men with guns drawn will be waiting to harvest his organs to save the five others? We can save five people’s lives with this one man’s life! Would that be okay?
The above answers are not easy. In modern philosophy, this is consequentialist moral reasoning (morality based on the consequences of an act) versus categorical moral reasoning (morality based on certain duties and rights). The strict side of each places us in dilemmas we don’t want to be in.
This fact pattern is discussed in the Talmud:
“Two people were traveling, and [only] one of them had a canteen of water. [There was only enough water so that] if both of them drank they would both die, but if one of them drank [only] he would make it back to an inhabited area [and live]. Ben Petura publicly taught: ‘Better both should drink and die than that one see his friend’s death,’ until Rabbi Akiva came and taught: ‘Your brother should live with you’ (Vayikra 25:36) – your life takes precedence over your friend’s.” (Bava Metzia 62a)
Here, the variable is changed a bit because the question is about protecting your own life rather than another’s. It seems that Rabbi Akiva wins the day with his argument that your life always takes precedence and we cannot make moral choices about the lives of others. When forced, we make no choice. Still, in the Holocaust story above (see “Answer 1”), we “share the blanket.” This, in fact, according to Rav Berel Wein, is how we do quite a lot of things.
I have grappled with the forced ethical dilemma above from the standpoint of Jewish law, and I have no clear answer. It seems I’m in good company. See http://www.vbm-torah.org/halakha/lifeboatethics.htm for an intricate discussion of these dilemmas in Jewish law. Also, take a look at the “Justice” series by Michael Sandel, a professor of ethics at Harvard, who discusses these issues based on modern philosophers.
I wonder if such questions have ever been posed to Rav Yitzchok Zilberstein, an expert in Jewish medical ethics?