
Question

In 1842, a ship struck an iceberg, and more than 30 survivors were crowded into a lifeboat intended to hold 7. As a storm threatened, it became clear that the lifeboat would have to be lightened if anyone were to survive. The captain reasoned that the right thing to do in this situation was to force some individuals to go over the side and drown. Such an action, he reasoned, was not unjust to those thrown overboard, for they would have drowned anyway. If he did nothing, however, he would be responsible for the deaths of those whom he could have saved. Some people opposed the captain's decision. They claimed that if nothing were done and everyone died as a result, no one would be responsible for these deaths. On the other hand, if the captain attempted to save some, he could do so only by killing others, and their deaths would be his responsibility; this would be worse than doing nothing and letting all die. The captain rejected this reasoning. Since the only possibility for rescue required great efforts of rowing, the captain decided that the weakest would have to be sacrificed. In this situation it would be irrational, he thought, to decide by drawing lots who should be thrown overboard. As it turned out, after days of hard rowing, the survivors were rescued and the captain was tried for his action. Did the captain make the right decision? Why or why not? Which ethical theory or theories could be applied here?

Explanation / Answer

Utilitarianism is an ethics of welfare. It holds that the moral worth of an action is determined solely by its contribution to overall utility, that is, its contribution to the happiness and satisfaction of the greatest number. The captain's reasoning in the lifeboat case is essentially utilitarian: by sacrificing the weakest few, he maximized the number of people who could be saved.

Put simply, utilitarianism is the belief that the best action is the one that maximizes “utility”. Utility does not have a single strict definition, but Jeremy Bentham (the founder of utilitarianism) described it as maximizing pleasure over pain. By this he meant that “the greatest happiness of the greatest number is the measure of right and wrong.” In the trolley problem described below, the utility gained from saving five workers is greater than the utility gained from saving the one worker, and hence a utilitarian would let the one worker die.

This is the core of consequentialism: whether an act is morally right depends solely on its consequences, or on the goodness of those consequences.

Consider one version of the trolley problem: a runaway trolley is heading down the tracks toward five workers who will all be killed if the trolley proceeds on its present course. Adam is standing next to a large switch that can divert the trolley onto a different track. The only way to save the lives of the five workers is to divert the trolley onto another track that has only one worker on it. If Adam diverts the trolley onto the other track, this one worker will die, but the other five workers will be saved. In a second, “footbridge” version of the problem, the only way to stop the trolley is to push a large stranger off a footbridge into its path; he will die, but his body will stop the trolley and the five workers will be saved.

The trolley problem highlights a fundamental tension between two schools of moral thought. The utilitarian perspective holds that the most appropriate action is the one that achieves the greatest good for the greatest number. The deontological perspective asserts that certain actions, such as killing an innocent person, are simply wrong, even if they have good consequences. In both versions of the trolley problem, utilitarians say you should sacrifice one to save five, while deontologists say you should not. The objection raised against the captain, that killing some to save others makes their deaths his responsibility in a way that letting everyone die would not, is a deontological objection of exactly this kind.

Psychological research shows that in the first version of the problem, most people agree with utilitarians, deeming it morally acceptable to flip the switch, killing one to save five. But in the second version of the problem, people lean deontological and believe it’s not acceptable to push a stranger to his death – again killing one to save five.

Critics of the trolley problem say it is too unrealistic to reveal anything important about real-life morality. But the rise of drones and self-driving cars makes the dilemma perhaps more relevant than ever. For example, should a self-driving car protect the life of its passengers even at the expense of a greater number of pedestrians? Here too, our intuitions are inconsistent: we want other people’s cars to maximize the number of lives saved, but we think our own car should protect us at all costs. As our technologies become increasingly capable of making moral decisions, understanding our own moral intuitions becomes all the more crucial.