
Question

Some think that ethical targeting algorithms should prioritize protecting the passengers of the vehicle in cases of imminent accident. Others think that ethical targeting algorithms should do the opposite: prioritize protecting other drivers and pedestrians over the vehicle's own occupants. Patrick Lin defends the second of these two views:

If the car were charged with protecting other drivers and pedestrians over its own occupants—not an unreasonable imperative—then it should be programmed to prefer a collision with the heavier vehicle than the lighter one... This strategy may be both legally and ethically better than the previous one of jealously protecting the car's own occupants. It could minimize lawsuits, because any injury to others would be less severe. Also, because the driver is the one who introduced the risk to society—operating an autonomous vehicle on public roads—the driver may be legally obligated, or at least morally obligated, to absorb the brunt of any harm, at least when squared off against pedestrians, bicycles, and perhaps lighter vehicles. (72-73)

I want you to do two things: (a) explain Lin's argument in this passage, and (b) using at least two of the morally-relevant factors, discuss whether it supports the conclusion that ethical targeting algorithms should prioritize protecting other drivers and pedestrians over the vehicle's own occupants.

Explanation / Answer

Actions can be evaluated in various respects. When we evaluate them from the moral point of view, we can do this in two very different ways: we can consider them morally right or wrong, but we can also judge them morally good or bad. The two evaluations are logically independent of each other; as Lin emphasized, a clear distinction between the morally right and the morally good 'will do much to remove some of the perplexities of our moral thought.' The study of right and wrong action has been dominated by the opposition between teleology (consequentialism) and deontology.


Ethics as a discipline explores how the world should be understood and how people ought
to act. There are many schools of thought within the study of ethics, which differ not only
in the answers they offer, but in how they formulate the basic questions of understanding
the world and responding to the ethical challenges it presents. Most (though not all) work
in ethics, both academically and in the wider world, has a normative purpose: that is, it
argues for how people ought to act. But this normative work relies significantly, though
often invisibly, on descriptive arguments; before offering prescriptions for how to address a
given problem, scholars in ethics construct arguments for why it is both accurate and useful
to understand that problem in a particular way. This descriptive dimension of ethics is as
important as the normative one: we need to be able to describe situations in ethical terms
as well as to render judgment on them. Most approaches to understanding the world through
ethics adopt one of three major critical orientations: deontological ethics, utilitarianism
(sometimes called consequentialism), and virtue ethics. To understand and discuss the ethical
issues around AI, it is necessary to be familiar with, at a minimum, these three main approaches.

A further distinction matters here: the distinction between killing and letting die. There are two ways of grounding this distinction:
1. The distinction between doing and allowing, i.e. between what we actively do (harming) and what we omit to do (failing to help).

This is part of the Doctrine of Acts and Omissions (DAO), according to which there is an important moral
distinction between performing an action that has certain consequences, and omitting to do something
that has the same consequences; killing is worse than letting die because it consists in actively harming
someone, whereas letting them die consists in omitting to save them.

2. The distinction between intended and foreseen consequences.
If we kill, we intend the death. If we let die, the death is foreseen, but not intended.

Consequentialism: we measure the morality of conduct entirely by its consequences. Since killing and letting die can have the same consequences, on this view they are morally on a par.

With these distinctions in hand, turn to autonomous vehicles themselves. Our laws are ill-equipped to deal with the rise of these vehicles (sometimes called "automated," "self-driving," "driverless," and "robot" cars; I will use these terms interchangeably). For example, is it enough for a robot car to pass a human driving test? In licensing automated cars as street-legal, some commentators believe that it would be unfair to hold manufacturers to a higher standard than humans, that is, to make an automated car undergo a much more rigorous test than a new teenage driver.

Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, sometimes drivers might legitimately want to, say, go faster than the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, then they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when it’s not needed.

For the time being, the legal and regulatory framework for these vehicles is slight. As Stanford law fellow Bryant Walker Smith has argued, automated cars are probably legal in the United States, but only because of a legal principle that “everything is permitted unless prohibited.” That’s to say, an act is allowed unless it’s explicitly banned, because we presume that individuals should have as much liberty as possible. Since, until recently, there were no laws concerning automated cars, it was probably not illegal for companies like Google to test their self-driving cars on public highways.

Suppose an autonomous car is faced with a terrible decision: it must crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others, a sensible goal, which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.
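
To make this crash-optimization logic concrete, here is a minimal, purely hypothetical sketch in Python of what such a targeting rule might look like. The class, the attribute values, and the harm score are illustrative assumptions invented for this answer, not anything drawn from Lin or from a real vehicle's software.

from dataclasses import dataclass

@dataclass
class Obstacle:
    name: str
    mass_kg: float        # heavier vehicles absorb impact better
    safety_rating: float  # 0.0 (poor) to 1.0 (excellent); illustrative only

def choose_crash_target(left: Obstacle, right: Obstacle) -> Obstacle:
    # Hypothetical "minimize harm to others" rule described above:
    # prefer colliding with the heavier, more crash-worthy vehicle.
    def expected_harm(o: Obstacle) -> float:
        # Crude proxy: lighter and less crash-worthy targets are assumed
        # to suffer more harm in a collision.
        return 1.0 / o.mass_kg + (1.0 - o.safety_rating)
    return min((left, right), key=expected_harm)

# The Volvo SUV vs. Mini Cooper scenario from the text (figures are invented).
volvo = Obstacle("Volvo SUV", mass_kg=2100, safety_rating=0.9)
mini = Obstacle("Mini Cooper", mass_kg=1200, safety_rating=0.7)
print(choose_crash_target(volvo, mini).name)  # prints "Volvo SUV"

The point of the sketch is only that any such rule must explicitly rank potential crash targets, and that ranking is exactly where the trouble begins.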

But physics isn’t the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require deliberately and systematically singling out certain targets, say, large vehicles, to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

Some road accidents are unavoidable, and even autonomous cars can’t escape that fate. A deer might dart out in front of you, or the car in the next lane might suddenly swerve into you. Short of defying physics, a crash is imminent. An autonomous or robot car, though, could make things better.

While human drivers can only react instinctively in a sudden emergency, a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we are even aware of danger. It can make split-second choices to optimize crashes, that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases.

In constructing the edge cases here, we are not trying to simulate actual conditions in the real world. These scenarios would be very rare, if realistic at all, but they nonetheless illuminate hidden or latent problems in normal cases. From the above scenario, we can see that crash-avoidance algorithms can be biased in troubling ways, and this concern lurks in the background any time we make a value judgment that one thing is better to sacrifice than another.


Ethics Is About More Than Harm
Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who is wearing a helmet. A good algorithm would account for the much higher statistical odds that the biker without a helmet would die, and killing someone is surely among the outcomes auto manufacturers most desperately want to avoid.
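
Carried over to this scenario, the same kind of hypothetical scoring makes the bias mechanical: a rule that always targets whoever is likelier to survive will always steer toward the rider who wore the helmet. The probabilities below are invented solely for illustration.

# Continuing the illustrative sketch: a survivability-based rule applied
# to the two motorcyclists. The fatality odds are assumed, not real data.
def fatality_risk(wearing_helmet: bool) -> float:
    return 0.4 if wearing_helmet else 0.9

riders = {"helmeted rider": True, "helmetless rider": False}
# Target whoever has the lower fatality risk, i.e. the helmeted rider.
target = min(riders, key=lambda r: fatality_risk(riders[r]))
print(target)  # prints "helmeted rider"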

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person behaved less responsibly by not wearing a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists not to wear helmets, in order not to stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety, such as Volvo and Mercedes-Benz, may suffer if customers want to avoid being the robot car's target of choice.
