During a conversation with a lead engineer working on the Google self-driving car project, it was mentioned that the car would be programmed to consistently break the speed limit. On average, the car will travel 10 mph over any posted speed limit. Why design a car to deliberately break the law? Safety, primarily: since nearly every human-operated car around the Google car will likely be speeding, travelling slower than surrounding traffic would put the car's passengers at greater risk of an accident.
While this seems like a sensible, if alarming, design decision on the part of Google engineers, the choice to build a system that subverts our laws raises important questions about how we build ethics into our functional requirements. The question is well-framed by Patrick Lin, in an article for Wired titled "The Robot Car of Tomorrow May Just Be Programmed to Hit You". The article presents the reader with a very specific edge case: if a Google car is going to crash and is in the middle lane between two cars, how does it decide which car to crash into?
For instance, an algorithm could be built into the Google car to read sensor data to determine the size and make of the two cars. In our scenario, let’s say on the left side is a Ford Explorer, and on the right side is a Smart Car. Conceivably, the algorithm would determine that the Explorer would be better equipped to handle a crash, so the car is steered into the SUV, leaving the Smart Car untouched.
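To make the edge case concrete, here is a minimal sketch of the target-selection logic described above. Everything in it is a hypothetical illustration: the `VehicleReading` type, the crashworthiness scores, and the function names are invented for this example and do not reflect any real self-driving car software.

```python
from dataclasses import dataclass

# Rough crashworthiness scores by vehicle class; higher means better
# equipped to absorb a collision. Values are invented for illustration.
CRASHWORTHINESS = {
    "suv": 0.9,
    "sedan": 0.6,
    "microcar": 0.2,
}

@dataclass
class VehicleReading:
    """Hypothetical sensor-derived summary of a neighboring vehicle."""
    make: str
    vehicle_class: str

def choose_crash_target(left: VehicleReading, right: VehicleReading) -> VehicleReading:
    """Steer toward the vehicle judged better able to absorb the impact."""
    left_score = CRASHWORTHINESS.get(left.vehicle_class, 0.5)
    right_score = CRASHWORTHINESS.get(right.vehicle_class, 0.5)
    return left if left_score >= right_score else right

# The scenario from the text: a Ford Explorer (SUV) on the left,
# a Smart Car (microcar) on the right.
target = choose_crash_target(
    VehicleReading(make="Ford Explorer", vehicle_class="suv"),
    VehicleReading(make="Smart Car", vehicle_class="microcar"),
)
print(target.make)  # the algorithm systematically picks the SUV
```

Note how little code it takes to encode the bias the next paragraph describes: any car classified as an SUV is always selected over a smaller vehicle.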
The logic of this edge case is clear: "The Explorer is better equipped to handle a moving collision, so it should be targeted for a crash rather than the Smart Car, which would likely crumple and kill the driver." The only problem is that this approach introduces biases into the Google car's decision-making, in essence targeting SUVs for crashes rather than any other car. Knowing this, SUV sales may suffer, as people avoid purchasing a vehicle that could be targeted in a collision. Do we really want to unleash a legion of cars equipped with advanced targeting systems?
The answer is obviously "no," but what alternatives to this edge case are we left with? There are no good alternatives, and no single correct moral choice, in this scenario, but some choice will have to be made. This is the sort of problem that robust requirements analysis can identify early, allowing the problem to receive full consideration before anything gets built or tested. Essentially, this killer car is a requirements problem. If we don't ask it to select a target, it won't. If we ask it to speed, it will speed. But what should we specify if no good choices present themselves?
In situations like this, the worst thing a product could do is hide this messy moral calculus from its users. Anyone acting as a representative of a product's users must understand the impact of hidden features on the user's experience. To this end, the best thing a product can do is expose settings that give users the power to choose the outcome they feel most comfortable with.
"As a Google Car driver, I must have the ability to disable crash-optimization settings, so my car doesn't choose a target during a collision."
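The user story above could be satisfied by something as simple as a preferences flag that the collision-handling logic consults before doing anything else. This is a minimal sketch under assumed names (`DrivingPreferences`, `collision_response`, and the fallback behavior are all hypothetical, not a real product's API):

```python
from dataclasses import dataclass

@dataclass
class DrivingPreferences:
    """Hypothetical user-facing settings for the car.

    When crash_optimization_enabled is False, the car never ranks
    neighboring vehicles as crash targets; it falls back to a neutral
    behavior such as maximum braking in its own lane.
    """
    crash_optimization_enabled: bool = True

def collision_response(prefs: DrivingPreferences) -> str:
    """Decide the collision strategy based on the user's settings."""
    if prefs.crash_optimization_enabled:
        return "select-target"
    return "brake-in-lane"

# A driver who opts out, per the user story above:
prefs = DrivingPreferences(crash_optimization_enabled=False)
print(collision_response(prefs))  # brake-in-lane
```

The point of the sketch is not the mechanism, which is trivial, but the requirements decision it encodes: the moral calculus becomes a visible, user-controlled setting rather than a hidden behavior.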