In medicine, false positives are expensive, terrifying, and downright unpleasant. Yes, the doctors eventually tell you that the follow-up biopsy after that blip on the mammogram puts you in the clear. But the intervening weeks are excruciating. A false negative is no better: “Go home, you’re fine, those headaches are nothing to worry about.”
Anyone who builds detection systems, whether medical tests, security-screening machines, or the software that helps self-driving cars perceive and assess their surroundings, knows (and fears) both scenarios. The trouble with avoiding both false positives and false negatives, though, is that the more you do to steer away from one, the closer you get to the other.
According to a preliminary report released by the National Transportation Safety Board last week, Uber’s system detected pedestrian Elaine Herzberg six seconds before hitting and killing her. It classified her as an unknown object, then a vehicle, then finally a bicycle. (She was pushing a bike, so close enough.) About two seconds before the crash, the system decided it needed to slam on the brakes. But Uber hadn’t set up the system to act on that decision, the NTSB explained in the report. The engineers prevented their car from making that call on its own “to reduce the potential for erratic vehicle behavior.” (The company relied on the car’s human operator to avoid crashes, which is a whole separate issue.)