Issues raised about no-fly list checks provide a nice lesson in the disparate impact of false positives and false negatives
The last-minute apprehension of the would-be Times Square bomber, who had already boarded an international flight despite being placed on the government’s no-fly list, provides one of the rare instances where real-time integration or data propagation is actually needed. Prior to this incident, airlines had a 24-hour window after an update to compare ticketed passengers against the no-fly list; that window is now down to two hours. Many aspects of the government’s anti-terrorism practices remain under scrutiny, not just the no-fly list, and apparent problems with checking the list also surfaced in connection with the attempted Christmas Day airliner bombing (although in that incident the question was why the person in question hadn’t been put on the no-fly list despite suspected terrorist affiliations).
There are pervasive shortcomings in the system, both in terms of performing as intended and in the recurring instances of non-threatening individuals flagged because something about them resembles someone legitimately on the no-fly list. This has been a problem for years, usually coming to public light when someone famous or important falls victim to false positives from terrorist detection activities. Senator Ted Kennedy ran into this sort of issue five times in a single month back in 2004. More recently we noted the case of European Member of Parliament Sophie In’t Veld, who, frustrated not only at being singled out for screening when traveling but also at being unable to find out what information the government had on her that kept flagging her name, actually sued the U.S. government (unsuccessfully) under the Freedom of Information Act to try to learn what was on file about her.
As frustrating as the experience may be for anyone incorrectly flagged by the no-fly system or mistaken for someone on a terrorist watchlist, the fact that non-threatening people are mis-identified as threats is partly by design. No detection system is perfect, so with any such system you have to expect some level of false positives (what statisticians call Type I error) and false negatives (Type II error). The relative effectiveness of information security measures such as intrusion detection systems or access control mechanisms like biometrics is sometimes described in terms of the rates of these two types of errors, or in terms of the crossover error rate, which is generally the point at which Type I and Type II error rates are equal. In many cases, having equal rates of false positives and false negatives is not the goal, because the potential impact of the two errors is not equivalent. In the case of terrorist watchlists, the government is comfortable with a relatively high false positive rate (that is, mistakenly flagging individuals as threats when in fact they are not) because the impact is (merely) inconvenience to members of the traveling public. What it wants to avoid is a false negative: the failure to identify a threat before that threat makes it onto an airplane. The fact that Faisal Shahzad was able to board is an example of a false negative; his name was on the no-fly list, but the airline apparently didn’t check it either at the time of purchase or before boarding. Tightening the time constraint within which airlines must check the no-fly list has the effect of reducing Type II errors, which is the primary goal of government anti-terror programs. The government is much less interested in reducing Type I errors, at least if there is any chance that reducing false positives (say, by removing names from watchlists when false positives are associated with them) might increase the chance of false negatives.
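To make the trade-off concrete, here is a minimal sketch in Python. It uses the standard library’s difflib.SequenceMatcher as a stand-in for whatever name-matching logic real screening systems use, and the watchlist, passenger records, and thresholds are all hypothetical; the point is only to show how loosening a match threshold reduces false negatives at the cost of more false positives.

```python
# A minimal, hypothetical sketch of the false positive / false negative
# trade-off in watchlist screening. The matching logic (difflib's
# SequenceMatcher), the watchlist, the passenger records, and the
# thresholds are all made up for illustration; real screening systems
# work differently.
from difflib import SequenceMatcher

WATCHLIST = {"faisal shahzad", "t kennedy"}  # names that should trigger a flag

# (name as it appears on the ticket, whether the traveler is actually a threat)
PASSENGERS = [
    ("faisal shahzad", True),    # exact match to a listed name
    ("faisel shazad", True),     # same listed person, misspelled on the ticket
    ("ted kennedy", False),      # resembles a listed name but is not a threat
    ("sophie in t veld", False),
    ("jane smith", False),
]

def flagged(name, threshold):
    """Flag a traveler if their name is close enough to any watchlist entry."""
    return any(
        SequenceMatcher(None, name, listed).ratio() >= threshold
        for listed in WATCHLIST
    )

def error_rates(threshold):
    """Return (false positive rate, false negative rate) at a given threshold."""
    fp = sum(1 for name, threat in PASSENGERS if flagged(name, threshold) and not threat)
    fn = sum(1 for name, threat in PASSENGERS if not flagged(name, threshold) and threat)
    negatives = sum(1 for _, threat in PASSENGERS if not threat)
    positives = sum(1 for _, threat in PASSENGERS if threat)
    return fp / negatives, fn / positives

# Lowering the threshold flags more travelers: false negatives (Type II errors,
# the government's priority) go down, while false positives (Type I errors,
# inconvenienced travelers) go up.
for t in (0.95, 0.90, 0.85, 0.75):
    fpr, fnr = error_rates(t)
    print(f"threshold={t:.2f}  false positive rate={fpr:.2f}  false negative rate={fnr:.2f}")
```

Sweeping the threshold traces out the two error curves; the crossover error rate is simply the point where they intersect, and the policy described above amounts to deliberately operating on the side of that point that keeps false negatives low, accepting more false positives in return.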