Correcting False Positives: Redress and the Watch List Conundrum



June 17, 2005

Authors: Paul Rosenzweig and Jeff Jonas

If Osama bin Laden presented himself for boarding at New York's La Guardia airport tomorrow, carrying a ticket issued in his own name, would he be stopped and arrested? One would hope so, because his name is so well known that every Transportation Security Administration (TSA) screener in America would recognize it.

But what of an al-Qaeda operative whose name is not so widely and publicly spoken of? What of, for example, Abu Musab al-Zarqawi, the alleged mastermind behind the Iraqi insurgency? Would he be stopped? Nobody knows for sure.

Thousands of people with known or suspected relationships to terrorism can board America's commercial aircraft as passengers without the risk of being singled out by the TSA for detention or secondary screening. The "no fly" and "selectee"[1] watch lists being provided to the air carriers for passenger screening are reported to be a fraction of the actual number of subjects the government considers too risky to be permitted to travel to the United States.

As the TSA adds new names to the "no fly" and "selectee" lists, this may not, however, be an unalloyed good. One of the consequences will be more false positives-that is, more instances in which people who are traveling are confused with those on the list (i.e., they are "wrongly matched") and, less frequently, instances in which people who are actually on the list contend they are not terrorists and should not be listed (i.e., they are "wrongly listed").

Why is this so? Why are people more likely to be inconvenienced? Because the existing matching system works primarily on the basis of a loose,[2] name-only matching algorithm. And, unfortunately, today the name is often the only comparable data point between two systems (e.g., the Terrorist Screening Center's watch list and the airline's passenger reservation list). So long as the system relies on limited data points (i.e., name only), there will be false positives (and the even more troubling false negatives-that is, failures to identify a known terrorist because of the limited accuracy of name-only comparison).[3]
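To make the mechanics concrete, the following is a minimal, purely illustrative sketch of how a loose, name-only matcher behaves. The normalization rules and the similarity threshold are our own assumptions for illustration, not a description of any deployed TSA or Terrorist Screening Center algorithm.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Crude normalization: lowercase, drop punctuation, collapse spacing."""
    return " ".join("".join(c for c in name.lower() if c.isalnum() or c.isspace()).split())

def loose_name_match(watch_name: str, passenger_name: str, threshold: float = 0.8) -> bool:
    """Return True if two names are 'close enough' under a fuzzy ratio.

    A loose matcher deliberately tolerates spelling variants ("Mohamed" vs.
    "Mohammed"), which is exactly why it produces false positives: many
    innocent travelers have names nearly identical to those of listed subjects.
    The 0.8 threshold is arbitrary, chosen only for this illustration.
    """
    a, b = normalize(watch_name), normalize(passenger_name)
    return SequenceMatcher(None, a, b).ratio() >= threshold

# A spelling variant of the same name matches...
print(loose_name_match("Mohamed Al-Saiyad", "Mohammed Al-Saiyad"))  # True
# ...and so does a different person who merely has a similar name.
print(loose_name_match("Mohamed Al-Saiyad", "Mohammed Al-Sayed"))   # True
```

With richer comparable data points (date of birth, address, and the like), many of these collisions could be resolved automatically; with the name alone, they cannot.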

More broadly, the new TSA program, Secure Flight, is just the first iteration of many potential watch-listing missions. If practicable, we can anticipate the use of watch lists in other circumstances. Just as the TSA will check watch lists for airplane passengers, it is quite likely that watch lists will be used to check the identity of those seeking access to secure locations (like airport tarmacs or nuclear power plants). Thus, the watch list paradigm promises a hopeful technological response to the problem of terrorism-if the redress problems can be solved.[4]

The Problem of Errors and Redress

This poses a conundrum. What are we going to do about the false positives? What, in other words, will the government do if someone is repeatedly screened or denied access to a plane in error? What if someone is denied a hazardous materials transportation license because of concerns derived from a security watch list? What forms of process will be provided to allow redress of grievances advanced by those who believe that the government has made a mistake (as, inevitably, it will)? And if a mistake is found, what process and technical means can be used to correct the error? The absence of any concrete set of proposals addressing this question troubles many-civil libertarians and conservatives alike.

Both to be politically saleable, and because the correction of error is simple justice, any screening system must provide a robust mechanism for the correction of false positive identifications. People's gravest fear is being misidentified by an automated system. The prospect of being forever a screening candidate, of not being allowed to fly, of being denied a privilege, or of being subject to covert surveillance based on a computer-generated caution derived from watch list comparisons is rightly troubling. Moreover, it is a waste of finite resources. When false positives can be eliminated conclusively, investigative effort can be focused on those instances where uncertainty is warranted.

Of course, the same possibility exists in the "real world"; individuals become subjects of suspicion incorrectly all the time. What makes the difference is that in a cyber-system, the "suspicion" may persist-both because the records generating the suspicion are often persistent and uncorrected and, especially, because the reason for the suspicion is a broad concern for preempting future attacks, which is likely to be less susceptible of refutation. By contrast, in the real world, law enforcement eventually comes to a conclusion and "clears" the suspect of connection to a specific prior criminal act.

Hence, rather than relying on the nature of investigation to correct false positives, we will need a formal process, including administrative, technical, and, if necessary, judicial mechanisms, for resolving inaccuracies and ambiguities within watch list systems.

The greatest difficulties of all in developing a watch list system may lie in the construction of such a redress process. It must be effective in clearing those wrongly matched or wrongly listed. But at the same time, it must have protections against being spoofed, lest terrorists go through the clearing process to get "clean" before committing wrongful acts.

But equally problematic, the process will likely not be able to meet traditional standards of complete transparency in an adversarial context. Often, disclosure of the information, its source, and the algorithms that lie behind the watch-listing system will undermine its utility for identifying suspicious individuals. Yet the failure to disclose this information will deprive an affected individual of a full and fair opportunity to contest a misidentification.

What will be necessary are the concepts of calibrated and substituted transparency, where alternate mechanisms of dispute resolution are used. Those are fairly rare in American legal structures and will require careful thought. By and large, these mechanisms are policy and process related and are external to the technologies themselves. But they must be developed at the same time as the technology, for the absence of an answer to the redress question may doom even the most compelling watch list system.

This paper is an attempt to identify in some detail the components of an idealized redress process for a watch list system. As an idealized, notional system it is one of general utility, capable of being used (with modification) in other applications. We will, at times, explain our proposal within the context of the Secure Flight program[5] because it is a contemporary example of the watch-listing mission, and because it is one with which every American who travels by plane will, if the system is deployed, have direct experience. But in the end, the proposals we make are, in our view, of broad utility.[6]

A Technical Primer

To understand the nature of the redress problem, one first needs a working understanding of how the matching process operates. Imagine that the federal government has a watch list that contains the three entries listed in Figure 1.

[Figure 1: A sample watch list of three entries; record #1 is a watch-listed subject with a name similar to "Mohammed Al-Saiyad."]

Now imagine that an airline reservation is made for:

Mohammed Al-Saiyad
1208 Ashton Lane
Santa Rosa, CA
(707) 555-1212

Since the only comparable value is the name, and since loose name-matching is used (i.e., "Mohamed" will also be read as "Mohammed" and other cognates), this passenger will be considered a possible match to the watch list, subject to secondary screening but not, unless additional information is available, detention.
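Building on the loose_name_match helper sketched earlier, the following hypothetical comparison shows why the outcome can only be a "possible match": the two records share no field but the name, so nothing else is available to confirm or exclude the identification. The record contents are illustrative assumptions.

```python
# Hypothetical records: the two schemas share only the "name" field.
watch_list_entry = {"name": "Mohamed Al-Saiyad", "source": "originating agency", "year_of_birth": 1975}
reservation = {"name": "Mohammed Al-Saiyad",
               "address": "1208 Ashton Lane, Santa Rosa, CA",
               "phone": "(707) 555-1212"}

def screen(reservation: dict, watch_list: list) -> str:
    """Name-only screening: with no other comparable fields, a loose name
    match can only yield 'possible match' (secondary screening), never a
    confident identification or a confident exclusion."""
    for entry in watch_list:
        if loose_name_match(entry["name"], reservation["name"]):
            return "possible match: route to secondary screening"
    return "no match: board normally"

print(screen(reservation, [watch_list_entry]))
```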

Now let's assume that these two parties are in fact different people-that is, that the traveler is "wrongly matched" with the terrorist. The passenger, Mohammed Al-Saiyad, now aware of his mistaken identification, seeks redress. How can that work?

An Outline for a Solution

Any appropriate redress mechanism will need to solve two inter-related yet distinct problems. First, it will need to accurately and effectively identify false positives without creating false negatives in the process. For though we know that any watch list system will make mistakes by wrongly singling out an individual for adverse consequences, we also know that a watch list system may err by failing to correctly identify those against whom adverse consequences are warranted. And we also know that any redress mechanism must be as tamper-proof and spoof-proof as possible, for it is likely that those who are correctly placed on a terrorist watch list will use any redress process available to falsely establish that they should not be subject to enhanced scrutiny.

Second, any redress mechanism must effectively implement the requisite corrective measures. Already we have seen situations in which acknowledged "wrongly matched" errors in watch list systems cannot be readily corrected because of the technologically unwieldy nature of the information systems at issue. Even when the TSA has recognized that a given person (for example, Senator Edward Kennedy) is repeatedly wrongly matched to a "no fly" list entry, correction proves challenging because one cannot simply remove the more ambiguous watch list entry.[7] Thus, the legal, policy, and technological mechanisms must be built into the watch-listing system to allow for the effective handling of redress.

Identifying the False Positive

Consider first the problem of identifying false positives, those wrongly matched or wrongly listed. We can identify, broadly, four separate questions that an effective redress system will need to address:

  • What are the conditions for consumer inquiry? Who can query and challenge a watch listing?
  • Who is responsible for administering the redress system?
  • What are the applicable rules of transparency? Who gets what information relating to the watch listing and under what conditions?
  • What is the process by which redress will operate?

Each of these questions requires a fairly detailed set of answers. Without being overly prescriptive, the following outlines a reasonable set.

Conditions of Consumer Inquiry
There are several conceivable scenarios under which a watch-listed person might discover that fact and seek to initiate a challenge. The most obvious would be if someone suffered an adverse screening event-a person is arrested, detained, searched, denied a privilege, or, in relation to Secure Flight, identified for secondary screening at every attempt to board an airplane. A second scenario might involve a consumer-initiated inquiry-just as some consumers routinely check their credit ratings, others might routinely check to see if they are on a watch list.

The optimal redress system must therefore answer first the question of initiation: Under what circumstances may a consumer begin an inquiry as to watch list status?

A portion of the answer to this question is easy: Any individual adversely affected by presence on a watch list should have a right to invoke the redress mechanism. In such circumstances there does not appear to be any value in limiting the medium by which the inquiry is made; inquiries should be accepted in person, by correspondence, or via the Internet. Indeed, in many instances, the inquiry will be made at the point of consequence-that is, immediately upon being flagged for additional attention while attempting to board a plane.[8]

A more difficult question is posed by the issue of whether to allow self-initiated inquiries, especially if the potential source of such inquiries is broadened to permit queries from non-U.S. Persons. With that broadening, a system intended to allow redress for individuals who may be potentially subject to adverse consequences could easily become a tool for terrorists. Putative terrorists might masquerade as such inquirers, seeking to determine in advance whether their attempt to pass through a watch-listing system would be successful.

Several possible solutions to this problem present themselves:

  1. One might prohibit all self-initiated inquiry and access to the redress mechanism and permit only those adversely affected to challenge a listing (just as the Fair Credit Reporting Act enables a consumer to get a free credit report if adversely affected by a credit check). This would prevent all possibility of spoofing the system through self-initiation but would deny preemptive access to redress for those as yet unaffected. Depending upon our collective assessment of the threat level, this may be the option favored by cautious policymakers.
  2. One might allow a periodic consumer inquiry (akin to the once-per-year rule under the Fair Credit Reporting Act) but limit the availability of a self-initiated inquiry and redress to U.S. citizens. This has the advantage of significantly limiting the likelihood of terrorist misuse while fostering a respect for American interests.[9]
  3. One might permit non-U.S. citizens to pursue self-initiated inquiry and redress but only under tightly controlled circumstances-for example, through embassies and only through in-person inquiry (thus presenting the putative terrorist with the specter of immediate arrest should the watch list check prove positive, and thereby deterring attempts to game the system).

Redress Channels
Where does the inquiring party go to make the inquiry? Consider that most multi-party watch-listing systems will likely have, at a minimum, three distinct zones in which information persists: 1) an originating system where the watch list record came into existence; 2) a centralized aggregating and disseminating service (for example, the Terrorist Screening Center) that receives watch list data from one or more originating systems; and 3) one or more end users (for example, the commercial airlines).

Determining the proper entry point for a redress inquiry is complicated by another factor-in many, indeed perhaps most, instances the affected individual will not know the originating source of the information and may not even know the identity of the aggregator. In the context of an adverse consequence, the only component that the individual will be able to identify with certainty is the end user who imposes the adverse sanction.

From this analysis comes a simple rule: Each end user must be obliged to provide an entry point for complaints. In an idealized system, that entry point would involve ready access to an independent component of the centralized watch list aggregator (or of the originating system, if no such aggregation point exists), not operationally associated with the organizational components that use the watch list process. That disassociation, in an ombudsman-like format with attendant independence, will provide a procedural assurance to the consumer that his redress inquiry will be handled objectively and in a timely fashion. The creation of an independent organizational component will also facilitate resolution of inquiries, as the ombudsman will be familiar with the identity of information originators, information flows, and the watch-listing standards defining the minimum thresholds for watch list inclusion.[10]
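One way to picture this routing rule in software terms is the sketch below: every end user exposes a complaint entry point, but the complaint itself is forwarded to an independent ombudsman function attached to the aggregator. The zone names, class names, and the inquiry structure are our own illustrative assumptions, not a description of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RedressInquiry:
    complainant: str
    end_user: str          # the only zone the traveler can identify with certainty
    description: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AggregatorOmbudsman:
    """Independent redress component housed at the watch list aggregator."""
    def __init__(self) -> None:
        self.docket = []

    def accept(self, inquiry: RedressInquiry) -> str:
        self.docket.append(inquiry)
        return f"inquiry {len(self.docket)} docketed; originating agencies will be consulted"

class EndUser:
    """Every end user (airline, licensing agency, secure facility) must expose an entry point."""
    def __init__(self, name: str, ombudsman: AggregatorOmbudsman) -> None:
        self.name, self.ombudsman = name, ombudsman

    def file_complaint(self, complainant: str, description: str) -> str:
        # The end user does not adjudicate; it forwards the inquiry to the
        # independent ombudsman, which knows the originators and listing standards.
        return self.ombudsman.accept(RedressInquiry(complainant, self.name, description))

ombudsman = AggregatorOmbudsman()
airline = EndUser("Airline X", ombudsman)
print(airline.file_complaint("Mohammed Al-Saiyad",
                             "repeatedly selected for secondary screening"))
```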

Conditions of Transparency
Perhaps the most challenging question to answer concerns the issue of transparency. How much information will be made public about the basis for being listed or matched? The fundamental problem is this: Complete transparency will foster complete accountability, and thus better accuracy in redress for wrongly matched individuals. Yet for those who are challenging their listing, complete transparency will utterly frustrate security, and the disclosure of sources and methods will compromise intelligence gathering and allow terrorists to game the system to avoid identification. Thus, we will need a concept of calibrated transparency, limited in context. We will also need a concept of substituted transparency, in which independent proxies for the affected individual are provided information that cannot be provided to the individual himself. To see how this might work, consider the following basic principles:

  • The degree of transparency to the affected individual can and should vary with the nature of the consequence imposed. The greatest level of transparency is appropriate for the most severe adverse consequences, such as arrest. Somewhat less transparency is necessary if the consequence is adverse and permanent, such as denial of a hazardous materials transport license or access to a secure facility. Still less transparency is necessary for transient consequences, as, for example, with secondary screening at the airport. And even less transparency would be appropriate when there is no appreciable adverse consequence, as in the case of a self-initiated inquiry. In short, the amount of disclosure should be graduated, depending in part on the nature of the consequence attendant to the watch list.
  • A related, perhaps more controversial, proposition is that American citizens and legal residents (U.S. Persons in legal terminology) should have greater rights of access to information concerning them than non-U.S. Persons. It may be that some will think non-U.S. Persons should be permitted no disclosure at all-maybe not even notification of their status. But to the extent that individuals are allowed access to security-related information concerning them, considerations of national interest suggest that the rights of Americans are, in this context, greater than those of non-Americans.
  • The degree of transparency will also vary based upon the nature of the information that led to the watch listing. Consider two distinct scenarios: In one scenario, Mohammed Atta is on a watch list because intelligence from captured al-Qaeda computers identifies him as a terrorist operative; in another, Michael Jones is on the same watch list because he once shared an apartment with Atta. Broadly speaking, the more specific the information about an individual and the more sensitive the source of that information, the less transparency that should be afforded to the affected individual. Conversely, the more attenuated the potential connection and the less sensitive the information involved, the greater the disclosure that would be appropriate. To be sure, this will vary by degree-information about Atta's financier is a more sensitive concern than that about his former roommate. But as a general proposition, the less privileged the connection, the greater the appropriate level of disclosure. For example: If the identification information at issue is such that it can be gleaned from the phone book or publicly available government records, it is less sensitive than if it is derived from an overseas electronic interception.
  • There seems to be little, if any, concrete basis for restricting information about the general architecture of any watch list system: identifying broadly the originating sources of information, which organizations perform the aggregation and dissemination function, and the identity of the end users. Though there may be instances in which disclosure of this architectural information should be restricted, those are likely to be rare and may be addressed on a case-by-case basis.
  • In all situations in which disclosure to the affected individual is limited, it is appropriate to consider alternate disclosure mechanisms. Even if disclosure cannot be made directly, there must be a way to provide some assurance of the accuracy of information. As we outline below, this will mean that during any review process an independent decision maker will need access to all of the underlying information and decisions.
  • This leads, inevitably, to the most important source of oversight: Congress. Since much of the operation of watch listing systems will involve classified information, the mechanism for oversight must account for that fact. But the fundamental point remains: Congress must commit at the outset to a strict regime of oversight of the watch list programs. This would include requiring immutable audit logs,[11] periodic reports on the technology's use once developed and implemented, periodic examination by the Government Accountability Office, and, as necessary, public hearings on the efficacy of the watch list system. Congressional oversight is precisely the sort of check on executive power that is necessary to ensure that watch list programs are implemented with the appropriate limitations and restrictions. Without effective oversight, these restrictions are mere parchment barriers. Although congressional oversight can sometimes be problematic, in this key area of national concern one can be hopeful that it will be bipartisan, constructive, and thoughtful. Congress has an interest in preventing any dangerous encroachment on civil liberties by any watch listing system.
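The graduated-transparency principle in the first bullet above can be pictured as a simple decision table. The disclosure levels, consequence categories, and thresholds below are purely notional assumptions of ours, offered only to suggest that "calibrated transparency" can be expressed as an auditable rule rather than left entirely to ad hoc judgment.

```python
# Notional disclosure levels, from most to least revealing.
FULL, SUMMARY, EXISTENCE_ONLY, NONE = "full", "summary", "existence only", "none"

def disclosure_level(consequence: str, us_person: bool, source_sensitive: bool) -> str:
    """Illustrative calibration: disclosure shrinks as the consequence becomes
    lighter, as the underlying source becomes more sensitive, and (per the
    second bullet above) for non-U.S. Persons."""
    by_consequence = {
        "arrest": FULL,
        "permanent denial": SUMMARY,          # e.g., hazmat license, facility access
        "secondary screening": EXISTENCE_ONLY,
        "self-initiated inquiry": NONE,
    }
    level = by_consequence.get(consequence, EXISTENCE_ONLY)
    if source_sensitive and level == FULL:
        level = SUMMARY                        # protect sources and methods
    if not us_person and level in (FULL, SUMMARY):
        level = EXISTENCE_ONLY                 # lesser access for non-U.S. Persons
    return level

print(disclosure_level("permanent denial", us_person=True, source_sensitive=False))     # summary
print(disclosure_level("secondary screening", us_person=False, source_sensitive=True))  # existence only
```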

The Redress Process
Finally, we turn to the most important question: What should be the scope and form of dispute resolution? Several factors inform the analysis.

First, and foremost, as we noted at the outset, the question of false positives is not unique to watch lists. Indeed, all law enforcement or intelligence activity will, on occasion, result in the identification of a subject who proves, upon closer examination, to have done nothing wrong. In this sense, the dilemma posed by the problem of false positives in watch-listing systems is nothing new. As we noted, though, the unique characteristics of cyberspace pose challenges for the redress process because of both greater persistence of suspicion and greater potential for liberty-impinging ambiguities.

But those distinctions should not, at the threshold, obscure a fundamental similarity to the problem. As a consequence, implementing laws or regulations should specify that, to the degree that the watch-listing context recapitulates problems already encountered in investigative activity, the law applicable to watch lists should embrace the same remedies that have been used in the past. Thus, for example, when the misidentification of a subject is the product of a good faith inquiry, the law currently allows little or no liability-for the good and sufficient reason of not wanting to deter good faith examination of criminal conduct.[13] All the more so, it would seem, for investigations of terrorist activity. However, as a general matter, the grossly or willfully negligent misidentification of a subject can, and should, subject one to tort remedies, just as it would outside the context of a watch-listing mission.[14] Thus, we do not think that the current legal régime for monetary and compensatory damages will need to change.

What will need to change are the rules relating to an individual's right to "correct" information in government databases concerning him. For those who are subject to "traditional" law enforcement or intelligence inquiry, to the extent that the inquiry relies upon information from already existing government databases, these individuals, even if later determined to have been mistakenly named as a subject, typically have no independent basis for seeking to correct the government databases themselves; the information contained in them was lawfully collected for other purposes and is not subject to correction. Thus, while the Privacy Act generally affords an individual the right to request amendment and correction of a record pertaining to him (and to sue if the government refuses to amend the record), law enforcement, classified, and intelligence records are exempt from this provision.[15]

Thus there will need to be an amendment to the Privacy Act (or alternate legislation) to permit the amendment and correction of law-enforcement/intelligence records in certain tightly controlled circumstances.[16] The outlines of such a system would include the following components.

To begin with, one should recognize the possibility of a swift, informal, administrative resolution of the issue. There should be available, where feasible, a redress process on-site at the first occurrence of adverse impact. In some situations, that process can definitively resolve identity questions in a manner that warrants permanent correction. It can, for example, conclusively determine that a 9-year-old girl, an 85-year-old grandmother, and a famous Senator are not terrorist threats. Available information might be readily provided by the passenger to resolve the ambiguity (for example, proof that the passenger's year of birth is 1961 while the terrorist's year of birth is 1975). In instances where this informal, first-tier review is conclusive, that remedy should be permanent and propagated through the system.[17]
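The year-of-birth example can be reduced to a tiny exclusion check, sketched below. The field names are hypothetical; the point is only that a single reliably documented attribute that conflicts with the watch-list record can conclusively rule out the match at the first tier.

```python
def conclusively_excluded(watch_entry: dict, passenger: dict) -> bool:
    """First-tier exclusion: if any attribute present in BOTH records conflicts
    (e.g., year of birth 1975 vs. 1961), the passenger cannot be the
    watch-listed subject, and the correction should be made permanent."""
    shared = (set(watch_entry) & set(passenger)) - {"name"}
    return any(watch_entry[k] != passenger[k] for k in shared)

watch_entry = {"name": "Mohamed Al-Saiyad", "year_of_birth": 1975}
passenger   = {"name": "Mohammed Al-Saiyad", "year_of_birth": 1961}
print(conclusively_excluded(watch_entry, passenger))   # True: not the same person
```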

Only if the informal first-tier mechanisms are unable to resolve the ambiguity should more formal processes be necessary. For those, as an initial matter, there should not be direct review by a court.

Our ground for this conclusion lies in the distinction between civil and criminal sanctions. Traditional American law makes court procedures dependent, at least in part, on the consequences that lie at the end of the process. Where the consequences are civil in nature-a prohibition on certain conduct, for example-the law generally allows a lower burden of proof (i.e., by a preponderance of the evidence) and often uses administrative rather than judicial procedures. By contrast, where criminal sanctions of imprisonment are involved, American law requires proof beyond a reasonable doubt and the provision of criminal judicial procedures. In the context of watch lists, the consequences in question will generally sound more in the nature of civil or administrative sanctions than in the nature of criminal ones.[18]

The implementing legislation or regulations should instead provide for administrative review of this essentially civil decision to impose collateral consequences. The administrative process would likely be resident with the independent group responsible for the redress process: for example, a centralized watch list dispute resolution clearinghouse for all homeland security applications. However distributed and wherever located, the process should:

  • Have the obligation to acknowledge and resolve any inquiry within a specified time frame (perhaps 90 days);
  • Capture, maintain, and publish metrics of its performance, including statistics about the number of inquiries, dispositions, average disposition time, ratio of disposition outcomes, and the like;
  • Be authorized, when uncertainty exists, to require the originating agency to provide, where possible, additional information to allow further particularization of the watch list identification;[19]
  • Maintain a detailed (and perhaps immutable) audit log of all its activities to facilitate external accountability and oversight; and
  • Be as transparent as possible in developing and implementing the redress process itself. It is to be expected, for example, that the agency publicly disclose the design details of the redress process.
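The immutable audit log called for above (and in the congressional-oversight discussion earlier) is commonly approximated with a hash chain, in which each entry commits to the one before it so that silent after-the-fact edits become detectable. The sketch below is a minimal illustration of that general technique, not a description of any system actually fielded.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log: each entry carries the hash of its predecessor,
    so any alteration of past entries breaks the chain and is detectable."""
    def __init__(self) -> None:
        self.entries = []

    def append(self, actor: str, action: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"ts": datetime.now(timezone.utc).isoformat(),
                 "actor": actor, "action": action, "prev": prev}
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("ombudsman", "inquiry #1 docketed")
log.append("ombudsman", "originating agency queried for additional attributes")
print(log.verify())   # True; tampering with any past entry would now return False
```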

If the initial administrative process does not satisfy the consumer inquiry, we envision permitting an appeal to an administrative hearing officer. At this administrative hearing the individual should have a panoply of due process protections, including the right to be heard and the right to be represented. In accord with the outline presented earlier, however, both at this level and at any subsequent appellate level, the degree of transparency will need to be limited. In particular, we envision a process by which the neutral hearing officer receives all classified information in camera and determines thereafter whether disclosure to the affected individual should be permitted.

This limitation on transparency need not be as onerous as it might appear. In the first instance, for example, the presumption should be in favor of disclosure, and limitations should be permitted only on a case-by-case basis. Thus, the default option should be full transparency. And in those instances where full disclosure cannot be permitted, the hearing officer will be in a position to craft limited disclosure that permits the affected individual to challenge his listing without necessarily needing to know all the details of how he came to be on the list. Defaulting to greater transparency will be more appropriate for those whose presence on a watch list is the product of associational correlations, as those correlations will often (though not always) be less sensitive than the information causing the listing of the underlying core suspect, and not indicative of future terrorist intent.

Finally, there should be a private right of action to appeal any adverse administrative decision to a federal district court. And there, unlike the normal case for the review of an administrative agency action,[20] the review by the federal court should be de novo.[21] We think the de novo standard is appropriate because the restrictions in question will often impinge on fundamental individual liberties (if only tangentially), such as the liberty to travel or to be granted some other privilege. One could, of course, imagine equivalent mechanisms for review that would be equally protective; the one proposed is merely one model.

In adjudicating any such case (through whatever mechanism is adopted), the subject on whom adverse consequences are imposed cannot be saddled with the burden of establishing his innocence. Such a showing is virtually impossible, as it would require proof of an almost unprovable negative. Thus, once a watch-listed subject comes forward with a prima facie case establishing a basis for believing that his continuing presence on any watch list is without foundation, the burden should shift to the government. In order to maintain an individual on any such list or continue the imposition of other collateral consequences, the government should be obligated to prove by clear and convincing evidence (as in the case of pretrial detention)[22] that: a) for significant intrusions such as a "no fly" determination, the subject poses a substantial risk to the community, or b) for more modest intrusions such as additional baggage screening, the subject poses a potential risk. Here, too, a panoply of due process rights (as with any civil case), subject to the limited transparency noted above, ought to be afforded the subject.

Correcting the Wrongly Matched

Having defined the redress process, one next must also devise a redress solution for those subjected to being repeatedly "wrongly matched." It will do little good to create a complex procedural mechanism if the watch list process is incapable of implementing corrective action.

What can be done to handle this scenario? One possibility is to require the wrongly matched traveler to carry a biometric "I am not that bad guy" certificate. That proposal, however, creates its own problems and an obligation that some might view as too onerous.

Here is one possible alternate course of action. Recall our earlier example of Mohammed Al-Saiyad, the non-terrorist living in Santa Rosa, California. Once it is established that this person is wrongly matched, the individual can provide the aggregating watch-listing entity with some additional personal identifiers, and this information can be added to the "screening list" (note that we are no longer calling this a watch list, as it is now used to disambiguate persons). This creates a screening list that comprises both the watch list and a list of other non-ambiguous, non-listed, "known" individuals. (See Figure 2.)

[Figure 2: The screening list, combining the original watch list entries with a new record #4, a vetted traveler record for Mohammed Al-Saiyad that includes his additional identifying attributes.]

Henceforth, when Mr. Al-Saiyad attempts to fly (and uses his address on the reservation), his airline reservation will be correctly matched to record #4 (a vetted traveler already determined not to be the similarly named person identified in record #1). In practice, the passenger seeking remedy might provide a different attribute or several attributes to enable this future disambiguation (for example, phone number, credit card number, frequent flyer number, etc.). Security is maintained because, notably, in this scenario only the record for Mr. Al-Saiyad has been remedied. If a future reservation is made using another name similar to both records #1 and #4 (for example, Mohamod Al-Sayed), then, if there are no additional attributes that resolve the identity exclusively to record #4, this would create another watch list match. And that is as it should be: Without the additional identifying information, it is possible that this reservation for Mr. Al-Sayed is that of the watch-listed individual in record #1 (though it may also be the vetted individual Mr. Al-Saiyad in record #4 or yet another wrongly matched party). The key point is that the vetted individual holds the information to disambiguate himself-and thus controls his own fate. And if the reservation is on behalf of yet a third individual, that person will be able to pursue the redress processes and have his own vetted identity added to the screening list.
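The screening-list behavior described above can be sketched as follows. The record contents are hypothetical, and the matcher reuses the loose_name_match helper from the technical primer; the essential idea is that a reservation resolves to the vetted record #4 only when it also carries one of the disambiguating attributes Mr. Al-Saiyad supplied during redress.

```python
screening_list = [
    # Record #1: the watch-listed subject (hypothetical contents).
    {"id": 1, "kind": "watch", "name": "Mohamed Al-Saiyad"},
    # Record #4: the vetted traveler, added through redress with extra attributes.
    {"id": 4, "kind": "vetted", "name": "Mohammed Al-Saiyad",
     "address": "1208 Ashton Lane, Santa Rosa, CA", "phone": "(707) 555-1212"},
]

def screen_with_vetted_records(reservation: dict) -> str:
    candidates = [r for r in screening_list
                  if loose_name_match(r["name"], reservation["name"])]
    for record in candidates:
        if record["kind"] == "vetted":
            # Resolve to the vetted record only if the reservation shares one of
            # the disambiguating attributes the traveler supplied during redress.
            shared = [k for k in ("address", "phone") if k in record and k in reservation]
            if shared and all(record[k] == reservation[k] for k in shared):
                return f"resolved to vetted record #{record['id']}: board normally"
    if any(r["kind"] == "watch" for r in candidates):
        return "possible watch list match: secondary screening"
    return "no match: board normally"

# Mr. Al-Saiyad books with his address on the reservation: resolves to record #4.
print(screen_with_vetted_records({"name": "Mohammed Al-Saiyad",
                                  "address": "1208 Ashton Lane, Santa Rosa, CA"}))
# A similar name with no disambiguating attributes still triggers a watch list match.
print(screen_with_vetted_records({"name": "Mohamod Al-Sayed"}))
```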

How to achieve this sort of error correction seamlessly? Recall that an idealized system has at minimum three distinct data zones: an originating system, an aggregation/dissemination service, and end users. The creation of the vetted identity record is best directed to the aggregation/dissemination service. In that way, once the person has been identified as wrongly matched, the solution to this condition can be transmitted to all end-user systems within this watch-listing system. Another advantage to applying the vetted record at the watch list aggregator level is that this prevents the self-disclosed enhancing attributes (e.g., address, phone, etc.) provided by the innocent consumer from being passed back to the originating intelligence and law enforcement entities.

In the system we envision, if a wrongly matched consumer is disambiguated from the watch list (while at the airport and after some delay), whenever possible this discovery should immediately flow to the watch list aggregator. If the informal processes are sufficient to prove that the individual is not the watch-listed party, there should be no need to require the consumer to initiate a redress process. This detection and correction mechanism alone has promise to significantly improve airport efficiency, particularly in relation to the burden on the system caused by those wrongly matching to the "selectee" list.

A suitable multi-party watch-listing system will require the following characteristics if the information it contains is to be capable of correction in the manner outlined:

Full Attribution
Any record containing information about an individual must carry with it full attribution. Each watch-listing record must also identify where it came from (the contributing organization); what originating system[23] and transaction number within that system are associated with the record; when the record was originally created; and, if relevant, when it was last updated or modified prior to its distribution. Any effective error correction will necessarily modify the original record on which the error was based. Without full attribution, changes cannot accurately be cascaded down the network to the watch list aggregating service. Furthermore, full attribution is also necessary during the redress process to allow the redress ombudsman to collaborate with data originators.
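A fully attributed record might carry provenance fields like those in the sketch below. The field names and values are our own illustrative choices, not a documented Terrorist Screening Center schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AttributedRecord:
    """A watch-list record that carries its own provenance (full attribution)."""
    record_id: int                   # identity within the aggregating service
    name: str
    contributing_org: str            # where the record came from
    originating_system: str          # the source system holding the master copy
    source_txn_id: str               # transaction number within that system
    created_at: datetime             # when the record was originally created
    last_modified: Optional[datetime] = None   # last update before distribution

record_1 = AttributedRecord(
    record_id=1,
    name="Mohamed Al-Saiyad",
    contributing_org="Originating agency (hypothetical)",
    originating_system="SOURCE-SYS-A",
    source_txn_id="TXN-0001",
    created_at=datetime(2004, 5, 1),
)
```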

Tethering
In addition, all data must be tethered to their originating source. In other words, using the full attribution characteristics of shared information, all published alterations to the relevant record(s) must be forwarded to all relevant subscribers and the originating source. If done correctly, this will ensure that all of the users in a particular subscription environment operate with updated, not outdated, values. In this way, any error corrections systematically approved will be propagated throughout the system.
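Tethering is, in effect, a publish/subscribe discipline keyed on the attribution fields above. The sketch below is a minimal illustration under our own assumptions: when the aggregator publishes a correction, every subscribing end user replaces its stale copy, identified by originating system and transaction number.

```python
class Subscriber:
    """An end user (e.g., an airline screening system) holding copies of records."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.records = {}   # keyed by (originating_system, source_txn_id)

    def apply_update(self, record: dict) -> None:
        key = (record["originating_system"], record["source_txn_id"])
        self.records[key] = record   # the stale copy is replaced, never left behind

class Aggregator:
    """Publishes corrections to every subscribing end user."""
    def __init__(self) -> None:
        self.subscribers = []

    def publish(self, corrected_record: dict) -> None:
        for subscriber in self.subscribers:
            subscriber.apply_update(corrected_record)

aggregator = Aggregator()
airline = Subscriber("Airline X")
aggregator.subscribers.append(airline)
aggregator.publish({"originating_system": "SOURCE-SYS-A", "source_txn_id": "TXN-0001",
                    "name": "Mohamed Al-Saiyad", "status": "vetted traveler"})
print(airline.records)
```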

Residual Information
One final point bears noting: the problem of residual information. In any system of records there will be secondary collections of records related to the initial watch-listed party (for example, while the original record was for Atta, secondary values may have been collected for his "financier" or colleagues, or roommates). These secondary records must also be tethered to the original source, and the secondary record collections should also be corrected whenever the underlying primary record is corrected.[24]
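Cascading a correction from a primary record to its secondary records can be pictured as a parent/child link keyed on the primary record's identifier, as in the brief sketch below; the linkage field and record contents are assumptions of ours for illustration only.

```python
# Hypothetical store: secondary records point at the primary they were derived from.
records = {
    "P-1": {"role": "primary", "name": "Primary subject", "status": "listed"},
    "S-7": {"role": "secondary", "name": "Former roommate", "derived_from": "P-1", "status": "listed"},
    "S-8": {"role": "secondary", "name": "Financier", "derived_from": "P-1", "status": "listed"},
}

def correct_primary(primary_id: str, new_status: str) -> None:
    """Correct the primary record and cascade the change to every secondary
    record derived from it, so residual information does not outlive the fix."""
    records[primary_id]["status"] = new_status
    for rec in records.values():
        if rec.get("derived_from") == primary_id:
            rec["status"] = f"review: primary record now '{new_status}'"

correct_primary("P-1", "delisted")
print(records["S-7"]["status"])   # review: primary record now 'delisted'
```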

The Problem of Uncertainty

The most difficult and challenging question arises when the results of a dispute over a listing are uncertain-that is, when, at the end of whatever process is adopted, the investigation does not "clear" an individual, but the evidence collected is of insufficient strength to allow for definitive action (such as arrest). Even after the greatest effort, it may be impossible for the originating agency to disambiguate and determine whether a particular individual is or is not a threat.

In other words, what happens if the answer after investigation is "maybe"? In that situation it would be irresponsible of the government to ignore the evidence (that is, the individual should be placed on some form of "watch list" because of valid suspicions that are insufficient to allow for prosecution). Yet it would be equally inappropriate for the individual to be permanently affected, perhaps without being advised of the effect. One can hope that such situations are few, but they may prove fairly commonplace.

It bears emphasis, however, how narrow the range of cases discussed here is. First, it involves only individuals initially identified on the basis of intelligence-gathered information. Second, it involves only those individuals as to whom a process of review and inquiry has validated the data to an extent that creates a level of concern. Third, it involves only those individuals as to whom, after subsequent investigation, the conclusion is still uncertain. And, fourth, it involves varying and sometimes minimal levels of residual suspicion. Some watch-listed individuals may be placed on a "no fly" list, but others on the "selectee" list may only have heightened screening of their bags and persons because the residual questions about them are comparatively less significant. If this system operates as envisioned, this narrow class of individuals will be one that most Americans will agree is justly subject to scrutiny and is not merely being scrutinized for random or invidious reasons.[25]

Nonetheless, in such situations, the ultimate burden should be on the government to justify any permanent or lengthy deprivation of civil liberties (again, remembering that all intrusions are not equal in nature). And the government should also be under an affirmative obligation to afford the investigated individual notice of the investigation and any inconclusive resolution. If, as a result of the investigation, the government believes it is appropriate to impose upon an individual a continuing adverse, non-punitive collateral civil consequence, it ought not to be allowed to do so without providing the individual with notice of that decision and due process.

Nor should it be able to enforce those consequences indefinitely. There ought to be a presumptive time frame, of perhaps 90 or 120 days after notification to the individual, within which the individual could be maintained on a watch list, or other collateral consequences imposed, before that decision is reviewed and confirmed (or rejected) again by an independent, neutral arbiter-that is, a judge. The time frame might be longer for less significant intrusions (such as enhanced baggage screening) or shorter for more intrusive ones (such as a "no fly" limitation).

Conclusion

Using watch lists to identify potential terrorists is a useful activity. If they work well, watch lists can provide an additional level of protection for America. But if poorly implemented, a watch list system is of little use. As a practical matter, if riddled with false positives with no way to correct for them in any efficient manner, it will not serve to direct scarce investigative resources, and as a political matter, it will not be accepted by the public.

A key component of the equation is a concrete, robust redress mechanism-one that allows for degrees of transparency, accuracy, timeliness, and a consumer's ability to correct errors and ambiguities. A watch-listing system with the sort of redress practices outlined here will provide significant protections to Americans while providing the government a viable means to address one aspect of the national security challenges at hand.

Paul Rosenzweig is Senior Legal Research Fellow in the Center for Legal and Judicial Studies at The Heritage Foundation. Jeff Jonas is an IBM Distinguished Engineer and Chief Scientist at IBM Entity Analytic Solutions, and was the founder of SRD. Our thanks to John Bliss, Jill Rhodes, and K.A. Taipale for their thoughtful review and comments on an earlier draft.
