The proposed framework attempts to address privacy concerns that have dogged past counterterrorist data-mining programs like Total Information Awareness.
The report acknowledges the utility of a variety of technologies in the context of security, but cautions that counterterrorism programs need to be operated lawfully, with oversight, and with some recognition of the limits of technology.
Automated terrorist identification, the report says, “is neither feasible as an objective nor desirable as a goal of technology development efforts.”
“To address [the threat of terrorism], new technologies have been created and are creating dramatic new ways to observe and identify people, keep track of their location, and perhaps even deduce things about their thoughts and behaviors,” the report says. “The task for policy makers now is to determine who should have access to these new data and capabilities and for what purposes they should be used. These new technologies, coupled with the unprecedented nature of the threat, are likely to bring great pressure to apply these technologies and measures, some of which might intrude on the fundamental rights of U.S. citizens.”
Privacy is one such fundamental right, and the report finds that current government policy doesn’t respect that right sufficiently.
“The current policy regime does not adequately address violations of privacy that arise from information-based programs using advanced analytical techniques, such as state-of-the-art data mining and record linkage,” the report states.
Data mining techniques may have proven value in commercial contexts, but the report warns that using them to identify terrorists is less reliable and more prone to error.
“One might argue that the consequences of a false negative (a terrorist plan is not detected and many people die) are in some sense much larger than the consequences of a false positive (an innocent person loses privacy or is detained),” the report says. “For this reason, many decision makers assert that it is better to be safe than sorry. But this argument is fallacious. There is no reason to expect that false negatives and false positives trade off against one another in a one-for-one manner.”
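The report's point about unequal tradeoffs follows from base-rate arithmetic: when actual terrorists are vanishingly rare in a screened population, even an accurate classifier flags far more innocents than plotters. A minimal sketch, using entirely hypothetical numbers (the report gives no such figures):

```python
# Illustrative base-rate arithmetic for the false-positive problem the
# report describes. All numbers below are hypothetical assumptions for
# illustration, not figures from the report.

def screening_outcomes(population, targets, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives) for a screening program."""
    innocents = population - targets
    true_positives = targets * sensitivity             # targets correctly flagged
    false_positives = innocents * false_positive_rate  # innocents wrongly flagged
    return true_positives, false_positives

# Hypothetical: 300 million records, 1,000 actual terrorists, and a
# screening system that is 99% sensitive with a 1% false-positive rate.
tp, fp = screening_outcomes(300_000_000, 1_000, 0.99, 0.01)
print(f"true positives:  {tp:,.0f}")   # 990
print(f"false positives: {fp:,.0f}")   # roughly 3 million innocents flagged
print(f"share of flags that are real: {tp / (tp + fp):.4%}")
```

Under these assumed rates, fewer than one flag in a thousand points at an actual terrorist, which is why reducing false negatives and reducing false positives do not trade off one-for-one.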
The report recommends that the government be particularly careful when using behavioral surveillance to predict dangerous intent. There’s no scientific consensus about whether such technology — brain scanning, for example — actually works, says the report.
“[P]lacing people under suspicion because of their associations and intellectual explorations is a step toward abhorrent government behavior, such as guilt by association and thought crime,” the report says. “This does not mean that government authorities should be categorically proscribed from examining indicators of intent under all circumstances — only that special precautions should be taken when such examination is deemed necessary.”
The report presents two major recommendations. It argues that the U.S. government should follow a framework, such as the one proposed in the report, to evaluate the effectiveness, lawfulness, and consistency with U.S. values of every information-based program for counterterrorism. And it calls for a periodic review of laws and policies related to privacy in light of changing technologies and circumstances.
The NRC plans to discuss its findings in a one-hour public briefing at 12:30 p.m. EDT today at the National Academy of Sciences in Washington, D.C. A live audio Webcast should be available at the National Academies site.