Heritage Expert Testifies to Congress on Facial Recognition Technology

HERITAGE IMPACT

Jul 26, 2021

Heritage Foundation research fellow Kara Frederick warned lawmakers that facial recognition technology is vulnerable to misuse and portends a slippery slope to mass surveillance.

In recent testimony before the U.S. House Subcommittee on Crime, Terrorism and Homeland Security, Frederick detailed how the technology can be misused and offered several recommendations to keep that risk low in America.

Facial recognition technology has proven its worth in the field, Frederick noted, and some of its uses serve legitimate public safety imperatives.

“Success stories [for facial recognition software] include the detection of Maryland’s Capital Gazette shooter in 2018 and the detention of at least three individuals using false passports at Dulles International Airport ... that same year. But,” she added, “the potential for abuse by agents of the state is high.” 

The risks are manifold, Frederick noted: inaccurate algorithms generate false positives, and data security vulnerabilities expose systems to hacks and leaks, opening avenues for the exploitation of immutable biometric data.

Frederick focused her testimony on three specific risks: the circumscription of civil liberties and individual privacy, the outsourcing of surveillance to unaccountable private companies, and the potential integration of face recognition data with other personally identifiable information through the expansion of mass surveillance. 

Frederick warned that plans to expand the outsourcing of domestic digital surveillance to private companies unencumbered by constitutional strictures raise Fourth Amendment concerns. She cited the FBI’s use of open-source facial recognition tools to detain American citizens and law enforcement’s use of the surveillance startup Clearview AI as examples.

In Frederick’s assessment, such impulses to expand and outsource domestic surveillance can lead to more pervasive methods of monitoring by law enforcement. She described a mutually reinforcing digital surveillance ecosystem that encompasses facial recognition technology and trends toward large-scale surveillance. As an example of these expanded surveillance practices, she cited the municipality of Peachtree Corners, Georgia, which is using AI-driven “smart” cameras to monitor social distancing and mask-wearing. “And once these powers expand,” she warned, “they almost never contract.”

Authoritarian abuse of facial recognition technology abroad should serve as a cautionary tale for American lawmakers, she said. Beijing uses facial recognition systems to monitor its own population, determine ethnicity, and imprison Uighur minorities in reeducation camps. Pro-democracy protesters in Hong Kong hid their faces and used lasers to thwart such monitoring. Russian officials have used facial recognition technology to identify and jail dissidents as recently as this year.

To constrain abuse and bound expansion by government agencies, Frederick testified, a secure, privacy-protecting framework for the use of digital data obtained through facial recognition technology is required.

To protect citizens’ privacy, Frederick recommended that Congress: 

  • Establish a federal data protection framework with appropriate standards and oversight governing how U.S. agencies—federal, state, and local—may collect, store, and share facial recognition data. 

  • Ensure any U.S. identity management system used by government actors is secure and reliable, based on proper standards and measurements, and in accordance with National Institute of Standards and Technology guidelines. 

  • Enforce data protection inspections and oversight among all parties. 

  • Update the timelines for publishing and updating all federal agencies’ Privacy Impact Assessments and System of Records Notices to require dissemination every six months while a program is in use.

Citing near-historic lows in public trust in the U.S. government to do “what is right,” Frederick cautioned that “the unfettered deployment of these technologies by government entities will continue to strain the health of the body politic without immediate and genuine safeguards.”

Frederick’s full testimony, delivered during the July 13 subcommittee hearing titled “Facial Recognition Technology: Examining Its Use by Law Enforcement,” can be found here.