How TSA's facial recognition program could protect privacy rights

June 20, 2024

The Federal Aviation Administration reauthorization bill passed last month, but one element was missing: a bipartisan amendment that would have paused the TSA's ongoing facial recognition program due to concerns about travelers' privacy rights.

There are legitimate reasons for implementing the TSA's facial recognition technology. Ideally, it would speed up the flow of the 2 million passengers who travel through TSA checkpoints each day while improving identity verification. As airports grow more crowded and identity fraud increases, implementing state-of-the-art security protocols is essential.

But our faces are our most personal and identifiable features, more so than our names, fingerprints, or even our voices. This makes facial recognition technology a prime target for bad actors. As a privacy professional, it's my job to advocate for the protection of sensitive data. And as AI cloning technology improves and consumers increasingly use their faces as passwords, safeguarding the privacy and security of biometric data is more crucial than ever.

The TSA's facial recognition program (currently in use at 84 airports, with plans to expand to more than 400) is still in its pilot stages. However, the agency has indicated that this technology may one day be the norm.

The privacy advocacy community doesn’t have to see this as a failure. Privacy rights and security protections are not mutually exclusive. The TSA can adopt this program while also protecting the privacy rights of travelers. But the agency must take a page out of the privacy best practices handbook—implementing strict data governance protocols and oversight, and offering more reasonable and accessible opportunities to opt out of the program. Here’s what that would look like. 

Strict data governance protocols

The TSA says it will not store the biometric data collected and used for identity verification at security checkpoints. One exception is data shared with and used by the TSA's parent agency, the Department of Homeland Security (DHS), to test the efficacy of its facial recognition technology.

The DHS said in January that it has protocols in place to define what biometric data the TSA can share for testing, how it is securely transferred, as well as who can access and use it. It also said that any data saved for evaluation has strict retention periods, after which the department will delete it. These are all promising signs that the TSA takes biometric data collection seriously. 

However, the DHS does have a history of security lapses. In 2018, a data breach exposed the personally identifiable information (PII) of over 240,000 current and former employees. In 2019, hackers stole thousands of traveler photos. I’m hopeful that the DHS learned from these incidents and improved its data security protocols. Yet opting not to store traveler information at all—a practice known as data minimization in the privacy world—remains the safest option. 

Providing clear and fair opt-outs

I recently traveled through Amsterdam's Schiphol Airport, which uses biometric data verification at passport checkpoints. While a large notice informed me of my privacy rights, I was forcefully herded into the biometric verification line. When I visited the website listed on the airport signage to learn more, I found only a generic overview of my GDPR rights. A security agent even reprimanded me for taking a photograph of the privacy notice. I left with the sense that while I theoretically had the option to opt out, doing so would be a major inconvenience.

The TSA has an opportunity to make its consent practices a model by comparison. For example: 

  • Adding clear and abundant signage both in person and on the TSA’s website that describes the program and travelers’ right to opt out 
  • Making airport staff available to answer clarifying questions or provide translated documents 
  • Creating separate, clearly marked, and well-staffed lines for travelers choosing to participate or opt out 
  • Providing travelers an opportunity to revoke consent without facing consequences

When the TSA designs an appropriate system for opting out, the agency must also consider seniors (who may be unfamiliar with what biometric data collection entails), minorities (who are at greater risk of misidentification by AI systems), and non-English speakers or those for whom English is a second language (who may struggle to understand checkpoint signage).

Routine audits and public oversight 

Even the most robust data governance programs need routine audits and third-party oversight. The TSA should conduct routine privacy impact assessments (PIAs) to examine the risk potential of data collection and storage, and verified third parties, such as a public oversight committee, should review these PIAs. Privacy rights groups like EPIC and the CDT, along with the 14 senators who raised concerns about the TSA program, are all excellent candidates for such a committee. An oversight committee would also help build trust in the program, so more travelers feel comfortable opting in.

Wide adoption of facial recognition technology is inevitable, but that doesn’t mean we have to be complacent about how organizations collect and govern our biometric data. The rollout of this facial recognition program will set a baseline standard for how other federal agencies adopt and govern similar tools. If privacy-protective best practices are the norm from the very beginning, future iterations of this technology will face the same set of checks and balances. 

The TSA has the opportunity to set a new standard for America's security. Will the agency take it? 
