Legal Brief: Facial Recognition Again Under Fire
This article originally appeared in the July 2020 issue of Security Business magazine.
I am the family historian. I take pictures and videos of the kids. I make digital picture shows and movies. With a daughter graduating from middle school this year, my wife recently asked me to make a slideshow with photos of my daughter. My usual method is to use Google Photos – because the fastest way to find photos of my daughter is to search on her face using Google’s amazing facial recognition technology.
Facial recognition technology employs biometric software to analyze a photo or video and create a map of an individual's face, which can then be used to identify that person. Not all uses of the technology, however, are as innocent as a family slideshow.
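The "map" described above is, in practice, a numeric vector (an embedding), and identification amounts to finding the enrolled face whose vector lies closest to the new photo's vector. The following sketch illustrates only that matching step; the function name, the toy 4-dimensional vectors, and the 0.6 distance threshold are all hypothetical stand-ins (real systems derive embeddings of 128 or more dimensions from deep neural networks).

```python
# Illustrative sketch of the matching step in facial recognition.
# The embeddings and threshold below are invented for demonstration.
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """Return the name of the closest enrolled face, or None if no
    gallery embedding lies within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        dist = np.linalg.norm(probe - embedding)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Toy 4-dimensional embeddings (real systems use far more dimensions).
gallery = {
    "daughter": np.array([0.9, 0.1, 0.3, 0.5]),
    "son":      np.array([0.2, 0.8, 0.7, 0.1]),
}
probe = np.array([0.85, 0.15, 0.35, 0.5])  # new photo, near "daughter"
print(identify(probe, gallery))             # prints: daughter
```

The threshold is where accuracy questions arise: set it too loosely and different people are matched (false positives); too strictly and the same person goes unrecognized.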
In recent years, facial recognition technology has drawn increasing scrutiny, as foreign governments use it to track ethnic minorities, large retailers deploy it to deter shoplifting, and domestic law enforcement agencies apply it to control border access and support criminal investigations.
As a former military and federal prosecutor, I want law enforcement to have all the tools it needs to keep us safe; however, the primary responsibility of any prosecutor is to pursue justice. When it comes to the use of facial recognition technology, determining what is justified is an evolving consideration. Recent events have heightened the attention on this important technology while also giving regulators and private companies cause for concern.
NIST Study Causes Concern
In a study published in December 2019 by the National Institute of Standards and Technology (NIST), facial recognition algorithms were tested on a database of 18 million photos of more than eight million people. Remarkably, the study concluded that Asian, African American and Native American people were up to 100 times more likely to be misidentified by facial recognition than white men. Women were more likely to be falsely identified than men, and the elderly and children were more likely to be misidentified than those in other age groups. Middle-aged white men generally benefited from the highest accuracy rates.
The NIST study corroborated earlier studies that also found high error rates; however, it examined a wide variety of technologies, and not all were beset with such errors. Nearly 200 algorithms were voluntarily submitted by nearly 100 private companies, academic institutions and other developers. Some had sharply lower error rates – suggesting that bias was not ubiquitous. Also, not every technology was included in the study, because not every developer participated. Amazon's cloud-based Rekognition technology, for example, was not offered for testing.
This leaves open the criticism, asserted by some, that the NIST study and previous studies are flawed because they used outdated algorithms and/or did not properly deploy the technology. Nevertheless, the prominence of the NIST study prompted concern among regulators and legislators. In the absence of federal regulation, some local municipalities – particularly in California and Massachusetts – banned the use of facial recognition by public officials.
In February 2020, two U.S. Senators introduced the Ethical Use of Facial Recognition Act, which would impose a temporary moratorium on federal use of facial recognition. According to its sponsors, the bill would protect the privacy of consumers from rapidly advancing facial recognition technology and data collection practices that heighten the risk of over-surveillance and over-policing, particularly in minority communities disproportionately impacted by the technology.
In addition to halting federal use of facial recognition technology, the legislation would prohibit state and local governments from using federal funds to invest in or purchase the technology. The bill includes limited exceptions for law enforcement use of facial recognition pursuant to warrants issued by a court.
Big Tech Joins In
Although the federal legislation has since stalled, the recent nationwide protests and the groundswell of support for social justice reform have inspired renewed attention – including from some very powerful forces.
In early June, IBM, Microsoft and Amazon announced that they were exiting the facial recognition business or temporarily restricting its use by law enforcement.
On June 8, IBM CEO Arvind Krishna sent a letter to Congressional leaders stating that IBM had discontinued the use of its facial recognition software and opposed the use of the technology for mass surveillance, racial profiling, and other violations of basic human rights and freedoms. IBM called on Congress to adopt a national policy to bring greater transparency and accountability to policing.
Microsoft made a similar commitment, stating, "We will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology."
Similarly, Amazon announced a one-year ban on the use by police of its facial recognition software.
New York Passes State Surveillance Law
Within days after the announcements by the big tech companies, the New York City Council on June 18 passed the Public Oversight of Surveillance Technology Act (POST Act), which requires the NYPD to reveal details on the facial recognition software, cellphone trackers and other surveillance tools it uses to monitor people.
In particular, the NYPD must: publicly describe its arsenal of surveillance tools; disclose whether a court has authorized it to use them; publicly post on its website rules regarding who can access its surveillance technology; and identify which entities outside the NYPD, including federal authorities, can access the data it collects.
Notably, the POST Act was first introduced in 2017, but did not gain momentum until the recent nationwide protests.
Facial Recognition is Not Going Away
Despite the initiatives by these powerful private and public forces, facial recognition technology is not going away. The big tech companies are not permanently discontinuing the development of these technologies; rather, they are calling on Congress to implement appropriate regulation. In any event, these companies are not the only purveyors of the technology. Other companies will fill the void.
Discontinuing the use of facial recognition by federal law enforcement agencies, such as the DEA and ICE, would be challenging (and perhaps unwise). Congress must also balance any regulatory initiatives prompted by recent events against the critical goals of maintaining law and order and combating terrorism.
For example, the 2005 REAL ID Act (anticipated to be fully implemented in 2021) requires all U.S. states and territories to verify that individuals hold only one government-issued license. Enacted in response to the events of 9/11, the law predicates that task on collecting and sharing biometrics, including facial recognition data. Its requirements cannot readily be met without the robust use of facial recognition technology.
Even amid a global pandemic, researchers are crawling the internet for photos of people wearing face masks to improve facial recognition algorithms. So, yes, the technology will continue to evolve and continue to be used. The questions are what role Congress will play, how bias will be mitigated, and whether we can balance the goals of law enforcement and public safety with the rights of all our citizens.
Meanwhile, as this issue is debated, I will still be making those picture slideshows of my kids – putting facial recognition to use to commemorate the important events of my life. If only all such uses of the technology were as easy.
Timothy J. Pastore, Esq., is a Partner in the New York office of Montgomery McCracken Walker & Rhoads LLP (www.mmwr.com), where he is Vice-Chair of the Litigation Department. Before entering private practice, Mr. Pastore was an officer and Judge Advocate General (JAG) in the U.S. Air Force and a Special Assistant U.S. Attorney with the U.S. Department of Justice. Reach him at (212) 551-7707 or by e-mail at [email protected].