Alice enters a twelve-character alphanumeric passcode to access her account. Then she curses and re-enters a different twelve-character code several times. Numbers, capital letters, special characters, no repeats: it’s all just a little too much.
After this torturous series of steps, Alice receives a text message and enters another four-digit verification code to prove she has access to the phone number she listed. Finally, after fifteen minutes of hassle, Alice is securely logged in to her account.
Meanwhile, Bob touches a screen with his fingers. A camera turns on and verifies that the face peering at the screen belongs to the same person as the fingerprints provided. A moment later, Bob has secure access to the system.
The Benefits of Biometrics
Alice and Bob both logged in, but only one of them did so quickly and easily. The other struggled and cursed technology. Alice used the industry standard of a passphrase plus identifiers, a multi-step process that can be socially spoofed. She may even have jotted the new passphrase down on a piece of paper, not realizing that doing so exposed her to a security vulnerability.
Bob used unique physical identifiers—biometrics—to prove his identity quickly, conveniently, and with an assurance of high security.
Biometrics represent a huge advance over the centuries-old use of paper documentation like ID cards, passports, and visas. Biometrics are also a very convenient form of identity. You may forget your password or leave your wallet at home, but you’re never without your fingertips or face.
Biometrics are also unique. Unlike your name, your fingerprints, irises, and face are identifiers that belong to you and no one else. Identity documents can be forged and passwords can be breached, but it’s much harder for a would-be identity thief to capture and use your biometric signature.
Biometrics also provide built-in strong authentication. When multiple biometrics are captured in concert, they form a holistic identifier made up of several high-fidelity matching modalities. Each additional modality makes the identifier harder for hackers and thieves to replicate. And because biometric identifiers are so hard to replicate, people gaining access to things like airline flights and secure facilities are far more likely to be who they say they are. Consumer security and business trust are both preserved and enhanced.
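To see why layering modalities matters, consider a rough back-of-envelope sketch. If we assume, purely for illustration, that each modality’s false-accept rate is independent of the others (real deployments are messier), the chance of an impostor slipping past every check shrinks multiplicatively. The rates and the combined_false_accept helper below are hypothetical, not figures from any real sensor or vendor.

```python
# Back-of-envelope sketch: combining independent biometric modalities.
# The false-accept rates below are hypothetical illustration values,
# not measurements from any real system.

def combined_false_accept(rates):
    """Probability that an impostor passes every check, assuming the
    modalities fail independently (an optimistic simplification)."""
    combined = 1.0
    for rate in rates:
        combined *= rate
    return combined

fingerprint_far = 1e-3   # hypothetical: 1 in 1,000 impostors accepted
face_far = 1e-4          # hypothetical: 1 in 10,000 impostors accepted

print(combined_false_accept([fingerprint_far]))            # 0.001
print(combined_false_accept([fingerprint_far, face_far]))  # 1e-07
```

Even under these simplified assumptions, the point stands: an attacker who can fool one sensor must fool every sensor at once, which is dramatically harder.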
Convenience and Risk
The convenience of unlocking your phone with a fingertip is undeniable. But there are many outstanding questions about the use of biometrics as identity.
One of the greatest advantages of a biometric signature is that it is unique to each person and doesn’t change over time. Ironically, that unchangeability is also one of biometrics’ greatest vulnerabilities. Once biometric data has been breached, it’s compromised forever. We can’t change our physical attributes the way we change a password.
Privacy and confidentiality are also issues with deep ethical implications. If privacy means having control over how and when we are represented to others, then biometrics that uniquely identify us may easily encroach on our fundamental privacy. That’s especially true when you acknowledge that we, as individuals, don’t control the collection, storage, or use of these proxy bits of our identity.
After all, despite the multiple modalities, it turns out that biometrics aren’t 100 percent failsafe. Sensors can be spoofed, or their readings can be inaccurate. The capabilities of hackers and thieves evolve almost as quickly as the technology to keep them at bay. One example is DeepMasterPrints, a machine-learning-based exploit demonstrated by Philip Bontrager and fellow researchers that can fool many cellphone fingerprint sensors. (Bontrager also describes preventative measures against the attack.)
Most significant new technological advances offer benefits but also carry risks. Biometrics are no exception. And because of the very personal nature of biometrics, the stakes are particularly high. Biometrics are a powerful technological advancement in the identification and security space. But with that power comes a deep need for accountability and close ethical scrutiny.
Ethical concerns about the rapidly expanding scope of biometric data collection are well founded.
Ethical Issues with Biometrics
Today, we know that facial recognition technology has already led to adverse outcomes. Early facial recognition systems could not easily recognize women or people of color, and often misidentified people of Asian descent. Biased toward people of European descent, this early technology produced false matches. Microsoft, Google, and other tech companies are now working to address these issues with new facial recognition algorithms. The non-profit Algorithmic Justice League has brought the Safe Face Pledge to the forefront, and many companies are beginning to adopt it to confront these known issues.
Even more concerning is the idea that your face or fingerprints could lead to action being taken against you without your knowing the purpose for which those biometrics are being used. Some countries identify people in crowds without notifying them or seeking their consent. Other countries use biometrics without sufficient notification (a recent court case in January ended in a judgment against an entertainment company for using a child’s fingerprint to allow the child to ride a roller coaster).
Just as you have a right to know what personal information Facebook is collecting about you, you should also have the right to know how a company or agency identifies your biometrics and for what purpose. You should know whether your biometrics are going to be used to show you targeted ads, and you should have the ability to turn that feature off. If your biometric signature can be recognized in a crowded public place, you should be aware of it and never taken by surprise. You should also know whether law enforcement has access to that biometric data.
All too often, tech companies refuse to open their technology for public review, which makes it hard to understand the level of bias that exists under the cover of obscurity or silence. Facial recognition services should be regularly tested for accuracy and unfair bias in a transparent manner.
Yet the onus is on both the tech companies to communicate and the general public to learn and make decisions. Sadly, a new Pew study showed that 74 percent of the American public doesn’t understand that Facebook targets advertising to individuals based on a profile it has built of their interests. Tech companies’ responsibility only goes so far: an educated public is also required to listen, learn, and act.
Furthermore, what happens when the tech misidentifies you? Targeting ads to the wrong person is just the tip of the iceberg in terms of the consequences of misidentification. What happens if your fingerprints are mislabeled in a government database? In the future, consumers may not be able to access government services or purchase common products if their biometrics are not labeled correctly or their face is misidentified by a faulty algorithm.
The use of biometrics can also lead to new privacy violations. For example, public buildings in many municipalities today include CCTV cameras. These video surveillance cameras could easily be connected to real-time facial recognition services for all manner of purposes. Recognition policies could be put in place in the name of “public safety,” with unintended consequences for freedom of speech, assembly, and even religion: an inexorable creep toward a surveillance society.
Identifiers used for targeted advertising are one thing—the degree to which your shopping behavior is private is arguable. What’s inarguable is the need to discuss this topic rather than letting commercial terms dictate what the world knows about us.
Safeguarding Biometric Identity
A solid first principle in biometric identity is as simple as it is challenging: respect human dignity.
The United Nations has published a guide (the Compendium of Recommended Practices for the Responsible Use and Sharing of Biometrics in Counter-Terrorism) to help bring oversight to the expanding use of biometrics to fight terrorism. It should serve as a starting place for governments and agencies.
In August 2003, the EU Commission Advisory Body on Data Protection and Privacy issued a working paper on biometrics that lays out some further fundamental principles:
- Purpose principle: no data shall be collected without a specific purpose.
- Proportionality principle: biometric data may be used only if it is adequate, relevant, and not excessive.
- Excess or unnecessary data should be destroyed immediately.
Some leading solutions providers in biometric identity have taken the UN and EU advice to heart and spelled this out to their partners and customers.
Interestingly, most of these ethical issues were anticipated over a century ago. Louis Brandeis, later an associate justice of the U.S. Supreme Court, advocated privacy protection in 1890 when he co-authored an article in the Harvard Law Review that newly described a “right to be let alone.” Brandeis argued then that the development of “instantaneous photographs” and their dissemination by newspapers for commercial gain had created the need for a new right to privacy.
Today, we need to heed the lessons of the past to ensure that we continue to respect, protect, and maintain these long-understood rights.
About the Author:
Ned Hayes is the General Manager for SureID, and a Vice President at Sterling. He was educated at Stanford University Graduate School of Business and the Rainier Writing Workshop. He has also studied cyborg identity and robotic ethics at the Graduate Theological Union at UC Berkeley.
He is a technologist, identity researcher, and author. His most recent novel was the national bestseller The Eagle Tree, which was nominated for the Pacific Northwest Booksellers Award, the PEN/Faulkner Award, and the Washington State Book Award, and was named one of the top five books about the autistic experience.
He co-founded the technology company TeleTrust and was the founding product lead for Paul Allen’s ARO team at Vulcan. He has also provided product direction for new technology innovation at Xerox PARC, Intel, Microsoft and Adobe and has contributed to a variety of technology patents for these companies.