Harvard University professor Jim Waldo says there are places in the US where people already identify themselves in person, which could serve as an alternative to remote face recognition for some swaths of the population. He supports a federated approach to identity proofing in which people can show up at a US Postal Service branch office to verify their identity; the GSA has worked with USPS on a pilot program for in-person identity checks.
For the past 15 years, Waldo has challenged students in a class he teaches about privacy to design a digital identity system that can verify a person is who they claim to be. He’s noticed that most students start out thinking that requiring a digital ID for everybody is a good idea but become less confident it can work as they talk through the details.
Checking identity at scale with automation inevitably leads to problems for some, because technologies like face recognition are statistical, Waldo says. Those failures lead to suspicion about the pattern of errors, because “nobody actually believes this stuff is going to be fair or non-discriminatory,” he says. “It’s a trust issue, not a technology issue.”
The NIST is in the process of revising its digital identity guidelines. A draft calls for offering an alternative to face recognition and adds a requirement that biometric technologies be evaluated for performance across demographic groups on an ongoing basis. The NIST, which regularly tests commercial face recognition algorithms, has found that many have problems identifying certain groups of people.
Not all federal agencies have supported mandating face recognition: In comments submitted during the revision process in 2020, the Social Security Administration urged alternatives to face recognition, citing “privacy, usability, and policy concerns,” as well as questions of discrimination that falls heaviest on people of color.
Ryan Galluzzo, the lead on the NIST’s digital identity program, says the revision focuses on expanding choices for federal agencies and for people signing in to government apps and websites. He calls face recognition a “socially sensitive technology.”
“While it has valid applications to identity proofing use cases, we are also very interested in ways to provide individuals and organizations with innovative and responsible options that can bring similar convenience and security at higher assurance levels,” he says.
Precisely how the US government should treat face recognition has been an issue of increasing debate. Earlier this month, a slate of Democratic lawmakers in both houses of Congress introduced a bill that would place a moratorium on use of face recognition by federal agencies, although the proposal is unlikely to succeed.
Federal agencies have also come under pressure from the White House to weigh the potential discriminatory impacts of algorithms. An AI Bill of Rights released by the White House Office of Science and Technology Policy in October says people have a right to be protected from ineffective algorithms. An executive order on racial equity signed by President Biden last month says government agencies should be “protecting the public from algorithmic discrimination.”