NIST's New Biometrics Databases Offer Help With IAM
Agency Also Releases Study That Raises Concerns About Facial Recognition Technology

The National Institute of Standards and Technology has released three new biometrics datasets to help organizations research new types of secure digital identification and authentication management systems.
In a separate project, NIST has released a study on facial recognition technology that highlights some challenges.
The new biometrics datasets contain fingerprints, facial photographs and iris scans that have been stripped of any identifying information. Everyone whose information is included in the datasets gave consent, NIST says.
NIST says the datasets will enable researchers to test their experimental identification and access management systems before deploying them in real-world tests.
"This all gets back to reproducible research," says Greg Fiumara, a computer scientist at NIST. "The data will help anyone who is interested in testing the error rates of biometric identification systems."
The release of these datasets comes at a time when identity-related fraud is increasing, and lawmakers and companies are looking for new ways to bolster authentication through newer technologies, such as machine learning.
At a congressional hearing in September, members of the U.S. House Financial Services Committee heard from expert witnesses about how $15 billion was stolen from American consumers in 2018 through identity theft and how newer technologies might help reduce fraud (see: Congress Hears Ideas for Battling ID Theft).
In that hearing, lawmakers and security experts noted that "synthetic identities," which involve cybercriminals using stolen information to mimic a person and carry out identity-related fraud, pose a challenge to financial institutions.
MasterCard and others have started offering biometric-based identities for consumers in a move away from passwords and other older forms of identity management.
Bridging the Gap
A major issue with using machine learning and artificial intelligence to help with identity and access management is that many organizations lack biometrics datasets to help train the algorithms to make them more accurate, according to NIST.
The new datasets aim to help bridge that gap.
"Few available resources exist to help developers evaluate the performance of the software algorithms that form the heart of these systems, and the NIST data will help fill that gap," the agency notes
The biometrics data is broken down into three datasets that contain data collected at different times from different sources:
- SD 300: This is data collected from 900 hardcopy ink cards with fingerprint information from people who are deceased. This information will allow manufacturers of IAM systems to evaluate how well their newer products can produce results that will be interoperable with older, paper-based records.
- SD 301: This dataset is considered "multimodal," meaning that different biometric markers, such as fingerprints and iris scans, are all linked. This will enable researchers to test identification systems that match a person's picture to their fingerprints.
- SD 302: This dataset also contains fingerprints, but these were gathered through eight commercially available or prototype devices, including a contactless fingerprint reader.
"This opens up possibilities for types of multimodal research that haven’t been done before," Fiumara says. "We want to get more secure and more accurate identification, as multimodal systems are harder to spoof."
Facial Recognition Controversy
While government and private organizations are more willing to use biometrics as part of the identity and access management process, not all the technology has been embraced by the public.
The use of facial recognition technology, for instance, has proven problematic, with many questioning how reliable it is, whether it violates privacy and if the algorithms used incorporate some form of bias (see: Facial Recognition: Balancing Security vs. Privacy).
On Thursday, NIST released a study on facial recognition technology, which evaluated 189 software algorithms from 99 developers. The report found that the majority of facial recognition systems exhibited some type of bias. In addition, NIST found that with U.S.-developed algorithms, there were high rates of false positives in one-to-one matching for Asians, African Americans and native groups, which include Native Americans, American Indians, Alaskan Indians and Pacific Islanders.
"While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” says Patrick Grother, a NIST computer scientist and the report’s primary author. "While we do not explore what might cause these differentials, this data will be valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms."