In 2009 the National Academy of Sciences (NAS) called out most forensic science disciplines for their lack of rigor in its report Strengthening Forensic Science in the United States: A Path Forward. This stirring critique spawned a variety of efforts intended to introduce science-based standards across multiple disciplines, such as the Organization of Scientific Area Committees (OSAC) and the American Academy of Forensic Sciences Standards Board (ASB). The authors of the NAS report cited deficiencies in all the pattern evidence disciplines and compared those subjective techniques to the more quantifiable methods of DNA analysis. The report specifically criticized latent fingerprint analysis for lacking a foundational statistical model. From the NAS report:
“Current published statistical models, however, have not matured past counts of corresponding minutia and have not taken clarity into consideration. (This area is ripe for additional research.) As a result, the friction ridge community actively discourages its members from testifying in terms of the probability of a match; when a latent print examiner testifies that two impressions “match,” they are communicating the notion that the prints could not possibly have come from two different individuals.”
Since the NAS report, considerable effort has been devoted to developing more standardized practices in forensic friction ridge analysis, as well as to creating new probability models and reporting language intended to assist examiners in the analysis and expression of friction ridge pattern evidence. Yet there is still no widely accepted model, as indicated by the 2017 International Association for Identification (IAI) position statement on the matter.
The NAS report described a dearth of published statistical models, but what about unpublished ones? For decades, criminal justice agencies have employed automated fingerprint identification systems (AFIS) to identify fingerprint records and latent print evidence, using algorithms that leverage proprietary statistical models. Because AFIS matchers are built on tightly controlled trade secrets, they operate as black boxes to the agencies that rely on them for criminal identification and background checks.
AFIS vendors' training programs for their forensic practitioner customers do not cover the interpretation of match scores, which is crucial to using these systems for the objective measurements called for by the authors of the NAS report. Such statistical guidance would inevitably reveal some of the ways these systems use specific friction ridge features, and would therefore expose them to reverse engineering.
As organizations like the Center for Statistics and Applications in Forensic Evidence (CSAFE) explore the viability of statistical modeling in pattern evidence disciplines, they will likely find that AFIS vendors have already spent decades researching, and fielding operational systems to solve, the same type of problem forensic researchers are now investigating. However, leveraging operational AFIS systems to produce a probabilistic model suitable for evidentiary reporting poses several challenges:
- AFIS match scores are not probabilities in themselves; they must be interpreted using proprietary information.
- It is unclear how well AFIS-derived probability scores would generalize to the totality of forensic evidence, since these systems are used on only a subset of data in everyday casework.
- To protect trade secrets, AFIS vendors would likely have to shield users from specific knowledge of how the algorithm works, probably rendering any probability score useless on its own for scientific courtroom testimony.
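The first challenge above can be made concrete with a toy sketch. One family of approaches in the literature, score-based likelihood ratios, converts a raw matcher score into a weight of evidence by comparing how often that score arises in same-source versus different-source comparisons. The catch is that those two score distributions are exactly the proprietary knowledge a vendor does not publish. The sketch below uses invented Gaussian score distributions purely for illustration; all numbers and the `score_based_lr` helper are hypothetical and do not describe any real AFIS.

```python
import math

def normal_pdf(x: float, mean: float, sd: float) -> float:
    """Density of a normal distribution at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def score_based_lr(score: float) -> float:
    """Score-based likelihood ratio: how much more probable is this
    matcher score under the same-source hypothesis than under the
    different-source hypothesis?  The two score distributions below
    are invented for illustration; in a real AFIS they are precisely
    the proprietary information the vendor withholds."""
    p_same = normal_pdf(score, mean=800.0, sd=120.0)  # hypothetical same-source scores
    p_diff = normal_pdf(score, mean=300.0, sd=100.0)  # hypothetical different-source scores
    return p_same / p_diff

# A raw score of 750 means nothing on its own; only the (proprietary)
# score distributions turn it into a statement about evidential weight.
print(score_based_lr(750.0))  # large: strongly favors same source
print(score_based_lr(300.0))  # below 1: favors different sources
```

The point of the sketch is that the same raw score could yield a very different likelihood ratio under different assumed distributions, which is why a bare AFIS score cannot serve as a probability.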
Eventually, the search for a viable forensic friction ridge probability model will lead researchers to AFIS vendors, who have been working on these problems for decades. Since most examiners already use these systems in their day-to-day work, one can imagine an ancillary AFIS function that provides the objective statistical measurement sought by the forensic and jurisprudence communities.
However, if such functionality were implemented, it would need to undergo substantial independent white-box testing to prove that it is fit for purpose and reliable. The National Institute of Standards and Technology (NIST) has been evaluating AFIS technology and publishing those studies for years. Yet these tests only show the overall effectiveness of one proprietary system relative to another; they do not measure the probative value of each individual identification or verification comparison.
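The distinction drawn above can be illustrated with a toy sketch. Benchmark-style evaluations report aggregate error rates, such as the false match rate (FMR) and false non-match rate (FNMR) at a decision threshold, computed over many comparisons. All scores below are invented and describe no real evaluation; the point is that these figures characterize a system as a whole and assign no weight of evidence to any single comparison.

```python
# Aggregate, benchmark-style metrics over many comparisons
# (all scores invented for illustration).
same_source_scores = [780, 650, 820, 480, 710, 905, 640, 760]
diff_source_scores = [210, 340, 280, 520, 150, 390, 300, 250]
threshold = 500

# FNMR: fraction of true mates scoring below the threshold.
fnmr = sum(s < threshold for s in same_source_scores) / len(same_source_scores)
# FMR: fraction of non-mates scoring at or above the threshold.
fmr = sum(s >= threshold for s in diff_source_scores) / len(diff_source_scores)

# These summarize the system's overall behavior; they say nothing
# about the probative value of any one identification decision.
print(f"FNMR: {fnmr:.3f}, FMR: {fmr:.3f}")  # prints "FNMR: 0.125, FMR: 0.125"
```

Knowing that a system's FMR is 12.5% at some threshold tells a court how often the system errs in general, not how strongly a particular pair of impressions supports a particular conclusion.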
Biometric vendors may hold the key to the much-sought-after probability model that has eluded forensic researchers for decades. Yet before any such AFIS application is put to use for probative reporting, it must be thoroughly understood and tested through independent white-box verification and validation, something AFIS vendors have understandably resisted since the technology's inception.
- Neumann, Cedric, and Madeline Ausdemore. "Defence Against the Modern Arts: the Curse of Statistics: 'Score-based likelihood ratios'." 2019.
- Garrett, Brandon, Gregory Mitchell, and Nicholas Scurich. "Comparing Categorical and Probabilistic Fingerprint Evidence." Journal of Forensic Sciences 63, no. 6 (2018): 1712-1717.
- Swofford, H. J., A. J. Koertner, F. Zemp, M. Ausdemore, A. Liu, and M. J. Salyards. "A Method for the Statistical Interpretation of Friction Ridge Skin Impression Evidence: Method Development and Validation." Forensic Science International 287 (2018): 113-126. doi: 10.1016/j.forsciint.2018.03.043. PMID: 29655097.
- Swofford, H., S. Cole, and V. King. "Mt. Everest—we are going to lose many: a survey of fingerprint examiners' attitudes towards probabilistic reporting." Law, Probability and Risk (2021).
- Fiumara, Gregory P., et al. "Evaluation of Latent Friction Ridge Technology." NIST, 30 Sept. 2021, https://www.nist.gov/itl/iad/image-group/evaluation-latent-friction-ridge-technology.
- Watson, Craig I., et al. "Fingerprint Vendor Technology Evaluation." NIST, 10 Nov. 2018, https://www.nist.gov/publications/fingerprint-vendor-technology-evaluation.