Is the ACLU right about facial recognition?

Facial recognition is back in the spotlight thanks to a recent test of Amazon’s “Rekognition” product performed by the American Civil Liberties Union (ACLU). Rekognition is a cloud service from Amazon that identifies people and objects in images and video; its face-matching capability is what is commonly known as facial recognition. It is one of many new products built on recent advances in machine learning.

The goal of the ACLU’s test was to gauge the accuracy of the Rekognition product. To perform the test, the ACLU built a database from 25,000 publicly available arrest photos, then used the Rekognition service to compare headshots of all 535 members of Congress against that database. Rekognition produced 28 incorrect matches; in other words, according to Rekognition, 28 members of Congress had also been arrested or incarcerated (which is not true). The ACLU immediately sounded alarm bells about the accuracy of facial recognition and the ethical implications of its use by law enforcement. A roughly 5% error rate from any law enforcement tool is an understandable cause for concern. After all, at scale, those errors could ruin thousands of innocent Americans’ lives.
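The 5% figure follows directly from the numbers in the ACLU’s test, as a quick calculation shows:

```python
# Error rate implied by the ACLU's test:
# 28 false matches out of 535 members of Congress.
false_matches = 28
total_headshots = 535

error_rate = false_matches / total_headshots
print(f"{error_rate:.1%}")  # roughly 5.2%
```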

The problem is that while the ACLU’s concerns are valid, its testing of the Rekognition service is not. The Rekognition service accepts a confidence threshold parameter: the minimum level of certainty the service must have before it reports two images as a match. If the threshold is set to 50%, the service must be at least 50% confident that two pictures contain the same individual or object before it considers them a match. During the ACLU’s test, the confidence threshold was set to 80%. This is a major flaw in the ACLU’s methodology: an 80% threshold is normally used for recognizing basic objects, such as a chair or a basketball, not a human face. Amazon’s own recommendation to law enforcement is to use a 95% confidence threshold when seeking to identify individuals with a reasonable level of certainty.
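To make the threshold’s effect concrete, here is a minimal sketch. Rekognition’s real `CompareFaces` API (shown in the comment, via boto3) takes a `SimilarityThreshold` parameter that works exactly this way; since calling it requires AWS credentials, the runnable part below filters a set of *hypothetical* similarity scores locally to show how raising the threshold from 80% to 95% prunes weak matches.

```python
# Real API shape for reference (requires AWS credentials to run):
#
#   import boto3
#   client = boto3.client("rekognition")
#   resp = client.compare_faces(
#       SourceImage={"Bytes": headshot_bytes},
#       TargetImage={"Bytes": mugshot_bytes},
#       SimilarityThreshold=95,  # minimum confidence to count as a match
#   )
#
# Local illustration of the same filtering logic:

def filter_matches(candidates, threshold):
    """Keep only candidate matches at or above the confidence threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

# Hypothetical similarity scores, for illustration only.
candidates = [
    {"name": "match_a", "similarity": 99.1},
    {"name": "match_b", "similarity": 93.4},
    {"name": "match_c", "similarity": 86.0},
    {"name": "match_d", "similarity": 81.2},
]

print(len(filter_matches(candidates, 80)))  # all four pass at an 80% threshold
print(len(filter_matches(candidates, 95)))  # only one survives at 95%
```

The same candidate pool yields very different results depending on the threshold, which is why the choice of 80% versus 95% matters so much to the test’s conclusions.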

The debate the ACLU is trying to start about facial recognition and law enforcement is an important one. As law enforcement adopts new machine learning technologies, exercises like this one will be extremely important, but they must be done correctly. Had the ACLU used a 95% confidence threshold, this test would have been a much better exercise for judging the validity of facial recognition as a law enforcement tool. Another major improvement would be a larger sample: using the members of Congress makes for a compelling story, but 535 individuals is still a small sample size. Overall, the ACLU is demonizing a technology that it does not know how to use correctly. That will only lower people’s confidence in the ACLU, not in facial recognition technology.