High-Profile Lawsuits over False Facial Recognition Matches Signal Trouble for Software Companies as Accusations of Bias and Misuse Grow

By Elizabeth Stokes | 3Q 2023 | IN-7076

A Detroit, Michigan, resident filed a lawsuit against the city last month, accusing the police of wrongly arresting her after a false facial recognition match. The suit is the latest in a string of high-profile cases of people being arrested on the strength of inaccurate facial recognition matches, prompting renewed debate about the technology's fairness and accuracy.


Woman Wrongfully Arrested after False Facial Recognition Match

NEWS


Detroit, Michigan, resident Porcha Woodruff is suing the city and a detective with the police department after being arrested following a false facial recognition match. The police arrested Woodruff at her home in February of this year on carjacking and robbery charges. Facial recognition software used by the department had matched Woodruff to a woman in surveillance footage related to the crime. Woodruff, who was eight months pregnant at the time of her arrest, was charged in court and released on a US$100,000 personal bond before the case was dismissed a month later.

Woodruff's suit is the third lawsuit filed against the Detroit Police Department over a wrongful arrest based on a faulty facial recognition match, and it follows a string of recent high-profile stories of police wrongly detaining black people after facial recognition errors. These cases have once again ignited debate about facial recognition software, with lawmakers and privacy organizations arguing that the technology is ineffective and inherently biased against people of color.

Debate Ensues over Law Enforcement Policies and Software Bias

IMPACT


Woodruff’s case and others like it have received national coverage and caught the attention of human rights organizations already wary of the use of facial recognition technology in public life. The American Civil Liberties Union (ACLU) of Minnesota recently filed a suit on behalf of Kylese Perryman, a young man who was jailed for five days and charged with carjacking and armed robbery after a false facial recognition match.

The high-profile wrongful arrests of black people in Detroit and other cities have led police departments to reconsider how they deploy and rely on facial recognition technology. Many police organizations in the United States use facial recognition software to aid investigations, though some departments place limits on its use. Detroit, for example, uses facial recognition software only in violent crime and home invasion investigations. Many police organizations are also prohibited from relying solely on a facial recognition match for an arrest; these departments must gather additional evidence before charging someone with a crime. However, the suit filed on behalf of Perryman accuses the police of failing to collect readily available evidence that would have distinguished the plaintiff from the suspect.

The arrests have reignited arguments not only about law enforcement’s facial recognition policies, but also about the fundamental fairness and accuracy of the software itself. According to the New York Times, all six people who have reported being wrongfully accused of a crime based on facial recognition technology used by police are black. Several studies have shown that facial recognition technology is less accurate when reading the faces of people of color. A 2019 federal study conducted by the National Institute of Standards and Technology (NIST) found that facial recognition systems are more likely to misidentify people of color than white people, and that women are more likely to be misidentified than men. The study also found that the faces of African American women produced higher rates of false matches in the types of searches often conducted by police. Another study, published in 2018, found similar disparities in the gender classification algorithms used in facial analysis systems.

Routine and Transparent Testing

RECOMMENDATIONS


Big-name tech companies, notably IBM and Amazon, have changed their facial recognition policies since the 2019 federal study. IBM no longer develops or offers facial recognition technology, and Amazon does not allow law enforcement to use its facial recognition software. However, there are several actions that facial recognition companies can take, short of divesting from their products entirely, to ensure their software is used fairly.

Training data continues to be a significant factor in determining the fairness of facial recognition technology. Ensuring that different genders and ethnicities are represented equally in training data is imperative, particularly if a company’s facial recognition software is being used by police and investigators to inform arrests. Facial recognition companies should also be transparent about their algorithms’ accuracy across demographics: they should conduct routine tests to confirm accurate readings across different population groups and publish these testing schedules so the public can verify that the products are equitable and accurate. A simple sketch of what such a per-group audit could look like follows below.
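As an illustration only, the following Python sketch shows one way a per-demographic accuracy audit might be structured. It assumes a hypothetical evaluation set of impostor comparisons (pairs of images of different people) already scored by a vendor's matcher; the record format, group labels, and threshold are placeholders for illustration, not any particular vendor's data or Application Programming Interface (API).

```python
# Minimal sketch of a per-demographic false-match audit.
# Assumptions (not from the source article): impostor pairs are pre-scored by the
# vendor's matcher, and each record carries the demographic group of the probe image.
from collections import defaultdict

MATCH_THRESHOLD = 0.80  # assumed operating threshold; real deployments tune this value

# Each record is the similarity score for a non-mated (impostor) pair.
# In a real audit this list would contain thousands of pairs per group.
impostor_scores = [
    {"group": "Black women", "score": 0.83},
    {"group": "Black women", "score": 0.41},
    {"group": "white men",   "score": 0.22},
    {"group": "white men",   "score": 0.85},
]

def false_match_rate_by_group(records, threshold):
    """Return the share of impostor pairs scoring at or above the threshold, per group."""
    totals, false_matches = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["score"] >= threshold:
            false_matches[record["group"]] += 1
    return {group: false_matches[group] / totals[group] for group in totals}

if __name__ == "__main__":
    # Publishing results like these per group, on a regular schedule, is the kind of
    # transparency the recommendation above describes.
    for group, fmr in false_match_rate_by_group(impostor_scores, MATCH_THRESHOLD).items():
        print(f"{group}: false match rate = {fmr:.1%}")
```

Large gaps in false match rates between groups at the same threshold would indicate exactly the kind of demographic disparity the NIST study documented, and would signal that the algorithm or its training data needs rework before the software informs arrests.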

Importantly, the NIST study found that demographic differences in accuracy shrank as the algorithms became more accurate overall. This is good news, as Artificial Intelligence (AI)-informed algorithms in many technology sectors have improved greatly over the last four years, meaning facial recognition software has likely become more accurate (and, therefore, less biased) in the years since the federal report was published. These findings should serve as extra motivation for facial recognition companies to improve their products. Constantly testing and refining algorithms will not only help these companies gain more customers, but will also ensure that their products are equitable. When law enforcement is a consistent customer, the stakes are high for any error or false reading; facial recognition companies have both a commercial incentive and a societal obligation to ensure their products are reliable and consistent.

 
