US government study ignites racial bias debate in facial recognition tools

Many facial recognition systems misidentify people of colour more often than white people, according to a US government study, adding fuel to scepticism over a technology widely used by law enforcement agencies.

Dec 23rd · 2 min read

The study by the National Institute of Standards and Technology (NIST) found that, when performing a type of database search known as “one-to-one” matching, many facial recognition algorithms falsely matched African-American and Asian faces 10 to 100 times more often than Caucasian faces.

Furthermore, the investigation found that African-American women were more likely than other groups to be misidentified in “one-to-many” matching, the type of search often used to identify a person of interest in a criminal investigation.
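
For readers unfamiliar with the terminology, the difference between the two search modes, and the “false match rate” the study measured, can be sketched in a few lines of Python. The embeddings, cosine-similarity score and 0.8 threshold below are illustrative assumptions only; they do not reflect NIST's actual test protocol or any vendor's algorithm.

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embeddings; higher means more alike.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one_match(probe, reference, threshold=0.8):
    # Verification: does the probe photo match a single claimed identity?
    return cosine_similarity(probe, reference) >= threshold

def one_to_many_search(probe, gallery, threshold=0.8):
    # Identification: which entries in a gallery resemble the probe?
    # 'gallery' is a dict mapping identity names to reference embeddings.
    return [name for name, ref in gallery.items()
            if cosine_similarity(probe, ref) >= threshold]

def false_match_rate(impostor_scores, threshold):
    # Share of impostor comparisons that wrongly clear the threshold --
    # the error rate NIST found to be 10 to 100 times higher for some groups.
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)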

While some companies have played down earlier findings of bias in “facial analysis”, technology that attempts to guess an individual’s gender, the NIST study provided evidence that face matching also struggles across a wide range of demographics.

Joy Buolamwini, founder of the Algorithmic Justice League and also a researcher at the Massachusetts Institute of Technology (MIT), called the report “a comprehensive rebuttal” of those saying artificial intelligence (AI) bias was no longer an issue.

The study comes at a time of growing discontent over the technology across the US, with critics warning it can lead to unjust harassment or arrests.

As part of the study, NIST tested 189 algorithms from 99 developers; companies such as Amazon.com, which did not submit an algorithm for review, were excluded. What NIST tested also differs from what companies sell: the algorithms were studied in isolation, detached from the cloud infrastructure and proprietary training data used in deployed products.

Following an analysis of these algorithms, the report found that China’s AI start-up SenseTime, valued at more than $7.5bn (£5.75bn), had “high false match rates for all comparisons” in one of the NIST tests.

SenseTime’s algorithm produced a false positive more than 10 per cent of the time when looking at photos of Somali men. Hypothetically, if such a system were deployed at an airport, a Somali man could pass a customs check roughly one in every 10 times he used another Somali man’s passport.

Meanwhile, Yitu, another AI start-up from China, was more accurate and showed little racial skew.

The study also found that Microsoft Corp had almost 10 times more false positives for women of colour than for men of colour in some instances of a one-to-many test. Its algorithm showed little discrepancy in a one-to-many test using photos of just black and white males.

Congressman Bennie Thompson, chairman of the US House Committee on Homeland Security, said the findings of bias were worse than feared, at a time when customs officials are adding facial recognition to travel checkpoints.

“The administration must reassess its plans for facial recognition technology in light of these shocking results,” he said.

In February 2018, an MIT study found that commercial facial-recognition software can come with in-built racial and gender biases, failing to recognise the gender of the darkest-skinned women in approximately half of cases.

last updated: 2019-12-23@15:12