
03.03.2022

What Good Is AI Facial-Recognition that Only Recognizes Some Faces?

Tamara Kerrill Field





A controversy is simmering to a boil in the artificial intelligence sub-sector, where Black people are drastically underrepresented both professionally and academically, an imbalance that leads to AI programming with baked-in racial bias.

The resulting software has often had negative consequences for Black end-users, and researchers worry that more biased AI software is coming down the pike.

One thing is clear: these issues aren’t going away. From healthcare to transportation, AI is impacting the future of virtually every industry and every human.

A groundbreaking 2018 study confirmed racial bias in facial-recognition technology, a problem somehow unanticipated by its creators, none of whom were Black. The study from MIT and Stanford found that widely used AI programs developed by major tech companies had a skin-tone bias, either failing to recognize dark-skinned faces or misidentifying them. The error rates were stunning: between 20% and 40% for dark-skinned women.

It wasn’t the first time those with dark skin tones encountered bias in widely-marketed AI programs.

In 2015, Google’s photo app labelled a Black couple “gorillas”, a mistake with obvious racist connotations. The app was developed with AI machine-learning technology, which automatically grouped together photos with similar content, such as orchids and electric cars.

Apparently unable to fix the photo-identifying program at the time, Google implemented a stopgap workaround between 2015 and 2018: the company blocked the word “gorilla” from searches and image tags. A Wired piece from 2018 goes into more detail. (Google was eventually able to fix the issue, and a search for “gorillas” now returns images of actual gorillas.)

The academic study and the Google incident provide a snapshot of the bias found in image-focused AI programs. The problems arise on a couple of fronts. First, traditional data sets can add bias to AI programs: depending on the intended use of the facial-recognition software, there are either too few or too many images of dark-skinned faces in the data used to train the models. In other words, the photographic data does not sync with actual US Census data, which shows Blacks make up 13% of the US population.

Second, very few Black technologists are involved in coding and building AI technology. Researchers in the field say lack of diversity at the inception point makes it far less likely that biases will be flagged and remedied.
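One rough way to see the first problem is simply to count. The short Python sketch below, using entirely hypothetical counts and group labels, compares a training set’s demographic make-up against a benchmark such as the census figure cited above; real datasets and their annotations are, of course, far messier.

```python
from collections import Counter

# Hypothetical demographic labels for a face-image training set. In a real
# audit these would come from the dataset's own metadata or annotations.
training_labels = ["light_skinned"] * 9_500 + ["dark_skinned"] * 500

# Benchmark proportions to compare against, e.g. the census figure the
# article cites (roughly 13% of the US population is Black).
benchmark = {"light_skinned": 0.87, "dark_skinned": 0.13}

counts = Counter(training_labels)
total = sum(counts.values())

for group, target in benchmark.items():
    share = counts[group] / total
    print(f"{group:<15} dataset: {share:6.1%}   benchmark: {target:6.1%}   gap: {share - target:+.1%}")
```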

Alex Najibi examined the overall issue in 2020. Then a fifth-year Harvard University PhD candidate, she recorded her thoughts in the Graduate School of Arts and Sciences’ science policy and social justice blog:

This is a system of unconscious bias when you don't have diversity, when you don't have people in the room to say, well, let's step back on this data, because what's happening is people are using historical data to solve current problems.

In that same vein, Ryan Pannell, Chairman of Kaiju Worldwide, examined the crucial importance of diversity and inclusion in finance, which increasingly deploys AI as a central technology – albeit in programs that are not marketed to the general public. He says companies will pay a high price for refusing to create a workforce that matches the makeup of the general population:

There must be an understanding by business owners and corporate officers that this is the only way to remain viable in a global marketplace. It's the only way to remain solvent long-term… When you prioritise diversity in the workplace, you challenge institutionalised ideologies, stereotypes, and business practices…

The concern about racially biased AI programming extends beyond facial-recognition programs to predictive models and workflow solutions. The healthcare industry is poised to implement AI on a massive scale. Multiple studies and surveys have found that about 90% of healthcare executives see their organisation’s future as inextricably linked to AI. About 80% of the companies in the sector report that they are currently implementing AI tech.

The reason for the big push forward: the nation’s patient load is increasing while the doctor population is shrinking. Fast Company recently took a comprehensive look at the future of AI in doctors’ offices, hospitals, and other healthcare organisations:

AI is capable of analyzing data from various sources – electronic health records, images, therapies, etc – and developing models that will predict the best possible approach to any given patient’s care journey, thereby streamlining operations and ensuring the most favorable outcomes.

Marzyeh Ghassemi published a series of papers on machine-learning techniques while a doctoral candidate at MIT. Ghassemi looked at how AI can utilise medical records to predict patient outcomes. It didn’t take her long to find potential problems. Ghassemi’s findings were chronicled in a February 2022 MIT News article.

Upon a closer look, Ghassemi saw that models often worked differently — specifically worse — for populations including Black women, a revelation that took her by surprise. “I hadn’t made the connection beforehand that health disparities would translate directly to model disparities,” she says. “And given that I am a visible minority woman-identifying computer scientist at MIT, I am reasonably certain that many others weren’t aware of this either.”

Ghassemi worries that healthcare-focused AI could suffer from the same lack of diversity that led to bias in facial-recognition programming. One of the best ways to gauge the stream of talent coming into the field is to look to the nation’s universities.

Though there has been some success at increasing the number of Black students seeking doctorates in AI, thanks to the advocacy of organisations such as Black in AI, research points to small percentages that have remained static for years. A 2020 Stanford University study of 15 US-based universities found that Blacks made up just 2.4% of those pursuing new PhDs in AI. Figures for universities in other nations, as well as those for some US universities, are hard to nail down, because those seeking doctorates with a focus on AI are often lumped into the larger computer-science category.

But no one’s arguing that Blacks seeking AI PhDs have somehow been undercounted. The Stanford study found that among those awarded computer science PhDs in 2019, only 3.1% were Black.

This is of special concern at a time when AI is changing the technological landscape in the US and around the globe. Though few companies are completely AI-fueled, many are moving toward company-wide implementation. Major industries in transition include healthcare, transportation, finance, manufacturing, education, retail, human resources, and advertising.

It remains unclear how many Black scientists work in the AI field, but overall employment statistics at two of the world’s biggest AI players offer a grim perspective. A 2021 NBC News article looked at the numbers:

At Google, Black women represent only 0.7 percent of its technical workforce and Black employees make up 2.4 percent of the technical workforce overall, according to the company’s 2020 diversity report. At Facebook, Black employees make up only 1.7 percent of its tech workforce.

The size of that discrepancy becomes more dramatic when considering, again, that Blacks make up 13% of the US population.

The biggest “Black in AI” news story to date is about a woman who was part of that 0.7% at Google. Timnit Gebru says she was ousted from Google in 2020 for questioning company practices that she felt could lead to bias in AI, and for calling out a lack of support from company higher-ups. Google insists Gebru resigned.

A prominent scientist and co-lead of Google Research's AI ethics team, Gebru drew the ire of Google executives when it was revealed that she, along with academic colleagues, planned to publish a research paper examining the ethics of natural language processing. The paper suggested that companies (including Google) were in such a rush to produce ever-more-powerful AI models that they were not pausing to consider whether bias was being baked into them.

Time magazine wrote about Gebru in December 2021. The article looked at AI algorithmic biases from a different perspective, suggesting that Big Tech – a sector dominated by white male executives – may actually be fueling a culture of bias:

To some who are of the same mind as Gebru, (bias in AI programming) is only the first epiphany in a much broader – and more critical – worldview. The central point of this burgeoning school of thought is that the problem with AI is not only the ingrained biases in individual programs but also the power dynamics that underpin the entire tech sector.

Gebru is also an author of the seminal 2018 facial-recognition study published in the Proceedings of Machine Learning Research. Then a graduate student at Stanford, Gebru, along with co-author Joy Buolamwini of MIT, raised questions about how three of the world's tech giants developed their AI facial-recognition programming – specifically, how they taught neural networks to look for patterns in big data sets.

Gebru and Buolamwini found major discrepancies between the success rates claimed by the tech companies and those the pair discovered by testing the programs. The study was based on AI software created and brought to market by US companies Microsoft and IBM, as well as the China-based Megvii.

The companies claimed their AI software functioned correctly as much as 97% of the time, but Gebru and Buolamwini’s research uncovered major discrepancies based on skin tone. They found that the programs had error rates of between 20% and 34% for dark-skinned women, and a failure rate of 20% for dark-skinned people regardless of gender. In the case of the darkest-skinned women, the programs failed to identify them 40% of the time. That compares with an error rate of 0.8% for light-skinned men. Gebru and Buolamwini wrote in their paper:

The substantial disparities require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.
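The methodological point behind those numbers is easy to miss: a single headline accuracy figure can hide very different error rates for different groups. A minimal Python sketch of that kind of disaggregated audit, using synthetic counts that only loosely echo the disparities the study reports, might look like this:

```python
# Synthetic, illustrative counts: (subgroup, test images, misclassifications).
results = [
    ("lighter_male",   300, 3),
    ("lighter_female", 300, 15),
    ("darker_male",    300, 30),
    ("darker_female",  300, 100),
]

# The aggregate figure can look tolerable on its own...
total = sum(n for _, n, _ in results)
total_errors = sum(e for _, _, e in results)
print(f"overall error rate: {total_errors / total:.1%}")

# ...while the per-subgroup breakdown tells a very different story.
for subgroup, n, e in results:
    print(f"{subgroup:<15} error rate: {e / n:.1%}")
```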

Ruchir Puri, the chief architect of IBM’s Watson artificial intelligence system, said the study changed the way IBM designed facial-recognition software. He told MIT News:

“We have a new model now that we brought out that is much more balanced in terms of accuracy across the benchmark that Joy was looking at. It has a half a million images with balanced types, and we have a different underlying neural network that is much more robust.”

Those efforts led to IBM’s Diversity in Faces database, designed to advance the study of fairness and accuracy in facial recognition by looking at more than the usual measures of skin tone, age, and gender. The company’s efforts were stymied, however, when IBM was sued in 2020 for using photos in its data set without the permission of the people pictured.
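Setting the legal fallout aside, the fix Puri describes, a benchmark with balanced subgroup counts, is at bottom a sampling problem. The sketch below is a rough, hypothetical illustration of stratified sampling toward equal counts per group; it is not IBM’s method, and Diversity in Faces relied on far richer facial measurements than a single label.

```python
import random

# Hypothetical pool of (image_id, subgroup) pairs, heavily skewed toward
# lighter-skinned faces, as many face datasets have been.
random.seed(0)
pool = (
    [(f"img_lm_{i}", "lighter_male") for i in range(6000)]
    + [(f"img_lf_{i}", "lighter_female") for i in range(2500)]
    + [(f"img_dm_{i}", "darker_male") for i in range(1000)]
    + [(f"img_df_{i}", "darker_female") for i in range(500)]
)

per_group = 400  # target number of images per subgroup

# Group image ids by subgroup, then draw the same number from each group.
by_group = {}
for image_id, group in pool:
    by_group.setdefault(group, []).append(image_id)

balanced = {group: random.sample(ids, per_group) for group, ids in by_group.items()}
print({group: len(ids) for group, ids in balanced.items()})
```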

Amazon, which marketed a much-criticised AI facial-recognition service used by police, pulled it from police use around the same time. The company initially announced a one-year moratorium on police use of the software, Rekognition, while improvements were made, then extended the moratorium indefinitely in 2021. Used by police to match faces against databases of thousands of mug shots, the program often misidentified people with dark skin tones, a major problem when investigating crime.

Joy Buolamwini sees those two events as progress. She told Fortune:

"With IBM's decision and Amazon's recent announcement, the efforts of so many civil liberties organisations, activists, shareholders, employees and researchers to end harmful use of facial recognition are gaining even more momentum," said Joy Buolamwini, who led the MIT study and founded the Algorithmic Justice League, which is calling for a nationwide moratorium on all government use of facial recognition technologies.

"The first step is to press pause."



Tamara Kerrill Field’s writing and commentary on the intersection of race, politics and socioeconomics has been featured in U.S. News & World Report, the Chicago Tribune, NPR, PBS NewsHour, and other outlets. She lives in Portland, Maine.