Artificial discrimination

According to the AI Now Institute, there is a diversity crisis in the AI sector across gender and race. The lack of women and black people in the scientific workforce is a crisis in its own right, but it also feeds into the software itself, so that AI reinforces inequality in the ways it is used.

Many researchers have shown that bias in AI systems reflects historical patterns of discrimination. That is unsurprising when women comprise 15 percent of AI research staff at Facebook and just 10 percent at Google, while only 2.5 percent of Google’s workforce is black. In academia, over 80 percent of AI professors are male.

Women comprise just 10 percent of the AI research staff at Google.

This is a problem, and it is fundamentally a problem of power. The diversity problem affects how AI companies work, what products get developed, who they are designed to serve, and who benefits from them.

AI systems can be incredibly useful in myriad ways and industries, but at their most basic they’re systems of discrimination: they differentiate, rank, and categorise. 2018 saw AI systems rapidly introduced into more social domains, including core social institutions, against a backdrop of rising inequality, political populism, and industry scandals.

In any normal year, manipulating national elections using AI would be the biggest scandal.

In any normal year, Cambridge Analytica seeking to manipulate national elections in the US and UK using social media data and algorithmic ad targeting would have been the biggest story, but in 2018 it was just one of many scandals. In March, Google was found to be building AI systems for the Department of Defense’s drone surveillance program; US Immigration and Customs Enforcement modified its risk assessment algorithm to produce only one result, recommending ‘detain’ for every immigrant in custody; an autonomous car killed a pedestrian; IBM Watson produced ‘unsafe and incorrect’ cancer treatment recommendations; a voice recognition system designed to detect immigration fraud in the UK cancelled thousands of visas and deported people in error; and the New York City Police Department contracted IBM to build an ‘ethnicity detection’ feature using police camera footage of thousands of people on the streets of New York, captured without their knowledge or permission.

While these stories occupy the headlines, all the major powerhouses of the technology industry continue to invest heavily in AI. As these investments are realised as products, a steady stream of examples is demonstrating a persistent problem of gender- and race-based discrimination. Image recognition systems miscategorise black faces, sentencing algorithms discriminate against black defendants, chatbots adopt racist and misogynistic language when trained on online discourse, and Uber’s facial recognition doesn’t work for trans drivers.

There is growing consensus that AI systems perpetuate and amplify bias, and that computational methods are not inherently neutral and objective. In most cases, such bias mirrors and replicates existing structures of inequality in society.

The question is no longer whether there are biases in AI systems—it’s how to address these harms.

Both in the spaces where AI is created and in the ways AI systems are designed, the costs of bias, harassment and discrimination are borne by the same people: gender minorities, people of colour, and other under-represented groups. Similarly, the benefits of AI systems tend to accrue primarily to those already in positions of power, who tend to be white, educated, and male. All this points to a relationship between the patterns of exclusion within the industries driving AI production and the biases that manifest in the results of their systems.

One example is Amazon’s experimental hiring tool. The company hoped that the resume-scanning tool it developed would efficiently identify qualified applicants by comparing their CVs to those of previous hires. The system quickly learnt to reject applications from candidates whose CVs contained the word ‘women’. After uncovering the bias, Amazon engineers tried to fix the problem by directing the system to treat the term as ‘neutral’, but the tool was eventually abandoned because the company could not ensure the algorithm would not be biased against women. Gender-based discrimination was built too deeply into the system, and into Amazon’s past hiring practices.
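This failure mode is easy to reproduce in miniature. The sketch below is a toy illustration only, using invented data and a scikit-learn logistic regression as a stand-in for Amazon’s unpublished model: a screener trained on historically biased hiring decisions learns a negative weight on a gendered token, and zeroing out that one token does not remove the bias, because a correlated proxy feature still carries the gender signal.

```python
# Toy illustration only: invented data, not Amazon's system or data.
# A logistic regression screener is trained on historically biased
# hiring decisions; it learns to penalise a gendered token, and
# "neutralising" that token alone does not remove the bias because a
# correlated proxy feature still carries the gender signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic CV features.
gender_female = rng.binomial(1, 0.5, n)                  # not shown to the model
skill = rng.normal(0.0, 1.0, n)                          # genuinely job-relevant
womens_token = gender_female * rng.binomial(1, 0.7, n)   # e.g. "women's" in a club name
proxy = gender_female * rng.binomial(1, 0.6, n)          # correlates with gender, not skill

# Historical 'hired' labels reflect past discrimination: at equal skill,
# women were hired less often.
p_hire = 1.0 / (1.0 + np.exp(-(1.5 * skill - 2.0 * gender_female)))
hired = rng.binomial(1, p_hire)

X = np.column_stack([skill, womens_token, proxy])
model = LogisticRegression().fit(X, hired)
# Both the explicit token and the proxy receive negative weights.
print("weights [skill, women's token, proxy]:", model.coef_[0])

# Attempted fix: blank out the explicit gendered token and retrain,
# roughly what Amazon's engineers reportedly tried.
X_fixed = X.copy()
X_fixed[:, 1] = 0
model_fixed = LogisticRegression().fit(X_fixed, hired)

# Two candidates with identical skill, differing only in the proxy feature,
# still receive different scores: the bias has moved, not disappeared.
candidates = np.array([[1.0, 0.0, 0.0],
                       [1.0, 0.0, 1.0]])
print("scores without/with proxy:", model_fixed.predict_proba(candidates)[:, 1])
```

Run as written, the retrained model still scores two otherwise-identical candidates differently whenever the proxy feature is present, which is why a single word could not simply be ‘neutralised’ to make the tool fair.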

Kuan Luo, Abadesi Osunsade and Cadran Cowsanage, co-founders of Elpha, a platform for women and non-binary individuals working in tech

Unfortunately, examples of bias and discrimination can be found across all the leading tech companies driving the development of artificial intelligence. Apple dismissed concerns about its lack of workplace diversity as a ‘solvable issue’ while simultaneously calling proposals for diverse hiring practices ‘too burdensome’; an audit of Google’s pay practices found six to seven standard deviations between pay for men and women in nearly every job category; a lawsuit filed against Tesla alleges gender discrimination (one worker recounts that there were more men named ‘Matt’ in her group than women); and black employees at Facebook recount being treated aggressively by campus security and dissuaded by managers from participating in the activities of Black@, an internal employee resource group.

Discriminatory practices lead to discriminatory tools.

A growing number of researchers argue that some biased systems are so wrong that they do not deserve to be ‘fixed’. Machine learning methods to detect sexual orientation, for example, simply should not be built at all.

Unfortunately, there’s no easy fix to a lack of diversity in this field or in any other. Despite decades of research, there has been little meaningful headway in remedying the diversity problems of industry or academia; in fact, diversity numbers have either declined or stagnated over the last decade. Evidence suggests that the focus on the pipeline problem (the flow of diverse candidates from school to industry) has not translated into meaningful action by the companies in question.

Framing diversity as a pipeline problem also tends to place the onus for solving discrimination on those who are discriminated against rather than on the perpetrators: the problem is portrayed as a lack of success by girls and women, rather than as the obstacles that masculine-dominated social institutions raise to women’s success.

It is long overdue for technology companies to directly address exclusion and discrimination in the workplace, but the current structures within which AI is developed and deployed work against any meaningful change. Those profiting from these systems are incentivised to accelerate their development and application without taking the time to build diverse teams or to test for disparate impacts; those most at risk lack access to the accountability mechanisms that would allow for legal appeals or redress.

Fortunately, new coalitions are beginning to form between researchers, activists, lawyers, technology workers and civil society organisations to support accountability and the ongoing monitoring of AI systems, but these coalitions need a commitment from technology companies to protect them.

The products of the AI industry already influence the lives of millions; addressing diversity issues is therefore not just in the interest of the technology industry, but of everyone whose lives are affected. ■
