<img src="https://trc.taboola.com/1321591/log/3/unip?en=page_view" width="0" height="0" style="display:none">

Fact Check with Logically.

Download the Free App Today

5 Examples of Biased Artificial Intelligence

Many companies now use AI systems to perform tasks and sort through data that formerly would have been assigned to human workers. While AI can be a helpful tool to increase productivity and reduce the need for people to perform repetitive tasks, there are many examples of algorithms causing problems by replicating the (often unconscious) biases of the engineers who built and operate them. Here are 5 examples of bias in AI:

1. Amazon’s Sexist Hiring Algorithm

In 2018, Reuters reported that Amazon had been working on an AI recruiting system designed to streamline the recruitment process by reading resumes and selecting the best-qualified candidates. Unfortunately, the AI had a serious problem with women: it had been trained on a decade of resumes submitted to the company, most of which came from men, so it learned to replicate Amazon’s existing hiring practices, biases and all.

The AI penalized resumes that included the word “women’s,” as in “women’s chess club captain,” marking them down on its scoring system. Reuters learned that “In effect, Amazon’s system taught itself that male candidates were preferable.” Rather than helping to iron out the biases present in the recruitment process, the algorithm simply automated them. Amazon confirmed that it had scrapped the system, which a team at its Edinburgh office had begun developing in 2014. None of the engineers who worked on the algorithm wanted to be identified as having done so.
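As a minimal illustration of the mechanism (a toy sketch with invented data, not Amazon’s actual system), the snippet below trains a simple text classifier on historical hiring decisions that skew against resumes mentioning “women’s.” The model dutifully learns a negative weight for that token, because the bias is already baked into the labels it is given.

```python
# Toy sketch only: invented resumes and labels, not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer chess club captain",
    "software engineer rugby team member",
    "software engineer women's chess club captain",
    "software engineer women's coding society lead",
]
hired = [1, 1, 0, 0]  # labels reproduce a biased historical hiring process

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model
# has simply encoded the bias present in its training labels.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(round(weights["women"], 3))
```

In a real system the same dynamic plays out at a far larger scale, which is why auditing the training data matters as much as auditing the model.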


2. Flawed Criminological Software

COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions) is an algorithm used in state court systems throughout the United States. It is used to predict the likelihood of a defendant reoffending and acts as a guide when sentences are handed down. ProPublica analyzed the COMPAS software and concluded that it is “no better than random, untrained people on the internet.”

Equivant, the company that developed the software, disputes the claim that the program is biased. However, the analysis found that the algorithm overestimated the likelihood of black defendants reoffending while underestimating the likelihood for white defendants. Black defendants who did not reoffend were almost twice as likely to be misclassified as higher risk (45 percent) as their white counterparts (23 percent).
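Those headline numbers describe a gap in false positive rates: among defendants who did not go on to reoffend, the share wrongly flagged as higher risk differed sharply by race. A minimal sketch of that calculation, using invented records rather than ProPublica’s actual dataset, looks like this:

```python
# Toy sketch: invented records, not ProPublica's dataset.
# Each record is (group, flagged_high_risk, actually_reoffended).
records = [
    ("black", True,  False),
    ("black", True,  False),
    ("black", False, False),
    ("black", True,  True),
    ("white", True,  False),
    ("white", False, False),
    ("white", False, False),
    ("white", True,  True),
]

def false_positive_rate(group):
    # Among members of `group` who did NOT reoffend, what share were
    # nevertheless flagged as high risk?
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
```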


3. Facebook’s Ad Algorithm

In 2019, Facebook was found to be allowing its advertisers to deliberately target adverts according to gender, race, and religion, all of which are protected classes under U.S. civil rights law. Job adverts for roles in nursing or secretarial work were suggested primarily to women, whereas job ads for janitors and taxi drivers had been shown to a higher number of men, in particular men from minority backgrounds. The algorithm also learned that ads for real estate attained better engagement when shown to white people, with the result that they were no longer shown to minority groups.

This issue stems from how the machine learning system behind the ad platform learns. As is the nature of such algorithms, the platform formed a pattern from the data it was given, but that pattern reflected existing societal inequalities and, left unchecked, would have helped to propagate them further. In response to these findings, a Facebook spokesperson said that the company had “made important changes to our ad-targeting tools and know that this is only a first step,” but it was unable to avoid charges brought by the U.S. Department of Housing and Urban Development for violating the Fair Housing Act.
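As a rough sketch of how engagement-optimised delivery can skew along demographic lines (with invented numbers, not Facebook’s actual system): if historical click-through rates differ between groups even slightly, a rule that simply maximises expected clicks will pour impressions into the group with the higher measured rate.

```python
# Toy simulation: a delivery rule that maximises expected clicks based on
# historical click-through rates (CTRs). All numbers are invented.
historical_ctr = {
    "group_a": 0.031,  # marginally higher measured engagement in the past
    "group_b": 0.027,
}

def deliver(impressions):
    # Greedy delivery: every impression goes to the group with the higher
    # historical CTR, so a small measured gap becomes total exclusion.
    counts = {group: 0 for group in historical_ctr}
    for _ in range(impressions):
        best = max(historical_ctr, key=historical_ctr.get)
        counts[best] += 1
    return counts

print(deliver(10_000))  # {'group_a': 10000, 'group_b': 0}
```

Real ad systems are far more sophisticated than this, but the underlying feedback loop, in which yesterday’s engagement patterns decide who sees tomorrow’s ads, is the same.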


4. Racism in U.S. Healthcare Allocation

In 2019, a team from the University of California, Berkeley discovered a problem with an AI system used to allocate care to roughly 200 million patients in the U.S., which resulted in black patients receiving a lower standard of care. Across the board, black patients were assigned lower risk scores than white patients, despite being statistically more likely to have comorbid conditions and therefore to be at higher risk. As a result, they were less likely to qualify for the extra care they needed and more likely to suffer adverse effects from having been denied it.

The problem stemmed from the system using the predicted cost of healthcare as the variable that determined risk. Because less money was historically spent on the care of black patients with the same level of need, the AI effectively learned that they required a lower standard of care. Having made this discovery, the UC Berkeley team worked with the company responsible for the tool to find variables other than cost on which to base the risk scores, reducing bias by 84 percent.
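A minimal numerical sketch of that proxy problem (invented figures, not the tool the Berkeley team studied): when patients are ranked by predicted spending rather than by illness, a group that historically had less money spent on its care is pushed down the priority list even when it is sicker.

```python
# Toy sketch of proxy bias: ranking patients by healthcare cost instead of
# by underlying illness. All figures are invented.
patients = [
    # (id, group, chronic_conditions, annual_cost_usd)
    ("p1", "white", 3, 12_000),
    ("p2", "white", 2,  9_000),
    ("p3", "black", 3,  7_000),  # same illness burden as p1, lower spending
    ("p4", "black", 4,  8_500),  # sickest patient, modest spending
]

def top_priority(key, n=2):
    """Return the n patient IDs ranked highest by the given key."""
    return [p[0] for p in sorted(patients, key=key, reverse=True)[:n]]

# Proxy label (cost): the sicker black patients rank below healthier
# white patients because less was spent on their care.
print(top_priority(key=lambda p: p[3]))  # ['p1', 'p2']

# Health-based label (chronic conditions): the ranking now tracks need.
print(top_priority(key=lambda p: p[2]))  # ['p4', 'p1']
```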


5. An App for (Some) Girls

Early in 2020, an Australian screenwriter announced the launch of Giggle, a social networking app intended to let girls chat in small groups, or “Giggles,” relying on gender verification AI software to ensure that only girls could join. The platform was less than well-received, particularly on Twitter, where people drew comparisons between the software and the eugenicist practice of phrenology. The AI also automatically excluded many trans girls, who would have to contact the makers directly to have their gender verified if they wanted to use the app, a process that raised its own ethical conundrums as well as questions about how much thought the developers had given to the real-life application of their software. As of April 2020, the app had still not been officially launched.


The problems with some of these AI systems were easily solved, such as the healthcare tool that simply needed a better range of variables on which to base its conclusions. Other systems had to be scrapped altogether. In other instances, such as with Giggle, the problem was broader: whether AI is suited to the task at all, or whether other means would be more appropriate. In this way, the path to improving machine learning systems reflects the problems with the systems themselves, in that a one-size-fits-all approach is likely to fall short. Equally, AI does not necessarily exacerbate structural problems, but neither can it solve them on its own. AI is a powerful tool, but one whose in-built intelligence can only live up to that of the people who program it.
