Bias, by definition, refers to an inclination or prejudice for or against a person or group, especially in a way considered to be unfair. Artificial intelligence is not itself biased; however, it is almost impossible for the people who build AI systems to avoid introducing some degree of bias. Human perception is partly responsible for this, because the opinions of individuals are reflected, consciously and unconsciously, throughout many AI systems.

Here are some examples of biased AI:

1. Tay


Tay is the name of a Microsoft Twitter chatbot (named after the acronym “thinking about you”) released on March 23, 2016. The chatbot was designed to learn from interacting with people on Twitter, whilst mimicking the language patterns of a 19-year-old American girl.

Controversy struck when the bot started to post offensive content, forcing Microsoft to shut Tay down only 16 hours after its launch. Two days after its release, Microsoft confirmed the bot had been taken offline and published an official apology on its blog.

A chatbot is a form of artificial intelligence that conducts a conversation via auditory or, in this case, textual methods, convincingly simulating how a human would respond. To function, a chatbot requires input from users. Twitter ‘trolls’ took advantage of Tay's "repeat after me" capability by deliberately feeding it offensive messages, resulting in inflammatory and racist outputs from Tay.
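
To illustrate the mechanism in rough terms, here is a hypothetical sketch (not Microsoft's actual code): a bot that naively stores user phrases and replays them later will surface whatever it is fed, offensive or not, once enough users coordinate to feed it abuse.

```python
import random


class NaiveEchoBot:
    """A toy bot that learns replies directly from unfiltered user input."""

    def __init__(self):
        self.learned_phrases = []

    def learn(self, user_message: str) -> None:
        # No moderation step: every message goes straight into the reply pool.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from whatever users have taught the bot.
        return random.choice(self.learned_phrases) if self.learned_phrases else "Hi!"


bot = NaiveEchoBot()
bot.learn("good morning everyone")
bot.learn("<coordinated offensive message>")  # trolls flood the bot with abuse
print(bot.reply())  # may repeat the offensive phrase verbatim
```

Without a filtering or moderation layer between "learn" and "reply", the bot's output quality is only ever as good as the worst input it receives.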


2. COMPAS


COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions) is an algorithm used in state court systems throughout the United States. It predicts the likelihood of a criminal reoffending, acting as a guide when criminals are being sentenced. ProPublica analysed the COMPAS software and criticised it, with one assessment describing it as “no better than random, untrained people on the internet“.

Equivant, the company that developed the software, disputes that the programme is biased. However, the statistical results it produces arguably contain a bias: the system predicts that black defendants pose a higher risk of reoffending than is truly representative, while suggesting that white defendants are less likely to reoffend than they actually are. Black defendants were almost twice as likely to be misclassified as higher risk of reoffending (45%) compared with their white counterparts (23%).
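
To make that disparity concrete, the sketch below computes a per-group false positive rate from hypothetical confusion-matrix counts chosen only to mirror the 45% and 23% figures quoted above; they are illustrative, not ProPublica's raw numbers.

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Share of people who did NOT reoffend but were labelled high risk."""
    return false_positives / (false_positives + true_negatives)


# Hypothetical counts per group among defendants who did not reoffend:
# (labelled high risk anyway, labelled low risk)
groups = {
    "black defendants": (450, 550),
    "white defendants": (230, 770),
}

for group, (fp, tn) in groups.items():
    print(f"{group}: false positive rate = {false_positive_rate(fp, tn):.0%}")
# black defendants: false positive rate = 45%
# white defendants: false positive rate = 23%
```

The point is that a tool can have similar overall accuracy for both groups while distributing its errors very unevenly, which is exactly the pattern ProPublica highlighted.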

3: Facebook’s Ads Algorithm


While the service was not set up to be biased toward anyone, bias has unfortunately presented itself. Ads have been tailored and assigned to Facebook users depending on their demographic background. With regard to race and gender, jobs such as nurse, secretary and preschool teacher were ‘coincidentally’ suggested primarily to women, whereas job ads for janitors and taxi drivers were shown to a higher proportion of men, particularly men from minority groups. Ads for real estate were said to attract more engagement when shown to white people, with the result that they were no longer shown to minority groups.

This issue stems from how the AI learns. As is the nature of machine learning algorithms, the ad platform formed patterns from the data it was given, but that data allowed bias to slip through, making the AI unreliable when tailoring ads to certain groups of people. In response to these findings, a Facebook spokesperson said the company had “made important changes to our ad-targeting tools and know that this is only a first step.” In spite of this, the company has already been sued by the US Department of Housing and Urban Development for violating the Fair Housing Act, as advertisers were able to target ads based on race, gender and other characteristics.
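
As a rough, assumed illustration of the mechanism (not Facebook's actual targeting system), the sketch below shows how an optimiser that simply follows historical click patterns will reproduce any skew those patterns contain.

```python
from collections import Counter

# Hypothetical historical click log: (ad category, demographic of the clicker)
click_log = [
    ("nursing job", "women"), ("nursing job", "women"), ("nursing job", "men"),
    ("janitor job", "men"), ("janitor job", "men"), ("janitor job", "women"),
]


def preferred_audience(ad_category: str) -> str:
    # Target whichever demographic clicked this category most in the past.
    clicks = Counter(demo for cat, demo in click_log if cat == ad_category)
    return clicks.most_common(1)[0][0]


print(preferred_audience("nursing job"))  # women
print(preferred_audience("janitor job"))  # men
```

Nothing in the logic mentions gender explicitly, yet the optimiser still ends up showing nursing ads mostly to women and janitor ads mostly to men, encoding the historical stereotype rather than correcting it.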


4: Gender or Eyeliner


Gender classification using iris information is a relatively new topic, and iris classification methods usually rely on texture. A number of research papers have looked into whether machine learning algorithms can work out someone's gender from a picture of their iris. Some of these studies act as a prominent example of how data-driven bias in AI can skew results. Many of the datasets used to train the algorithms contained images of eyes both with and without eyeliner, which wasn’t accounted for. This meant it was never really possible to tell whether the algorithms were inferring gender from iris texture or from the presence of eye make-up. This is a clear example of how selection bias can influence the outcome of a study, or undermine it completely, and why preventative steps need to be taken when screening participants.
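
A toy example of the confound, using made-up numbers rather than any study's real data: if eyeliner correlates perfectly with the gender label in the training set, a model can score perfectly while ignoring iris texture altogether.

```python
# Each sample: (has_eyeliner, iris_texture_feature, label) -- all values invented.
training_data = [
    (1, 0.31, "female"), (1, 0.77, "female"), (1, 0.52, "female"),
    (0, 0.30, "male"),   (0, 0.79, "male"),   (0, 0.55, "male"),
]


def predict(has_eyeliner: int, iris_feature: float) -> str:
    # A trivial "classifier" that keys only on the confounded make-up feature.
    return "female" if has_eyeliner else "male"


accuracy = sum(
    predict(eyeliner, iris) == label for eyeliner, iris, label in training_data
) / len(training_data)
print(f"training accuracy: {accuracy:.0%}")  # 100%, yet iris texture was never used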

5: Facebook Translated 'Good Morning' to 'Attack Them'


Online translation services have, no doubt, made communication between languages much easier. Many have made translation apps a staple of travelling abroad, but there are times when reliance on smart technologies has got us into bother. In October 2017, Israeli police mistakenly arrested a Palestinian man after relying on automatic translation software. The man, a construction worker, had posted a picture of himself at a building site with a caption reading "good morning", which the service translated as "attack them". The police were notified of the post and arrested the man, believing he could be planning an attack. Luckily, after questioning they realised a mistake had been made, and the man was released hours later.


As we know, AI isn't born prejudiced, but it can be taught to be. In many cases AI only reflects our own unconscious biases, which are present in the data that it learns from. Facts can't be biased, but the data we collect often reflects the perceptions and prejudices that societies hold. When building machine learning systems, it's paramount that the training data is carefully reviewed to prevent prejudice and ensure equal opportunity.