
Machines aren't born biased, they are taught it.

NB: This article's header image is by Stephen Lilley, via Flickr.


Technological advancement has restructured how we perceive and interact with the world. As trust in humans erodes, we have fostered a dependency on machines (think of the panic when you can’t feel your phone in your pocket). The buzzword AI has been thrown around tech and non-tech communities alike, peddled as a knight in shining armour ready to rid the world of its problems. In fairness, artificial intelligence plays a central role in a great number of systems that shape the fabric of Western society - from government systems to aviation to telephone customer service.


But what is AI?

Vishal Maini, of Google's DeepMind, defines it as the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Although artificial intelligence has been around for decades, it has gained considerable momentum over the past few years due to rapid developments in computer processing power - allowing the technology to be used in fields that were previously unimaginable. You can unlock your phone with your face. Great, right? Yet the idea of a machine being perceptive is worrisome, especially when we put our faith in machines to make important decisions for us.


So, can machines be biased?

Well, not inherently - but the people who make them are. Human perception and bias are inextricably linked, and a large body of research in behavioural economics, social psychology, and cognitive science has catalogued the 150+ cognitive biases classified to date. A cognitive bias is defined as "a mistake in reasoning, evaluating, remembering, or other cognitive processes", and these biases can carry over into machine learning systems in a number of ways.


Data-driven bias

Data is at the heart of all artificial intelligence systems, and when datasets are skewed, the results will be skewed too. Studies into whether machine learning algorithms can decipher gender from a picture of someone’s iris are a prominent example. Many of the datasets used in these studies contained images of eyes with and without eyeliner, which wasn’t accounted for, so it was never really possible to tell whether the algorithms were inferring gender from iris texture or from the presence of eye makeup. It’s paramount that data is varied to prevent systemic bias, but it’s also essential that the datasets used to train machine learning algorithms do not contain confounding variables that cause algorithms to misidentify inputs and therefore distort outputs.
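
To make that failure mode concrete, here is a minimal, purely synthetic sketch (the data and the "eyeliner" flag are invented for illustration, not drawn from any real iris study). The "texture" features contain no gender signal at all, yet a model trained with the confounder included looks impressively accurate:

```python
# Hypothetical, synthetic demo of a confounding variable ("eyeliner")
# masquerading as signal. The iris "texture" features are pure noise;
# they carry no gender information whatsoever.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                     # label: 0/1 "gender"
texture = rng.normal(size=(n, 10))            # iris features: pure noise
eyeliner = (y == 1) ^ (rng.random(n) < 0.1)   # flag, ~90% correlated with label

X_confounded = np.column_stack([texture, eyeliner])

for name, X in [("with eyeliner flag", X_confounded),
                ("iris features only", texture)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{name}: test accuracy ~ {model.score(X_te, y_te):.2f}")

# Typical output: roughly 0.90 with the flag and 0.50 without it.
# The model was never reading the iris at all, only the makeup.
```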


Another point to note is the homogeneity of the data scientists training machine learning algorithms. The tech industry is male-dominated, and many of the studies that didn’t account for eye makeup were led by men. It could be argued that having a diverse team of data scientists can help alleviate such problems. Logically knows this all too well, which is why we spent over a year training our models with a diverse team, made up of people with varied ideological, social, economic, and cultural backgrounds, to prevent systemic and algorithmic bias.


Bias is also present in other kinds of AI systems, and in some cases algorithms are intentionally created with inherent bias. Any system aimed at personalisation will show people what they want to see, creating an algorithmic confirmation bias. Take Facebook, for example. Facebook’s newsfeed algorithm is designed to show you what you and your friends are interested in. You and your friends are most probably friends because you share similar traits and passions. So if your friends like and share posts that you’re inclined to agree with, a filter bubble forms, in which it’s hard to encounter views that oppose or differ from your own.
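
To see the mechanism, here is a toy sketch - emphatically not Facebook’s actual, proprietary algorithm - of a feed ranked purely by friends’ engagement (the friends and stories are made up):

```python
# Toy sketch of a personalised feed (not any real platform's algorithm):
# rank stories purely by how many of your friends engaged with them.
from collections import Counter

friend_likes = {
    "alice": ["story_a", "story_b"],
    "bob":   ["story_a", "story_c"],
    "carol": ["story_a", "story_b"],
}
all_stories = ["story_a", "story_b", "story_c", "story_d", "story_e"]

engagement = Counter()
for likes in friend_likes.values():
    engagement.update(likes)

# Stories your circle never engaged with (story_d, story_e) score zero
# and sink to the bottom; that feedback loop is the filter bubble.
feed = sorted(all_stories, key=lambda s: -engagement[s])
print(feed)  # ['story_a', 'story_b', 'story_c', 'story_d', 'story_e']
```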


Is Logically any different?

Logically’s platform does use artificial intelligence to show you the news that you want to see - but not based on your friends’ interests. We give you news from a diverse range of sources across the political spectrum to counter algorithmic filter bubbles, present you with opposing views, and broaden your perspective.
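
As an illustration of the idea only - this is not Logically’s production ranking code - one simple way to counter a filter bubble is to interleave stories from across the spectrum rather than rank purely by predicted interest:

```python
# Illustrative sketch (not Logically's production code): interleave
# stories from across the political spectrum so that no single
# viewpoint dominates the feed.
from itertools import zip_longest

stories_by_leaning = {
    "left":   ["left_1", "left_2"],
    "centre": ["centre_1", "centre_2"],
    "right":  ["right_1", "right_2"],
}

def diversified_feed(buckets):
    """Round-robin across leanings, skipping exhausted buckets."""
    feed = []
    for batch in zip_longest(*buckets.values()):
        feed.extend(story for story in batch if story is not None)
    return feed

print(diversified_feed(stories_by_leaning))
# ['left_1', 'centre_1', 'right_1', 'left_2', 'centre_2', 'right_2']
```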


Our diverse content analysis team has been rigorously trained to detect biases, logical fallacies, and tone of coverage, as well as to fact-check and verify information. Every judgment made by this team is corroborated by at least two other experts, and these corroborated judgments are then used to further enhance our automated systems.
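
A rough sketch of how such a corroboration rule might look in code (the function, labels, and agreement threshold here are hypothetical, not our internal tooling): a judgment only becomes training data once the original annotator and at least two corroborating experts agree.

```python
# Illustrative sketch of a corroboration rule (labels, names, and the
# threshold are hypothetical, not Logically's internal tooling): a
# judgment counts only if the annotator plus two other experts agree.
from collections import Counter

def corroborated_label(judgments, min_agreement=3):
    """Return the majority label if it has at least `min_agreement`
    votes; otherwise return None and withhold it from training."""
    label, votes = Counter(judgments).most_common(1)[0]
    return label if votes >= min_agreement else None

print(corroborated_label(["misleading", "misleading", "misleading"]))  # 'misleading'
print(corroborated_label(["misleading", "accurate", "misleading"]))    # None
```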


Perhaps AI systems will never be completely free from bias, but by maintaining such a thorough, multi-layered process for training and moderating our algorithms, we hope to create a platform that can advance civic discourse and reduce human bias, helping people make better-informed choices based on the news they read.
