The word 'bot' is splashed all over the internet. This 6-part series attempts to establish what bots actually are — and how you might recognise the misleading ones. In this fourth instalment, Bot Power, we first look at bots' impact on social media dynamics, before exploring their target audiences and most receptive demographics.

NB: This article's header image is by Pietro Jeng from Unsplash.

a. Impact

In March 2017 a study on 'Online Human-Bot Interactions' analysed a sample of Twitter accounts, drawing on over a thousand features to pick up on bot characteristics. The machine learning model — called a classifier — grouped those features into six areas: timing of activity, network patterns, tweet content, tweet sentiment, user metadata, and friend metadata. The study posited that bots made up between 9 and 15 per cent of the nearly 14 million accounts analysed. Other research puts the share of bots among Twitter users at between 4.9 and 6.2 per cent.
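To make that classification approach concrete, here is a minimal, purely illustrative sketch of a feature-based bot classifier. The feature choices, the synthetic data, and the random-forest model are assumptions made for this example; the study's real classifier drew on more than a thousand features.

```python
# Illustrative sketch only, not the study's code: train a classifier on
# six toy features, one per category named in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented features: timing, network, content, sentiment,
# user metadata, friend metadata.
X = rng.normal(size=(1000, 6))          # 1,000 synthetic accounts
y = rng.integers(0, 2, size=1000)       # label 1 = bot (synthetic)
X[y == 1, 0] += 1.5                     # bots get a higher "tweets per hour"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Even this toy version captures the core idea: no single feature gives a bot away, but a model trained across many behavioural signals can separate the two populations reasonably well.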

That's the putative state of affairs for unofficial bot accounts. What about those bots that are above-board? In 2016 the business software provider Oracle conducted a survey asking 800 professionals across Europe, the Middle East, and Africa about their official, commercial bot usage. Thirty-six per cent reported that their brand already used customer-facing bots; that's more than one in three of the companies surveyed. Eighty per cent of respondents, meanwhile, said they expected their business to serve customers via bots by 2020.

MIT's Sloan School of Management led another study into bots' effectiveness. 'The Impact of Bots on Opinions in Social Networks' analysed 2.3 million tweets, focusing exclusively on Twitter activity surrounding the second Trump-Clinton debate in 2016. Of the approximately 78,000 users sampled, the researchers identified 396 as likely bot accounts. They applied the DeGroot model, which treats each individual's opinion as a weighted average of the opinions in their network, updated repeatedly over time. True, 396 users represented less than 1 per cent of the network. But Tauhid Zaman, the study's lead, concluded that "a small number of very active bots can actually significantly shift public opinion – and despite social media companies’ efforts, there are still large numbers of bots out there, constantly tweeting and retweeting, trying to influence real people who vote."
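To see why a handful of stubborn accounts can move a whole network, here is a minimal DeGroot-style simulation. It is a sketch under my own assumptions (network size, the random follow graph, and bots fixed at opinion +1 are all invented for the example), not the study's code.

```python
# Toy DeGroot dynamics: humans repeatedly average the opinions of the
# accounts they follow; bots are stubborn nodes that never update.
import numpy as np

rng = np.random.default_rng(1)
n_humans, n_bots = 1000, 10            # bots are roughly 1% of the network
n = n_humans + n_bots

# Random "who listens to whom" matrix; rows normalised so each user's
# next opinion is a weighted average of those they follow.
W = (rng.random((n, n)) < 0.02).astype(float)
np.fill_diagonal(W, 1.0)               # everyone keeps some self-weight
W /= W.sum(axis=1, keepdims=True)

opinions = rng.uniform(-1.0, 1.0, size=n)   # humans start near a mean of 0
opinions[n_humans:] = 1.0                   # bots all push opinion +1

for _ in range(200):
    opinions = W @ opinions            # DeGroot update: average your feed
    opinions[n_humans:] = 1.0          # stubborn bots never move

print(f"mean human opinion: {opinions[:n_humans].mean():+.3f}")
```

Because the bots never budge while everyone else averages, the human population is dragged steadily towards the bots' position, which mirrors the mechanism Zaman's team quantified.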


Bots most likely played a part in Britain's 2016 Brexit vote, too. In May 2018, the National Bureau of Economic Research presented calculations estimating that bot activity on social media increased the 'Leave' vote by approximately 1.76 percentage points. The study, called 'Social media, sentiment and public opinions: Evidence from #Brexit and the #USElection', also stated that bot activity accounted for 3.23 per cent of the actual Trump vote in the U.S. presidential race. That's more than 2 million ballots (3.23 per cent of Trump's roughly 63 million votes) — cast by real people — influenced by imaginary social media users. Academics Yuriy Gorodnichenko, Tho Pham, and Oleksandr Talavera concluded that "the effect of bots was likely marginal, but possibly large enough to affect voting outcomes in the two elections."

The researchers noticed two more intriguing things. The first: "bots have a tangible effect on the tweeting activity of humans, but the degree of their influence depends on whether they provide information consistent with humans’ priors [emphasis my own]." Practically, this means a pro-choice Facebook user might swiftly dismiss a conservative bot calling for the repeal of Roe v. Wade, the US's landmark abortion ruling of 1973. But if that same user encountered a bot claiming that a nearby Planned Parenthood centre had been attacked, they might rush to share that news, or at least give it enough credence to verify it on other outlets.

It doesn't end there. The study found that "a message with positive (or negative) sentiment generates another message with the same sentiment." In essence, they're describing a snowball effect, one designed to amplify a specific feeling towards a topic. Granted, it's no revelation that social media is algorithmically designed to make sentiments balloon. But here, we're talking about human behaviour — about how porous we are to one another's feelings. And this documented emotional snowball effect underpins the role bots might play in a digital misinformation tactic called astroturfing. Dr Stefan Stieglitz defines astroturfing as "creating the impression that a vast majority is in favour of a certain position. Political campaigns are therefore disguised as spontaneous ‘grassroots’ behaviour" despite being "carried out by a single person or organisation." Now, what if that single person or organisation, rather than manually (and insincerely) posting emotive, opinionated, or deceptive content, used a bot — or an army of bots? We would then be faced with a system that induces real users into mimicking the feelings displayed by fake accounts. And with bots to hand, this can be done at scale.
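As a toy illustration of that snowball, consider the sketch below. The mirroring rule (each user copies the sentiment of one randomly chosen visible post) is my own invention, not anything from the study; the point is only to show how a small block of relentlessly positive bots drags the whole feed's sentiment upwards.

```python
import random

def mean_sentiment(n_humans=500, n_bots=0, rounds=50, seed=42):
    """Toy cascade: each round, every human mirrors the sentiment (+1 or -1)
    of one randomly chosen visible post. Bots always post +1."""
    rng = random.Random(seed)
    posts = [rng.choice([+1, -1]) for _ in range(n_humans)]   # round 0
    for _ in range(rounds):
        visible = posts + [+1] * n_bots     # bots flood the feed with +1
        posts = [rng.choice(visible) for _ in range(n_humans)]
    return sum(posts) / len(posts)

print("no bots:", mean_sentiment(n_bots=0))    # wanders near its start point
print("25 bots:", mean_sentiment(n_bots=25))   # bots are 5% of the feed
```

With no bots, the feed's average sentiment drifts aimlessly; with bots making up just 5 per cent of visible posts, it is pulled decisively positive. That asymmetry is the whole appeal of automated astroturfing.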

Politics aside, social media bots can also influence economic dealings in a number of ways, particularly within financial markets. Currently, some financial service providers base at least part of their buying recommendations on prevailing opinions on social media. Manipulating the value of a company's shares could be as simple as releasing a sufficient number of social media bots, all spreading the same message about that company. In 2017 a number of publications scrutinised how trading bots — the computers programmed to buy or sell shares systematically — represented a huge portion of stock market activity. However, more research needs to be done on the relationship between social media bot activity and financial markets.
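To make that manipulation route concrete, here is a deliberately naive sketch of the kind of sentiment-driven signal described above. Everything in it (the scores, the thresholds, the decision rule) is invented for illustration; real providers' models are proprietary and far more complex.

```python
def signal(scores, buy=0.2, sell=-0.2):
    """Naive rule: average the sentiment scores of recent posts about a
    company and turn the average into a recommendation."""
    avg = sum(scores) / len(scores)
    if avg > buy:
        return "BUY"
    if avg < sell:
        return "SELL"
    return "HOLD"

organic = [0.2, -0.6, -0.5, -0.4, 0.1, -0.5]   # mildly negative chatter
print(signal(organic))                          # -> SELL

bot_posts = [0.9] * 20                          # 20 identical upbeat bot posts
print(signal(organic + bot_posts))              # -> BUY
```

In this toy setup, twenty coordinated posts are enough to flip the recommendation outright; the cheapness of that flip is precisely why the relationship between social bots and markets deserves closer study.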

b. Who is most targeted, or most susceptible?


The question of who is most targeted by or susceptible to bots is thorny territory, and one from which it's almost impossible to abstract politics. Until now, the bulk of research into social media bots has sought to understand either their origin, or the socio-political consequences of their online activity. To do so, most investigations have been conducted along familiar political lines. For example, a paper published in February of this year by the University of Southern California analysed the habits of 215 bot accounts, as well as their reach across Twitter. Researchers Badawy, Ferrara and Lerman observed that there were "about four times as many Russian Trolls posting conservative views as liberal ones," while conservative bots "produced almost 20 times more content" than the liberal ones. They also estimated that, overall, approximately 4.9 per cent of liberal users and 6.2 per cent of conservative users on Twitter are bots.

Badawy, Ferrara and Lerman went beyond analysing the behaviour of bot accounts. They examined how (human) social media users interacted with bot-generated content — and what those users' political leanings might be. Their sample found that conservatives retweeted bot-generated content "about 30 times more [...] than liberals." This led them to conclude: "Misinformation (produced by Russian trolls) was shared more widely by conservatives than liberals on Twitter."

Now, this doesn't entail that conservatives are unequivocally more likely to spread disinformation (although there is more evidence to that effect). Rather, it suggests that they are targeted more frequently and intensely by those manning bots, and are perhaps more likely to believe the inflammatory or spurious content that bots generate. In January of this year economics professors Hunt Allcott of NYU and Matthew Gentzkow of Stanford released a paper which made two assertions. The first: in the 2016 presidential election "fake news was both widely shared and heavily tilted in favour of Donald Trump." The second: "Democrats are overall more likely to correctly identify true versus false articles." Granted, the weight of evidence seems to indicate that right-wing audiences are more acutely susceptible to malinformation, be it bot-generated or otherwise. Yet there are diverging voices.

For instance, it's worth highlighting the work of the MIT researcher Tauhid Zaman and his colleagues Zakaria el Hjouji, D. Scott Hunter, and Nicolas Guenon des Mesnards. Their study, 'The Impact of Bots on Opinions in Social Networks', found that pro-Clinton bot accounts posted more frequently, and shifted opinions more strongly, than pro-Trump bots in their Twitter sample. "We find that the bots produce a significant shift in the opinions, with the Clinton bots producing almost twice as large a change as the Trump bots, despite being fewer in number. [...] The asymmetry in the opinion shift is due to the fact that the Clinton bots post 50% more frequently than the Trump bots [both emphases my own]."

If we leave the US-specific political divide behind, what emerges is more threatening still. Across the globe, citizens are being targeted, especially during electoral periods or in times of political decision-making. Bot-enabled disinformation campaigns have affected the bodies politic of Brazil, the US, Britain, Lithuania, Latvia, Mexico, Saudi Arabia, and Turkey, among others.

There have even been documented cases of bots being used to target active-duty military personnel as well as veterans, although the researchers noted that the "sophisticated behaviour of troll and bot accounts makes precise disambiguation of these two categories difficult", which leaves us unsure how much of the targeting was manual and how much automated. Confusion about what makes an account authentic plays into the hands of those programming bots, and as Natural Language Generation technologies progress, "precise disambiguation" between human- and computer-operated accounts will become harder to achieve. Public worry about this issue was compounded two months ago by the Pew Research Center's Journalism and Media enquiry, 'Social Media Bots Draw Public's Attention and Concern'. It found that only 47 per cent of Americans were "somewhat confident" they could distinguish social media bots from real humans.

In sum: if you are a real person, active on social media, you should be aware that both your real-world personhood and your digital activity represent a variety of gains for third parties. Brands, governments, militaries, private organisations and lone individuals can accrue such gains cheaply and quickly by using bots. These gains include your vote, your capital (think of the influencer fraud that bots enable), your identity, your attention or "eyeballs", and your clicks. From a content quality perspective, bots are game-changers because they make it possible to circulate malinformation at inhuman speeds, and their networks can be programmed to amplify certain types of "news" systematically and at vast scale.

The next instalment of this series is Part 5: Who's Behind Bots?