The word 'bot' is splashed all over the internet. This 6-part series attempts to establish what bots actually are, and how you might recognise the misleading ones. In this fifth instalment, I try to identify the different actors creating and operating bot accounts, as well as their objectives.

NB: This article's header image is by John Noonan from Unsplash

To assess the impacts of bots on our socio-political processes, it's important to consider the intention, budget, and clout of those using bots in the first place. Researchers Badawy, Ferrara, & Lerman analysed the 'Digital Traces of Political Manipulation' on Twitter during the 2016 Russian interference campaign. Published this February, their report lists the range of possible culprits behind social media bot deployment. These include: "State- and non-state actors, local and foreign governments, political parties, private organisations, and even individuals with adequate resources." Any one of these actors "could obtain operational capabilities and technical tools to construct misinformation campaigns and deploy armies of social bots to affect the directions of online conversations."

At worst, this sounds terrifying; at best, alarmist. Yet it is undeniable that, for most internet-connected countries, social media platforms are becoming de facto extensions of the public square, arguably even replacements for it. The content we encounter online shapes our standpoints, stirs our emotions and, provided the point is either sufficiently striking or sufficiently repeated, directs our real-world actions. So let's look at concrete examples of who uses bots to disinform, where, and why.

Bots can be a valuable asset to financial entities such as banks, private equity firms, and pension funds. In January of this year, Prof Dr Dr Dietmar Janetzko addressed the Bundestag, Germany's federal parliament, to answer the question: Do social bots influence public opinion? Janetzko recalled a pre-bot phenomenon called "the pump and dump principle", whereby "on the basis of false data, a share is aggrandised before being resold by the culprits who made the false claims in the first place." He argued: "When financial service providers make recommendations based on a prevailing opinion in the social media, this opens the gates for share manipulation." In other words, one could drive the value of a company's shares up or down at will, simply by deploying a sufficient number of social media bots all spreading the same message about that company.
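To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The posts, the fictional ticker $ACME, and the sentiment scores are all invented; it simply shows how a naive 'prevailing opinion' signal, of the kind a sentiment-driven recommendation tool might compute, gets dragged in one direction by a flood of identical bot posts.

```python
from statistics import mean

# Invented posts mentioning a fictional ticker, $ACME.
# "score" stands in for any sentiment model: +1 bullish, -1 bearish.
organic_posts = [
    {"author": "trader_anna", "score": -1},
    {"author": "marketwatcher", "score": +1},
    {"author": "pension_pete", "score": -1},
]

# A bot operator injects 200 near-identical bullish posts from throwaway accounts.
bot_posts = [{"author": f"bot_{i:03d}", "score": +1} for i in range(200)]

def prevailing_opinion(posts):
    """Naive signal: average sentiment across posts, one vote per post."""
    return mean(p["score"] for p in posts)

print(f"Organic signal:     {prevailing_opinion(organic_posts):+.2f}")                # -0.33
print(f"After bot flooding: {prevailing_opinion(organic_posts + bot_posts):+.2f}")    # +0.98
```

The point is less the arithmetic than the incentive: any signal that weights raw post volume can be nudged by whoever controls the most accounts.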

In addition to being used to manipulate markets, bots can effectively influence public opinion, and media coverage in the last two years has focused on this consequence of bot activity fairly intently. An innovative example of bot-driven opinion warfare came last year, ahead of the British general election. Within the UK's leftwing Labour party, a handful of computationally savvy campaigners had created a chatbot, which they launched on Tinder in June 2017. The bot was designed to encourage hesitant or unengaged young Britons to vote for the Labour representative in their constituency. The 'Tinderbot' sent "somewhere between 30,000 and 40,000 messages, targeting 18- to 25-year-olds in constituencies where the Labour candidates were running in tight races," explains Philip Howard of the Oxford Internet Institute, writing in IEEE Spectrum. "It's impossible to know precisely how many votes are won through social media campaigns, but in several targeted districts, the Labour Party did prevail by just a few votes."
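For illustration, the targeting Howard describes can be reduced to a simple filter. The sketch below is hypothetical throughout: the profile fields, the margin figures, and the five-point threshold are my own assumptions, not details of the campaigners' actual bot. It only shows the shape of the logic: message young users who sit in marginal seats.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    age: int
    constituency: str

# Illustrative winning margins (percentage points), not real figures.
MARGINS = {"Sheffield Hallam": 4.2, "Derby North": 0.1, "Maidenhead": 49.0}
MARGINAL_THRESHOLD = 5.0   # assumed cut-off for a "tight race"

def should_message(profile: Profile) -> bool:
    """Target young voters in constituencies where the race is close."""
    margin = MARGINS.get(profile.constituency)
    return (
        18 <= profile.age <= 25
        and margin is not None
        and margin < MARGINAL_THRESHOLD
    )

matches = [
    Profile("Alex", 22, "Derby North"),
    Profile("Sam", 41, "Derby North"),
    Profile("Priya", 23, "Maidenhead"),
]
to_contact = [p.name for p in matches if should_message(p)]
print(to_contact)   # ['Alex'] under these assumptions
```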

By now, much ink has been spilt on the subject of the Internet Research Agency [IRA], a St Petersburg-based company which conducts digital influence operations for Russian political and business interests. A series of leaked documents revealed that several such operations were orchestrated between 2014 and 2018, chiefly targeting the electoral processes of foreign nations such as Ukraine, Britain, and the USA. The IRA's staple service is the provision of human trolls, or 'bloggers', trained to carry out propaganda campaigns according to very specific requirements: a day's work involves maintaining six Facebook accounts which publish at least three posts a day; a month's work should yield 500 new subscribers. Should the trolls be assigned to Twitter, those figures rise to managing 10 accounts with up to 2,000 followers and publishing 50 tweets, all on a daily basis. As these were the IRA staffers' goals in 2014, it's probable that the numbers have changed since then. Still, they illustrate just how formalised a practice computational influencing has become, and they raise the question of what could be achieved were one to place several thousand automated accounts, requiring only minimal human supervision, in the hands of these 'bloggers'.

In 2017, the Oxford Internet Institute published a working paper by Sergey Sanovich of New York University, which focused on how digital misinformation and Russian computational propaganda interact. He notes that "over half of Twitter conversations originating in Russia include highly automated accounts", which push out "vast amounts of political content." Sanovich's findings also answer the question raised above about the possibilities opened up by 'hybridising' techniques during influence operations. In his conclusion, he describes how the Russian government "combined the ability of bots to jam unfriendly and amplify friendly content, and the inconspicuousness of [human] trolls posing as real people and providing elaborate proof of even their most patently false and outlandish claims." Put simply, a blend of automated accounts and professional imposters carried out far-reaching manipulation campaigns on social media, to further national interests at home and abroad.

Moreover, the Russian government deepened that symbiotic approach by synchronising those campaigns with the outputs of "various other information outlets, including Russian-branded TV broadcasts and web news, proxy civil society agencies, and web outlets." In other words, the Kremlin has developed an exceptionally holistic approach to information warfare — and there's no reason to think other nations or organisations aren't adopting similar strategies.

Indeed, Russia should not be considered the sole offender when it comes to malinformation. There are documented cases of bots being used to influence political outcomes in other countries, although these tend to be domestic rather than transnational operations. For instance, bots were used extensively and aggressively in Brazil: in this year's and 2014's general elections; during the 2016 presidential impeachment campaign; and during the 2016 Rio de Janeiro mayoral race.

Protests in Salvador, Brazil against newly installed president Michel Temer in autumn 2016, after his bot-enabled impeachment campaign ousted Dilma Rousseff [Photo by Felipe Correia on Unsplash]

The state-ordered murder of journalist, US resident, and Saudi dissident Jamal Khashoggi in Saudi Arabia's Istanbul consulate last October caused an international outcry. Within days of Khashoggi's killing, networks of bots ("botnets") began behaving identically and systematically on Twitter to amplify pro-Saudi talking points. Josh Russell, an info-tech specialist, approached NBC in October with a spreadsheet showing how hundreds of accounts had "tweeted and retweeted the same pro-Saudi government tweets at the same time." By the time NBC presented this evidence to Twitter, pro-Saudi tweets such as "We all trust Mohammed bin Salman" or "Unfollow enemies of the nation" [#الغاء_متابعه_اعداء_الوطن] had already begun to trend heavily and abruptly.
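Russell's spreadsheet approach, matching identical text posted by many accounts at the same moment, can be approximated in a few lines of code. The sketch below is a simplified illustration rather than his actual method: the sample tweets, the ten-minute window, and the three-account threshold are all assumptions. It groups posts by their text and a coarse time bucket, then flags any group shared by suspiciously many distinct accounts.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical input: (account, text, timestamp) triples,
# e.g. exported from a platform API or a scrape.
tweets = [
    ("acct_001", "We all trust Mohammed bin Salman", datetime(2018, 10, 14, 9, 2)),
    ("acct_042", "We all trust Mohammed bin Salman", datetime(2018, 10, 14, 9, 3)),
    ("acct_317", "We all trust Mohammed bin Salman", datetime(2018, 10, 14, 9, 3)),
    ("acct_099", "Lovely weather in Riyadh today",   datetime(2018, 10, 14, 9, 4)),
]

WINDOW_MINUTES = 10   # assumed coordination window
MIN_ACCOUNTS = 3      # assumed threshold for "suspiciously many"

def time_bucket(ts, minutes=WINDOW_MINUTES):
    """Crudely round a timestamp down to the start of its window."""
    return ts.replace(minute=(ts.minute // minutes) * minutes, second=0, microsecond=0)

# Group accounts by (identical text, shared time window).
groups = defaultdict(set)
for account, text, ts in tweets:
    groups[(text.strip().lower(), time_bucket(ts))].add(account)

# Flag clusters of distinct accounts posting the same text at the same time.
for (text, bucket), accounts in groups.items():
    if len(accounts) >= MIN_ACCOUNTS:
        print(f"{len(accounts)} accounts posted '{text}' around {bucket:%H:%M}")
```

On real data the thresholds matter: genuine users do retweet the same viral post within minutes of one another, so analysts generally look for accounts that co-post repeatedly across many such clusters before calling a network coordinated.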

A few weeks later in November, the FBI tipped Facebook off to "accounts that appeared to be engaged in coordinated inauthentic behaviour." Among the 36 Facebook accounts, 6 pages, and 99 Instagram accounts, there were 12 Instagram accounts which posted mostly in French to their 76,000 followers. Given the past four weeks of intense protests across France, it's worth taking a closer look at which themes these twelve inauthentic accounts touched upon: immigration, religion, football, ecology, race, and politics both domestic and international. Analysts from the Digital Forensic Research Lab [DFRLab] pored over the dataset, and realised that the accounts' behaviour bore comparison with "earlier Russian troll operations, which used existing social concerns and tensions to promote division and attack specific politicians in various countries, especially the U.S. [emphases my own]."

Put plainly: if a troll user posted about football, this often came with dog whistles aimed at the radical, sometimes violent 'ultra' fans, or with heavily nationalist undertones. If posting about politics, the message was consistently anti-Macron and, again, inflected with nationalism. If posting about black or Muslim women in France (@les_femmes_musulmanes was the most popular of all 12 accounts, with 34,100 followers), this might include an image of two black women in bikinis, accompanied by an unusual assortment of hashtags: #immigrationlawyer, #h1bvisa, #L1visa, #k3visa, #marriagevisa, and #blackexcellence. Or, in an image posted by a 'proud Cameroonian woman' showing a bride and her bridesmaids, you might find, nestled among the hashtags, #blacksupremacy.

George King, a data scientist and senior research fellow at the Tow Centre, has come across this phenomenon in his own work on inauthentic Twitter accounts. King described it to Logically as "the way in which extreme content gets mixed in with more everyday fare." It's a tactic which has arguably stretched the boundaries of public discourse, normalising explicitly extreme, divisive, or segregationist content online.

Overall, the French-language operation does indeed recall the 2016 US elections. Back then, Russian-funded troll accounts such as Blacktivist or South United used pre-existing racial narratives to stoke animosity and further entrench communities in their sense of victimhood, dispossession, or entitlement. In the French case, it's estimated that this network (whether bot- or human-operated) reached "135,000 users at a very minimum." The accounts achieved such reach through 'audience building', a practice where, for instance, users include hashtags like #follow4follow or #like4like to signal their willingness to trade in social media's chief currency: popularity. Audience building is also an excellent way to rapidly and inconspicuously grow an apparently grassroots network.
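Because follow-trading hashtags are so formulaic, they also double as a crude detection signal. The snippet below is a small illustrative sketch, with an assumed hashtag list, threshold, and sample posts: it measures what share of an account's recent posts carry audience-building tags, on the reasoning that a high proportion suggests the account is optimising for reach rather than conversation.

```python
import re

# Common audience-building tags; an analyst would extend this list.
AUDIENCE_TAGS = {"#follow4follow", "#followforfollow", "#like4like",
                 "#likeforlike", "#f4f", "#l4l"}

def audience_building_ratio(recent_posts):
    """Fraction of posts containing at least one follow-trading hashtag."""
    if not recent_posts:
        return 0.0
    hits = 0
    for post in recent_posts:
        hashtags = {tag.lower() for tag in re.findall(r"#\w+", post)}
        if hashtags & AUDIENCE_TAGS:
            hits += 1
    return hits / len(recent_posts)

# Hypothetical account history.
posts = [
    "Sunset over Marseille #travel #like4like #follow4follow",
    "New video up! #f4f #followforfollow",
    "Thoughts on tonight's match?",
]

ratio = audience_building_ratio(posts)
if ratio > 0.5:   # assumed threshold
    print(f"Possible audience building: {ratio:.0%} of posts use follow-trading tags")
```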

As yet, there has not been any research into the concrete role these 12 accounts played in the 'yellow vests' protests, which on a peak day set 282,710 people marching, parts of France on fire, and president Emmanuel Macron back-pedalling. In any case, such a correlation would be difficult to assess, given that much of the 'yellow vests' organising happened in closed or secret Facebook groups. Last week, on an episode of the Le Super Daily podcast, a French social media specialist commented that "weeks of social media mobilisation preceded the yellow vests' physical mobilisation [emphasis my own]", with Facebook serving as "the digital breeding ground for their movement."

In sum, there has been a sharp increase in the institutional use of bots on social media in the past two years. The Oxford Internet Institute observed that the number of countries engaged in "formally organised social media manipulation campaigns" had jumped from 28 nations last year to 48 in 2018. The report found that each country had at least "one political party or government agency using social media to manipulate public opinion domestically." Additionally, the evidence points to some of these campaigns being built as a defensive response to "junk news and foreign interference", while others were squarely on the offensive, "spreading disinformation during elections".

This may seem overwhelming: states, private companies and random individuals, all capable of programming bots to move among us, emote like us, talk like us. But there is one silver lining: bots aren't picture-perfect imitations of humans just yet. Thankfully, there are a dozen tell-tale signs which can alert a social media user to the presence of a bot. Want to know what those are? It's all listed in Part 3.

The next instalment of the series is Part 6: Where next for bots?