In many ways, the 2016 US presidential election changed how we understand the nature of mis- and disinformation, and the threat it poses to democratic processes the world over. Although the concept was not new, 2016 saw the term 'fake news' career into the contemporary lexicon, on the back of claims that misinformation, driven by the ever-deepening penetration of social media, had swayed the election result.

In the two years since the election, numerous attempts have been made to comprehend and classify the implications of false news, and the motivations of the actors who create and spread it. A recent study by the Knight Foundation found that in the month prior to voting day, more than 6.6 million tweets linked to fake news or conspiracy publishers. This is especially significant in light of recent findings by the MIT Initiative on the Digital Economy, which show that false news is 70% more likely to be retweeted than news that is factually accurate.

On Facebook, a Buzzfeed News analysis found that in the three months before voting day, the top fake election stories on the platform generated more total engagement than the top stories from 19 major news outlets, including the New York Times, NBC News and the Washington Post.

Social media platforms and search engines have faced public scrutiny as they've been called to account for their role in facilitating the spread of false news. A particular kind of fear has been reserved for the spectre of foreign interference since January 2017, when the Office of the Director of National Intelligence issued a report outlining its assessment that Russia had conducted an influence campaign during the 2016 presidential election.

In response, the big players have moved to address mis- and disinformation on their platforms. Facebook announced a collaboration with third-party fact-checkers in December 2016 to identify and analyse stories flagged as false. In May 2018, the platform confirmed that it had disabled approximately 583 million fake accounts in the first quarter of the year, and on the eve of the midterms it said that it had removed 115 Facebook and Instagram accounts linked to election-related misinformation campaigns. Meanwhile, in November, Twitter announced that it had removed approximately 10,000 bot accounts involved in voter-suppression efforts between September and October. Yet despite these efforts, there is ongoing public debate about whether digital publishers are doing enough to combat misinformation on their platforms, and about who else, if anyone, is responsible for tackling the issue.

Just as attempts to understand the role played by mis- and disinformation in the 2016 elections have been prolonged by the complexity and intangible nature of the problem, it's likely that efforts to analyse this phenomenon in the context of the midterms will continue well into the future. However, in the wake of the election, there were several trends that quickly became apparent. Here's what they taught us about false news in 2018.

There were fewer 'fakes'

In the lead-up to the 2016 election, the most obvious examples of misinformation involved completely fabricated content: think articles suggesting that Pope Francis had endorsed Donald Trump's candidacy for president, or allegations, supposedly 'confirmed' by WikiLeaks, that Hillary Clinton had sold weapons to ISIS.

These 'fake' stories were motivated largely by a complex financial incentive structure that played into the ever-growing appetite of American readers for politically partisan news. Savvy content creators around the world clued in to this dynamic, realising that they could turn a significant profit from the Google ad revenue generated by visits to their websites, and so they produced articles with the express purpose of attracting clicks. Pieces that painted Donald Trump in a positive light, or that attacked his Democratic rivals, were especially popular, and especially lucrative. A Wired investigation into the fake news industry operating out of Macedonia found that between August and November 2016, one teenager earned almost US$16,000 from his two pro-Trump websites.

The 2018 midterm election period involved fewer examples of the viral, completely false stories that were so prevalent last time around. But as allegations of foreign influence campaigns, voter-suppression efforts and politically motivated rumours surface, it appears that the hyper-partisan themes of mis- and disinformation, and the tactics used to disseminate it, have evolved.

In an interview with CNN Business, Alexios Mantzarlis, director of Poynter's International Fact-Checking Network, argues: "The full-on viral 'fake news' of yore is playing a somewhat secondary role compared to (A) misinformation pushed by and for openly political purposes and (B) bizarro conspiracy theories emerging from messaging boards and getting amplified wittingly or unwittingly by folks on Facebook/Twitter."

Speaking to CBC News, Buzzfeed's Jane Lytvynenko reiterates this assessment, stating that the majority of online misinformation at the moment is not 'fake news' in the traditional sense, but rather hyper-partisan content that distorts factual pieces of information in order to serve a particular agenda. Such content is particularly problematic because, technically, it isn't false. This presents platforms and watchdog institutions with a range of challenges, all stemming from the fact that it is essentially impossible to verify an opinion. And while some argue that hyper-partisan content constitutes 'business as usual' for political communications, there is mounting evidence that it poses an unprecedented and insidious threat.

As the nature of misinformation has changed, so too have the key players. There has been a shift away from isolated fake news farms and towards groups of coordinated online actors who engage in what Data & Society describes as 'media manipulation': the strategic use of social media, bots and memes to increase the salience of their ideas, and the use of journalists and online influencers to spread content.

It's getting tangled

Media manipulation efforts have been especially apparent in the escalating interaction between hyper-partisan internet actors and mainstream political discourse. Buzzfeed's Charlie Warzel argues that political actors have begun to use online spaces as a source of obscure viral content that can be broadcast to their followers, while media outlets use the social media accounts of those political actors as a kind of assignment editor, turning posts and tweets into news stories that insert specific ideas and phrases into the public conversation.

The New York Times explores the way in which #JobsNotMobs arose from this feedback loop, travelling rapidly from obscure far-right online communities into mainstream political debate in the lead-up to the midterms.

On October 11, a video that cut together footage of protesters with footage of news anchors arguing against the use of the term 'mob' went viral on Twitter. The depiction of the 'mob' of protesters in this "supercut" video played into a recurring motif in far-right online spaces: that of the violent left. Soon after the video appeared, the phrase "jobs not mobs" began gaining traction on Twitter, with "jobs" referring to the current low levels of unemployment in the United States, credited by some to Donald Trump's presidency.

A key moment in the life of the phrase came when Scott Adams, creator of the popular "Dilbert" cartoon, tweeted favourably about it. From there, a screenshot of the tweet was uploaded to Reddit, along with a meme that featured an image of factory workers placed above a violent protest, with "Jobs Not Mobs" set on top.

Embedded Reddit post: "JOBS NOT MOBS", from r/The_Donald.

The meme and hashtag continued to gain popularity on Twitter and Reddit, before being picked up by prominent Facebook pages and shared by a range of pro-Trump activists and well-known conservatives, including former House Speaker Newt Gingrich.

On October 18, it was retweeted by Donald Trump himself, marking the culmination of its trajectory into mainstream political discourse. As the President repeatedly tweeted the slogan, and began incorporating it into his midterm campaign strategy, Reddit users celebrated their success.

#JobsNotMobs is an obvious and important example of how the link between hyper-partisan online groups and mainstream political discourse has strengthened over the past two years. As the Times notes, "with this meme, the far-right internet had found an opening for a new Republican talking point, molding it into a compact slogan and seeding it with the most powerful conservatives in the country."

Users on Reddit's r/The_Donald speculate on the success of the "Jobs Not Mobs" meme (image captured from screen).

It's moving behind closed doors

It's not just the nature of misinformation that has evolved since the presidential election. Early analysis of the midterms suggests that the actors propagating mis- and disinformation are adopting more coordinated and tactical approaches to its dissemination.

In the second instalment of his series, The Micro-Propaganda Machine, Jonathan Albright, of the Tow Center for Digital Journalism, finds that as Facebook has taken measures to address obvious examples of mis- and disinformation on its News Feed and Pages in the wake of the 2016 election, bad actors have migrated into closed Groups. Here they have engaged in "shadow organising" activities—coordinating influence operations and seeding conspiracy theories privately, before pushing these messages out onto the rest of the platform. Albright argues: "Groups represent a huge tactical shift for influence operations on this platform. They allow bad actors to reap all of the benefits of Facebook...with few—if any—of the consequences that might come from doing this on a regular Page, or by sharing things out in the open."

Facebook allows users to choose one of three privacy settings when creating a Group: Public, Closed or Secret. Public Groups are available for anyone to find, view and join, while Closed Groups are a little more complicated. The name, description and member list of a Closed Group are visible to the public; however, its posts can only be seen by members, and a user must either ask to join the Group or be invited by a member. Secret Groups are invisible to the public, and users must likewise either request to join or be invited.
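For a more schematic view, the visibility rules described above can be sketched roughly as follows. This is an illustrative sketch only: the names and structure are invented for clarity and do not reflect Facebook's actual data model or API.

```python
# Illustrative sketch only: field names are invented for clarity and do not
# reflect Facebook's actual data model or API.

GROUP_PRIVACY_RULES = {
    "public": {
        "metadata_visible": True,   # name, description and member list are public
        "posts_visible": True,      # anyone can read posts
        "how_to_join": "anyone can join",
    },
    "closed": {
        "metadata_visible": True,   # name, description and member list are public
        "posts_visible": False,     # posts are visible to members only
        "how_to_join": "ask to join, or be invited by a member",
    },
    "secret": {
        "metadata_visible": False,  # the Group does not appear in public searches
        "posts_visible": False,     # posts are visible to members only
        "how_to_join": "ask to join, or be invited by a member",
    },
}


def can_read_posts(privacy: str, is_member: bool) -> bool:
    """Return True if a user could read a Group's posts under these rules."""
    return GROUP_PRIVACY_RULES[privacy]["posts_visible"] or is_member
```

It is this last tier of visibility that matters for the research described below: content in Closed and Secret Groups is invisible to non-members, and therefore largely invisible to outside observers.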

Albright points to one particular rumour circulating prior to the election, which claimed that George Soros was funding the migrant caravan moving through Central America. Frustrated by the lack of analysis of the rumour's origins by mainstream institutions, he conducted his own investigation and found several Facebook Groups that were active in seeding the Soros-caravan conspiracy. These Groups, however, were closed to the public, meaning that they didn't show up in public searches or API requests.

According to Albright, the Soros-caravan rumour is a key example of this kind of shadow organising activity: a conspiracy theory that was conceived within closed Facebook Groups before being pushed out onto the rest of the platform and into the mainstream news cycle. From there, the migrant caravan became a pivotal election issue, fuelling contentious debate on immigration and border control policy directly prior to election day. The caravan, he argues, may have been the "defining theme of the final stretch of the 2018 midterms."

There are real-world consequences

On October 29, General Terrence O'Shaughnessy, head of the United States Northern Command, announced that approximately 5,200 US troops would be sent to the southwest border with Mexico before the end of the week, in addition to the 800 troops that had already been pledged. The decision, broadcast just days before the midterms, was justified as a response to the migrant caravan travelling through Central America, and came as Donald Trump used his Twitter account to amplify concerns about the caravan, illegal immigration and border control policies.

However, a Politico report published less than a month later revealed that the approximately 5,800 troops ultimately deployed would begin to come home as early as mid-November, with all troops set to return from the border by Christmas. Army Lt. Gen. Jeffrey Buchanan stated that the conclusion of the operation had been set for December 15, and that he was not aware of any plans for redeployment.

Critics on both sides of the aisle condemned Trump's decision to deploy troops to the border as a political manoeuvre intended to secure electoral gain, and the rapid withdrawal announcement only served to further inflame that criticism. Writing for the New York Times, Gordon Adams, Lawrence B. Wilkerson and Isaiah Wilson III argue: "The president used America's military forces not against any real threat but as toy soldiers, with the intent of manipulating a domestic midterm election outcome, an unprecedented use of the military by a sitting president."

When asked during an exchange with reporters whether he believed that George Soros was funding the migrant caravan, Mr Trump replied: "I wouldn't be surprised. I wouldn't be surprised... I don't know who, but I wouldn't be surprised. A lot of people say yes."

The decision by the Trump administration to send troops to the border—only to remove them promptly after the election—illustrates another kind of feedback loop, in which conspiracy theories seeded in closed online spaces go on to inform mainstream political discussion, and from there, transform from discursive concepts into tangible consequences.

In the period preceding the US midterm elections, this transformation was also apparent in the October 27 mass shooting at a Pittsburgh synagogue, which left eleven dead and seven injured, as well as in the series of 14 pipe bombs sent to prominent Democrats and Trump critics. Each of these incidents highlights an important fact about the character of the contemporary information ecosystem: while false information, hyper-partisan content and conspiracy theories are formulated in inaccessible online spaces, ultimately it is the public that must deal with the consequences of this disinformation.

These consequences highlight how the increasing sophistication and coordination of disinformation tactics in the US midterms is emblematic of the evolving nature of disinformation, as malign actors find ever more innovative ways to influence the direction of public discourse and blur the lines between what is real and what is false. Understanding the tactics of these actors, and developing appropriate strategies to counter their efforts, will be crucial in this information war.

This article's header image is by Elliott Stallion from Unsplash.