In November 2018, Logically had the pleasure of sitting down with George King — computer scientist, Senior Research Fellow at Columbia's Tow Center for Digital Journalism and creator of Muck, a tool for data journalism. We discussed his role in '/VoterFraud', an interdisciplinary report hoping to shed light on the unusual Twitter activity surrounding voter fraud as a political theme.


Logically: Let’s talk about the birth of /VoterFraud as a project — what was the aim, and what were some of your first impressions when you got involved?


George: The Guardians team was looking at a broad spectrum of political posts on Twitter ahead of the election that was coming up. There is a Republican or right-wing narrative in the United States about voter fraud, which we feel is a made-up problem that is used to disenfranchise minority voters. It's an old narrative that has existed for decades, but it started really heating up on social media approaching the election. We felt a good first step for us was to focus just on this one hashtag, #voterfraud, but then we expanded it. After much deliberation, we settled on five hashtags: #VoterFraud, #VoterID, #DemandVoterIDNow, #VoterIDNow, and #ElectionFraud.


When I began collaborating with Guardians, they were using interactive Twitter search tools — just typing in queries to look at account activity. They were looking at all these “alt-right” accounts that were pushing various narratives, and they saw very unusual account behaviour.


Obviously, a big concern [at the moment] is the degree to which there's either automated or conspiratorial meddling in political affairs on social media, and I thought that it sounded like a good computational journalism project to get involved with.


My background in this field is that I worked at Columbia a couple of years ago on a one-year fellowship through the Tow Center, in partnership with the Brown Institute. My research project there was essentially looking at practical and technical problems in data journalism, and how to improve that with programming tools. I built a tool called Muck, which is essentially a system for creating data projects, and we thought this would be a good application of that tool. I started tinkering around with it, thinking: “Okay, how can we pull this data out of Signal and into a format where we can actually do our own analyses?” Up until that point, they had been using just the online portal, which does not give access to a high-volume dataset with which to do more sophisticated analysis.


Logically: You mentioned unusual behaviour; what did that look like?


George: Twitter has several text conventions: if you put the user's name first and then the message, that's considered a reply, whereas if you just reference the user elsewhere in the text of your tweet, it's a ‘mention’.
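
As a rough sketch of that convention (hypothetical code, not anything from the project), the distinction comes down to where the @-handle sits in the tweet text:

```python
def classify_reference(tweet_text: str, username: str) -> str:
    """Distinguish a reply from a mention, per the convention described above.

    Hypothetical helper: an @-handle at the very start of the tweet is treated
    as a reply; an @-handle anywhere else in the text is a mention.
    """
    handle = "@" + username.lower()
    text = tweet_text.strip().lower()
    if text.startswith(handle):
        return "reply"
    if handle in text:
        return "mention"
    return "none"

print(classify_reference("@alice totally agree", "alice"))        # reply
print(classify_reference("as @alice pointed out, ...", "alice"))  # mention
```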


For certain accounts, we saw that these replies and mentions and retweets were happening at a certain low volume, and then all of a sudden they would take off like crazy. Then tonnes of other people on Twitter would start replying to them and mentioning them, and we felt that the volume was such that this couldn't really be organic Twitter behaviour. So we started looking at whether we could detect this behaviour computationally and automatically.


We have not yet succeeded in formulating an algorithm that can do this automatically, but we have done analysis on the accounts that were identified manually, and it looks to us like there's something unusual going on there. There is a lot of concern about bots on Twitter, and we're not making any claims as to whether or not these accounts are bots or whether the tweets are written by humans. There are documented cases elsewhere of real people who have some sort of agreement allowing a commercial entity to take over their account temporarily, or inject tweets into their otherwise organic stream. There are all kinds of variations and we really don't know what it is we're looking at in that regard, so we specifically avoided the bot-or-not narrative.
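
One simple way a reader might picture the kind of detection being described (emphatically not the team's algorithm, since they note they had not settled on one) is to flag days where an account's incoming activity jumps far above its own trailing baseline:

```python
from statistics import mean, pstdev

def flag_surges(daily_counts, window=14, threshold=5.0):
    """Flag days whose count sits far above a trailing baseline.

    Toy heuristic only: compare each day to the mean and standard deviation
    of the preceding `window` days and flag large z-scores.
    """
    surges = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), pstdev(baseline)
        sigma = sigma or 1.0  # guard against a perfectly flat baseline
        if (daily_counts[i] - mu) / sigma > threshold:
            surges.append(i)
    return surges

# An account replied to a handful of times a day suddenly gets hundreds:
counts = [5, 4, 6, 5, 3, 5, 6, 4, 5, 5, 6, 4, 5, 5, 320, 410, 390]
print(flag_surges(counts))  # [14]: the first day of the jump stands out
```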


Logically: Did you have any hypotheses that you were setting out to test? What were you expecting to see?


George: The hypothesis was simply that there is unnatural behaviour on ‘political Twitter’, for lack of a better term, and that there are actors who actually have serious resources — funding and computational ability — to create content in order to influence people. So the underlying idea here is that there are major efforts to create influence that is not organic. I think this is a little bit difficult to describe because I personally find Twitter so crazy to begin with — you know, trying to draw a distinction between what is normal Twitter and what is shady Twitter is hard.

One of the big questions that I had when I first saw these graphs was: “How do we know that this isn't just what a regular viral spike looks like?”

The original popular fascination with social media was this notion of virality which almost by definition is abnormal. This isn't how traditional political discourse worked. Or maybe it is, but on a different time scale. So you can ask questions like: “Okay, is what we're looking at now just regular viral content or not? Is viral content on social media just like a big news story in the 90s, that was on the front page of all the papers and everyone talked about it? What's the difference?” It's unclear to me whether or not it's just a difference in volume and speed, or if there's a qualitative difference in the way that things work now.

To be honest — I just got sucked into it because I thought it was so fascinating from a technical perspective and because it seems really important. My feeling is that to make strong claims, you need a lot of background data.


Logically: What did you actually end up finding? You had these ideas about artificial behaviour. What did you actually observe and report on in the project?


George: Well, there are two basic patterns that we saw. The first was that there are certain days where the conversation around voter fraud really blows up. If you look back to 2016, you see that the activity in November, right around the presidential election, dwarfs all other months. You see 80,000 to 90,000 uses of the voter fraud hashtag per day in some instances. Then there was this thing with individual accounts, where you see them sort of just take off, and it’s the same accounts involved over and over again.
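
A sketch of the tally behind a chart like that might look as follows (the field names and data shape are assumptions, not the project's actual pipeline), counting daily uses of the five tracked hashtags:

```python
from collections import Counter
from datetime import datetime

# The five hashtags the team settled on, matched case-insensitively.
HASHTAGS = {"voterfraud", "voterid", "demandvoteridnow", "voteridnow", "electionfraud"}

def daily_hashtag_counts(tweets):
    """Count tracked-hashtag uses per day.

    `tweets` is assumed to be an iterable of dicts with a `created_at` ISO
    timestamp and a `text` field, a simplified stand-in for whatever shape
    the exported Twitter data actually had.
    """
    counts = Counter()
    for tweet in tweets:
        day = datetime.fromisoformat(tweet["created_at"]).date()
        tags = {w.lstrip("#").lower() for w in tweet["text"].split() if w.startswith("#")}
        counts[day] += len(tags & HASHTAGS)
    return counts

tweets = [
    {"created_at": "2016-11-08T09:00:00", "text": "Stop #VoterFraud now! #VoterIDNow"},
    {"created_at": "2016-11-08T10:30:00", "text": "Nothing to see here"},
]
print(daily_hashtag_counts(tweets))  # Counter({datetime.date(2016, 11, 8): 2})
```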

They are just motoring along at a low level of activity and then all of a sudden they surge up, and then stay up for months or years. It looked artificial, because none of the accounts actually seemed to be related to anyone relevant in the real world, and so the question was, did they just become famous within their circles on Twitter, or is there some influence machine that is perpetually boosting these accounts into the public eye? The real question is, to what extent are these accounts colluding behind the scenes? We feel like the preliminary evidence suggests that that is happening and that it deserves more research.


Logically: Did you face any challenges while you were collecting the data, interpreting it, and eventually processing it into a report?


George: In my mind, the biggest challenge is actually the least exciting. Assessing data quality is very hard — we struggled to figure out how to attain the sort of quantity of data that we needed to do more analysis, and then once we acquired it, we had to work hard to verify that what we had was what we actually thought it was.


We really had to buckle down and figure out all of these details about what kinds of data we had, and what claims we could make from it. You see in the report, we took a very cautious approach with our claims. If this had been academic research, we might have spent a year or more making sure that we had done everything we could to verify the fidelity of the data and checking that all of our calculations were correct.


I'm a software developer by trade, and every project that I've ever done requires a huge amount of effort at the end to finish all of the little details — that's the nature of deadlines. We could have spent another month just improving the design and presentation, or just improving the data processing. Then, right at the end, we actually acquired a much larger data set that we hoped would answer some of our questions, but we didn't have time to get into it.


We felt very proud of getting the project out the door just in time for the election — it was a completely non-negotiable deadline. Bloomberg wrote about it and some other journalists were very interested, and it prompted follow-on questions. But I felt like we just cracked the door open on the question of how to assess the credibility of social media accounts.


What we really hope to show is that there are powerful actors who are influencing political discourse online, but we don't know for sure. Are there organised, powerful political actors behind some of these accounts? Are these just well-meaning citizens who are riled up about a topic, or is it something more sinister?


Logically: I think that is quite an interesting dichotomy in the field of misinformation because on the one hand, you're a group of people who are looking to try and improve the credibility of information that exists. But on the other hand, you're dealing with a very topical and sensitive issue that has immediate real world effects; and so it draws parallels for me with research on climate change, in that for such a long time people were so cautious about making any kind of statement that would…


George: Right. One interesting question is, what place do we occupy in this ecosystem? We are not academic researchers. We don't have the institutional resources to put out an academic-level report. We're not journalists either, but we saw a space between those two camps. So we put ourselves out there with this report, which is neither academic nor news, because we felt that it deserved investigation. And so even though our claims are very tentative, we do stand by them. All we can say is everybody needs to look more at this problem, and we need to develop the techniques to figure out how to answer these questions.

I think if we were in an academic context, our advisors or funders might encourage us to narrow the scope down to a point where we could make a really strong claim about something very particular. But where would that get us? The economy of academia is such that researchers make these particular claims and then get accolades for it, and then get more funding—but that's not really what we're after, right?


We're trying to look at the big picture and we did not have the time to build a completely watertight case. So we just kind of went ahead and did our best, and landed with this big messy picture. It's a snapshot of these 200 accounts, which we identified on a case-by-case basis. The nature of a social media graph is such that when you try to look at the data, it just points in all directions, right? In our report, you'll see that we have these charts showing the spikes, and then below we have big blocks of associated usernames. What do we do with that? With our website, we're just inviting the user to click on these links and get a feeling for what is out there.


Logically: Do you see any parallels between this idea of a citizen report — the gathering of the evidence and presentation that your work embodies — and this rise in the popularity and effectiveness of citizen journalism and particularly crowdsourced investigation?


George: I don't know. I guess the phrase that comes to mind is fight fire with fire. I'm not actually advocating for that, but I do feel like the “alt-right” has found a very effective wedge that is prying apart the traditional political armour of the mainstream status quo. Democrats were badly caught off guard by the “alt-right”, and they still haven't figured out the counter-tactics. Arguing with the “alt-right” narrative might be a losing battle because it's shifting and nebulous and ultimately very extreme. It's highly emotional, and it plays on all these angles, like patriotism, racism, skepticism of science, skepticism of institutions.


It's very hard to debate with extremists. Debate, op-ed, all of these traditional political news formats do not appear to be working very well currently. They don't work on Trump. Donald Trump doesn't debate, he bullies. So what can we do instead? Well, one obvious thing is to peer into the crazy morass of what the “alt-right” actually is and say: “Hey, let's call these things out for what they are. The narratives that we're looking at — like voter fraud — are essentially dog whistles for racism. It's very, very racist political propaganda and it needs to be exposed as such.”


That's the spirit behind what we're doing: we wanted to make a site that engaged people to take a deeper look at what this content is. The reason that we have put the artsy sort of intro at the top is because we wanted to invite people to take a more thoughtful, closer look. “This is what the influence machine really looks like. All of these accounts are presenting themselves as everyday Americans, as patriots and so forth, but it's not at all clear that that's what they are.”


The way in which extreme content gets mixed in with more everyday fare is a really big part of the problem. Scrolling through one of these accounts now, I see cheesy Melania Trump ‘Merry Christmas’ images, and then right next to that is this very loaded picture of a bloodied border patrol officer with anti-immigration language, and then supercharged dog whistle stuff about how dirty the Tijuana River is, and on and on and on…


Logically: Ideally, what positive impact do you see this report having? What would the next steps be if you had unlimited resources? Dream big here.


George: That's funny because I think that the dream I have for this is practical: as far as we could tell our website inspired and helped some journalists look at this problem in more depth. We provided a certain vantage point, which was useful to them, and percolated out into some coverage during the last days leading up to the election — and we felt good about that. We learned a lot about how to get into these data-heavy kinds of investigations and so the immediate positive result is that now we're prepared to do more of this kind of work. I think it’s an open question whether or not we should continue improving this particular report. There's lots that we can do to make it better. Or should we apply the same techniques, the same tools to something else that is more pressing today? The voter fraud narrative is not done with, it's going to stick around and deserves more work — but the nature of the election cycle means that going back to it right now is the last thing on our minds.


So if I were to dream big about this, what I think about is: how do you collect data that can actually help someone make stronger claims about Twitter, about social media more generally, about online discourse? Given the nature of the “alt-right”, I feel that there's a lot of technical work to be done. Another thing this project could do would be to help inform other people about doing this kind of analysis. That's a little bit tricky though, because this report relies so heavily on access to Twitter data. I don't know how much people could learn practically from it without also having that access.


I guess, in summary, the positive outcomes that we could continue to run with are a sort of technical education, and a broadening of perspective for other professionals and interested individuals.


Logically: You mentioned possibly repurposing some of the computational tools derived from this project and applying them to other issues or other investigative angles. Could you tell us one of your ideas for the future?


George: Sure! My background technical work has been to build a set of tools for doing stuff like this, and every time I do a project, it's challenging because there is tension between focusing on the report and focusing on the tools. Do I make do with what I have and just get the report done? Or do I put the report on pause and try to improve the tool that I'm using for the future? Building tools is very time consuming and can be a tremendous distraction from the actual task at hand. There was a little bit of both with this work. This project was the first time I felt like Muck actually succeeded in delivering a product I couldn't really have built without it, and that was extremely gratifying. I think that if I do more projects like this, Muck will continue to provide more and more support, as I iron out the problems and discover what other kinds of things I need to make these projects possible. So does that answer your question?


Logically: Kind of, yes.


George: So which part am I missing?


Logically: Ok, so considering the themes currently dominating public debate, or the phenomena occurring within social media — where might you want to apply some of Muck's tools, basically?


George: I think that with this format we can provide more nuance than a typical, data-driven news article. By creating a whole website we are liberating ourselves from the constraints of the newspaper format or the long form magazine format or the academic format. We can create our own narrative format, which is not a blog post, news piece, or white paper. And I like that, it's a cool medium. It's very time consuming. But the advantage is that it doesn't have to be quite as neat, and we control the publishing. If we went back to this and we said: “Hey, now we want to do more advanced analysis on the account graph structure”, we could stick another section into the report.


The question is, will anybody read it? We have to spend our time wisely, but we can write in deeper descriptions that people can come back to and refer to over a longer period of time. We feel at liberty to update them and hopefully people will learn from them. Because so much of the problem that I see — zooming way out in terms of the societal problem that we're trying to address — is that these are big messy questions.


The question of voter fraud propaganda on Twitter is a window into enormous, messy questions about the “alt-right” and American two party politics as a whole. These questions deserve nuanced investigation and one of the problems we have is that we consume our political information in these various specific quantities.


We get it in tweets and Facebook posts and sound bites on TV, and then from news articles, and to a lesser extent, long form and so forth. But all of those are very constrained formats. I like the idea of making a website which is much less dense than a typical academic paper, but can also go into much more detail and allows for a lot more exploration. So that's the thing I'm excited about. There is an opportunity there, if we can fund it. There's a great deal of value in the slightly more open-ended reports, I think.


Logically: Yeah, I definitely agree. Thank you so much. Do you have anything you'd like to add or any questions you don't think we covered?


George: Thank you! As I mentioned earlier, trying to produce a report and at the same time build the tools is very tricky. I can get very distracted by these longer term goals and building reusable code or a tool that will help me make something easier in the future. But at the same time, that stuff is really valuable because in addition to enabling future work, it also forces me to consider the clarity and efficacy of what I'm doing.


For example, when I factor out a function into a library, I have to come up with a good name for it and articulate exactly what it is supposed to do. Often that helps me find mistakes. Does that make sense?


Logically: Yes, definitely.


George: One thing that is missing from the report is a technical write-up. In software engineering, there's a funny term, “a post mortem”, which is basically a reflective report after something goes wrong — like when a website goes down, or there's a security issue or something like that… It sounds more negative than I mean in this context, but I think a reflection on what worked and what didn't for us in producing this thing would actually be worthwhile. It might also be interesting for readers to hear a little bit about how the proverbial sausage is made.


One anecdote I want to share is that when I imagined Muck as a tool for data journalists, there was a hypothetical that I settled on about being very late in the production of an article — as you're fiddling with the design and the final touches — discovering that there's actually a flaw in your underlying calculations, and then going back and fixing that and having the whole thing rebuild automatically, because the tool knows how to rebuild all the pieces. This was the story I used to explain to people why I thought the tool was valuable. And the most gratifying thing that happened in this project was we did have a serious problem like that, and the tool did in fact work as intended.


We were very behind, we'd been working like crazy trying to get it finished and Brett was looking at the charts and said: “Hey, these spikes are not the right size.” And it turned out that I had been counting things wrong because I had misunderstood Zach. He had been using the word ‘mentioned’ which I interpreted strictly to be the notion of a mention on Twitter, which is when you ‘@’ reference another user. But Zach was using the term ‘mentioned' as it appeared in Signal's tool, and meant the appearance of hashtags in tweets.
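
To make the ambiguity concrete, here is a toy contrast (hypothetical field names, not code from the project) between the two metrics that were being conflated:

```python
def count_user_mentions(tweets, handle):
    """Number of tweets that @-mention `handle`, the strict Twitter sense."""
    needle = "@" + handle.lower()
    return sum(1 for t in tweets if needle in t["text"].lower())

def count_hashtag_mentions(tweets, tag):
    """Number of tweets containing `#tag`, the sense used in Signal's tool."""
    needle = "#" + tag.lower()
    return sum(1 for t in tweets if needle in t["text"].lower())

tweets = [
    {"text": "Thanks @SomeAccount for exposing #VoterFraud"},
    {"text": "#VoterFraud is everywhere"},
]
print(count_user_mentions(tweets, "SomeAccount"))    # 1
print(count_hashtag_mentions(tweets, "VoterFraud"))  # 2
```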


I had constructed this weird metric for what I thought he wanted, and so our numbers were subtly wrong. But thanks to Muck I was able to just change that line of code and press rebuild, and then, you know, 10 minutes later the whole website is reconstructed and we're able to publish it. So that ability to really —


Logically: — Iterate your dreams…?


George: Yeah. I mean, it's funny because it's kind of an embarrassing mistake and we're lucky that we caught it, but it was very validating for me. On a closing note, I think that the real nature of this work is that you have to articulate what it is that you're measuring, and discuss it; then when one person is using slightly different language, you have to really dig into it and ask: “What do you really mean by that? Are you using different words just because you’re being casual about describing it, or is it that you don't really understand the particular thing that I'm saying? Or is there some other particular thing you're trying to say, but I'm not understanding it?” Getting to the bottom of the descriptions is really important. It makes the narrative better. It improves the actual output, the writing of the report. And it also ferrets out these serious problems of accuracy in the calculations themselves. And without that effort, all the fancy stuff is useless.


For me, the most interesting experience I had was seeing how everyone had a different thing they were really excited about: the narrative, the design, the visualisations. Each piece took a huge amount of effort and it makes the report as exciting as it is, but underneath it is this other big pile of work that's much less exciting, and you have to do that dirty work in order to get the good stuff.


Logically: Yes. It'll be interesting to track how Muck progresses. I think it sounds like a really interesting piece of software.


George: Yes, it will be. It's intriguing to me the course that it's taken, because it started off as a very simplistic proof of concept and then evolved into a much more serious tool. What's interesting about it is that in the context of data journalism, one of the things that we learned was that average newsrooms have their own content management systems, and the developers have their favourite tools, and so forth. Muck started out with this very simple functionality to run a set of computations in Python and then integrate the results into a rendered webpage. It then became clear that a professional data journalist wasn't really trying to generate a whole web page or a whole series of web pages — because they have to use their own CMS anyway, and have their own editorial processes.


It didn't really make sense for Muck to build a whole website for them, because that's not how newsrooms work. So that feature got ignored and I focused on other aspects of the data build system. But then two years later I get into this project with Guardians, and in fact the thing that they needed was to generate an entire static website. So that initial vision for Muck fit the bill perfectly and it's gratifying to see it work. It's ironic that we've sort of now invented this format for ourselves that again, is a little bit of its own thing. It's not a blog post, it's not a paper, it’s something else — and Muck is actually very well suited to that.