After a recent string of 5G conspiracy theories led to a spate of mobile tower arsons in the United Kingdom, Twitter announced it would be tightening rules on its platform to tackle the spread of information it deemed false and dangerous, committing to remove any posts judged as “direct incitement to action” or having the potential to cause “widespread panic.” Meanwhile, YouTube said it would be taking similar measures to fight “medically unsubstantiated” content that contradicted the World Health Organization’s health advice on coronavirus, such as unproven cures and treatments. Putting bans and restrictions in place to combat the spread of dangerous false information can certainly be defended from a public safety point of view given these circumstances. In some cases, however, it leaves these sites yet again walking a fine line between civic duty and censorship; in others, it raises questions as to whether such initiatives even work.

This is not the first time that sites like Twitter or YouTube have had to enforce tougher restrictions—these kinds of platform manipulations have also backfired in the past—but these recent examples at the very least serve to remind us that we are in many ways still behind in the fight against false information. Last week, an investigation we did with the Guardian shed some light on an audio clip that went viral on YouTube before being taken down a few days later, about a “Former Vodafone Boss” who had claimed that coronavirus was not actually a virus but a reaction to 5G-radiation-induced cell poisoning. Our investigation revealed that the voice behind the claims belonged to a pastor who’d indeed once worked at Vodafone, but in a junior sales position and for less than a year, long before 5G technology was any kind of priority. Articles revealing his identity have since been shared thousands of times, but while interest in this story eventually died down, tweets and videos of the original clip are still being widely posted and shared, with many still referring to the speaker as a Vodafone boss or even CEO.

A few key factors contributed to the original recording gaining such traction in the 5G conspiracy world and reaching millions in the first place, one of which was that Pastor Jonathon James misrepresented his own credentials, which made his claims seem more credible to some. But even with his identity revealed, some Twitter users repurposed our findings to merely validate the fact that Pastor Jonathon really did work at Vodafone, somewhat missing the point that his lack of seniority should bring into question the credibility on which his entire claim depends.

The fact that the original story about a mysterious Vodafone executive whistleblower may still be more attractive than the story revealing his actual identity shouldn’t come as a surprise, but it certainly suggests that, when fighting the spread of false information, debunking may not be enough. It can even be argued that YouTube taking down the original clip so quickly may have contributed to its popularity in the first place. After all, reposts of the clip are now often being tagged with a ‘taken down on YouTube/share now before it gets taken down’ caption, further promulgating the idea that the clip contains secret information they don’t want you to have.

Websites like Twitter, Facebook, and YouTube seem to be in a ‘damned if you do, damned if you don’t’ type of spot. On one hand, given that false information can often travel much faster—and more effectively—than accurate information, they can’t simply surrender their platforms to it. On the other, it’s also inevitable that this kind of policing is bound to at some point get things wrong.

Both journalists and scientists make mistakes—it can even happen quite often. We’ve accepted the World Health Organization as a global authority on the subject of the coronavirus pandemic—and justifiably so—but that shouldn’t mean the WHO is right one hundred percent of the time—or even necessarily above reproach. Most people wouldn’t dispute that posts spouting vitamin C cures or unproven drugs can have serious consequences and present a danger to the public, nor would they accept that Dr. Rashid Buttar should get a place at the table in the coronavirus discussion with equal validity to that of the WHO; but the question of where to draw the line between what is false and what is valid rebuttal becomes pivotal. To simply class all information contradicting the WHO as ‘false’—in a time of pandemic, when scientific data is constantly being re-evaluated—is a simplistic way of tackling the problem.

YouTube CEO Susan Wojcicki told CNN that “[a]nything that would go against World Health Organization recommendations would be a violation of [YouTube’s] policy,” but should the WHO’s recommendations ever change, it could mean that what was once acceptable advice no longer is, or vice versa. In response, FOX News host Tucker Carlson called out YouTube’s measures as “ludicrous” and noted that the WHO has already changed its position on several aspects of the pandemic. Though this part of his argument may stand, it’s also hard to take Carlson’s defense of freedom of speech as genuine when he himself has contributed to the peddling of the exact kind of dangerous, unsubstantiated disinformation these measures attempt to guard against. But Carlson is not alone in this respect, to the extent that the most vocal freedom of speech advocates in such situations often seem to come from the same poisoned well, one seemingly more interested in its right to profit from sensationalist and unfounded claims than in actual freedom of speech.

“This is not about science,” Carlson went on to add. “Censorship never is about science, it's about power. Big technology companies are using this tragedy to increase their power over the population." But if anything, the current situation highlights the contrary: a powerlessness in these companies’ efforts to contain problems that their platforms have in many ways helped to create. Far from being a power play, the move is, time and time again, a defensive one. And the real issue, rather than science versus power as Carlson framed it, is that many of these platforms struggle to maintain the appearance of a public forum while remaining, first and foremost, for-profit corporations.

YouTube has tried different methods of platform manipulation in the past to disincentivize users from profiting from content that violates its terms of service, including demonetization from ad revenue, which received heavy scrutiny and had questionable success. Carlos Maza, a Vox host who was on the receiving end of political pundit Steven Crowder’s homophobic slurs last year, tweeted at the time: “Demonetizing doesn’t work. Abusers use it as proof they’re being ‘discriminated’ against. Then they make millions off of selling merch, doing speaking gigs, and getting their followers to support them on Patreon. The ad revenue isn’t the problem. It’s the platform.”

In the UK, much of the anti-5G sentiment has been traced back to conspiracy theorist David Icke, whose videos on coronavirus have been viewed by tens of millions. On Saturday, YouTube deleted Icke’s channel altogether—two days after Facebook did the same—after reportedly warning him numerous times about violating content terms and conditions. Despite removing the channel, YouTube did state that it would allow reposts of Icke’s videos by members of the public, which appears to be some form of ethical compromise on the platform’s part: to allow the content, but delegitimize the source. Although that kind of compromise may be fine for YouTube, there’s no guarantee it will help with the crux of the issue—which may in the end lie more with how people receive information than with how that information travels. The day after the ban, tens of thousands at the very least tuned in to a London Real stream in which Brian Rose interviewed Icke, causing both #LondonReal and #DavidIcke to trend on Twitter in the UK. Many of Icke’s theories have been dismissed as nonsense and pure fabrication, but his legion of followers insist they are merely part of an alternative point of view. Understanding the false information problem—as well as finding a solution for it—may prove impossible until we fully address what attracts people to ‘alternative facts’ in the first place.