Facebook removed the official Facebook page and Instagram profile of far-right personality Tommy Robinson on Tuesday.
The social media giant cited continued violations of its hate speech and organised hate policies, stating that Robinson’s posts “[use] dehumanising language”, involve bullying, and urge “violence targeted at Muslims”.
The BBC reported that Robinson would not be allowed back onto either platform.
One post encouraged his followers to “make war” on Muslims; another urged them to terrorise and behead followers of the Koran.
Twitter had banned Robinson—erstwhile leader of the English Defence League, a far-right anti-immigration group—in May 2018 for flouting its “hateful conduct” policy.
By November 2018, e-payment platform PayPal had ceased processing Robinson’s payments; YouTube halted advertising on his account in January.
Some maintain that permitting hateful speech and activity in public spaces normalises them, providing impetus for ‘copycat’ behaviour. Members of marginalised or oft-targeted groups, such as Muslims or transgender people, also argue that accepting such behaviour within mainstream culture creates anxiety and a hostile environment.
But what happens after platforms ban personalities who incite hatred? Arguably, such figures and their followers simply regroup elsewhere, operating in even more insular spaces.
Can keeping controversial, divisive elements inside social circles help foster dialogue? Is the solution engagement or exclusion?