What to Do if De-Platforming Actually Does Not Work?

A recent study published in the Journal of Quantitative Description suggests that after the 2021 US Capitol attack, many radicalized users, having seen their discussion groups closed down and their messages deleted, migrated from established social media platforms to others with no strict policies on radical speech. If de-platforming does not work, and the concept of stereotypical counter-narratives is itself contested, what might the solution be?

Should the findings of that study prove generalizable, I suggest, first, geo-blocking platforms, or displaying specific warning messages on sites proven to harbor criminal or extremist users, with algorithms pointing to criminal content and moderators having the final say. Security agencies in countries where geo-blocking cannot be enforced will have to follow up on the promotion of hate in other ways. Granted, geo-blocking whole platforms is a severe measure and no solution for every national environment.
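To make this division of labor concrete, here is a minimal sketch in Python, assuming a hypothetical pipeline in which a classifier merely flags suspect posts and human moderators take every final decision. All names, scores, and the 0.8 threshold are illustrative assumptions, not any real platform's interface.

```python
# Hypothetical sketch of an "algorithms flag, moderators decide" pipeline.
# Every name, score, and the 0.8 threshold is an illustrative assumption,
# not any real platform's API.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    risk_score: float = 0.0    # filled in by the classifier
    decision: str = "pending"  # "pending", "removed", or "kept"

FLAG_THRESHOLD = 0.8  # assumed cut-off above which a post goes to human review

def classify(post: Post) -> float:
    """Stand-in for a trained classifier scoring criminal/extremist content."""
    return 0.9 if "attack" in post.text.lower() else 0.1

def triage(posts: list[Post]) -> list[Post]:
    """The algorithm only points to suspect content; it removes nothing."""
    queue = []
    for post in posts:
        post.risk_score = classify(post)
        if post.risk_score >= FLAG_THRESHOLD:
            queue.append(post)  # escalate to a human moderator
        else:
            post.decision = "kept"
    return queue

def moderate(queue: list[Post]) -> None:
    """Human moderators have the final say on every flagged post."""
    for post in queue:
        post.decision = "removed"  # in production: an interactive review tool

posts = [Post("1", "Planning an attack tonight"), Post("2", "Lovely weather today")]
moderate(triage(posts))
for post in posts:
    print(post.post_id, f"{post.risk_score:.1f}", post.decision)
```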

Second, counter-messaging campaigns should be individually tailored, which Artificial Intelligence (AI) will make possible in the near future. Such messaging needs to gain credibility by showing how radical or extremist activities damage the very outcomes criminal groups and individuals desire. The message must be conveyed with psychological refinement: 'this is bad' does not do the trick. We also need proof of concept, with platforms exchanging positive experiences in clamping down on criminal activity. Effective counter-speech, until now financially very costly, will finally become feasible with the advent of AI working in unison with human moderation, although ethical questions remain.
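As a sketch of what such human-supervised tailoring could look like, consider the following hypothetical pipeline. The generate_draft function is a stand-in for any large language model call, and the profile fields and review step are assumptions made for illustration, not a description of an existing system.

```python
# Hypothetical sketch: AI drafts an individually tailored counter-message,
# and a human moderator keeps the final say. generate_draft() is a stand-in
# for a language model call; nothing here is a specific vendor's API.

def generate_draft(profile: dict) -> str:
    """Draft a message targeting the group's own desired outcomes."""
    # A real system would prompt an LLM with this profile; we template instead.
    return (
        f"Movements built on {profile['narrative']} have repeatedly set back "
        f"their own stated goal of {profile['stated_goal']}: violence costs "
        "them public support, funding, and recruits."
    )

def human_review(draft: str) -> bool:
    """Placeholder for interactive moderator approval (the final say)."""
    return bool(draft.strip())  # in production: an actual human decision

profile = {"narrative": "anti-government violence",
           "stated_goal": "political influence"}

draft = generate_draft(profile)
if human_review(draft):
    print("Approved counter-message:\n" + draft)
else:
    print("Draft rejected; returned for rewriting.")
```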

Third, established social media platforms need to scrutinize advertisements more closely and refuse to put them online more frequently. What is more, the display of ads intended to counter hate has, in many cases, had little effect to date. Anyone who knows what ads sometimes appear on mainstream sites, even those not harboring hateful debate, will agree that ads laced with fake content are of no avail to users. On the contrary: they are detrimental.
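A stricter vetting regime could, in principle, be as simple as the following sketch, in which a fact-checking score decides whether an ad is refused. The score, field names, and 0.5 threshold are assumptions for illustration only.

```python
# Hypothetical sketch of stricter ad vetting: ads scored for fake content are
# refused rather than published. Scores, names, and the 0.5 threshold are
# illustrative assumptions.

REFUSAL_THRESHOLD = 0.5  # assumed: deliberately stricter than a permissive default

def vet(ad: dict) -> str:
    # fake_score would come from fact-checking or a misinformation classifier
    return "refused" if ad["fake_score"] >= REFUSAL_THRESHOLD else "published"

ads = [
    {"advertiser": "A", "fake_score": 0.9},  # likely fake content
    {"advertiser": "B", "fake_score": 0.2},
]

for ad in ads:
    print(ad["advertiser"], vet(ad))
```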

Fourth, instead of putting users in monolithic filter bubbles, social media platforms should be encouraged to meet the complex needs and wishes of their users, for instance by asking them directly, and more often, about their content preferences. There could also be a more refined analysis of user behavior, with consent given, to ensure that individuals surfing the internet are not nudged into seeing things merely in black and white but are offered the chance of a more varied and satisfying user experience.
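As an illustration, the sketch below assumes a hypothetical feed builder that alternates between topics inferred from behavior and topics the user has explicitly stated; all function names and data are invented for the example.

```python
# Hypothetical sketch of preference elicitation plus feed diversification:
# rather than locking users into one bubble, the feed alternates between
# behavior-inferred topics and topics the user explicitly asked for.
# All names and data are illustrative assumptions.

def ask_preferences() -> list[str]:
    """Stand-in for directly and regularly surveying the user (consent given)."""
    return ["local news", "science", "sports"]  # assumed survey answers

def build_feed(inferred: list[str], stated: list[str], size: int = 6) -> list[str]:
    """Alternate the two sources so no single bubble dominates the feed."""
    feed = []
    for i in range(size):
        source = inferred if i % 2 == 0 else stated
        feed.append(source[(i // 2) % len(source)])
    return feed

inferred = ["politics", "politics", "crime"]  # what engagement data alone suggests
print(build_feed(inferred, ask_preferences()))
```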

The above, as is evident, applies to the World Wide Web and, notably, must be complemented by appropriate steps in the analog world.

Thorsten Koch, MA, PgDip
Policyinstitute.net
23 April 2023
