A note on the article ‘New Models for Deploying Counterspeech: Measuring Behavioral Change and Sentiment Analysis’

There have long been discussions in academic and practitioner circles, at times with conservatives and progressives holding opposite opinions, on whether counter-narratives directed at persons vulnerable to extremism yield positive effects. Erin Saltman, Farshad Kooti, and Karly Vockery take up the challenge of answering the question of counter-speech efficacy on the basis of a large body of data, thanks to cooperation with Facebook and with counter-extremism NGOs.

In their article, published today in Studies in Conflict & Terrorism, the authors, on the one hand, measure exposure to both “soft and hard” counter-speech among “low-prevalence-high-risk audiences engaging with Islamist extremist terrorist content,” where early at-risk indicators exist with reference to violent, religiously packaged content in either English or Arabic.

On the other hand, the authors measure the impact of “redirecting white-supremacy and Neo-Nazi related search-terms” to counter-extremist websites and content provided by NGOs, as part of Facebook’s “online safety intervention models.”

The overall aim of the study was to determine whether such impacts are quantifiable.

On a hopeful note for practitioners, academics, decision-makers, and stakeholders alike, counter-narratives did not influence behavior or sentiment negatively. For most of the Facebook users whose questionable behavior was aggregated, changes in behavioral routines were “not statistically significant.” But the probably much smaller group of “low-prevalence high-risk” users who were exposed to alternative, soft, or more direct counter-narrative content showed a lessened drive to consume “violent extremist content.” Also on a positive note, online companies are presenting disengagement content to ever larger audiences.

Moreover, “passive online search” could lead to “positive sentiment shifts,” according to the authors’ initial results and suggestions. They explain that “an at-risk audience” should be defined in clearer terms, based on more than one indicator. As for the presentation of counter-narratives, the article points out that authoritative content or personalities should get a say rather than the social networks directly, and that the key takeaway messages should be presented early, to better reach groups vulnerable to extremist propaganda.

A.R.C. Europe welcomes Facebook’s readiness to act in the interest of transparency and would like to offer a number of points on the promising approach and the findings of Saltman, Kooti, and Vockery:

1) The academic approach ought to be replicated a) over time and b) on other social networks, search engines, and content providers. This would allow the positive results to be corroborated, practical steps by the IT industry to be refined, and future results to be more generalizable.

2) There are limits to internet monitoring: online behavior often cannot be tracked, and offline behavior after exposure to either problematic or disengagement content can hardly be monitored at all. Precisely because this is the case, cooperation with social network platforms and other providers is a highly valuable option.

3) Besides political counter-messaging, whether soft or hard in tone, disengagement content should be tailored, building on operators who align with different groups’ socialization and personality traits.

4) Instead of providing soft or hard messages only once, there ought to be continued and alternated exposure to disengagement content, for example with a view to emotional factors such as belonging and other positive emotions. Operators working against the desired results should be avoided, based on online records.

5) While violent extremist content can lead to long-lasting intolerance and even violence by groups or individuals, the findings of film studies on the mostly negative effects of exposure to depictions of extremism and crime should be brought into academic, data-driven approaches to preventing and countering violent extremism.

6) The societal value of taking down or deleting problematic and highly problematic content should not be underestimated. However, take-downs should be logged and filed by every internet service provider and platform, in order to give competent authorities, stakeholders, and the providers themselves a chance to a) police infringements, b) ensure that criminal behavior is met with consequences, such as exclusion and prosecution, and c) over time, develop ever more efficient ways to counter violent extremism, via counter-narratives and otherwise.

Thorsten Koch, MA, PgDip
30 March 2021

Saltman, Kooti, and Vockery’s article can be accessed at this link:
