Counter-Extremism: Limits of Assessments Regarding ‘Lessons Learned’ and ‘Best Practices’

In preventing and countering violent extremism (P/CVE), there are various ways to conduct analyses and evaluate outcomes. However, ‘lessons learned’ are not necessarily generalizable, ‘common standards’ frequently exist to a lesser extent than expected, and ‘best practices’ often depend on settings and factors so specific that they may not apply, or may not fully apply, to other contexts. This goes for qualitative and quantitative analyses and assessments alike. The issue of contextual comparability concerns geographical settings, different societal groups, and differences over time. Whoever conducts an evaluation must be mindful of the different phases of polarization or of the formation of the respective manifestations of extremism.

Different settings

The actual effectiveness of deradicalization programs, for instance, has hardly been scientifically evaluated in some countries, even where statistics on deradicalization efforts exist. Geographical areas sometimes bear unique socio-cultural features that shape and limit national programs or model projects. Countries, and sometimes areas within countries, often have different political and legal systems. Comparisons of P/CVE outcomes, then, must be conducted with an awareness of different levels of social reality. These realities are always complex and often difficult to grasp.

Information-gathering: whom to task?

There is always the issue of sampling, accounts, and veracity. Human sources a researcher talks to might not be aware of the situation as a whole, might have overt or hidden agendas, or might be bound by taboos, depending on power structures and other elements of the cultural ensemble; miscommunication is a further risk. Locals can, but need not, be the best investigators or assistants in conducting an assessment; nor are outsiders, who are less familiar with a setting, necessarily preferable.

In some cases, outside intervention might alter the situation in adverse ways, which will require choosing an internal rather than an external evaluator to do the work. Language barriers, too, might falsify results. In other cases, the best choice is for an external assessor to step in, owing to the preconceptions of the locals available and trained in P/CVE. However, an evaluator from outside, with a specific and different cultural background, may be tempted to ignore differences and specificities or to regard them as outliers, even where they are an integral part of the societal context. One must thus be open towards existing differences, especially when comparing evaluations across geographic boundaries. An external investigator might also overlook certain features of a society he or she considers self-evident, but which deserve closer scrutiny.

Pre-acquired cultural competency

An evaluator necessarily needs to be equipped with knowledge of the respective cultural backgrounds of societal groups: their norms, values, and complex political configurations, as well as the economic realities within, for instance, a given geographical area or the groups therein. It may not always be possible to identify, or gain access to, those best suited for the task.

Results do not always equal desired results. A researcher with the best intentions might, in reporting, overemphasize his or her own expectations, or may not command the evaluation designs that would fit a certain setting. It is insufficient for an evaluator to be mindful of his or her own cultural background alone. Rather, he or she must be knowledgeable about, and sensitive to, the culture in focus. At the same time, he or she must make judgements on the culture under scrutiny without being submerged by it.

An evaluator must also take time to understand and become aware of the subtleties of the objects under investigation, after having acquired passive cultural competence. The evaluator must, moreover, be wary of consensus-seeking, with peers as much as with reference groups, however tempting agreement may be. Utility and the wish for a study to be instrumental to preconceived goals should not define the work of an investigator. Isolated, dissonant events and accounts must first be taken into consideration before they can, perhaps, be disregarded: such instances might be an integral part of a setting, or they might, in the end, prove genuinely isolated, depending on the situation.

Considering cultural complexity on the ground

Cultural appreciation is of the essence. Culturally Responsive Evaluation (CRE) has become a catchword; however, everything depends on design and implementation, and adaptation is often required. Sometimes, instead of proceeding overly systematically, one has to take new facts into account and observe and analyze with empathy. At times, methodological strategies must be called into question in order to match the culture at hand.

Moreover, someone who more or less understands one locale and its history, and who possesses ideological competency, may not understand another: not only due to specificities and some level of internal contradiction, but also due to socio-political fluidity. In multicultural settings especially, complex cultural identifications and cross-community differences might be very difficult to take into account. Odd as it sounds, apparent precision is not always helpful. A very eloquent human source, for instance, might misrepresent a situation, while the account of a less eloquent source might prove more significant.

Specificities of given settings

Engaging with stakeholders and decision-makers may, in fact, distort findings, as when a source narrates what he or she thinks is expected. Hence, participant observation must be conducted very carefully. Diversity means that aggregate data needs to be broken down, and that inherent cultural contradictions must be differentiated from actual outliers.
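
As a minimal sketch of what such disaggregation might look like in practice, assume, purely hypothetically, that evaluation outcomes were recorded in a table with columns 'community' and 'outcome_score' (names and figures invented for this illustration). One could then break the aggregate down by subgroup and flag strongly deviating values for closer scrutiny rather than automatic exclusion:

```python
# Hypothetical sketch: disaggregating evaluation outcomes by community
# and flagging candidate outliers for manual review. Column names and
# figures are invented for illustration only.
import pandas as pd

records = pd.DataFrame({
    "community":     ["A", "A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome_score": [0.62, 0.58, 0.61, 0.60, 0.95, 0.41, 0.44, 0.39, 0.43],
})

# The aggregate view: a single mean hides between-community differences.
print("overall mean:", records["outcome_score"].mean())

# The disaggregated view: per-community statistics.
print(records.groupby("community")["outcome_score"].agg(["mean", "std"]))

# Flag scores far from their own community's mean (threshold arbitrary).
# Flagged cases are candidates for closer scrutiny, not for automatic
# exclusion: they may be genuine outliers, or an integral part of the
# setting, e.g. an inherent cultural contradiction.
z = records.groupby("community")["outcome_score"].transform(
    lambda s: (s - s.mean()) / s.std()
)
records["flagged"] = z.abs() > 1.5
print(records)
```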

In the end, it might be difficult to compare the aggregate data of one setting with that of another on the basis of evaluation results. This is the case where countries or regions are too specific and distinct. Structures might differ so much that, while case studies are possible, results from different focal areas cannot be compared. The same goes for comparisons across different communities.
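
One statistical reason for such caution, offered here only as an illustrative aside with fabricated numbers, is Simpson's paradox: a setting can outperform another within every subgroup yet look worse in the aggregate, simply because the subgroup mix differs between settings.

```python
# Fabricated illustration of Simpson's paradox. Setting A outperforms
# setting B within each subgroup, yet looks worse in the aggregate,
# because the two settings reached different participant mixes.

# (successes, participants) per subgroup; all numbers invented.
setting_a = {"youth": (18, 20), "adults": (20, 80)}   # 90% and 25%
setting_b = {"youth": (64, 80), "adults": (4, 20)}    # 80% and 20%

for name, setting in (("A", setting_a), ("B", setting_b)):
    rates = {group: s / n for group, (s, n) in setting.items()}
    total_s = sum(s for s, _ in setting.values())
    total_n = sum(n for _, n in setting.values())
    print(name, "per subgroup:", rates, "aggregate:", total_s / total_n)
```

Aggregate comparisons across settings with different internal structures can thus point in the wrong direction even when the underlying records are accurate.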

Sometimes, the quest for standards is a task impossible to accomplish. A short-term outcome assessment may or may not hold in the intermediate or long run, and overly focusing on outcomes and trusted models might, in some instances, be little more than wishful thinking. At worst, structural givens taken for granted during assessment design do not apply at all, and it will be necessary to start anew, however time-consuming that may be.

Conclusion

In sum, the context of settings, of different sets of input, of case-specific processes, and of the assessment process itself must be reflected upon and weighed against the limitations of the situation on the ground, after designing an assessment program that fits as well as possible. Although it is not the easiest path, presuppositions will often have to be altered and adapted. This may mean that the overall findings are original rather than following patterns valid in previously investigated contexts, which can, but need not, be positive with a view to contributions to science or amendments to practitioner standards. Original findings could, in fact, turn out to be useful for the implementation of future targeting and interventions.

An evaluator must be well-chosen on grounds of his or her competency and scientific stringency but, at the same time, his or her openness. He or she must be aware that outcomes cannot always be appraised objectively, owing to the practicalities of implementation. Where comparability is limited in ways which do not allow ‘best practices’ to be identified, it may still be possible to filter out a set of specific ‘lessons learned.’ Findings can also be adapted to bring an assessment closer to reality, though this takes more effort and time. While the final assessment may be far from what was expected, it might yield findings on which to base future endeavors of analysis and assessment. There are common denominators in many cases, allowing for comparison; however, no focal aspect or area is exactly the same.

Thorsten Koch, MA, PgDip
5 April 2021
