Using online data to tackle violent extremism is a risk worth taking… if we’re smart about it. Here’s how.

UN Development Programme
Jun 9, 2022

By Angharad Devereux, Heesu Chung and Mailee Osten-Tan

Data. It’s a word that stokes anxiety in some and excitement in others.

While using data from digital platforms to inform preventing violent extremism (PVE) interventions can be a big opportunity, people are rightly concerned about privacy and state surveillance. There are human rights implications to monitoring trends in social media use, or even to obtaining this information from companies like Meta, Twitter or Google. Algorithms and big tech business models are often critiqued for their role in enabling social polarization, one of the very phenomena development practitioners are trying to address. There are also concerns about sharing findings with those who may, inadvertently or purposefully, misuse the information to further stigmatize communities or target those vulnerable to extremist messaging.

But that doesn’t mean we should shy away from data. In fact, there are plenty of successful demonstrations of how the digital space can help provide early warning and early action. For example, analysing trends in what populations are saying online at different points in time can not only deepen our understanding of social sentiment and engagement with extremist narratives, but also provide insights into the drivers of violent extremism. Data can also help us reach those most affected by telling us where the gaps might be in areas like education, mental health, employment, attitudes towards women, or social cohesion. This information gives us entry points for intervention.

So, how do we go about using data while managing the risks? Here are some important considerations, according to a new piece of UNDP research.

1. Come up with a game plan.

Start by asking yourself, “Why is the internet a relevant space for violent extremist groups? Which groups and communities am I most concerned about? And which online platforms are they active on?” This helps identify what type of information is most relevant, and can help you decide on an objective, for example, analysing the actors, audiences and trending narratives of hate speech and violent extremist (VE) propaganda. Once the objectives are set, it is easier to identify who, what, when, where, and how you’d like to monitor and analyse data. Is your initiative purely local, or is the regional context important? Think about the audience: if a project aims to counter VE narratives targeting youth, look at which platforms young people spend most of their time on.

It is also important to understand the nature of the data available. Material that directly incites violence is often removed quickly by content moderators or automatically by algorithms. But monitoring trends in the narratives and audiences around ‘legal yet harmful’ material can still provide valuable insights into the gender, demographic profile, location and other measurable characteristics of followers. To collect and analyse data at a larger scale, artificial intelligence (AI) tools are increasingly popular, but they can be resource-heavy and carry algorithmic biases. Keep this in mind when picking the right monitoring tool.
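
As a rough illustration of what small-scale trend monitoring can look like before reaching for heavier AI tooling, here is a minimal Python sketch. It is a sketch under assumptions, not a prescribed method: it assumes a hypothetical JSON-lines export of public posts with `timestamp` and `text` fields, and the keyword list is purely illustrative and would in practice be built and validated with local partners.

```python
import json
from collections import Counter
from datetime import datetime

# Illustrative only: a real lexicon would be developed with local partners
# and validated for each language and dialect being monitored.
NARRATIVE_KEYWORDS = {"traitor", "invader", "purity"}  # hypothetical terms

def week_of(ts: str) -> str:
    """Bucket an ISO-8601 timestamp into an ISO year-week label."""
    year, week, _ = datetime.fromisoformat(ts).isocalendar()
    return f"{year}-W{week:02d}"

def weekly_keyword_counts(path: str) -> dict[str, Counter]:
    """Count keyword mentions per week from a JSON-lines export of public posts.

    Each line is assumed to be an object like:
    {"timestamp": "2022-05-01T12:00:00", "text": "..."}
    """
    weekly: dict[str, Counter] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            post = json.loads(line)
            hits = set(post["text"].lower().split()) & NARRATIVE_KEYWORDS
            if hits:
                weekly.setdefault(week_of(post["timestamp"]), Counter()).update(hits)
    return weekly

if __name__ == "__main__":
    for week, counts in sorted(weekly_keyword_counts("posts.jsonl").items()):
        print(week, dict(counts))
```

Counting mentions per week like this makes a sudden spike in a narrative visible early, which is exactly the kind of signal that early-warning work is looking for.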

2. Ask others for help.

Partnerships are a great opportunity to leverage expertise, technology and resources. The support of others also increases the likelihood of your project’s success, as sharing knowledge and experience can help overcome challenges. Because much online discussion is highly contextualized, civil society organizations (CSOs) can be invaluable in helping to collect, analyse and verify data in different languages and dialects. It’s worth remembering that language and trends in everyday speech are changing all the time; CSOs can help us understand nuances of language that automated tools often miss. Large tech companies often serve as gatekeepers of the data we need. More needs to be done to ensure responsible partnerships with them, including ensuring that their practices and content policies align with human rights norms and standards.

3. Figure out how to identify and mitigate the risks.

Creating risk assessments and monitoring and evaluation plans at the very start of project design is crucial. Key considerations include protecting “at-risk” individuals and groups from excessive and unwarranted surveillance by removing personal identifiers, minimizing the collection of data that is not strictly necessary, and assessing the potential impact of sharing findings. To avoid “data surveillance”, all partners should draw up a risk mitigation plan and practise transparency in their digital research efforts. Partners themselves should also be protected, including from the potential mental strain of monitoring harmful material online and from being physically targeted for their work. Monitoring and evaluation processes can help capture lessons learned at each stage of the project. These can inform strategies to mitigate the observed risks and amplify impact by strengthening methodologies going forward.
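
To make the point about removing personal identifiers concrete, here is a hedged Python sketch of data minimization and pseudonymization. The record shape and field names (`author_handle`, `location_gps`, and so on) are assumptions for the example only; a real project would define its schema with partners and subject it to a data protection review.

```python
import hashlib
import secrets

# A per-project random salt, stored separately from the data, so that
# hashed handles cannot be reversed or linked across projects.
SALT = secrets.token_bytes(16)

# Data minimization: keep only the fields needed for aggregate analysis.
KEEP_FIELDS = ("timestamp", "platform", "language", "text")

def pseudonymize(record: dict) -> dict:
    """Strip direct identifiers from a collected post (hypothetical schema).

    The author handle is replaced with a salted SHA-256 digest, so repeat
    activity can still be counted without storing who the person is.
    """
    cleaned = {k: record[k] for k in KEEP_FIELDS if k in record}
    if "author_handle" in record:
        digest = hashlib.sha256(SALT + record["author_handle"].encode("utf-8"))
        cleaned["author_id"] = digest.hexdigest()[:16]
    return cleaned

raw = {
    "author_handle": "@example_user",   # direct identifier: never stored as-is
    "location_gps": (13.75, 100.50),    # unnecessary precision: dropped
    "timestamp": "2022-05-01T12:00:00",
    "platform": "example",
    "language": "en",
    "text": "a public post",
}
print(pseudonymize(raw))
```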

4. Don’t forget about offline approaches.

“Radicalization” occurs in a hybrid manner, both online and offline, with one often reinforcing the other. Gathering information face to face is still crucial to knowing how best to target PVE programmes and policy. While new technology is always exciting, it’s important not to sideline more traditional methods of data collection, verification and application. Those who match the demographic and geographic profiles of “at-risk” groups can help test the findings of online-based projects through in-person focus group discussions. When applying data to PVE programming, a mix of online approaches (such as building digital literacy skills and creating online counter-narratives) and offline approaches (such as strengthening the capacity of youth, religious leaders, education actors and the media to foster communal peace, tolerance and respect for diversity) is key to addressing radicalization and hate speech. This kind of holistic thinking, which addresses both the online and offline spheres of people’s lives and pairs them with a better understanding of the relationship between digital interactions and offline behaviour through digital ecosystem mapping research, can help make a stronger impact.

Readily sharing information is one important step towards helping everyone understand how to harness tech innovations in a responsible and sustainable way. To help both practitioners and policy-makers get to grips with data’s possibilities and challenges, UNDP’s policy brief and guidance note on Risk-Informed Utilization of Online Data for PVE and Addressing Hate Speech gather lessons learned from PVE programmes in seven countries that are using online data in different ways. Read the reports for more on complementary risk-informed approaches to addressing the growing threat of online radicalization and terrorist content, and watch a recording of the EU-UNDP Dialogue on Responsible Digitalization.
