[Image: Two brightly colored murals side by side on a dividing wall, one addressing conflict resolution, the other celebrating cultural diversity.]

Where’s the Impact? Measuring Results on Countering Violent Extremism Programs


Former chief of party Ryan Smith discusses the challenges of measuring the results of countering violent extremism programs and shares what his project learned in Kenya.

On September 29, 2015, representatives of more than 100 countries and of civil society attended the Leaders’ Summit on Countering ISIL and Violent Extremism at the UN General Assembly to further the global discussion about the importance of countering violent extremism (CVE) around the world. The meeting was preceded by the White House Summit on Countering Violent Extremism in February, which endorsed an action agenda for the community of actors in the military, civil society, and government. While global action on CVE has been gaining momentum recently, the U.S. government, and specifically USAID, has been actively thinking about CVE for many years. Just this year, USAID established a Secretariat on Countering Violent Extremism to coordinate CVE policy initiatives and help missions design their CVE activities.

At the project level, CVE programming remains incredibly complicated, especially when it comes to determining a program’s results. From 2013 to 2014, I led USAID’s Kenya Transition Initiative (KTI) and experienced these challenges firsthand. The project implemented CVE activities in the Eastleigh neighborhood of Nairobi and in five counties along the Coast, supporting individuals, networks, and organizations through a small grants mechanism. Our activities were designed to target the key “push” and “pull” factors identified for each area through an initial baseline assessment and, ultimately, to promote non-violent behavior among at-risk groups.

When it came to monitoring and evaluation, the challenge was determining the right mix of methodologies for measuring the impact of the KTI program. No single methodology seemed adequate. An end-line quantitative perception survey (conducted at closeout) might have shown a change in beneficiaries’ feelings about identity or their behavior in their local community, but it would not have answered the question of causality. Did KTI activities influence the changes? A qualitative analysis might have helped determine whether beneficiaries or partners changed their behavior as a result of KTI, but it would have had limited ability to show aggregated programmatic impact. Does the fact that a number of training participants acknowledged changing their behavior mean that KTI activities contributed to community-level change?

In the end, KTI determined that we needed to do both. We designed an evaluation strategy to examine the impact of our CVE activities by aggregating the basic outputs at each program location, conducting an end-line quantitative survey to detect any perceived behavior change, and commissioning a qualitative study by an independent research team to trace causality. We believed that this combination of methodologies would help us triangulate the actual impact of project activities and weed out outliers better than any single methodology could on its own. As a result, we were able to cross-check the findings from each study to see whether we could attribute change. For example, did the increased receptiveness to their feedback that surveyed youth in Eastleigh reported from local government authorities correlate with an actual increase in interactions between youth and local authorities? While not perfect, the project was able to draw a number of conclusions by correlating the findings of both studies.
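To make the cross-checking step concrete, here is a minimal sketch of how a finding from one study can be tested against an independently collected measure from another. This is not KTI’s actual analysis; all variable names and figures are hypothetical placeholders, and the method shown (a simple Pearson correlation) is just one plain way to compare the two data sources.

```python
# Hedged sketch: correlate a survey-based perception score with an
# independently documented count of the behavior it should reflect.
# All data below are invented for illustration.

from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-location pairs: an end-line survey score for
# "authorities listen to youth feedback" (1-5 scale) alongside the
# number of youth-authority meetings documented by the independent
# qualitative research team in the same location.
survey_receptiveness = [3.8, 2.1, 4.2, 3.0, 2.6, 3.5]
documented_meetings = [9, 2, 11, 5, 4, 7]

r = pearson_r(survey_receptiveness, documented_meetings)
print(f"Correlation between perceived and observed engagement: r = {r:.2f}")
```

Under this sketch, a strong positive correlation would support attributing the perceived change to a real shift in youth-authority interaction, while a weak one would flag the survey finding as a possible outlier to probe further.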

This evaluation exercise taught us that CVE programming is sensitive and that regular, independent analysis is critical to its effectiveness. It also taught us that measurement needs to be built into implementation from the beginning of a program. While KTI devoted considerable resources to baseline surveys and research, it lacked the ability to evaluate impact at the activity level during implementation. Ongoing impact evaluations or additional pre-activity research might have helped the project identify other subsets of target beneficiaries (ex-convicts, religious converts, etc.), which may have allowed the project to have an even greater impact on CVE. Real-time monitoring might also have allowed the project to adjust its messaging during an activity so that it resonated better with various beneficiaries.

CVE programming has to navigate incredibly complex, sensitive, and rapidly changing social challenges to reach the right audience with the right message or intervention. To stay accurate and timely, CVE programs must be able to adjust quickly for maximum impact. I believe KTI’s pilot evaluation exercise illustrated this point and showed that using multiple measurement methods to triangulate the impact of CVE activities, while more difficult and costly, can be effective.

About Ryan Smith

Ryan Smith is a director in Chemonics’ East and Southern Africa Division and served as chief of party on the USAID Kenya Transition Initiative.