
Beyond Indicators: Why We Need Qualitative Monitoring


Do indicators tell the whole story of development? Christopher Gegenheimer explains why going out and listening to beneficiaries is important at every stage of a project.

This week at the American Evaluation Association’s 2015 Conference, Chemonics is presenting findings from several of our recent projects, including Zambia Communications Support for Health (CSH), Palestinian Authority Capacity Enhancement (PACE), and Philippines Private Sector Mobilization for Family Health (PRISM) 2. These presentations reflect how critical evaluation is to gathering the evidence needed to demonstrate the positive changes our projects helped create.

This year has been designated the “International Year of Evaluation,” and as we consider all of the new findings, results, and trends emerging in the world of evaluation, one question stays with me: Given the effort, planning, and expense that go into evaluations, what can we do to get rich information about our projects’ effectiveness between scheduled evaluations?

One answer I keep coming back to is monitoring our projects beyond indicators, incorporating some elements traditionally reserved for evaluation into regular monitoring.

Within the traditional division of monitoring and evaluation for development projects, monitoring data demonstrate what is happening or changing, while evaluation helps answer questions about whether what we did was effective in achieving our goals. Within this framework, we usually assume that monitoring is done only with numbers and quantitative indicators, and that qualitative methods, such as asking beneficiaries open-ended questions, are useful only for evaluation. From this perspective, the concept of monitoring without relying solely on indicators plays into our collective worry that if there is not a number, nothing happened.

Qualitative monitoring then faces a challenge: If it is not rigorous enough to be considered evaluation and not numeric enough to be an indicator, what’s the point?

First, qualitative monitoring helps us move beyond our implicit assumption that more indicators mean better monitoring. We have all seen, collected, and, dare I say, proposed indicators that measure the number of events, the number of person-hours spent in trainings, the number of publications drafted, and so on. While these indicators may give us quick numbers and big “results,” they are also built on an assumption of direct cause and effect that rarely holds true when working on long-term social change.

For example, distributing a certain number of toothbrushes will not directly cause an improvement in oral health. There may be social, behavioral, cultural, and structural factors that influence how people use the toothbrushes, or whether they can or will use them at all. In this hypothetical situation, if we were to follow up with some people who received toothbrushes and ask them why they did or did not use them, we could make more informed programming decisions that would allow us to achieve the goal of improving oral health.

Second, the purpose of monitoring is to identify whether we are on track to get where we want to be. With extra questions added to monitoring forms, variants of the Most Significant Change technique, Outcome Harvesting, and semi-structured interviews, we can complement output figures and gain a richer understanding of whether our intended results are starting to be observed. The more we integrate qualitative methods into our routine work, the more actionable information we can gain for a modest investment of additional time and effort. These results will not be definitive, and they will not be the same as a full evaluation. But from this basis we can take these insights and adjust interventions, or even our evaluation design, to ensure we are implementing the right activities and asking the right questions at evaluation time.

In short, there is a gap between the “output/process” indicators that we collect frequently but that do not necessarily show results, and the “outcome” indicators that demonstrate change but are collected infrequently, require a large effort, and may not come quickly enough to let us adjust interventions. As a supplement to indicators, qualitative monitoring can help fill this gap by using tools and techniques usually associated with formal evaluations, but on a smaller scale and in a more iterative fashion.

This idea is not new and has come up in many places over the years, including in the Most Significant Change guide and, more recently, in a discussion note from USAID on Complexity-Aware Monitoring. The key addition these techniques offer is that, through a deliberate effort to speak with those most directly involved in our interventions, we can validate our assumptions, check whether our logic is sound, and, perhaps most importantly, discover what needs to be done differently while there is still time to actually make the change.

Indicator-only monitoring may fit neatly into a results framework, but it will not always provide the kind of information we need. So go out: gather stories, ask open-ended questions, and engage in conversation with beneficiaries. You just may be able to create a more complete picture than all of our output indicators combined.


About Chris Gegenheimer

Chris Gegenheimer is a director for Monitoring, Evaluation and Learning (MEL) Technology and leads efforts in rolling out new data collection, management, analysis, and visualization technologies across home and project offices. Since joining Chemonics, Chris has been in the MEL department supporting proposal and project teams in Latin America and the Caribbean, Asia, Africa, and…