This post originally appeared on IntraHealth International’s blog VITAL.
A police officer sees a drunk man searching for something under a streetlight and asks, “What have you lost?”
“My keys,” the drunk man says, and they both look under the streetlight together. After a few minutes, the officer asks if he is sure he lost the keys here, and the man replies, “No, I lost them in the dark alley across the street.” The officer asks why he is searching here, and the drunk man replies, “This is where the light is.”
This is just an old joke, of course, but maybe we are not too different from the drunk man in the way we attempt to measure the impact of our development programs.
As development implementers, we might behave just like this when we design our measurement and evaluation frameworks. We need evidence. We need to see results or impact. So we select indicators that are easiest to measure—standing out there in the light—and we shape our interventions around that framework.
How often are we as researchers tempted to track the data that are easiest to capture rather than searching in the dark for what we really need to know? Under the streetlight, we examine and document easy-to-see data, while the information we really need may lie elsewhere, out of reach.
In reading the new Lancet Global Health Commission report, High-quality health systems in the Sustainable Development Goals era: time for a revolution, we ask ourselves how we became sidetracked into measuring coverage, utilization, and access to the detriment of quality as measured by patient experience, competent care, and health outcomes. As development practitioners and researchers whose daily work intersects with measurement, we see the report as a wake-up call regarding our own observational biases … Read the full article on VITAL.
Blog posts on the Chemonics blog represent the views of the authors and do not necessarily represent the views of Chemonics.