Insights from recent projects #2

We’re posting reflections from our consulting teams with takeaways we hope other evaluators will find useful. This is the second in our Insights from Recent Projects blog series. Read the first blog here.   

Process tracing in evaluation design with Team Karrikin


Using thinking from process tracing in evaluation design

We have begun drawing on thinking from process tracing to improve how we design evaluations that focus on measuring contributions to an outcome. Process tracing is a methodology often used to assess the likelihood that a particular action or intervention was, or will be, the causal mechanism behind an observed or intended outcome.

Rather than trying to prove causation outright, which is rarely possible for complex social-environmental programs, process tracing helps us look for specific types of evidence that either strengthen or weaken our confidence in a program’s contribution. We have applied this thinking to disaster assistance and climate change programs; however, it can be used as a thinking tool in the design phase of any evaluation to strengthen the design. We will also be combining process tracing with contribution analysis in the synthesis stage of an evaluation to explain how, and under what circumstances, any observed impacts were generated.

How we did this: 

We used three classic process tracing tests to determine what evidence we would expect to see or not see if the program’s theory held true.

  1. The Hoop Test asks us to consider what evidence we would certainly expect to see if our theory is accurate. If that evidence does not appear, the theory is unlikely to be true; if it does appear, the theory survives but is not yet confirmed.
  2. The Smoking Gun Test asks us to consider the unique confirmatory evidence likely to be observed only if our theory is true. Passing the test lends strong support to our theory and weakens alternative theories; failing it does not strongly count against our theory.
  3. The Doubly Decisive Test asks us to consider what evidence both confirms our theory and rules out alternatives. This is evidence unlikely to be observed under any alternative theory. Passing this test lends strong support for our theory and against alternatives, while failing it does the opposite. This kind of evidence gives us stronger certainty about the claims the evaluation makes about contribution to outcomes. [1]
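The logic of the three tests can be summarised by asking whether the predicted evidence is necessary and/or sufficient for the theory to hold. As a minimal illustrative sketch (not part of our evaluation tooling; the function and labels here are hypothetical), the tests and their interpretations might be laid out like this:

```python
# Hypothetical sketch: each test maps to whether the predicted evidence is
# (necessary, sufficient) for the theory, and the interpretation follows
# from whether the evidence was actually observed.

TESTS = {
    "hoop": (True, False),            # must appear if the theory is true
    "smoking_gun": (False, True),     # unlikely to appear unless the theory is true
    "doubly_decisive": (True, True),  # confirms the theory and rules out rivals
}

def interpret(test: str, evidence_found: bool) -> str:
    """Interpret what finding (or not finding) the predicted evidence implies."""
    necessary, sufficient = TESTS[test]
    if evidence_found:
        if sufficient:
            return "theory strongly supported; alternatives weakened"
        return "theory survives, but is not confirmed"
    if necessary:
        return "theory effectively eliminated"
    return "theory only slightly weakened"
```

For example, a program theory that fails a hoop test (the expected evidence is absent) is effectively eliminated, whereas the same absence under a smoking gun test only slightly weakens it.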

Together, the team workshopped which contribution claims were inherent in the program logic and which we’d likely need to make with the evidence we were collecting in the evaluation. Then we brainstormed what evidence we’d need to collect to test these claims – including the data we’d need to disprove the logic.

Working through these tests early in the design phase helped us identify what contribution claims would actually be possible to test, which data sources were essential and which were nice to have, and which survey or interview questions would yield the most meaningful (not just descriptive) evidence with which to make causal contribution claims. It sharpened our data collection tools by providing a structured way to think through the data that needed to be collected about other possible causes of outcomes, as well as unique confirmatory evidence about our evaluand.

We also applied the same thinking when developing a Monitoring, Evaluation and Learning (MEL) Framework for another large program. Again, the goal wasn’t to prove causation, but to give the program clear guidance about the evidence likely to appear if the program theory is functioning as intended. This allowed us to embed realistic expectations directly into the MEL Framework.

Drawing on process tracing in the design phase has strengthened our evaluation design. We anticipate this will support better-evidenced claims about contribution, build better knowledge about how this kind of program works in a complex environment, and lead to more relevant recommendations to guide improvements and future actions.

“As much of our work is in the natural disaster, climate change and environment sector, we chose the name Karrikin. Karrikins are compounds found in the smoke of bushfires that trigger new growth in a wide range of plants. They work by activating the symbiotic fungi in root systems that help stimulate germination and development, especially in tough, nutrient-poor conditions.”

Cover image owner: Doug Gimsey 

[1] Befani, B. and Mayne, J. 2014. ‘Process Tracing and Contribution Analysis: A Combined Approach to Generative Causal Inference for Impact Evaluation’, IDS Bulletin, 45(6).
