Is your program logic logical?
The term program logic implies that the model you develop is logical. But is it? Often the links between outputs and short-, medium- and long-term outcomes in a program logic model look more like a wish list than something that would logically follow from program activities.
We need to put the logic back into program logic so it can help us design programs that work and evaluate them effectively. To do this, I think evaluators need to do five things:
- Recognise the difference between program logic and theory of change. A program has one logic but can have many theories, covering the program overall and different levels of the program logic. For example, in a parenting program there may be a theory about why improving certain mothers’ parenting skills in a particular way will lead to better parenting, but there may also be a theory about why different approaches to marketing the program will work differently for mothers depending on their circumstances, motivations, media consumption and so on.
- Understand and use theories of causality that are useful for different purposes. There are three main theories of causality that evaluators might draw on:
- Program logics often employ a successionist theory of causality, in which programs are reduced to a chain of cause-and-effect relationships. But social programs are more complicated than this suggests.
- A configurationalist theory of causality is often the most useful for program logics describing social interventions. This theory of causality recognises that a range of things need to come together in a ‘causal package’ to bring about change. It’s like baking a cake: for success, you need the right combination of ingredients, mixed in the right way, and placed in the right context (i.e. an oven at the right temperature).
- If you want to understand how and why change happens, you are best off with a generative theory of causality, as a realist evaluator would use. A generative theory of causality explains the underlying mechanism of change, for example, how the ‘raising agent’ in the cake works. You wouldn’t generally represent this in the program logic model, but would describe it in a narrative, table or other figure that sits alongside the program logic.
- See programs as arguments with premises and a conclusion. Viewed through the lens of informal logic, a program is an argument. The premises in the argument are the outputs of program activities (as well as our assumptions), and the conclusion is the intended immediate or short-term outcome. In a good argument, if all the premises were true and the assumptions held, the intended short-term outcomes would follow with a high degree of probability. Theory can provide an underlying ‘reason to think this will work’. This is a special case of the broader class of ‘warrants’ that provide reasons to accept an argument about a course of action.
- Evaluate the logic of program design, implementation and outcomes. This may require three stages:
- Does it make sense on paper? If all the necessary conditions were achieved and our assumptions held, would these be sufficient to achieve our intended immediate or short-term outcomes?
- Was it effective? Seek evidence to test the premises empirically and determine whether the argument is well-grounded. Did these conditions together form a causal package that was sufficient to achieve our intended immediate or short-term outcome?
- Was it efficient? Was each condition actually necessary, or could the program be streamlined?
- Be realistic about what evaluation can usefully measure. We know that programs and policies are not always fully developed when they are implemented. It is also common for a program to be sufficient for an immediate or short-term outcome but only to contribute to longer-term outcomes alongside external factors. Taking a logical approach, we can conduct a very cost-effective evaluation by focusing on the most important parts of a program, given its maturity and the current state of knowledge about its value. This ensures we really do generate evidence and insight to inform decision-making.
The empirical evidence generated by evaluation methods can be used to refine the logic and the theories. This can inform not only further implementation of the program under study, but also the design and implementation of future programs of a similar nature.