The Evidence Based Policy Summit in Canberra last week brought together staff from federal and state government departments to discuss strategies for strengthening data collection and using evidence to improve policy, programs and practice.
At the summit I ran a workshop on using evaluative thinking for program design. This is an approach that builds on the core concepts in program logic but is more explicit about the ‘logic’.
The pitch for Program Design Logic was:
- A program, at its core, is simply a proposition that a certain course of action will lead to a certain set of outcomes.
- A sound, evidence-based program will make sense ‘on paper’. That is, if we achieve each condition, then we have reason to conclude that, collectively, these will be sufficient for bringing about our immediate intended impact.
- Program Design Logic helps identify flaws in a course of action at an early stage using pre-program evaluation.
- As with many approaches to program logic, the components in our course of action are written as conditions or propositions in the form of a subject and a condition, for example, ‘the client is more aware’ or ‘the program is well-resourced’. Thus, it’s not the actions themselves that we should focus on, but the intended outcome of each action – the ends not the means.
- An evidence-based program must also be evaluated ‘in reality’: a monitoring and evaluation framework, strategy and series of plans can be used to empirically evaluate the extent to which our actions actually lead to each condition (how and under what circumstances).
- Program Design Logic is the foundational stage in a Design (or Define if it’s already in place), Implement, Monitor, Evaluate, Learn & Adapt process, using lean and agile concepts for continuous improvement.
It is now common practice to develop a ‘program logic’, but it is too easy to make a fanciful program logic – one that shows we develop the policy, we implement the policy and people change their behaviour. If we think critically about whether our activities will be sufficient (given our assumptions) and, in fact, necessary, we will develop more effective programs that are more efficient in their design.
In Program Design Logic, a program, intervention or course of action can be broken down into a series of steps or components that we think are necessary, and which, when all achieved, should be sufficient to bring about a desired result or outcome. As with every action in life, we will inevitably rely on some assumptions about which we are not 100% sure.
At an early stage, the focus for design and initial evaluation of a course of action – or in lean terminology, a ‘minimally viable product’ – is setting out the logic of the intervention as a ‘causal package’ of necessary and sufficient conditions. This is more like a recipe than a ‘causal chain’ in which one action or component is supposed to cause the next.
To build an evidence-based ‘minimally viable product’ we need reasons to think our actions will deliver a causal package, that is, bring about certain conditions that together will be sufficient to bring about an intended outcome and contribute to higher-level outcomes.
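As a rough illustration of the ‘causal package’ idea, the logic can be sketched in a few lines of code. The condition names below are hypothetical examples invented for the sketch, not part of Program Design Logic itself; the point is simply that the outcome follows only if every necessary condition is achieved.

```python
# Illustrative sketch: a causal package as a set of named conditions.
# The condition names here are hypothetical examples.

causal_package = {
    "clients are aware of the service": True,
    "the program is well-resourced": True,
    "staff are trained to deliver the service": False,  # a flaw spotted at design stage
}

def package_sufficient(conditions):
    """The package brings about the intended outcome only if
    every necessary condition is achieved."""
    return all(conditions.values())

def unmet_conditions(conditions):
    """List the conditions that would make the logic fail 'on paper'."""
    return [name for name, achieved in conditions.items() if not achieved]

print(package_sufficient(causal_package))  # False: the logic fails on paper
print(unmet_conditions(causal_package))    # the condition(s) to redesign around
```

Pre-program evaluation, in this framing, is the work of finding the `False` entries before any resources are spent.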
It is important that we maintain a realistic focus on what we can achieve, while at the same time always checking that we are in alignment with our long-term vision and ultimate intended outcomes. Technically, this means that in addition to our course of action being sufficient to bring about some outcome, we expect the course of action will contribute to some higher-level outcome: our vision. But it will do so in combination with the effects of numerous external factors over which we have very little influence.
In the workshop we worked to spot the flaws and unfounded assumptions in people’s programs for the purpose of developing feasible and cost-effective ways of re-designing programs to achieve intended outcomes. From multi-pronged approaches to increase the wearing of life jackets, to augmenting a regulatory approach for rehabilitation service providers with consumer education, we had a very productive time and the feedback was very positive.
Program Design Logic provides a tool for evidence-based policy and program design (and re-design) that is much simpler than many expect but requires plenty of thinking to do well. It is really just about having good reasons for each part of your program, being ruthlessly honest about what you really must achieve, rather than accepting wishful thinking about what you hope to achieve.
In technical terms, each condition is an insufficient but non-redundant (i.e. needed) part of an unnecessary (i.e. there are other ways) but sufficient condition (i.e. the intervention itself). That is, each step is an INUS condition (an insufficient but non-redundant part of a condition which is itself unnecessary but sufficient for the occurrence of the effect), while the program as a whole is a sufficient condition. In taking this approach we are explicitly employing a configurationalist rather than the successionist theory of causality that seems to be assumed in many linear program logics. It is argued that this approach is useful for program design at the macro level, but understanding micro-level causality may best be achieved using generative theories of causality or a realist approach. Reasons will include assumptions, warrants or theories; while theory of change is a crucial part of a good program, it is also just one type of evidence needed for an evidence-based program and is subordinate to the overall logic of the course of action.
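The INUS structure can be made concrete with a small boolean sketch. Here `A` and `B` stand for the conditions in our (hypothetical) intervention package, and `D` for some other route to the same effect; none of these labels come from a real program.

```python
# Illustrative sketch of an INUS condition (after Mackie), using booleans.
# Package 1 (our intervention): A and B together are sufficient for the effect E.
# Package 2 (another route):    D alone is also sufficient, so package 1 is unnecessary.

def effect(a, b, d):
    return (a and b) or d

# A alone is insufficient ...
assert effect(a=True, b=False, d=False) is False
# ... but non-redundant: without A, the rest of its package fails.
assert effect(a=False, b=True, d=False) is False
# The package (A and B) is sufficient ...
assert effect(a=True, b=True, d=False) is True
# ... but unnecessary: the effect can also occur via D.
assert effect(a=False, b=False, d=True) is True
```

Reading the four assertions together spells out I-N-U-S: A is an Insufficient but Non-redundant part of an Unnecessary but Sufficient package.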