One of the most important considerations in determining a useful approach to any evaluation is the importance of context to the success of the initiative. Context is the battleground on which rival orientations to evaluation must and do engage. The old, the middle-aged and the new traditions of experimental, realist and systems approaches to evaluation all have a place for context. It is their different treatment of context that gets to the heart of their dispute and suggests where a contingent or fit-for-purpose approach to evaluation might start.
Context may not always be that important in an evaluation. The stronger the intervention relative to all other factors affecting an outcome, the lower the likelihood that context will matter. For example, a clinical trial of a strong drug that reliably affects everyone who takes it might not need much attention to context. But in a ‘lower dose’ intervention like a staff mentoring program, the intervention may be quite sensitive to context – maybe the staff here have recently been burnt by previous programs, or maybe staff will use the program to find jobs elsewhere rather than to improve performance in their current role. Similarly, a hospital reform that must deal with innumerable factors affecting outcomes may need to make context, rather than any initiative designed to effect change, the starting point and most important consideration.
The experimentalist approach to evaluation follows the rationalist position in philosophy and science (Toulmin, 2001). This position sees context as an unwelcome intruder on experiments designed to reveal some underlying truth about the nature of an initiative or action. Here, context is something to be controlled experimentally or partialed out statistically to allow the true ‘platonic essence’ of an intervention to be revealed – an approach found to be useful in some, but not all attempts at generating knowledge in complex systems (Hawkins, 2016).
The realist approach sees context as a co-conspirator or accessory that is an integral part of the causal process. On the realist account, it is the latent, dormant, hidden or abstract causal mechanisms that do the work, not the interventions we observe in the world (Pawson and Tilley, 1997). Causal mechanisms are inferred to arise from the relations among components of structures that make up the world. They exist apart from any intervention that may leverage them (Bhaskar, 2008). Yet these mechanisms will only fire when the context is right, just as gunpowder will only ignite when it is dry. Context on this account is something to be harnessed and must be explained, understood and incorporated in any attempt to ensure an initiative or action has an effect.
The systems approach is rather more enamoured with context than either the cold and clinical experimental intervention, or the affectionate realist focus on mechanisms. To a systems engineer, the context is the starting point. It is the main thing that needs to be understood prior to considering the value of any intervention into it. Here, context is king. It is the system with all its interactions and complexity that is the unit of analysis (Renger, 2015). In this approach there is often little use for the search for stable cause and effect relations between system components (Kurtz and Snowden, 2003) or even for the search for realist context-mechanism-outcome configurations (Pawson and Tilley, 1997). The focus is on understanding a specific context, and effort is spent on real-time data collection and decision making to improve the efficiency of the system, and sometimes to change it in fundamental ways.
There are no doubt innumerable other approaches to evaluation where context is important. Empowerment evaluation, participatory evaluation and developmental evaluation are all heavily focused on the context in which they are practiced, as is the philosophical position of constructivism more broadly.
What we think about context reveals what we think about interventions and about making changes in the social world. Anyone designing an initiative, or evaluating one, would do well to consider the strength of the intervention and the importance of context to its success before choosing an approach that is likely to generate useful information for decision making.
Bhaskar, R. (2008). A Realist Theory of Science. Routledge. (First published 1975.)
Hawkins, A. J. (2016). Realist evaluation and randomised controlled trials for testing program theory in complex social systems. Evaluation, 22(3), 270–285. https://doi.org/10.1177/1356389016652744
Kurtz, C. F., & Snowden, D. J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42(3), 462–483.
Pawson, R., & Tilley, N. (1997). Realistic Evaluation. Sage Publications Ltd.
Renger, R. (2015). System evaluation theory (SET): A practical framework for evaluators to meet the challenges of system evaluation. Evaluation Journal of Australasia, 15(4), 16–28. https://doi.org/10.1177/1035719X1501500403
Toulmin, S. E. (2001). Return to Reason. Harvard University Press.