
Sharing failures strengthens evaluation
We all like to share success stories, but the fact is that we can learn just as much – and sometimes more – from talking about when things don’t go as planned.
This was the subject of the Australasian Evaluation Society NSW event on Wednesday May 30, “Learning from evaluation failures”. The event was run by two experienced evaluators who each shared a previous case of evaluation “failure”, where the client had difficulty accepting the findings of the evaluation.
The cases
Case 1: The evaluator found out that the number of participants who transitioned from institutional care into a support program was zero. When the evaluator presented this finding at the final meeting, the client questioned its accuracy.
Case 2: The evaluator worked with the client from the beginning to identify and agree on which data would be used to measure outcomes, but as the project progressed, the client increasingly favoured their internal data over external sources. At the close of the project, the evaluator drew on the external data to conclude that the program objectives had not been met. The client disagreed, citing their internal data to maintain their position.
Evaluators at the session formed small groups to discuss “What could the evaluator have done to prevent or minimise this negative result?”
What could have been done differently?
Gaining acceptance of and action on negative findings is tough. This is unsurprising given the evidence that people tend to accept information that confirms their views and reject information that challenges them.
The key issue identified from both cases was the need to bring people along on the evaluation journey. In the first case, the evaluator appeared to operate alone, which may have exacerbated the negative reaction at the close of the project. In the second, the evaluator and client did not stay on the same journey despite their initial agreement. Building and maintaining partnerships with stakeholders is an effective way to prepare them for negative findings and ease their acceptance of them, as well as to increase their sense of ownership of the project and of the next steps needed to create change.
Evaluators identified a range of practical ways to work in partnership with these stakeholders that may have led to more positive project outcomes.
- Communicate regularly and proactively throughout—this can range from formal check-in meetings to an informal understanding to communicate key information as it comes to light. What is important is that there is a shared awareness of the methods being used and key results as they emerge.
- Engage and get endorsement of primary users—engaging senior management decision-makers, seeking to understand their expectations about outcomes, and gaining their endorsement at the outset can help to reduce risks.
- Understand the context—a key element of utilisation-focused evaluation is an appreciation of the context (political and programmatic) in which an evaluation takes place. The priorities, needs, and preconceived expectations of stakeholders can shape how an evaluation is developed and ultimately accepted. Even with regular and proactive communication, if the program team has a vested interest in the evaluation producing positive results, negative findings can create friction.
- Re-frame negative findings—framing negative or contradictory findings as lessons and opportunities for improvement can help pave a way forward.
- Identify the potential for negative findings at the outset—it is just as important to ask clients what failure would mean and how they would respond as it is to ask what success would mean. This helps to identify expectations, enabling you to frame how you communicate activities and results so that stakeholders feel part of the journey, and are empowered to make changes as a result.
These strategies fit with the findings of ARTD Partner Jade Maloney’s research on evaluation use. However, Maloney’s research also identified that these strategies can fail when working with organisations that lack a learning culture, or when findings are politically unpalatable.
The strategies also align with Michael Quinn Patton’s Utilisation-Focused Evaluation. Patton’s approach provides a framework for evaluators to maximise the intended use of evaluations by their intended users, even where the results do not match what program staff or management expected.
Let’s keep sharing
The candour of evaluators in telling these stories, and in inviting others to consider how we can collectively achieve greater use of evaluations, is a positive contribution to evaluation practice. It builds on growing conversations in the field, such as those at the AES 2017 conference in Canberra and in Kylie Hutchinson’s recent book, “Evaluation Failures”.
We’re keen to continue the conversation – this year’s AES Conference will be a great opportunity.
Resources
Hutchinson, K., “Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned”, 2018
Patton, M. Q., “Utilization-Focused Evaluation”, 4th ed., 2008
Patton, M. Q., “Utilization-Focused Evaluation (U-FE) Checklist”, 2013
Ramírez, R., and Brodhead, D., “Utilization Focused Evaluation: A Primer for Evaluators”, 2013