The best horror movies always get sequels! The Australian Evaluation Society’s goosebump-raising ‘Evaluation topics that keep you up at night’ open-space interactive session, first attended by Ken in 2019, recently returned for a second haunting! Lia and Ken got front-row seats at the 2021 sequel, facilitated by Kath Vaughan-Davies of K2 Strategies and Greg Masters of Nexus Management Consulting. Here are some of the topics that set their teeth chattering, and some of the takeaways to help save evaluators and evaluation commissioners from sleepless nights.
One of the key benefits of this open-space session was the range of perspectives shared – by both evaluators and the people who commission evaluations. The co-created discussion agenda covered lots of terrifying topics, but here are a handful of horrors that keep some of us up at night.
Pre-determined outcomes
Sometimes clients come to an evaluation with pre-set outcome measures. A critical issue in this discussion was ‘Why should an evaluation be commissioned or completed when outcomes have already been pre-determined?’
From a commissioner’s perspective, this might be because the evaluation is occurring at the formative stage of a program or policy, or because the evaluation was not considered from the outset of the program. There may be performance indicators or expectations set by the client’s management around outputs – for example, a particular agency or department may have a set plan to publish a specified number of success stories as case studies on their website each year. It could also arise from a time constraint (e.g. wanting to see program/policy results ASAP) or a cost consideration (the costs of obtaining data that is not yet available may be substantial).
But for evaluators, pre-determined outcomes can lead to sleepless nights!
Having set views about the expected changes or impacts may negatively affect both the design and implementation of the evaluation itself, as well as how the findings are received and used. It may lead to the expectation that the evaluation be conducted faster than is possible, or it might mean some stakeholder views are not prioritised. This can make it harder for evaluators to make sound findings and recommendations.
Participants suggested that establishing an evaluation advisory or reference group, to encourage diverse and expert stakeholder input before the evaluation is commissioned, can open the conversation about appropriate and relevant outcome measures.
Reporting on negative results
Reporting on unexpected or negative evaluation findings and results is a recurring bad dream for evaluators and commissioners. (This was one of the topics we discussed both in 2019 and 2021!)
So much blood, sweat and tears goes into researching, planning, and implementing a service, policy or program, and its champions naturally want to see it succeed! A key role for evaluators is bringing people along on the evaluation journey and creating a shared awareness of potentially ‘negative’ findings (read more here).
Participants suggested other practical ways to report negative findings.
- Be transparent, even if the results are negative (or not as great as initially expected).
- Refer to underpinning ethical principles for sound evaluation work.
- Include a limitations section in your reporting to acknowledge challenges or data constraints (for example, if there was less stakeholder engagement than planned, or lower reach into some stakeholder groups).
- Report findings even-handedly. Respect the hard work of people who have designed and implemented the program and, wherever possible, also include positive results or program strengths.
Why bother doing evaluations?
As evaluators, our key goal is providing credible evidence to inform decision making for the public good, so this one about evaluation use really does keep us up at night! (Our Managing Director, Jade Maloney, shares her thoughts on utilisation in this journal article.)
There are plenty of upfront reasons not to bother. Foremost, commissioning an evaluation can be expensive and time consuming. There may be a lack of certainty about why an evaluation is required, what it should cover, and how it will create useful and useable insights. There may be other, less often discussed, reasons not to evaluate too. For example, a program designer’s concern about a lack of effect, or less than ‘positive’ findings, may limit the desire to evaluate. Or there may be uncertainty about how an evaluation’s findings and recommendations could be translated into specific actions.
However, as the group reflected, it’s important to remember that tomorrow’s decision makers may not be the same people as today’s decision makers. A credible evaluation is enduring and may be useful (and translated into action) in later years. From our perspective, ensuring evaluations are useful and useable contributes to growing understanding of why evaluation is important.
It was great to have the opportunity for open discussion among participants from diverse backgrounds. We’d love to hear your thoughts on these topics too, on our LinkedIn page!