Learning from humanity’s collected and often repeated mistakes is at the heart of evaluation, and I’ve found evaluators more willing than most to open up about the things they didn’t do well, or they wished they’d done better in hindsight. In a recent chat, Ralph Renger shared some of his own valuable lessons about applying a project evaluation approach to a systems problem as a young evaluator and his regret that he didn’t know then what he knows now.
This is why I was delighted (but not necessarily surprised) to stumble onto a great read called Evaluation Failures, a collection of tales from 22 evaluators about mistakes made and lessons learned, edited by Kylie Hutchinson. I found it the perfect amalgam of light and informative to make it an excellent bus or lunch break read. Here are a few lessons I’ve extrapolated from these generous storytellers so far.
- There’s no better way to reduce the risk of things going wrong than by having a face-to-face conversation
This came up again and again. It’s obvious, but even before COVID it was an easy one to overlook when people are busy and distracted with trying to keep a project on track.
- Trust your gut feeling
If something feels like it’s going wrong, it probably feels that way for the evaluation commissioner too. It’s good to raise the concern early, to see if it’s shared, and to find a solution together.
- Organisational restructures make evaluation very difficult but aren’t uncommon
Staff changes and organisational restructures can have a huge impact on the feasibility of delivering an evaluation to plan, but they aren’t uncommon – especially when an evaluation is multi-year. (One of the stories tells the sad tale of an evaluation handballed between four internal project managers on the client side, each less engaged than the last.)
It’s good to have a go-to plan if it looks like the evaluation has been lost in restructure soup. This might include reverting to tips 1 and 2 (meeting face to face, raising problems early), setting up an evaluation working group within the organisation, building contingency into the budget and timelines, or creating a plan for stakeholder re-engagement (one that almost certainly should include face time).
It’s always worth remembering (and communicating to the client) the value of an evaluator in this position, too – there can sometimes be more organisational knowledge in the evaluator’s head than in a new staff member’s!
- Hold strong to the limits of your role
Amy Gullickson’s wonderful aes22 keynote (reprised here) reflected on maturity as our ability to operate within our own bubble – holding our own tension, emotional being and destiny – without straying into other people’s bubbles.
Likewise, some of these stories reflect a need to maintain a strong boundary around what is and isn’t the responsibility of an evaluator – even (especially!) when we are excited about a program’s possibilities.
“I have finally come to understand it is not the role of an evaluator to solve major program issues or to shore up program staff. If we do, we may be in for a dizzying fall over a cliff ourselves,” Gail Vallance Barrington wrote in her chapter ‘The Buffalo Jump’.
- Don’t assume the evaluation commissioner or policy maker has the same view of evaluation as you do
There’s a great quote in Rakesh Mohan’s chapter, ‘I Didn’t Know I Would Be a Tightrope Walker Someday’.
“I assumed that policymakers understood the nature of our work and the importance of how we do that work…”
There’s a lot to be said for having a baseline conversation with the client at the beginning of a project about how they view evaluation, and how you as an evaluator see your role. Mismatched expectations are at the heart of several of these tales of evaluation failure.
What lessons can our seasoned evaluators at ARTD share?
Our teams are constantly learning from projects and from each other’s experience. We look forward to bringing you a short series of blogs from our ARTD staff on how they explain evaluation (see lesson 5), and their lessons from challenging projects.