Evaluator Lessons and Challenges – with Jade

How do you describe evaluation?

It depends on who I’m talking to – how much they know, and their attitude to and experience of evaluation. I like Scriven’s definition – “the systematic process to determine merit, worth, value, or significance” of a thing (whether that’s an organisation, policy or program). I might translate this as helping people define what’s of value to stakeholders and why, work out how they’ll know if they’re delivering on that value, and look at how they’re doing and how they can do better.

Evaluators aren’t always welcomed. While we think we’re great, people can fear us or have had negative experiences in the past. So, at some point in my career, I stopped calling myself an evaluator. But, a few years back, I realised that this just perpetuated the problem of evaluation being unknown, feared or misunderstood – and of work that isn’t really evaluation being passed off as evaluation.

That’s why now, however I describe it, I try to mention values and valuing, because they’re at the core of evaluation.

What was your most challenging project and what were the lessons?

There have been many over the past 15 years. When you’re grappling with tough policy problems, politics and people’s lives, the work isn’t easy – and it shouldn’t be. But there’s run-of-the-mill difficult, and then there’s the difficult that changes your practice. That’s why ARTD senior staff have a tradition of sharing their stories of challenging evaluations – to help others avoid the same traps, but also to show that sometimes the traps are unavoidable, and how to learn the most from those situations.

There was an evaluation early on when I mistakenly assumed the pilot manager expected all our meetings to be “on the record”. When I delivered the report, they didn’t stand by some of their comments, so I had to find different evidence to make the point. From this, I learned two things: to always be explicit; and how to use “off the record” conversations to interrogate the data and unearth what might not otherwise have come to light, while protecting confidentiality.

Also early on, I was asked to run a short session during a 2-day conference to engage a very large number of national delivery staff with evaluation findings. What I didn’t know was that my session was the only interactive one in the two days, so we ran wildly over time, and I couldn’t get them to stop chatting. From this, I learned the importance of understanding where you sit in the agenda – and not taking on the gig if it’s an impossible ask.

I’ve had a project where one of the organisations collecting data breached the confidentiality of that data, and where we were also ordered to release confidential data. Thankfully, we were able to fight this with the support of the Human Research Ethics Committee (HREC) that had given us ethical approval. From this, I learned that – while HRECs can be a lot of work and don’t always understand evaluation – they can also be an important safeguard when things go wrong.

But the most challenging projects are those where organisations aren’t really ready for what they’ve asked you to do. I’m yet to work out how to convert this into a checklist for organisations to consider before we start.

What do you wish you’d known when you started out?

Fortunately, Chris Milne and Wendy Hodge taught me a lot about evaluation when I started.

But I wish I’d known not to get caught up in definitions that people debate, that I’d understood that the process is as important as the product, and that you can never attain “perfect” – it’s always a work in progress.
