Evaluation topics that keep us up at night
Evaluation is diverse, complex and evolving, so there are many topics that keep us up at night. This seems to be the case whether we’re commissioning an evaluation, conducting one or having our own program evaluated.
NSW AES members met to discuss these topics in a mini unconference session facilitated by Jade Maloney (ARTD Consultants), Ruth McCausland (UNSW) and Kath Vaughan-Davies (K2 Strategies), informed by Open Space Technology.
A key aspect of an unconference is that the participants create the agenda. Following suggestions from audience members, we compiled six topics for discussion:
- communicating critical/'negative' evaluation findings to clients
- crafting evaluation recommendations for use
- planning for complex statistical analysis and the ethics of this
- managing the scope of evaluations
- identifying value in education contexts
- participatory action research.
The goal was for people to share their personal and organisational challenges, experiences and feedback, and to learn from each other. No fixed agenda was set, no formal presentations were made, and members could move freely between the small discussion groups.
Participants in each group came from a variety of backgrounds, including government agencies, non-profits, independent consulting and for-profit organisations, but everyone had some experience of and interest in evaluation.
Asking questions, sharing experiences (positive and negative) and building on others’ feedback took discussions in many different directions. Sometimes a discussion circled back to the original question or statement, and sometimes it didn’t (which was perfectly fine).
Take, for example, the group that discussed communicating ‘negative’ evaluation findings to clients. We covered a lot of ground including:
- the tone and style of language used in evaluation reports and other deliverables
- the impact of the timing of an evaluation
- scheduling a findings meeting with your client(s) before formally reporting
- anticipating ‘negative’ findings/outcomes
- the challenge of working on/for evaluations regarded as mere ‘tick the box’ exercises
- engaging with evaluation client contacts whose level of interest, involvement and openness can vary
- making use of different deliverable formats, including case studies, videos and 1–2-page findings summaries
- the (often difficult) boundary between evaluation and advocacy
- publishing recommendations to encourage accountability
- making recommendations as they arise to enable continuous improvement
- prioritising outcomes (short vs long term) and resources
- having the relevant Minister endorse the recommendations and ‘launch’ the report, where relevant.
When we came back together as a large group to conclude the session, the diversity of insights shared reflected the diversity of the group. Some reflections that stood out to me were:
- no two evaluations are the same
- a session structured like this gives all participants a voice
- a mix of perspectives was shared by a mix of participants
- the lack of structure could lead to deeper and more dynamic conversations
- the whole is greater than the sum of its parts
- common themes/experiences emerged amongst participants
- help can be requested and provided if needed through the AES network
- coming back together for overall reflections was an important part of the session for embedding learning.
Unconference sessions can inspire follow-up discussions and collaborations among individuals and small groups: continuing conversations about evaluation topics, exploring ways to work together and asking questions.
You can join the unconference day at the annual AES Conference in Sydney in September 2019. Attendees are encouraged to contribute their ideas, topics and experiences, and to engage in exciting conversations they might not otherwise have expected to have.