
A reflection (and thematic analysis) of AES2024
For this year’s Australian Evaluation Society International Evaluation Conference, held last week in Melbourne, a truckload of us from ARTD (or, in Victorian parlance, about an AFL side’s worth) made the trip down to run workshops, give talks, speak on panels, moderate sessions, connect with the evaluation community and, of course, soak up the knowledge on offer.
Now, let me tell you, there was plenty of knowledge to soak up—too much for any single person to wrap their head around. There were dozens of fascinating presentations, panels and workshops, but unfortunately, they were often in the same time slot. This meant that each day, we were forced to make tough decisions on what to attend.
Fortunately, for those of us experiencing FOMO over missed talks, we could usually count on someone else from ARTD to have attended the session we missed. So, once we were back in the office this week, we were able to debrief and share the ideas, insights and inspiration that struck us most powerfully. My colleagues described talks that shifted their perspective, that introduced them to new evaluation tools that could clarify our thinking or simplify evaluation tasks, and that reinforced important evaluation principles, particularly those around equity and inclusion.
This reflection session was great, but I still had more questions, so I asked my fellow attendees to email me their two biggest takeaways from the conference. With a trove of qualitative data in hand, my evaluator instincts kicked in, and the next thing I knew, I was conducting an inductive thematic analysis of their reflections.
As I sifted through the takeaways (N=23 of them, to be precise), I found that they naturally clustered into four main themes (pictured in the treemap below):

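For readers curious about the mechanics, here is a minimal sketch of how a treemap like the one above could be drawn in Python, assuming the matplotlib and squarify libraries are available. The theme labels come from this post, but the counts are placeholders for illustration only, not the actual tallies from my analysis.

```python
# A minimal sketch (not the actual analysis): tally coded takeaways into
# themes and draw a treemap. Assumes matplotlib and squarify are installed.
from collections import Counter

import matplotlib.pyplot as plt
import squarify  # pip install squarify

# Placeholder codes: in practice, each of the 23 takeaways would be tagged
# with the theme it was assigned during the inductive thematic analysis.
coded_takeaways = (
    ["Use evaluation tools carefully"] * 8                 # placeholder count
    + ["Consider how impacts are valued"] * 6              # placeholder count
    + ["Comparing methods requires nuance"] * 5            # placeholder count
    + ["Conference participation strengthens bonds"] * 4   # placeholder count
)

theme_counts = Counter(coded_takeaways)

squarify.plot(
    sizes=list(theme_counts.values()),
    label=[f"{theme}\n(n={n})" for theme, n in theme_counts.items()],
    pad=True,
)
plt.axis("off")
plt.title("Key takeaways from AES2024 by theme (N=23)")
plt.show()
```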
Use evaluation tools carefully; they can both illuminate and obscure
My colleagues spoke of having their eyes opened to new tools and methods to help structure thinking and to assist with the collection and analysis of data. One colleague spoke of being particularly inspired by tools that draw on traditional knowledge systems, such as sense-making exercises and art workshops.
At the other end of the spectrum, many of us appreciated learning about cutting-edge technologies like generative AI in evaluation. For me, learning about Qualitative Comparative Analysis (QCA), an advanced analytic technique that can help us draw causal inferences from qualitative data, was particularly illuminating.
However, for some of us, the lesson that resonated most profoundly came from June Oscar AO in her opening plenary, which was grounded in Audre Lorde’s powerful insight that “the master’s tools will never dismantle the master’s house”. She reminded us that the tools we use to make sense of the world inevitably reflect the worldview in which they were created. Implicit assumptions, biases and omissions are embedded in our choice of research method, and in the methods themselves, and it is our responsibility as evaluators to surface them.
“The frameworks that inform our thinking, and the assumptions underpinning those frameworks, can either serve to perpetuate inequalities or help address them.”
Consider how impacts are valued by stakeholders
Many of the most animated conversations I had with my colleagues, both during and after the conference, centred on a set of talks that explored the assessment of value, as distinct from impact, in our evaluations. These ideas, while familiar to many of us thanks to proponents of value for investment analysis such as Julian King, were delivered by the AES presenters in compelling and thought-provoking ways.
The presenters asked us to reflect on how, in our own evaluative practice, we consider the ways different stakeholders (funders, recipients, and those involved in implementation) value the intended and unintended impacts of a program or intervention. When designing an evaluation, and when making evaluative judgments, are we giving these different perspectives on value their due weight? And even within a single stakeholder group (e.g. service users), how clearly do we understand the relative value they assign to the activities and impacts of a given program? I know that I’ll be continuing to reflect on these questions, particularly as they butt up against the practical realities of conducting evaluations in the real world.
“We should be going beyond simply identifying outcomes and beginning to explore what value those outcomes have to stakeholders, and how that varies across different stakeholders.”
Image: ARTD’s booth activity: ‘What is your instinctive reaction to these terms?’
Comparing research methods requires nuance
Many of my colleagues attended talks and panels about experimental research methods that they described as refreshingly nuanced and thorough, and that successfully avoided the trap of “Method X is good; method Y is bad”. Instead, the methods were examined in terms of their strengths and limitations, their applicability to particular contexts and key evaluation questions (KEQs), and their compatibility with other methods.
For example, one colleague reported leaving a session with a clearer understanding of how supplementing RCTs with qualitative data collection methods can provide critical context for interpreting RCT findings. This approach can reveal factors that explain patterns in the data and highlight limitations in drawing conclusions from those results.
Other colleagues spoke of gaining new perspectives on less well-known research methods, such as journey mapping and participatory research methods, which can not only empower program participants and other stakeholders but also deliver efficiency gains for evaluators.
“I enjoyed when people debated on topics and examined different points of view – for example, the role of RCTs in evaluation”
Conference participation strengthens bonds
This somewhat ‘meta’ theme was not the most represented in my colleagues’ key takeaways, but from chats on the conference floor, in taxis, and at dinner tables, I can confidently say it is unanimously held. The opportunities that the conference afforded us to build connections with both the broader evaluation community and with each other were priceless. As one colleague shared, “having the opportunity to build deeper connections with our colleagues in an inspiring environment was the most valuable thing.”
As a first-time AES conference attendee, I left Melbourne with a much clearer sense of evaluation as it exists in the real world, seen through the eyes of an international community of practitioners who believe in evaluation as a force for good. I came away with an ineffable sense that evaluation is, on the one hand, solidly grounded in theory and idealistic principles and, on the other, a deeply human endeavour, carefully stewarded by a community of professionals (and old friends) bonded by the belief that listening carefully and thinking critically can make the world a kinder and more equitable place. As an early career evaluator, I left knowing that this is something I want to be part of.
“I return home with a new respect and appreciation for what we do, why, and for all the people we work with.”

Image: ARTD colleagues Fergus and Simon enjoying a hot cup of coffee.