We’re excited to see a fantastic array of evaluation topics in the AES23 Conference program, and pleased to see several of our colleagues on the bill for workshop and conference sessions! We’re keen to catch up with everyone, whether it’s at one of the workshops or sessions below, playing fun games at our stall, or at one of the many social events at the conference.
See you there!
You can read more details about the workshops and sessions, and register, by clicking on the links below.
Workshops with ARTD staff and friends
Conference sessions with ARTD staff and friends
with Sharon Marra-Brown and Jo Farmer
In all of our evaluations, we are likely to be recruiting people from the LGBTQIA+ community. People from the community occupy all walks of life (yup, we are everywhere). Whether you are aware of it or not, your recruitment practices and the approach you take to data collection are being assessed by queer participants so that they can decide for themselves if you are a safe person to engage with and potentially open up to.
So what can you do to make sure they trust you? And how can you live up to that trust, and be sure that your approach is safe and empowering? This presentation will explore best practices and considerations for engaging in data collection with people from the LGBTQIA+ community.
with Brad Astbury, Jade Maloney, Duncan Rintoul and Scott Bayley
For evaluation to deliver on its promise, organisations need to strengthen their evaluation maturity. Evaluation maturity models provide a roadmap for improving evaluation culture and capability within an organisation. They typically consist of criteria for success and performance levels that guide understanding and action about the planning, conduct and use of evaluation.
However, there is no one-size-fits-all approach to developing and implementing evaluation maturity models. There are also many challenges that can prevent maturity models from getting off the ground or being applied.
The purpose of this interactive session is to develop capability in using evaluation maturity models.
with Gerard Atkinson and Michael Brooks
Machine learning is the hot topic of 2023, and it seems that every day brings a new application. Beyond the question of whether chatbots are going to take our jobs lie fundamental questions about whether machine learning actually works better than humans at certain tasks. For evaluation, one of these tasks is topic classification, a core part of the qualitative researcher’s toolkit.
In this presentation we share the results of an experimental test of machine learning against fully human and hybrid approaches to topic classification. We take a range of approaches, from chatbots through to automated processing, and compare performance on a range of metrics, including speed, accuracy and ease of use. The results of our experiment will give you guidance on how and when machine learning can be used to enhance this part of your evaluation practice.
Empowerment or exploitation: the ethics of engaging people with lived and living experience in evaluation
with Sharon Marra-Brown, Jo Farmer, Jade Maloney and Alexandra Lorigan
Involving people with lived and living experience is becoming a normal expectation of evaluation practice in a range of sectors. It has the potential to be a positive, empowering experience for everyone involved and produce outcomes that are more feasible, effective and respectful.
However, as it becomes a routine expectation, there is a risk that involving people with lived and living experience is done without the required thought and attention, resulting in poor experiences and outcomes. In addition, taking a co-design approach to evaluation and engaging with people with lived and living experience in evaluation can be misunderstood by ethics committees, who may take a conservative or paternalistic view.
Taking a considered, ethical and values-based approach to engaging with people with lived and living experience means that we are more likely to create positive and useful processes. This session will provide insights into how to navigate the ethics of engaging with people with lived experience.
What can we learn from Machiavelli and systems science to improve how we facilitate evaluation use?
with Julie Elliott, Andrew Hawkins, Brian Keogh and Kara Scally Irvine
To advance ideas about how we frame and approach evaluation use, this interactive world café takes the perspectives of Machiavelli and systems science. The purpose of this session is threefold. First, we reflect on the incongruity between evaluation use theory and the reality of how contemporary administrative and bureaucratic systems behave. Second, we look through the lenses of Machiavelli and systems science to transform the way we see evaluation use. Third, together we develop a better explanation of what evaluators can do to facilitate use in uncertain and highly complex administrative and bureaucratic circumstances, including identifying the key choices evaluators must make in deciding how, under what circumstances and why to facilitate use.