
Agile evaluation: can there be such a thing?
‘Agile’ has emerged as one of the latest buzzwords in government departments. But what does it mean to be agile, and can we do more agile evaluation?
On Wednesday 2 May, Florent Gomez from the NSW Department of Finance and ARTD Partner Jade Maloney delivered a free lunchtime seminar in Sydney organised by the Australasian Evaluation Society (AES).
Drawing on ideas from an article by Caroline Heider at the World Bank, Florent introduced the Agile project management methodology, and participants then discussed how this could apply to evaluation, if at all.
So, what does it mean to be ‘agile’ and is there a place for it in evaluation?
Agile originated in the IT world as a project management methodology built around short development cycles. In 2001, 17 software developers formalised the approach in the Agile Manifesto, which pairs four core values with 12 supporting principles. From the first two principles, it is clear that, at its core, the Agile approach is customer-centred, deeply collaborative and constantly adapting. This contrasts with the traditional ‘waterfall’ approach to IT project management, in which the project plan is designed at the outset and then followed in sequence, with little flexibility for customer input.
Another key aspect of the Agile approach is its commitment to speed and efficiency, which Moira Alexander highlights in her article, ‘Agile project management: A comprehensive guide’. According to Alexander, the desire for rapid adaptation and optimal design requires both simplicity and a high level of self-organisation and accountability within teams.
Though the Agile methodology originated in the software industry, where it has an adoption rate of 23%, it has since spread to a number of other sectors. Among these is government, which uses the methodology on roughly 5% of its projects.
Florent became interested in the concept when he joined a government agency where many projects were delivered using the Agile methodology and the ‘A’ word was heard everywhere. In his new role as internal Evaluation Manager, he was also expected to evaluate these projects in an agile way, that is, within very short timeframes.
In her article, Caroline Heider considers how the concept of agile could be translated to evaluation practice. In addition to narrowing project scope, she suggests evaluation could be made more agile, or efficient, by drawing on:
- existing and standardised datasets
- algorithms for collecting, organising and presenting data
- electronic data collection methods, such as online surveys
- effective project management skills and tools.
Interestingly, most participants agreed that Heider’s suggested approaches to shortening response times are already widely practised in the evaluation world.
So do AES members see the potential for evaluation to be more agile?
With this grounding, it was turned over to us evaluators to envisage whether we could see the potential for more agility in our work. Specifically, we were asked to consider the benefits, enablers and risks of making evaluation more agile.
Participants agreed that by being more agile, we could make evaluation more focussed, responsive and creative, and ultimately produce more useful products. However, they acknowledged that making evaluation agile would depend on:
- having the necessary IT systems and the skills to use them
- whether the project needs ethics approval, which would limit any potential to change processes
- the level of buy-in and engagement of clients in this particular approach
- having the right structures and processes in place to facilitate such flexibility.
While Heider’s primary message was that there are ways to make evaluation more agile, both she and AES members acknowledged the risk of losing quality. Participants feared that agile evaluation could become too ‘quick and dirty’ to produce meaningful results. They noted that evaluators may be tempted to abandon slower, but often necessary, data collection methods in favour of faster, possibly unsuitable, ones. Participants also identified the twin risks of scope creep, which blows out project budgets, and scope narrowing, which could limit the capacity to make well-informed recommendations.
So, where does that leave us?
The workshop generated useful discussion and allowed evaluators to consider how they could be more agile without compromising the quality of their work.
Participants identified clear synergies between Agile project management and both developmental evaluation (developing a program in real time through close consultation with program staff) and utilisation-focussed evaluation (conducting evaluations with a focus on intended use by intended users).
As firm believers in ‘lunchtime learnings’, we at ARTD look forward to attending more of these short, engaging sessions. You can visit the AES website for a full list of upcoming events.