Making monitoring and evaluation work for NGOs

With governments increasingly contracting NGOs to deliver services, there has been a corresponding expectation that NGOs monitor and evaluate their services and report on the outcomes they have achieved.

With myriad competing conceptual approaches to evaluation and difficult-to-decipher jargon, not to mention constrained time and resources, this can be challenging for NGOs with limited internal evaluation capacity.

In this context, it is unsurprising that NGOs have questioned whether evaluation can really capture the value of the work they do. Some ask, ‘Is evaluation counting the wrong things at the wrong time or for the wrong reasons? And would it be better to spend the time required for data collection on service delivery?’

As people who do evaluation for a living, we find it disheartening to hear these criticisms. We know that evaluation holds great promise for NGOs by enabling learning, supporting accountability (to funders and the community) and building the evidence base, but we understand that without an appropriate approach, evaluation can fail to live up to its potential.

To support a dialogue about how evaluation can work for NGOs, we presented at the NSW Council of Social Service (NCOSS) NGO Research Forum on 26 June. Here’s what we shared.

Top tips for NGOs

Over 30 years of working with NGOs to monitor and evaluate services funded by government, we’ve identified nine tips to make monitoring and evaluation work. We know that in the context of funding agreements, not everything is within your control. But – by thinking systematically and creatively – you can build in ways to make data work for you.

  • Build in monitoring and evaluation from the outset. Evaluation will better support your organisation if it is built in upfront – not only so you have the data you need to assess outcomes, but also so you can inform ongoing improvement as programs are implemented.
  • Get the scale right. Match your data collection to the scale of your initiative. If your project is small and short-term, don’t make your data collection ‘bigger than Ben Hur’.
  • Keep it simple and stage it. If you are introducing a new monitoring and evaluation system, start small. Find teams with a positive attitude towards evaluation who might be your champions, and work with them. Stage elements of data collection. This approach also gives you time to make tweaks.
  • Pilot before launch. Test data collection with partners and participants so you know your methods are feasible to implement and your questions are meaningful.
  • Collect what is useful. Focus on information that will inform service delivery and support a business case for ongoing funding. Before you add an item to a survey or administrative data set, ask ‘how will we use this information?’
  • Keep an eye on the benefits and the burden. Remember that your participants are likely asked to complete surveys by multiple organisations; don’t ask more of them than will be useful. Also think about the time required for staff to collect data.
  • Manage the process ethically. This includes gaining informed consent and managing data securely.
  • Understand outputs and outcomes. Make sure your regular monitoring data includes outputs (for example, number of groups and number of attendees) as well as outcomes. Otherwise – whether you achieve your outcomes or not – you won’t understand why.
  • When you evaluate, choose an appropriate evaluation approach for your purpose.

Alternative approaches to evaluation

In our experience, traditional formative and summative evaluation approaches are not well suited to community development and social innovation. Alternative approaches to evaluation have evolved as evaluators have grappled with their role in empowering communities to take ownership of evaluation and in supporting interventions in complex adaptive systems. These include developmental, empowerment and principles-focused evaluation.

  • Developmental evaluation addresses many of the concerns identified by community development practitioners about evaluation. It supports development rather than passing judgement; it does not require outcomes to be pre-determined but allows measures to evolve; it engages with system dynamics; and it centres accountability on those driving the initiative, their values and commitments. In a developmental evaluation, the evaluator facilitates regular data-based discussions about what is working, what isn’t, and what that means for practice.
  • Empowerment evaluation aims to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs. In an empowerment evaluation, program staff and community members are in control and the evaluator acts as a critical friend (or coach).
  • Principles-focused evaluation is suited to services and supports guided by principles rather than a set program model. In this approach, the evaluator considers whether the principle/s identified for the program are meaningful to the people they are supposed to guide, are adhered to in practice, and support desired results.

Continuing the conversation

For the group of NGOs we spoke to, these challenges and opportunities rang true. We’re keen to continue the conversation with NGOs about how they have made monitoring and evaluation work for them.

You might also be interested in these references.

ARTD blog: ‘How can NGOs do evaluation on a shoestring?’ and ‘Amplifying social impact’
