Growing the evidence base through evaluation

At a recent Australian Research Alliance for Children and Youth (ARACY) event there was much discussion about growing the evidence base for a sector that encompasses many diverse and complex programs. Government departments and other organisations are increasingly attempting to consolidate evidence for public policies and programs to make it easier to understand their effectiveness and compare them to other options.

Evaluation is key to designing and delivering evidence-based policy and programs. As Brigid van Wanrooy of the Department of Health and Human Services (DHHS) mentioned in her address, in the absence of quality evaluation, it is difficult to determine whether an intervention was well designed, implemented successfully and achieved its intended impact.

Evaluating Programs, the OPEN Way

The push to evaluate often raises two questions for the program being evaluated. First, how should the evaluation be conducted (the approach and method for collecting and analysing data)? Second, what will be done with the results (whether and where they will be distributed)?

DHHS is aiming to consolidate and distribute evidence about programs through its Outcomes, Practice and Evidence Network (OPEN). The network will promote research and data collection as well as build a list of programs that have demonstrated positive outcomes. The evidence for each program included in the network will be rated from low to high according to a hierarchy of evidence that places Randomised Controlled Trials (RCTs), or a synthesis of multiple RCTs, at the top, followed by quasi-experimental designs, cohort studies, case-control studies, cross-sectional surveys and case reports.
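To illustrate how a rating of this kind might work, the sketch below encodes a simplified hierarchy of evidence as an ordered scale and collapses it into a broad low/medium/high rating. The tiers, names and cut-offs here are assumptions for illustration only, not OPEN's actual rubric.

```python
# Illustrative sketch only: one way to encode a simplified hierarchy of
# evidence as an ordered scale and collapse it into a broad low/medium/high
# rating. The tiers and cut-offs are assumptions, not OPEN's actual rubric.

from enum import IntEnum


class EvidenceLevel(IntEnum):
    """Higher value = stronger study design in this simplified hierarchy."""
    CASE_REPORT = 1
    CROSS_SECTIONAL_SURVEY = 2
    CASE_CONTROL_STUDY = 3
    COHORT_STUDY = 4
    QUASI_EXPERIMENTAL = 5
    RCT = 6
    RCT_SYNTHESIS = 7


def broad_rating(level: EvidenceLevel) -> str:
    """Map a study design to a low/medium/high rating (illustrative cut-offs)."""
    if level >= EvidenceLevel.RCT:
        return "high"
    if level >= EvidenceLevel.CASE_CONTROL_STUDY:
        return "medium"
    return "low"


print(broad_rating(EvidenceLevel.COHORT_STUDY))   # medium
print(broad_rating(EvidenceLevel.RCT_SYNTHESIS))  # high
```

In practice, a rating tool would also weigh study quality, sample size and relevance to the local context, not just the design type.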

This approach is similar to the What Works Network, launched in the UK in 2013 as a national approach to prioritising the use of evidence in policy decision-making. What Works now has ten ‘centres’ focused on different policy areas, covering public spending of over £200 billion. What Works also runs an advice panel that provides free assistance to help civil servants test their policies and programs. In the last five years, the centres have produced more than 280 evidence reviews and run over 100 large-scale RCTs, which are helping to transform public services in the UK.[1] OPEN aims to replicate this model for child, youth and family policy in Australia.

While experimental evaluations are certainly an important part of the evaluation toolkit, there is more to evaluation and evidence than RCTs. As Gary Banks of the Melbourne Institute of Applied Economic and Social Research recently pointed out, RCTs are not the only or ‘best’ way of measuring the effectiveness of a public policy or program.[2] At different stages of program development, and in different contexts, different methodological approaches will be more appropriate.

RCTs should not necessarily be held up as the ‘gold standard’. Rather, as Patton argues, methodological appropriateness (having the right design for the nature and type of intervention, existing knowledge, available resources, the intended uses of the results, and other relevant factors) is critical if evaluation is to maximise its potential for addressing questions about what works, for whom, where, when, how and why.[3]
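For concreteness, the sketch below shows the kind of estimate an experimental evaluation produces: a simple comparison of mean outcomes between randomly assigned treatment and control groups. All data, group sizes and the outcome scale are hypothetical, and this is a minimal illustration rather than a template for a real trial analysis.

```python
# Minimal sketch of what an RCT-style analysis estimates: the difference in
# mean outcomes between randomly assigned groups, with a rough 95% confidence
# interval. All data here are simulated and purely illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical outcome scores for participants randomly assigned to the
# program (treatment) or to business-as-usual (control).
treatment = rng.normal(loc=62, scale=10, size=120)
control = rng.normal(loc=58, scale=10, size=120)

effect = treatment.mean() - control.mean()

# Welch's t-test: is the observed difference plausibly more than chance?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

# Approximate 95% confidence interval for the difference in means.
se = np.sqrt(treatment.var(ddof=1) / treatment.size
             + control.var(ddof=1) / control.size)
low, high = effect - 1.96 * se, effect + 1.96 * se

print(f"Estimated effect: {effect:.1f} points "
      f"(95% CI {low:.1f} to {high:.1f}, p = {p_value:.3f})")
```

Even this toy example shows that an RCT answers a narrow question, whether average outcomes differed between groups, which is why questions about for whom, where and why often call for other designs alongside it.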

How can evaluations help to build the evidence base?

Evaluations serve an important function in supporting the development of evidence-based policies and programs and can be used alongside other forms of research. As discussed at the ARACY event, building monitoring and evaluation into a project or program from the outset is paramount. An intuition that a program works is not enough to demonstrate its effectiveness or to prove its worth to funders.

When engaging an evaluator, careful planning is needed to ensure the methodology is tailored, the right questions are asked, data collection procedures are ethical and feasible, and the program can make the best use of what the evaluation finds.

Evaluators can contribute to the broader evidence base about what works by incorporating the following principles into their evaluations.

  1. Scope—understand the current state of the evidence and where there are gaps that require further research.
  2. Plan—determine ways in which the evaluation could contribute to the evidence base.
  3. Translate—translate findings into actions to help programs improve and grow.
  4. Disseminate—in agreement with program managers, devise a strategy (ideally at the planning stage) to distribute the information gathered and the insights uncovered. It is important for evaluators to feed into the evidence base by publishing their results and presenting at conferences where possible, as this supports the accumulation of knowledge about effective social interventions.

Above all, the evaluation approach and method must be tailored to the nature and stage of program development and its context to ensure that it is fit-for-purpose.
