The increasing demand for evidence-based policies and programs
Increasingly, governments committed to ‘evidence-based’ policy making are asking funded organisations to demonstrate the outcomes of their work.
The landmark Their Futures Matter (TFM) reform is a prime example of this, promising that evidence, monitoring and evaluation will drive continuous improvement across all areas of service delivery for vulnerable children and families. ‘Gone are the days that government funds something that sounds nice, without real rigour around outcomes’, said TFM Executive Director Gary Groves at the TFM conference in February 2019.
There is broad consensus that public policies and programs should be grounded in evidence. Without evidence, decision-makers would be guided by ideology or instinct. In the Aboriginal policy sector, a commitment to evidence building is further underpinned by a recognition that past policies and programs for Aboriginal people and communities have often not worked, and still little is known about what does work. This has meant repeating ineffective programs or delivering successful short-term pilots without ongoing funding.
The challenges of western-scientific evidence building
This trend raises important questions about what constitutes ‘evidence’ and who decides, particularly for Aboriginal communities. As Gamble et al. note, ‘there are unwritten rules in policy and service settings that can include narrow ideas about evidence and rigour’.
Conventional western-scientific and positivist approaches to knowledge building have meant that quantitative and experimental methods are often held up as the ‘gold standard’ of research, and were for many years considered the only form of evaluation with ‘professional legitimacy’.
Certainly, these approaches are appropriate in some contexts, particularly for programs with a linear logic and clear and measurable activities and outcomes, or with ‘settled’ models rather than ones that evolve. They can help us avoid practices that have been tried, tested and found to be harmful or ineffective. But in complex systems working to address complex problems, they can fail to ‘generate the knowledge and learning needed’ for change. Factors such as small population sizes and the difficulty of controlling for countless external influences on outcomes can also make gold-standard methodologies near impossible, if not unethical.
In his research on ‘Progressing toward an Indigenous Research Paradigm in Australia and Canada’, Wilson notes that ‘the notion that empirical evidence is more meaningful or sound permeates western thought, but alienates and dissociates many Indigenous scholars’. He explains that the ‘Indigenous research paradigm’ understands knowledge as relational; it looks at the complexity of the connections and relationships of individuals and is shared by all. He critiques the tendency for western research to break phenomena down into small, individual units of analysis and to understand knowledge as something that is gained and owned by individual researchers.
Valuing local knowledge and lived experience
These concerns were raised by Aboriginal organisations at a TFM workshop in May this year, reflecting on the Aboriginal Evidence Building Partnership (AEBP) pilot.
They reflected that narrow and western-centric conceptions of what constitutes ‘evidence’ often neglect other forms of evidence, such as traditional knowledge and the lived experience and expertise of those impacted by and implementing change. These too, are valid and rigorous forms of evidence.
So, what do we do? Gamble et al. argue that we need to draw on a range of forms of evidence—both evidence-based practice and practice-based evidence—recognising and valuing non-traditional sources of knowledge.
The TFM AEBP project provides a platform to do just that, linking Aboriginal organisations with evidence building partners to work together to build organisations’ data collection and evaluation capabilities. While it requires organisations to embed a standardised and validated tool for measuring client wellbeing, such as the Personal Wellbeing Index, it also allows partners to draw on practice-based evidence—listening to local knowledge and expertise to understand how things work on-the-ground and what is meaningful to measure.
Reflecting on the pilot, Aboriginal organisations at the AEBP workshop also noted the importance of working with, not for, communities in defining what success should look like and how it should be measured. As non-Aboriginal researchers, we need to work collaboratively with Aboriginal organisations in the spirit of self-determination and reciprocity—learning from each other and acting as critical friends and supporters, not supervisors. This means:
- building relationships and trust
- recognising and valuing the knowledge gathered and hard work undertaken over many decades before us
- honouring and respecting the advice of Aboriginal people in designing research with communities
- collecting data in a way that is culturally safe and minimises the burden
- being clear on why the data is being collected and why a particular method has been chosen
- remembering that systems change is slow – time and resources are needed to effectively engage and empower communities and demonstrate (often long-term) outcomes in the context of rapid policy cycles.
To explore these ideas further, I’m looking forward to this year’s AES Conference in September, particularly Tracey Westerman’s keynote address on ‘Without measurability there is no accountability. Why we are failing to gather evidence of what works’, Kathryn Dinh’s session on ‘Buddhist evaluation: thinking outside the box of Western-derived methods’ and the panel on ‘Bringing the voice and knowledge of Indigenous people and communities to evidence building and evaluation in a way that empowers’.

References

Gamble, J., Hagen, P., McKegg, K., & West, S. (2019). Evidence for innovation. https://blogs.rch.org.au/ccch/2019/05/06/theme-4-evidence-for-innovation/

Maloney, J., & Leahy Gatfield, R. (2019). Evaluating community development. Journal of Social Work and Policy Studies: Social Justice, Practice and Theory. https://openjournals.library.sydney.edu.au/index.php/SWPS/article/view/13249

Productivity Commission. (2016). Overcoming Indigenous Disadvantage: Key Indicators 2016. Canberra: Productivity Commission.

Stewart, M., & Dean, A. (2017). Evaluating the outcomes of programs for Indigenous families and communities. Family Matters, No. 99, pp. 56–65.

Tune, D. (2016). Independent Review of Out of Home Care in New South Wales – Final Report.

Weiss, C. H. (1998). Chapter 1: Setting the scene. In Evaluation: Methods for Studying Programs and Policies. Prentice Hall, New Jersey.

Wilson, S. (2003). Progressing toward an Indigenous Research Paradigm in Australia and Canada. Canadian Journal of Native Education, 27(2), p. 161.