How do you build trust in evidence in an era of public scepticism?

The worrying decline in trust in public institutions – in Australia and across other western nations – has been a topic of much recent discussion. At a time when evidence and expertise are treated with increasing public scepticism, what, then, is the role of evidence-based insights?

The 2018 AMSRS Social and Government Data, Evidence, Insights & Research Conference, held in Canberra on 1 November, brought together speakers from government, academic and business backgrounds to discuss this issue. It asked: how can government and industry leverage data and evidence to design and deliver more insightful and effective policies and programs? In this context, Dr Rebecca Huntley used her keynote address to call on those who provide evidence-based insights to public institutions, such as ARTD, to consider whether the way we do our work increases or decreases public trust in institutions.

Harry Greenwell from BETA – the Behavioural Economics Team of the Australian Government, within the Department of the Prime Minister and Cabinet – continued this theme by discussing the importance of transparency and rigour in the evaluation of experimental interventions. With a background in experimental psychology and academic research, I was keen to hear how these debates apply to the world of evaluation. Greenwell highlighted the problem that, even with the same dataset, different analytical approaches can sometimes lead to strikingly different conclusions.[1] So, how do we ensure that we are being rigorous in our analytic techniques, and that our clients and the public can have faith in the insights extracted from data?
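
To make that problem concrete, here is a minimal sketch in Python, using simulated data of my own rather than anything Greenwell presented. The same trial is analysed in two defensible ways, with and without adjusting for a baseline measure, and the two can land on opposite sides of the usual significance threshold. All variable names and effect sizes are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
baseline = rng.normal(50, 10, n)       # pre-program score for each participant
treated = rng.integers(0, 2, n)        # random assignment to the program
outcome = baseline + 3.5 * treated + rng.normal(0, 10, n)

# Analysis 1: a simple comparison of post-program means, ignoring baseline
p_unadjusted = stats.ttest_ind(outcome[treated == 1], outcome[treated == 0]).pvalue

# Analysis 2: a comparison of gains relative to baseline
gain = outcome - baseline
p_adjusted = stats.ttest_ind(gain[treated == 1], gain[treated == 0]).pvalue

print(f"Unadjusted comparison of means: p = {p_unadjusted:.3f}")
print(f"Baseline-adjusted comparison:   p = {p_adjusted:.3f}")
# Both analyses are defensible, yet for data like these they will often sit
# on opposite sides of the conventional p < .05 threshold.
```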

To improve the quality of insights and evaluation, Greenwell suggested following the principles of Open Science. These aim to make the process of data collection and analysis more transparent through:

  1. pre-registering the design and method for data collection
  2. submitting pre-analysis plans to clearly identify the questions of interest and analytical approach before researchers see the dataset (see the sketch after this list)
  3. making materials, data, and code freely and publicly available, wherever possible.
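
As a minimal sketch of the second principle, the snippet below records the kind of detail a pre-analysis plan might lock in before anyone sees the data. This is my own illustration rather than BETA's template, and every field name and value is an assumption.

```python
import json
from datetime import datetime, timezone

# A hypothetical pre-analysis plan, written down (and ideally lodged with a
# registry or published) before the evaluation data are examined.
pre_analysis_plan = {
    "registered_at": datetime.now(timezone.utc).isoformat(),
    "question": "Does the redesigned reminder letter increase on-time payments?",
    "design": "two-arm randomised trial, assignment at the individual level",
    "primary_outcome": "paid_within_30_days (binary)",
    "analysis": "two-sided difference in proportions, alpha = 0.05",
    "covariates": ["prior_payment_history", "debt_amount"],
    "subgroups": ["first-time recipients"],  # named in advance, not found by fishing
}

# Publishing the plan first is what makes later departures from it visible.
with open("pre_analysis_plan.json", "w") as f:
    json.dump(pre_analysis_plan, f, indent=2)
```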

These suggestions draw on the Open Science movement. I was first introduced to Open Science as a PhD student in Psychology, where a number of well-publicised failures to replicate high-profile studies[2] left researchers wary of spurious results being published – due either to well-intentioned but statistically misguided analyses or, more nefariously, to ‘p-hacking’, where datasets and analytical approaches are manipulated until a result reaches the threshold of statistical significance expected for publication[3]. Open Science advocates acknowledge that even reasonable and defensible methodological choices can lead to significant variation in results, and that greater transparency about the analysis can address these issues.
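
As a rough illustration of why that analytic freedom is risky, here is a small simulation of my own (not the fivethirtyeight demonstration cited below). The intervention truly does nothing, but because many outcome measures are recorded, a ‘significant’ result usually turns up somewhere.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_outcomes = 50, 20

# Simulate a trial where the intervention has no effect at all,
# but 20 different outcome measures are collected.
control = rng.normal(0, 1, (n_outcomes, n_per_group))
treatment = rng.normal(0, 1, (n_outcomes, n_per_group))

p_values = [stats.ttest_ind(treatment[i], control[i]).pvalue
            for i in range(n_outcomes)]

significant = [i for i, p in enumerate(p_values) if p < 0.05]
print(f"'Significant' outcomes found by chance: {len(significant)} of {n_outcomes}")
# Roughly one in twenty outcomes will cross the threshold by chance alone.
# Reporting only those, without disclosing the full search, is the essence of
# p-hacking; a pre-analysis plan removes that freedom by naming the outcome
# in advance.
```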

Greenwell explained that BETA tries to commit to the Open Science principles by pre-registering and publishing pre-analysis plans for the trials they run; however, they are not always able to make their data publicly available.

Increasingly, I’ve noticed the principles of Open Science and transparent data analysis being adopted in academic research, and I’m interested to see if and how evaluators take them up. Although an Open Science approach is not appropriate for every evaluation, considering these principles when working with experimental or quasi-experimental data could be an exciting way to help evaluators extract valid and reliable insights from data with confidence.


[1] See Silberzahn et al. (2018). Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results, which Greenwell used to illustrate the impact of analytical choices and methods on experimental conclusions.

[2] Amy Cuddy’s research on ‘power poses’ became well known after a viral TED talk. This New York Times article reports on the impact of the well-publicised failures to replicate her original findings: https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html

[3] See https://projects.fivethirtyeight.com/p-hacking/ for a demonstration of how to ‘p-hack’ a dataset.
