Amplifying social impact

How can organisations improve their evidence-based decision making and, ultimately, their social impact? This was the key focus question in the latest AES professional learning seminar held in Sydney on 21 March.

Lena Etuk of the Centre for Social Impact (CSI) – a collaboration between UNSW, The University of Western Australia and Swinburne University of Technology – presented one approach to answering this question, describing CSI’s Amplify Social Impact project and how they aim to help organisations measure their social impact achievements.

A major focus of the Amplify project is connecting and working with organisations that address social issues in the areas of Housing Affordability and Homelessness, Education, Financial Wellbeing, Social Inclusion and Work. The project has three aspects: developing an evidence base and research agenda for key social issues, engaging and connecting stakeholders to design and implement innovative solutions, and developing an online platform for understanding social problems, measuring social impact, and reporting and benchmarking social outcomes.

Lena’s presentation stimulated a lot of debate in the audience.

Outcome measurement

Lena, while acknowledging the significant amount of work and investment going into social purpose programs and initiatives in Australia, identified two related concerns with outcome measurement in the sector.

  1. Organisations aiming to have a social impact often use a wide variety of tools to measure their outcomes.
  2. A large proportion of the tools being used to measure social impact, including population and household-level surveys, have not been validated.

This means that those of us working in the social purpose sector are often unable to compare outcomes across similar services and programs, and unable to know with confidence that we are measuring the outcomes we say we are measuring.

CSI have collated a range of instruments and developed a register (called Indicator Engine) of over 700 indicators tailored for use by government, non-government organisations and social entrepreneurs.

Benchmarking

CSI’s intention is to develop a platform (called Yardstick) in which organisations striving to contribute to the same social outcomes can benchmark their performance, learn from one another and potentially replicate elements of successful projects.

The platform will be open to, and is designed to be of benefit to, organisations of all types and sizes, governments, existing networks, academics, social entrepreneurs and other stakeholders working to address social issues.

Anticipated challenges

While the intent is noble, it is likely that organisations and evaluators who choose to use Amplify’s Indicator Engine and Yardstick platform will encounter challenges.

  • What if the indicators don’t meet the needs of the project managers to understand how their intervention is working?
  • How will the identified instruments be further tested with Australian populations, including Indigenous populations?
  • What certainty will users of the platform have about data quality?
  • Is benchmarking the right term? Is the intention to define a certain standard?
  • What other information do you need to understand the drivers of differential outcomes so that benchmarking can serve its purpose of enabling organisations to make improvements?

We believe benchmarking could bring new opportunities and challenges to the sector.

Imagine a hypothetical scenario involving a youth job creation program in a large urban area and a similar program rolled out in a regional or remote community. An important outcome of both programs is enhanced job satisfaction, so they use the same indicator but achieve different results. From an evaluator’s perspective, the Amplify platform may be useful in enabling comparisons of the data, but it might also fail to collect the (accurate) data needed to sufficiently understand the contextual drivers of that difference.

It’s possible that other actors could use Amplify for different purposes, such as performance monitoring. Having accurate, validated data could also be important to funders, who have limited resources but need to decide which program(s) to fund going forward. The possibility of funders using Amplify to monitor an organisation’s performance may deter some organisations from participating, as they may fear that their performance (according to the data) will not be strong enough to attract additional or new funding. We feel that benchmarking needs to be done carefully to maintain a collaborative spirit between organisations. It shouldn’t be about who is outperforming whom.

What next?

Despite these issues, there are reasons for evaluators to be excited about the Amplify Social Impact project and CSI’s work.

So far, CSI have secured only around 50% of their funding target – securing the outstanding funding will enhance their ability to work closely with organisations designing and implementing social purpose programs. In turn, this could create new opportunities for strengthening outcomes measurement.

The Amplify project is likely to generate further discussion around indicators, benchmarking and addressing social issues. This is positive as it could lead to individuals, governments and organisations becoming more aware of these issues, sharing learnings and increasing opportunities for collaboration and replication.

A cynic’s response

ARTD Partner Andrew Hawkins believes this endeavour should be entered into with great caution and humility about what can be achieved.

Reliable and valid measures are great, but they don’t ensure reliable and valid measurement of outcomes. Most psychometric scales are designed to measure a current state – self-esteem, coping skills and so on. These scales are not usually designed to be sensitive to changes resulting from a program, especially programs operating in a complex adaptive system (i.e. society), where change may be non-linear and standardised measures may not pick it up.

More fundamentally, the method by which measures are taken and the completeness of the data are a huge potential source of error that may well dwarf the benefit of using similar measurement tools. What if one organisation hands out the measurement tool at the end of a session, with program staff present, while another mails it out and gets low response rates? What if one has a control group with random allocation to deal with attribution error and another does not? There are innumerable ways that error can seep in and pollute so-called ‘equivalent’ measures of outcomes. And then, if we follow Rossi’s law – that the more rigorous the measurement, the more likely the result is ‘no effect’ – we could end up funding organisations with less rigorous measurement, and programs with easy-to-measure outcomes or relatively ‘easy to change’ cohorts.
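To make this concrete, here is a rough, hypothetical simulation of how administration mode alone can distort a comparison. It is not based on any real program, CSI tool or ARTD data – the 0.5-point true effect, the +0.7 social-desirability bump and the mail-out response rates are all invented for illustration. Two imagined programs produce an identical true improvement on the same 0–10 satisfaction indicator, yet report different averages.

```python
import random

random.seed(0)

# Illustrative only: all numbers below are assumptions, not real evaluation data.
TRUE_EFFECT = 0.5        # both hypothetical programs genuinely lift scores by 0.5 points
N_PARTICIPANTS = 500

def simulate_program(in_session: bool) -> float:
    """Return the mean observed post-program score under one administration mode."""
    scores = []
    for _ in range(N_PARTICIPANTS):
        baseline = random.gauss(5.0, 1.5)
        true_post = baseline + TRUE_EFFECT
        if in_session:
            # Survey handed out with program staff present: everyone responds,
            # but social desirability nudges answers upward (assumed +0.7).
            scores.append(true_post + 0.7)
        else:
            # Mailed-out survey: assume roughly 30% respond, and the more
            # satisfied participants are more likely to return it.
            if random.random() < 0.15 + 0.03 * true_post:
                scores.append(true_post)
    return sum(scores) / len(scores)

print("In-session administration:", round(simulate_program(True), 2))
print("Mail-out administration:  ", round(simulate_program(False), 2))
# The gap between the two observed means reflects how the 'same' indicator was
# administered, not program performance: both programs had the identical true effect.
```

Under these assumed numbers, a benchmarking platform would rank one program ahead of the other even though their real impact is the same – which is exactly the kind of error that can swamp the benefit of shared measurement tools.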

While there is merit in providing and using comparable, reliable and valid measures across interventions, it would be dangerous to create ‘league tables’ of organisations and programs. It would be wholly unacceptable to allow this to happen by publishing data when the measures have not been taken in a standardised and systematic manner and shown to be sensitive to the changes an intervention is making.

We look forward to attending the AES Professional Learning session on ‘Harnessing the promise and avoiding the pitfalls of machine assisted qualitative analysis’ on 2 May presented by ARTD’s Jasper Odgers and David Wakelin and our partner at Altometer BI.
