How might we… do systemic social change?
You might have heard that we are living in a VUCA world—volatile, uncertain, complex and ambiguous.
While social policy interventions and evaluation ‘grew up in the projects’, as evaluation theorist Michael Quinn Patton has noted, there is increasing recognition of the need to think in systems. There is also a need for innovation and for closer collaboration between design and evaluation (evidenced at last week’s Design and Evaluation convergence event hosted by the Australian Evaluation Society and Clear Horizon in Melbourne).
Early Saturday morning, when we’d normally be hitting the snooze button before grabbing breakfast, we headed to Social Design Sydney’s ‘How might we…support systemic social change?’ workshop to consider what change makers have learned to date and what’s needed to strengthen our impact.
Dr Andrea Siodmok from the UK Policy Lab highlighted the problem with the metaphor of ‘turning a ship’ for creating change. We – or maybe it’s just me, with my lack of technical understanding of ships, born of my propensity for sea sickness – assume this means change requires a major exertion of energy. But in reality, it is the trim tab, a small flap on the rudder, that creates a break in the water and enables the ship to turn. Our equivalent question in the world of design and evaluation is: how might we identify smaller interventions that enable greater change in systems?
Another possibility Dr Siodmok identified is using change to create change; identifying liminal moments, when the system is in a state of transition, to introduce change. This brings to mind the literature on policy windows. Timing is clearly important, as all change is about people and involves politics.
Both she and her colleague, Sanjan Sabherwal, identified the need to work with policy teams as supporters. They also identified the value of bringing people with diverse perspectives into a room to create a more holistic and shared understanding of systems, break silos, and agree on actions. This requires intentionality in designing the conversation—for example, not having people sit around a boardroom table when making connections and identifying possibilities. This is something we have also found valuable in our work supporting the design and evaluation of initiatives to increase inclusion and reduce stigma and discrimination.
The “lightning round” of presentations from local change makers built on and extended these insights.
- Martin Stewart-Weeks described a world in which institutions have not caught up with change, and called for a reconsideration of the theory of the business of government.
- Kerry Graham noted that, in this context, change is not linear and predictable, and we need to get better at spotting patterns, seeing our own power, learning (together and across conflicting perspectives), and staying the course.
- Cameron Tonkinwise suggested that, when we are thinking about design for systems change, we need to move beyond the theory that design enables change by making things simpler, more automatic, and a little more delightful. To support systems change, things may need to get a bit harder for the people who currently have it easy.
- Sarah Hurcombe reminded us there is no silver bullet. Who is at the centre depends on where you’re standing, and we can draw on Donella Meadows’ leverage points.
- Tom Dawkins challenged us to develop an ecosystem capable of continuing to spin out innovations as the world continues to change, and to think differently about social innovations at different stages in their life cycle (much like what happens in the start-up cycle). This is important because great ideas can look like bad ideas when they violate our assumptions about the world. Not all of them will work, but we can only know that through testing.

With these provocations, we identified four topics for open space conversations. Because I’ve noticed the tension between calls for innovative and evidence-based initiatives—with some grants asking applicants to demonstrate both—I joined the group discussing how we might invest more courageously in early stage innovation. Our group identified a need to:
- fund innovation at different stages differently
- establish a structural and institutional separation for innovation—a brand that can hold “failure” without cracking
- use shared risk models—co-funding by government and community—where appropriate
- share what has been learned and establish a culture of learning.
There were different views about whether government should require all funded initiatives to report on learnings now. From my experience in evaluation, and my reading of the evaluation use literature, evaluation anxiety is a common issue, and individual and organisational receptiveness are key to enabling evaluation use. If we don’t work on shifting the culture as well as the structures around sharing learnings, we could get less than full disclosure in public reporting or, worse, shrinking terms of reference for evaluations. Of course, the relationships between culture, structure, and systems are complex and intertwined. But given the evidence that “failure” can be a step to success in innovation, we need to shift the way we conceptualise failure in this context.
I am looking forward to hearing more from these change makers as they shift systems. If you think the link between design and evaluation is key to shaping the future of evaluation, come along to our open space session at #aes19SYD in September.