Off course: Dealing with unintended consequences in evaluation
Most of us working in the evaluation and impact space are here because we want to contribute to positive change. So, what happens when it turns out our interventions are having a different impact from the one we intended?
At this week’s Australian Evaluation Society International Evaluation Conference, Niketa Kulkarni and Nikki Bartlett will be exploring unintended consequences, and how we as evaluators and social innovators can better prepare for, identify and respond to them – before they cause harm. Here, Niketa gives us an introduction to a much bigger conversation we need to have.
What exactly is an unintended consequence in the world of social impact?
When we design projects to improve social and environmental outcomes, we usually work off a Theory of Change. It’s a foundation for an evaluation framework; think of it as a map for how activities will lead to the outcomes we want. In it, we make (often very sound) assumptions about the sequence of events or interactions that needs to occur to create the change we want to see. But sometimes one or more of our pathways to change veer off in a different direction, and the project ends up producing behaviours and outcomes that we didn’t anticipate. That’s where we see unintended consequences.
Can you give us an example of unintended consequences?
Let’s say we had a program designed to support women’s empowerment in an area with high unemployment and poverty rates. Our Theory of Change might begin with providing skills training in an income-generating activity that we hope women will attend, followed by support for them to develop their own businesses, which we hope leads to greater income and the ability to make their own decisions about their lives. That’s the empowerment we’re looking for and building into our Theory of Change.
But what if that new source of income gave women’s partners access to more disposable income, which they spent on alcohol, becoming abusive as a result? Or what if women were forced to work and then had to hand their earnings over to other family members, leaving them with even less say over their lives?
These are simplified versions of very real, common unintended consequences seen in some development initiatives.
Does an unintended consequence have to be bad?
Not at all. Using the example above, it might be that with a better income, women could afford better food, leading to improved nutrition for themselves and their entire family or community. Or perhaps they could afford school uniforms, leading to greater school attendance and a better education for their children.
Unintended consequences can be good, bad or neutral – the common thread is that they are changes brought about specifically by our intervention that we didn’t anticipate.
Likewise, an unintended consequence doesn’t have to affect all or even most people involved in the project. It may be that only a minority of people are affected negatively or differently, while most participants benefit from the project as expected.
Very often, it’s those groups who are already most vulnerable that tend to suffer from negative unintended consequences. For example, say our hypothetical women’s empowerment program failed to design for disabled women to take part. They might find themselves without support, because their carers now have less time for them due to the work they’ve gained through the program – severely affecting their quality of life.
If a program or intervention is still achieving most of its aims, do we need to worry about unintended consequences?
We have a responsibility to. The Australian Evaluation Society’s Guidelines for the Ethical Conduct of Evaluations require that we look for potential risk or harm to individuals, following the Do No Harm principle adopted by aid and development agencies.
Quite apart from our ethical and moral obligations, unintended consequences can, depending on the program, be extremely serious – even life-threatening.
So how do we look out for unintended consequences, and what do we do when we find them?
As with most things related to impact measurement and evaluation, don’t leave it to the end!
There are simple steps we can take, like making sure we speak to a range of people likely to be impacted by the project about what some of the consequences might be.
The importance of getting participant feedback on the design of the project and evaluation cannot be overstated – especially feedback from the most vulnerable groups and participants, those who tend to be most negatively affected by unintended consequences and whose voices often go unheard. We simply don’t know what we don’t know, and we need to check our assumptions and biases.
Likewise, making space at the design stage of the evaluation – the freedom and flexibility to move beyond the tick-boxes of quantitative, Theory of Change-based indicators – will help us find the unexpected stories of change. This is why we advocate for qualitative, open evaluation approaches that sit alongside quantitative methodologies, so we can get a more complete and nuanced picture.
What are the challenges of this work?
This takes us full circle. We all want to see positive change, but it can be difficult to actively set out looking for potential harm. It’s never great to hear that something we’ve invested time, energy and money in is actually having a detrimental impact.
We don’t want to risk losing funding. We don’t want the bad press or the responsibility if a program is not working as it *should*. But this is exactly why we need to be aware of unintended consequences and take steps to prevent, find and address them – because then we have the chance to adapt what’s not working as expected.
Because our job as evaluators isn’t to measure the change we want to see. It’s to measure the change that’s in fact occurring in people’s lives. And our ultimate responsibility is to them.
Niketa Kulkarni and Nikki Bartlett are presenting a session titled Identifying unintended consequences through inclusive evaluative design at the Australian Evaluation Society’s International Evaluation Conference in Adelaide, 28 August – 2 September 2022.