Changing self and system in COVID-19

This article was written by Anna Powell and Alessandra Prunotto

How do you change a system that is in a state of flux – and at times, in chaos?

The systems change initiatives we support at Clear Horizon are now tackling this question. Before COVID-19, the typical patterns of systems were very difficult to shift – in many respects, the systems we live and work in here in Australia have been built over more than 200 years. Now, they move like quicksilver.

The pandemic has required systems change-makers to reorient themselves in systems that are in flux. Change-makers are working out new opportunities and leverage points to effect positive change now. (Let's not waste a good crisis.)

Of the many potential systems levers in this context, one meaningful area to focus on is how you yourself show up in the system. While systems change leaders do this all the time, during a time of heightened instability working on yourself helps keep you anchored to purpose and hold steady through the state of flux. Developmental evaluators can play a crucial role in supporting this reflective and action-oriented process.

You are the system

According to The Water of Systems Change by Kania, Kramer and Senge (2018), systems change requires change-makers to identify the intangible aspects of a system, including relationships, power distribution, institutional norms and constraints, attitudes, and assumptions.

One of these intangibles is mental models, a powerful lever for shifting the conditions holding a system in place. Mental models are 'habits of thought—deeply held beliefs and assumptions and taken-for-granted ways of operating that influence how we think, what we do, and how we talk' (Kania, Kramer & Senge 2018).

Deeply implicit, mental models are tricky to identify, especially in yourself and those like you. They are also confronting to challenge, because you have to question the power structures that have shaped these mental models – often structures you have benefited from in some way.

But it has to happen. Systems change is ultimately change in people. Mental models are held by individuals, and individuals are fractals of a system. Learning and change on a personal and team level is a part of systems change.

Triple-loop learning for self-reflection

Reflection can be uncomfortable. In a group setting, it needs high levels of vulnerability and trust. This process can be softened by a neutral developmental evaluator playing the role of 'curious friend' – someone close enough to create a sense of warmth, but removed enough to identify cognitive dissonance.

These difficult reflections need to be systematic and driven by pertinent questions. Mark Cabaj's paper Evaluating Systems Change Results introduces a concept called 'triple-loop learning' that we've found useful for reflecting more deliberately on the self in the system.

In our previous blog post, we explained how single and double loops of learning are useful for strategic learning in emergent contexts. Cabaj explains that while these loops deal respectively with learning about what we are doing (implementation) and what we are thinking (strategy/context), triple-loop learning deals with how we are being. It directs us to ask questions about our emotional triggers, our habitual responses, our social norms/dynamics and our individual and shared values and narratives.

Brave government: triple-loop learning in action

The value of this triple-loop learning was highlighted through our work in supporting a government client with their efforts to change the way they work with communities. They were aiming to shift an entrenched power dynamic in how government works with communities and transform ways of working within their own offices.

As our staff built a trusting relationship with this government team, together we identified their positioning of power and authority as a key systems lever. Before COVID-19, we began to help facilitate their reflections on how they were stepping into their power, moving further into the tricky space of personal and team adaptive work. Seeing senior officials confront their own discomfort was a testament to their commitment to change.

The team identified that their adaptive work and reflection meant that they would be able to take forward new ways of being and working wherever they went in government. By starting to build a tolerance of the discomfort needed to learn and change, they were modelling the learning culture they wanted to see across government. They were moving towards a new mode of ‘being’ in a system that they could take with them no matter which department they found themselves in. A systematic triple-loop reflection process was key to identifying these valuable, yet intangible, moves toward meaningful change.

Adapting programs in a time of rapid change

COVID-19 is a time of flux. In a matter of months, we’ve seen social and economic upheaval across the globe.

In times of uncertainty and transition, change-making organisations need to be agile and responsive. Our survival and our ability to help others depends on it. With a few tweaks, a well-embedded measurement, evaluation and learning (MEL) system can be a vital resource for adapting programs in a time of rapid change.

Theory of change and learning loops

As practitioners, we are well aware of the benefits of MEL for adapting programs, albeit in more stable periods.

Theory of change processes help organisations work out what to do, what to measure, and how to set up the systems for collecting and interpreting data. Most programs use two "loops" of learning to make decisions – a concept popularised by Chris Argyris in the early 90s.

The first is "single loop" learning, which relates to implementation. It is usually done by the program staff closest to the action, who frequently reflect on what's working and what's not.

The second is “double loop” learning, which is related to broader strategy. This learning process is usually done by the decision-makers at the helm of an organisation, who need to reflect on whether the goals and activities are still relevant to the context and whether their big assumptions still hold true. These double loop decisions happen less often, maybe every few months or twice a year.

When the context is constantly shifting, there is an even greater need to take a learning and adaptive planning approach. In the context of COVID-19, we’ve identified two key changes to make. The first is to shorten the cycles of data collection and feedback. The second is to ramp up your double-loop learning.

Let’s unpack how you’d change your approach to learning and adaptive planning by walking through an example we often use to teach theory of change at Clear Horizon Academy: the carpark model.

An example of adaptive planning: the carpark model

Imagine that you’re the owner of a carpark in the CBD. Before the pandemic, your main challenge was that cars were being stolen and this was discouraging parkers. Your vision for your carpark was that it would be safe and profitable, free from car thieves and full of happy customers.

You had created a theory of change to transform your carpark, and through this process decided that adding CCTV cameras was the one major influencing activity that would increase the number of customers.

[Figure: The carpark model's theory of change, showing causal pathways from the influencing activities to the broader goals.]

For a while, this had worked. Not only had the CCTV cameras deterred thieves, but they had also attracted more parkers. This had created more eyes in the carpark, creating ‘natural surveillance’ and reducing opportunity for theft. Times were good, and your change program was working.

Then, COVID-19 hit.

Now, with everyone working from home, there are a lot fewer people driving into the city. There's a hospital nearby, so a few workers are still parking there. But each day there are fewer customers, and theft is starting to spike.

You realise that COVID-19 has created major changes in your context that are inhibiting your goals. So you decide to shorten your cycles of data collection and feedback, with a focus on double loop learning.

You take the following steps:

Step 1

The first aim is to gather data about this new context more regularly. You set up systems for single loop learning: daily reports from the carpark attendants on their observations, as well as pulse surveys for customers to understand what is concerning them. You also set up a weekly time to review CCTV footage to try to spot patterns in the thefts and understand their cause.
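To make this concrete, here is a minimal Python sketch of what that single-loop data system could look like. The record types, field names and figures are all illustrative assumptions for the carpark example, not a real system:

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative single-loop records: daily attendant observations
# and customer pulse-survey responses.
@dataclass
class DailyReport:
    day: str
    cars_parked: int
    thefts_observed: int

@dataclass
class PulseResponse:
    day: str
    concern: str       # free-text concern, e.g. "hygiene", "theft"
    satisfaction: int  # 1 (unhappy) to 5 (happy)

def weekly_summary(reports, responses):
    """Roll daily single-loop data into a weekly summary for review."""
    concerns = {}
    for p in responses:
        # Tally concerns so recurring issues (e.g. hygiene) stand out.
        concerns[p.concern] = concerns.get(p.concern, 0) + 1
    return {
        "total_thefts": sum(r.thefts_observed for r in reports),
        "avg_daily_occupancy": round(mean(r.cars_parked for r in reports), 1),
        "avg_satisfaction": round(mean(p.satisfaction for p in responses), 1),
        "top_concerns": sorted(concerns, key=concerns.get, reverse=True)[:3],
    }

if __name__ == "__main__":
    reports = [DailyReport("Mon", 40, 1), DailyReport("Tue", 35, 2)]
    responses = [PulseResponse("Mon", "hygiene", 3), PulseResponse("Tue", "hygiene", 2)]
    print(weekly_summary(reports, responses))
```

The point of the sketch is the cadence: data that once fed a quarterly report is now rolled up and reviewed weekly.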

Step 2

Armed with information, now is the time to use double loop learning to revisit the theory of change. Are your broader goals still relevant? Do you still need a safe and profitable carpark in this context?

You determine your goals are still relevant. However, you note a need to broaden the definition of safety. From your weekly scans, you realise your customers are concerned about hygiene in the carpark – so you redefine safety in terms of health as well as security.

Step 3

Now that you're moving toward adjusted goals, you need to reconsider your activities. Will they still be effective in this new context? What might hinder or help each causal pathway to your goals?

You realise that one of your major assumptions is no longer holding up. In a pre-COVID-19 world, you assumed that more people in the carpark would create natural surveillance, discouraging thieves and attracting more customers.

But from your data scans, you find out that a bustling carpark is having the opposite effect! Customers are concerned about being close to others and having more people touching the ticket machines, which increases their chance of catching COVID-19.

Step 4

You come up with additional strategies that work with your revised assumptions. To balance natural surveillance with social distancing, you prohibit parking in every second car space. You also install contactless payment technology and put up signs highlighting your attention to public health. As you begin to implement, you use your new single-loop feedback systems to check whether these strategies are working.
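As a sketch of that single-loop check, you might compare the weekly summaries (as produced by the earlier sketch) from before and after the changes. The thresholds here are invented for illustration and would need to be set against your own baseline:

```python
def strategy_check(before: dict, after: dict,
                   theft_tolerance: float = 0.8,
                   satisfaction_floor: float = 3.5) -> list[str]:
    """Compare weekly summaries before/after a change and flag
    anything off track for a double-loop review."""
    # A ratio below theft_tolerance means thefts dropped meaningfully.
    theft_ratio = after["total_thefts"] / max(before["total_thefts"], 1)
    findings = []
    if theft_ratio > theft_tolerance:
        findings.append("thefts are not falling as hoped")
    if after["avg_satisfaction"] < satisfaction_floor:
        findings.append("customer satisfaction is below target")
    return findings or ["strategies look on track - keep implementing"]
```

Anything this check flags becomes an input to the next double-loop conversation, rather than a verdict in itself.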

What we can learn from this example

For any organisation trying to continue its important change work, this approach can greatly help with adapting programs in a time of rapid change like COVID-19. Regularly gathering implementation data and scheduling time to reassess your strategy will set you up to do your best in this chaotic period.

M&E and international development in COVID-19

The world is moving under our feet. I am sure I am not the only person to feel it. Everyone is struggling to come to grips with how COVID-19 has changed our lives. I work in M&E within international development, and with the border closures, I cannot travel to work with partner organisations.

Even if I could travel, those partners are in emergency mode, responding to COVID-19 in the best way they can. Planned M&E activities are often put on the backburner for more pressing priorities.

When I have thought about the work they are doing, as well as the everyday heroism of nurses, doctors and cleaners, I have struggled at times with my own relevance as an M&E consultant in international development. How can I help during COVID-19?

In this blog, I want to share some of my initial thoughts on how M&E can help international development to respond to COVID-19. I say they are initial as they will evolve over the course of this pandemic. At Clear Horizon, we will continue to share and refine our thoughts on how M&E can support communities to respond, as well as adapt to the challenges and opportunities raised by this seismic change.

Supporting evidence-based decision-making

International development programs are delivering urgent materials such as personal protective equipment (PPE) to partner countries, or funding other organisations to do so. However, in the rush to deliver essential materials and services, setting up data systems that provide regular updates on activities, outputs, ongoing gaps and contextual changes is often neglected.

While this is understandable, setting up M&E systems to provide regular reporting is essential. Decision-making will be limited at best if it is not informed by evidence of what is happening on the ground.

For example, if we understand what programs and donors are funding, where, and what gaps remain, we can better coordinate with other programs to ensure that we are not duplicating efforts. There is also an accountability element, as our work is often funded by the taxpayer. At a time when people are losing their jobs in record numbers, we have an obligation to report how their taxes are being spent.

We can learn from humanitarian responses on how to develop lean M&E systems. Examples include fortnightly situation reports that outline key results, next steps and any relevant contextual changes in one to two pages.

There is also an opportunity to draw on digital technology to provide organisations with clearer oversight of their pandemic response. Online dashboards, such as the ones we use through our Track2Change platform, can provide organisations with a close to real-time overview of where they are working, what they are delivering and key outcomes.
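Track2Change itself is a proprietary platform, so the following is only a generic Python sketch of the kind of aggregation such a dashboard could sit on. The record format and example data are assumptions, not the platform's actual schema:

```python
import json
from collections import defaultdict

# Hypothetical monitoring records: (location, item delivered, quantity).
deliveries = [
    ("Province A", "PPE kits", 500),
    ("Province A", "handwashing stations", 12),
    ("Province B", "PPE kits", 300),
]

def dashboard_payload(records):
    """Aggregate delivery records by location into a JSON payload
    that a dashboard front-end could render as tiles or a map layer."""
    by_location = defaultdict(lambda: defaultdict(int))
    for location, item, quantity in records:
        by_location[location][item] += quantity
    return json.dumps(by_location, indent=2)

print(dashboard_payload(deliveries))
```

Even a simple roll-up like this, refreshed as new delivery records arrive, gives decision-makers a shared, current picture without waiting for the next formal report.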

Driving learning and improvement

More importantly, M&E will help us to understand what works, in what contexts and for whom. We work in complex environments where there are multiple interventions working alongside one another, along with dynamic economic, political, cultural and environmental factors that interact with and influence these interventions. Non-experimental evaluation methods such as contribution analysis and process tracing help us to understand why an intervention may or may not work in certain contexts due to certain contributing factors, and thus what needs to be in place for this intervention to be effectively replicated elsewhere.

But M&E does not only support learning at set points in the program cycle, such as during a mid-term evaluation; it can do so on an ongoing basis. For example, regular reflection points could be built into programming, with program staff coming together face-to-face or through online video conferencing. In these meetings, stakeholders can reflect on evidence collected through routine monitoring, as well as on rapid evaluative activities such as case studies of more and less successful examples.

During these sessions, program M&E staff would facilitate program teams to reflect on this evidence to identify what is working well or not so well, why and what next. Ongoing and regular reflection also draws on the tacit knowledge of staff – that is, knowledge which arises from their experience – to inform program adaptation. This sort of knowledge is essential in dynamic and uncertain contexts where the pathway forward is not always clear and decisions may be more intuitive in nature.

Clarifying strategic intent

Thinking about your strategy is not necessarily at the top of your mind when responding to a pandemic. However, as the initial humanitarian phase of COVID-19 winds down, and we start to think about how to work in this new normal, program logic (or program theory/theory of change depending on your terminology) is a useful thinking tool for organisations.

Simply put, program logic helps organisations map out visually the pathways from their activities to the outcomes they seek to achieve. Program logic not only helps in the design of new programs to address COVID-19, but the repivoting of existing programs.

COVID-19 will require organisations to consider how they may have to adapt their activities and whether these may affect their programs’ outcomes. For example, an economic governance program may have to consider how to provide support to a partner government remotely and this may mean they have to lower their expectations of what is achievable. By articulating a program logic model, an organisation can then develop a shared understanding of their program’s strategic intent going forward.

You can, of course, also use program logic to map out and get clear about the COVID-19 response itself. For example, we used program logic to map out one country's response to COVID-19 to identify the areas our client is working in and the outcomes they are contributing towards, such as improved clinical management and community awareness, so that they can monitor, evaluate and report on their progress.

Locally led development

This crisis poses an opportunity for local staff to have a larger role in evaluations. This not only builds local evaluation capacity, but enables local perspectives to have a greater voice.

Typically, international consultants fly in for a two-week in-country mission to collect data and then fly home and write the report. National consultants primarily support international consultants in on-ground data collection.

In Papua New Guinea, we are promoting a more collaborative approach to M&E in which international and national consultants work together across the whole evaluation process, from scoping and data collection through analysis and reporting. As national consultants are on the ground and better able to engage with key stakeholders, their insights and involvement are essential to ensuring that evaluation findings are practical and contextually appropriate.

But I think we should look for ways to go beyond this approach to one that has local staff driving the evaluation and us international consultants providing remote support when required. For example, we could provide remote on-call support throughout the evaluation process, as well as participate in key workshops and peer review evaluation outputs such as plans and reports.

This could be coupled with online capacity building for local staff that provides short and inexpensive modules on key evaluation topics. At Clear Horizon we are starting to think more about this sort of capacity building through our Clear Horizon Academy, which provides a range of M&E training courses including Introduction to Monitoring, Evaluation and Learning.

COVID-19 screwed your MEL plans? What next.

It might be the least of your worries in this time of COVID-19, but measurement, evaluation and learning (MEL) has a crucial role to play to help organisations adapt.

Understandably, in these last few weeks many social change and government organisations have been focusing on emergency response activities – moving their programs online, offering previously face-to-face services over videoconference or text, and making sure their communities don’t fall through the gaps as services pivot.

But as we settle in for the long haul, now’s the time to turn your attention back to those MEL plans that have gone out-of-date in the space of weeks. Focusing on MEL in a time like this can help you to ensure the changes in your organisation’s operations are still aligned to your broader goals. Or if needed, it can help you re-evaluate those goals entirely.

This is the first of a series of blogs that will explore exactly why MEL is so useful not just in ‘peacetime’, but also during this prolonged global crisis, where planning more than a few weeks ahead seems about as useful as sunbathing on a crowded beach.

In the following posts, we’ll take a more in-depth look at topics such as using M&E to adapt to COVID-19 in international development, why strategic learning is important for leading in complexity, and how to use Theory of Change to reorient your programs.

But in the meantime, here are some ways you can start to refamiliarise yourself with that MEL framework you might have left in the dust at the start of March.

What to do with that out-of-date MEL framework?

  1. Update the frequency and content of reporting. In the early days of a crisis we know it is critical to get the right information to the right people. Find out what data is useful to people, and when, then update your reporting systems. Chances are you will need to provide more regular information, at least for a while.
  2. Revisit your Theory of Change. Update it to reflect activities now being delivered online, and look at what this does to your impact pathways and assumptions. You can use your Theory of Change like a canvas to explore positive impact pathways under these new conditions.
  3. Update your intermediate outcomes. You may need to add new short-term outcomes such as immediate relief, reduced stress levels, material safety, or knowledge of new government benefits.
  4. Update your measures where needed. Instead of tracking attendees at a session, you might track the number of online visits, chats, etc.
  5. Reconsider your data collection methods. You may need to move to more qualitative methods during this phase of COVID-19, because life for many people in vulnerable situations may be changing so rapidly that there will be many unintended outcomes your service contributes to. Use the Most Significant Change technique or another method to surface what is changing and how your service has contributed to those changes.
  6. Think about automating data collection. Going online with your programs offers a world of automation opportunity. If you are offering some sort of chat response to your clients or a collaborative space, you can automate the attendee list. You can also send out small surveys, automated either through text message or, as we prefer for security reasons, by sending people a Microsoft Forms link (see the sketch after this list).
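As a minimal sketch of that kind of automation: the survey link and the send_message() function below are placeholders for whatever secure channel you use (an SMS gateway or chat platform), not a real Microsoft Forms or SMS API:

```python
# Hypothetical survey link; in practice this would be your own
# Microsoft Forms (or similar) URL.
SURVEY_URL = "https://forms.example.com/pulse-survey"

def send_message(contact: str, text: str) -> None:
    """Placeholder: swap in your secure messaging provider's client here."""
    print(f"To {contact}: {text}")

def survey_attendees(attendees: list[str]) -> None:
    """Send every session attendee a link to a short pulse survey."""
    for contact in attendees:
        send_message(contact,
                     f"Thanks for joining today's session. "
                     f"Two quick questions for us? {SURVEY_URL}")

# Example: an attendee list pulled automatically from an online session.
survey_attendees(["+61 400 000 001", "+61 400 000 002"])
```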

But a word of warning with privacy: make sure that whatever platform you are using to engage your clients is secure. You can read more about digital evaluation and privacy considerations here.

Stay tuned for the next blog post in this series.

Data privacy and the law

10 things that evaluators (and anyone in the not-for-profit sector) need to know.

If you're an evaluator (or, frankly, anyone on the frontlines of the not-for-profit sector), part of your job is to know a thing or two about data privacy. After all, we're increasingly required to collect impact data for donors and funders about how well our organisations, programs and individuals are performing.

Five messages for the new Australian aid performance framework

At Clear Horizon, we recognise and advocate the importance of the 'L' (Learning) in MEL – it is something we are passionate about: working with our clients in a user-focused way to use measurement to inform practice. Dave Green and Damien Sweeney, Principal Consultants in Clear Horizon's international team, contributed to a recent blog along with a number of their peers, identifying key messages to improve the performance framework of Australia's development assistance.

Read the full blog here!

Let’s bridge the cultural divide: why evaluation and anthropology need each other

Research Analyst Alessandra Prunotto reflects on the Australian Anthropological Society conference, 2-5 December 2019.

This year’s Australian Anthropological Society conference in Canberra was themed “Values in anthropology, values of anthropology”. With this focus on values, you might assume that there would be a panel on evaluation, or at least a paper or two.

But there wasn’t.

How is this possible? I thought. After all, anthropology and evaluation are disciplines that overlap significantly.

Socio-cultural anthropology is the study of human societies in their environments. And evaluation is the practice of ascribing value to an initiative that’s creating change within a socio-environmental system. So at an anthropology conference on values, some debates on the thorny issues in evaluation practice seemed like par for the course.

But this was not the case. Instead, the undercurrent of concern that permeated the conference was the value that anthropology as a discipline has to those outside academia.

I suspect the conference theme of “value” manifested in this way because the majority of attendees were academics or PhD students. I believe that some of these intellectuals are struggling to understand the meaning of their work in a world where the pursuit of knowledge for its own sake is seen as retreating into an ivory tower.

As I listened to the various discussions circulating around “applied” and “public” anthropology, I noted down the themes emerging around the principles and skills that anthropology can offer the world beyond academia.

Looking around at the work we’re doing at Clear Horizon, I believe anthropology has much to offer evaluation.

In what follows, I expand on the ways that anthropological perspectives can help us navigate emerging trends in evaluation practice, as we see change-making initiatives move towards systems-change approaches and as cultural safety becomes essential for evaluation in First Nations contexts.

Anthropology’s superpowers

What are anthropology’s strengths, in comparison with other social sciences? At the conference, I noticed several themes emerging that painted a picture of applied anthropology done well.

Understanding of the other. Anthropology is the comparative study of human societies, and anthropologists aim to understand other people's perspectives, no matter how radically different. This requires you to listen deeply and attentively, with humility. You aim to learn and empathise before reflecting critically.

Challenging assumptions. With an awareness that your way of being and thinking is one of many, anthropology compels you to question your own assumptions, as well as those of others. Anthropology is a critical discipline that aims to surface the most fundamental aspects of different worldviews and value systems.

Bringing together different worlds. As the goal of anthropology is to bridge cultural divides, it gives you the foundation to bring together different stakeholders, creating a space for different voices and perspectives to come together. It helps you to find zones where mutual understanding can flourish.

Reflexivity and awareness of power dynamics. Sharpening your observation skills and handing you a toolkit of theorists from Fanon to Foucault, anthropology enables you to identify power dynamics and maintain an awareness of your own position within them.

Contextualisation and comfort with complexity. Anthropology aims to understand human behaviour and beliefs within a broader social, historical, environmental and political context. It asks you to embrace messiness and complexity, and avoid reductive explanations.

Anthropology in systems-change evaluation

Across the change-making sector, we’re seeing a move away from programs and towards systems-change initiatives, such as place-based and collective impact approaches.

A number of elements that characterise these approaches heighten the complexity required of evaluation practice. These elements include the increased number and diversity of stakeholders involved, long timeframes, the range of changes initiated and the difficulties in identifying how initiatives contribute to certain outcomes.

As we can see from the list above, anthropological training leaves you well-placed to tackle these kinds of evaluations.

It allows you to bring together diverse stakeholders and facilitate the negotiations between different value systems and worldviews. It helps you to understand and manage the power relations that might exist between these different groups. And sitting with the complexity of the work is a sign that you’re dealing with the reality of change, not boxing it down into something that is both manageable and fictional.

Anthropology in First Nations evaluation

Attitudes in relation to First Nations evaluation are changing. At Clear Horizon, we’re hearing more compelling demands for decolonising evaluation and power-sharing. A merely participatory and inclusive approach to evaluating with First Nations peoples is no longer acceptable – instead, evaluation needs to be conducted by First Nations peoples “as” First Nations peoples, a principle that is underpinned by the rights of Indigenous peoples in the UNDRIP.

Anthropological perspectives have much to offer non-Indigenous evaluators collaborating with First Nations peoples, helping them navigate these issues sensitively and appropriately.

Most importantly, anthropology can provide a means to understand that hegemonic practices in evaluation and change-making are not the only ways of working, nor are they necessarily the right ones. Furthermore, anthropological perspectives can foreground how histories of colonisation can continue to manifest in power relations and structural disadvantage today. These aspects of anthropology can lend the humility needed to truly shift evaluative practice back to First Nations peoples and communities who have the expertise to evaluate in their own context.

Bridging the divide

It’s clear that anthropology can bring important skills and perspectives to the emerging trends in evaluation practice.

But it seems that evaluation is not on the radar as a potential career for anthropology students. As I’ve highlighted, it was not even mentioned at the most recent annual conference, despite the focus on “value” and a more public anthropology. And from my own experience, having graduated from an anthropology degree just last year, I’d never heard evaluation mentioned as a potential career path.

What can we do to bring evaluation and anthropology together?

One option might be a partnership, or at least more dialogue, between the Australian Evaluation Society and anthropology departments. Local chapters of the AES might have the potential to run careers information talks, mentorships, or even graduate placement programs.

Perhaps advertisements for evaluation positions could specifically mention that anthropology graduates are welcome in the list of disciplinary backgrounds accepted.

But the skills and ideas of anthropology should not just be limited to those who studied it at university. There could also be an opportunity for anthropologists to run training for evaluators who come from disciplines more removed from anthropology, to upskill them for these emerging areas of work.

Evaluation and anthropology have the potential to achieve more impact together. Evaluation can benefit from the nuanced, critical perspectives anthropology can bring. And anthropologists are seeking ways to make themselves useful outside universities.

Let’s reach out.

Resilient Sydney

Clear Horizon’s Sustainable Futures team are working with the Resilient Sydney Office to develop an M&E Framework for the five-year Resilient Sydney Strategy.

The Strategy aims to strengthen Sydney’s capacity to prepare for, respond to and recover from disaster, whilst ensuring all of Sydney’s communities can access opportunities to thrive. The Strategy aims to effect change across the systems of the city to achieve these objectives, and is being delivered through collaborative initiatives, underpinned by a collective impact model for systemic change.

With the Strategy's focus on systemic change, collaboration and collective impact, the Sustainable Futures team have been developing an M&E Framework informed in part by the Place-based Evaluation Framework (Dart, 2018). This will ensure the Strategy's M&E works with the phased nature of systems change and across the different scales of change. In addition, to align with the collective impact model used, the Framework distinguishes between the work and outcomes of the backbone organisation (i.e. the Resilient Sydney Office) and those of the broader partnership.

Working with the Resilient Sydney Office on this M&E Framework has been a really exciting opportunity for our team, for a number of reasons. The first is the clear alignment with our team's passion and vision for driving real and positive change. The second is that the complexity the Strategy is dealing with demands that we continue to innovate, test and refine our M&E approaches, to ensure they remain useful and fit-for-purpose, and can meaningfully engage with the complexity of evaluating influence on systems change. We are thoroughly enjoying the challenges this project has thrown at us and are excited to see where it goes next!

Reflections on transferring to the international evaluation space

Kaisha Crupi is a consultant at Clear Horizon and has recently made the move from the domestic Sustainable Futures team into Clear Horizon International. Below is a reflection piece from Kaisha, discussing her learnings and observations in her new role.

Before joining Clear Horizon 18 months ago, I had only a small taste of the international working world. However, since I made the full transition into ‘Clear Horizon International’, affectionately known as CHI (pronounced chai), I feel as if I have jumped into the deep end with each team member supporting me to learn how to swim, and to swim fast. I also feel that the rest of the organisation is on the sideline cheering me on. Below are my reflections and learnings from the last few months of being part of the CHI team.

Working in the international space is tricky

When I was in the domestic team, I was exposed to many different industries, types of work and ways of working, and met so many people who are passionate about their work through workshops, interviews and product development. Now that I am starting to work in the international space, I am learning about things outside my home country, and about the countries I am working in (and not necessarily living in). Trying to understand different cultural customs, work around language barriers and time zones, and understand different political systems and social contexts is proving to be quite tricky. I am learning a lot more, asking a lot of questions and reading more widely than my usual, easily accessible news. I am also being kind to myself – I know that I am not expected to know everything, and I am not putting pressure on myself to do so, especially in a short amount of time!

The work is the same, but different

When I first joined the CHI team, I thought it would be a great chance to learn a whole new skill set and challenge myself even further by learning something different and working in a different way. To my surprise, my first task was to conduct a document review against the monitoring and key evaluation questions for an upcoming workshop – something I had finished doing for a domestic project a week earlier to feed into a report! The questions were similar and the way to go about it was the same. The only thing (which was quite a big thing, mind) was that the language and jargon were different, and instead of talking about an area or region, the project was focusing on a whole country! My biggest challenge in joining the team so far has been getting used to all the acronyms in reports and discussions with my peers and our clients. I am slowly getting there; though someone should quiz me in the early stages of next year.

Understanding complex challenges

You learn about a country in a very different way when you go there for work rather than for a holiday. There is a saying that you should not discuss politics in polite company – this is very different in the international working space, particularly in design, monitoring and evaluation. You learn about a country's context on a more granular level, ask the difficult political questions and try to understand the country as much as you can, as fast as you can, especially whilst in-country. I have been exposed to complex ways of working, what the client deals with and the small spot fires they must put out on a day-to-day basis (which are quite different from those in the domestic space). These issues do not have quick-fix solutions. There is at times a feeling of helplessness – now that you know this information, what are you going to do with it? I believe that doing design, monitoring and evaluation work helps with this, as knowledge is power and can be a communication tool to change someone's life for the better.

I feel very fortunate to have landed where I am today. Not many people can say that they have ended up in their dream job, especially in such a great values-driven organisation working in a very niche area. I have a great team of people supporting me and putting up with my innumerable questions and reflections, and I look back fondly at my time in the domestic team, where I was able to build strong foundational knowledge and be supported in every direction. I am looking forward to continuing to swim out toward the horizon and reflecting on the successes and challenges that arise in such a complex world.

2019 Changefest: Evaluation Tools session

Evaluation Tools session, with Monique Perusco (Jesuit Social Services and Working Together in Willmot), Skye Trudgett and Ellise Barkley (Clear Horizon)

Our session started with a contextual summary of the work and characteristics of 'Together in Willmot', a collaborative social change effort in Mt Druitt involving The Hive, Jesuit Social Services, service providers, schools and many other partners. Clear Horizon is working with Together in Willmot as an evaluation partner. Our shared approach to learning and evaluation responds to the challenges of evaluating systems change and place-based approaches, and is tailored to the phase, pace and strengths of the collaboration. We introduced the process for evaluation we are undertaking, which has involved training in the Most Significant Change technique and local data collection, which will feed into building a theory of change and then an evaluation plan. Next year, we plan to undertake a co-evaluation focused on the efforts and outcomes to date.

During the session we looked at examples of some Most Significant Change stories so far collected as part of this work.

The Most Significant Change (MSC) technique was developed by Jess Dart and Rick Davies. Together, Jess (Clear Horizon's founder and CEO) and Rick authored the User Guide in 2005, and MSC is now applied in innumerable contexts worldwide. MSC is a story-based method that can be used for participatory monitoring and evaluation. The process follows a simple interview structure that can generate a one-page change story. It is participatory because many stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data. MSC uses stories as a form of data collection. Stories are collected from those most directly involved, such as project participants and field staff, usually by asking a simple question such as: 'During the past year, what, in your opinion, has been the most significant change for participants as a result of this program?' Once the stories are collected, stakeholders sit together to analyse them, and participants are asked to select the story that represents the most significant change for them. The process of selecting the most significant story opens up dialogue among project stakeholders about what is most important. This dialogue is then used as evaluation data to create knowledge about the project and what it is achieving.

We also covered concepts and tools for evaluating systems change and place-based approaches from the Place-based Evaluation Framework and Place-based Evaluation Toolkit, which were commissioned by the Commonwealth and Queensland governments last year and are a leading guide for evaluation in this context. We introduced the generic theory of change for place-based approaches and 'the concept cube', which shows the multiple dimensions of evaluation in this context. Clear Horizon worked with TACSI and CSIA to lead the co-design of the framework, and we have been working with government, community, philanthropy and non-government partners to test, apply and progress these learning, measurement and evaluation approaches.