Three classic evaluation mistakes – and what we learned
We’ve all been there. The dread of knowing you’re about to deliver unwelcome news – and that you’re at least partly responsible. The annoying inner voice that reminds you that you should have trusted your gut and raised your concerns earlier.
For those working on initiatives aiming to deliver positive social or environmental outcomes, an evaluation either going wrong or unearthing something wrong can feel like a personal failure. And sometimes that fear of failure can hinder the transformative potential that is the true superpower of MEL (Measurement, Evaluation and Learning). But mistakes happen, and they create fertile ground for growth.
Here, three evaluation experts share some of their biggest MEL mistakes, and what they learned – so hopefully you too can benefit from their experience.
Mistake 1. Trying to neatly tie up a MEL plan with a bow
Lee-Anne Molony, Clear Horizon
I was trying to complete a MEL plan by the end of the contract date – a reasonable expectation – but it effectively locked us into an implementation plan we weren’t completely ready for. It also presumed that the team already had the evaluative thinking capability to adapt as necessary. In short, we didn’t have enough information on what we didn’t yet know for the plan to be flexible enough to support and respond to lessons and new insights.
As this client was a government agency, the focus of the evaluation was on accountability and reporting back to stakeholders on progress and results, rather than on gathering data for program learning and improvement. As such, the plan we developed was perhaps too rigidly focused on this, and didn’t lend itself to being as adaptive as it could have been to what emerged during data collection and evaluation.
I learned …
I’ve learned not to jump in too quickly, and to make people feel safe enough to focus on learning as much as on accountability. This means leaving space in MEL plans to call out what we don’t yet know, and to be intentional about using MEL as an active tool for program learning beyond just reporting back.
Mistake 2. Overplanning in uncertain environments
Jess Dart, Clear Horizon
When I first started working in Developmental Evaluation – where you’re working in the early stages of an initiative and you’re also contributing to the design of it – I did what I usually do and developed a comprehensive and detailed MEL framework.
I specified everything: the logic, the evaluation questions, the indicators. But then the project drastically changed, and I had to completely redo the plan. And then it changed again, and I had to redo the plan again. And again. I ended up using all of the budget on planning rather than on actually doing the evaluation itself. There was none left by the time we got to collecting data, let alone analysing the results.
I learned …
I’ve learned not to do big, detailed plans in uncertain environments. Now I do concept notes instead. When the project changes (as it usually will), these concepts can stay pretty durable, and I only nut out the detail when absolutely necessary.
Mistake 3. Trying to do too much at once
Samiha Barka, Launch Housing
My colleague and I were tasked with developing a MEL framework for a small but ambitious program in South East Asia. We developed a beautiful, very detailed MEL plan, where we identified all the data we wanted collected, every piece of information we thought could be useful to the program.
The team running the program was small. What at first seemed feasible, not to mention beautifully comprehensive, became difficult for the team to implement.
They couldn’t keep up with the data collection. They also didn’t have the right skills to effectively collect, collate and analyse all this data so that it could influence decision-making. We ended up with a lot of data but not enough information that meaningfully contributed to their understanding of progress.
When this became obvious, we worked with the program team and the funder to identify which information was critical for the program versus information that was ‘good to know’ but didn’t have a clear use.
I learned …
Evaluation and impact measurement is a journey for organisations, teams and people. Be mindful of the resources – funding, people and skills – the program has earmarked for evaluation.
When developing MEL plans, make sure they are proportional to the resourcing and skills available. Be prepared to manage expectations with funders or leaders about the breadth and depth of evaluation the team can deliver with the resources and skills they have.
Feeling like you need some solid guidance on how to go about measuring your impact? Learn how to measure the impact of a project, program or organisation with the Complete Guide to Measurement, Evaluation and Learning (MEL).