DFAT Evaluation of Investment Monitoring Systems
At Clear Horizon, we have been grappling with how to effectively – and efficiently – improve the monitoring, evaluation and learning (MEL) of programmes. Across our many years of experience, and across the range of programmes and partners we work with, one thing remains abundantly clear: the quality of monitoring is the cornerstone of effective evaluation, learning and programme effectiveness. In the international development sector, where large investments operate in extremely complex environments, monitoring matters even more.
At the end of 2017, Byron’s new year’s resolution for 2018 was to “dial M for monitoring” and put even more emphasis on improved monitoring systems. Having conducted stocktakes of MEL systems across a range of aid portfolios, and having been involved in implementing or quality assuring more than 60 Department of Foreign Affairs and Trade (DFAT) aid investments, we have seen clear messages emerge about what works and what doesn’t. This culminated in presentations at the 2018 Australian Aid Conference and the 2018 Australian Evaluation Conference, where Byron and Damien presented on improving learning and adaptation in complex programmes by using rigorous evidence generated from monitoring and evaluation systems.
So we at Clear Horizon welcome the findings and recommendations in the DFAT Office of Development Effectiveness Evaluation of DFAT Investment Monitoring Systems 2018. Firstly, we welcome the emphasis on improved monitoring systems for investments – it is essential to improving aid effectiveness. Secondly, we strongly agree that higher quality MEL systems are outcome focused, have strong quality assurance of data and evidence, and ensure that data serves multiple purposes (accountability, improvement and knowledge generation). Thirdly, we agree that partners and stakeholders with a culture of performance oversight and improvement are essential – this culture needs to continue to be fostered both internally and externally.
To achieve this, as recommended, it is essential that technical advice and support are provided to programme teams, investment managers and decision makers. This need not be resource intensive, and it must be able to demonstrate its own value for money. What is extremely important in this recommendation, however, is that the advice is coherent, consistent and context specific. Too often we see programmes depend on a single generalist M&E adviser who is expected to provide the full gamut of advice – covering a range of monitoring approaches, evaluation approaches, different sectors, and sometimes even different countries. Good independent advice often requires a range of people providing input on different aspects of monitoring, evaluation and learning – which is one reason Clear Horizon maintains a panel of MEL specialists: some focus on evaluation capacity building, others on conducting independent evaluations, and others on building MEL systems.
Standardising expectations and advice across aid portfolios about what constitutes good, fit-for-purpose monitoring, evaluation and learning is essential for all of us. We have been fortunate to be involved in developing different models of providing third-party embedded design, monitoring and evaluation advice. The ‘Quality and Improvement System Support’ approach provides consistent technical advice across an entire aid portfolio, as developed for Indonesia; the ‘Monitoring and Evaluation House’ in Timor-Leste, delivered in partnership with GHD, takes a neutral-broker approach to improving the use of evidence in programme performance; and the ‘Monitoring and Evaluation Technical Advisory Role’ in Myanmar places a stronger emphasis on supporting programme teams through technical and management support.
This report echoes our belief that more monitoring and evaluation is not necessarily the answer; rather, collaborating to do it better and fostering a culture of performance is ultimately what we are striving for.