This is a guest post by Aly Miehlbradt. Aly is sharing her thoughts and experiences on monitoring and results measurement in market systems development projects. She highlights the Donor Committee for Enterprise Development (DCED) Standard for Results Measurement and its inception as a bottom-up process and draws parallels between the Standard, her own experiences, and the recently published Synthesis Paper of the Systemic M&E Initiative.
In one of Marcus’s recent blog posts, he cites the SEEP Value Initiative paper, “Monitoring and Results Measurement in Value Chain Development: 10 Lessons from Experience” (download the paper here), as a good example of a bottom-up perspective that focuses on making results measurement more meaningful for programme managers and staff. Indeed, the SEEP Value Initiative was a great learning experience, and it is just one example of significant, on-going work among practitioners and donors aimed at improving monitoring and results measurement (MRM) to make it more useful and meaningful. The DCED Results Measurement Standard draws on and embodies much of this work, and also promotes it. In fact, the lessons in MRM that emerged from the SEEP Value Initiative came from applying the principles in the DCED Results Measurement Standard.
Like the SEEP work, the development of the DCED Standard was a bottom-up initiative by programme managers and other practitioners with donors, and continues to be informed by practitioners and donors in private sector development. As a consequence, the Standard is very compatible with and, in fact, explicitly includes many of the issues raised in another bottom-up initiative, the SEEP and MaFI Systemic M&E Initiative.
It’s useful to look back a bit on this. The 2006 Reader for the ILO Annual “BDS Seminar”, titled “Implementing Sustainable Private Sector Development: Striving for Tangible Results for the Poor” (get more information here), focused on systemic market development and put forth a vision for a well-functioning market system (Page 16) that highlights many of the same issues currently being discussed in MaFI and the Systemic M&E Initiative. This vision drew from programmes and promoted further work on systemic market development. At the 2008 Seminar, a practitioner proposed the idea of a standard for results measurement. Now, in 2013, we have started to build a rich repository of programmes using a systemic market development approach, as well as MRM systems to support them.
While the DCED Results Measurement Standard is applicable to various types of PSD [Private Sector Development] programmes, many aspects appropriate to systemic market development are explicitly included in the Standard because of the contributing practitioners from the market development community. Thus, the Standard provides a framework and practical guidance for programmes to operationalize credible and meaningful MRM systems. It encourages MRM systems that build on thorough and regular analysis of the context, assess systemic changes and the extent to which they benefit poor people, and promote the use of that information to innovate and improve programme strategies and interventions.
I’ll give some examples:
- The DCED Results Measurement Standard encourages the use of results chains to help programme managers and staff “unpack” the concept and strategies of their programmes, building on their understanding of dynamic systems. The Systemic M&E Synthesis paper explains well the benefits that many practitioners find in using results chains to help them conceptualize, share and discuss how they aim to influence systems to include and benefit poor people. Most of the results chains I have seen are not strictly linear. They show, in simplified form, various relationships and how programme managers and staff expect different factors to work together to promote inclusive, pro-poor growth. Programme managers and staff are also using multiple layers of results chains, at the level of components, sectors and interventions, which helps them both to see the broad picture of change and to delve into the details. I have seen this approach help programme managers and staff to better explore complex systems and think through how programme activities can positively influence those systems.
- Tools that aim to help managers in dynamic contexts must also be dynamic. The Systemic M&E Synthesis notes that results chains “need to evolve as the understanding of the project evolves.” The DCED Results Measurement Standard also includes this point: “The results chains are regularly reviewed to reflect changes in the programme strategy, external players and the programme’s circumstances.” Inclusion of this point in the Standard encourages programmes to regularly review their (imperfect) understanding of dynamic systems, to reflect on how the systems are changing and how the programme has influenced them, and to learn from that reflection and use it to revise their thinking and actions. In programmes, I have seen this essential feedback loop help managers and staff to change their strategies and tactics to better influence systems to benefit poor people.
- Both the DCED Standard and the Systemic M&E Initiative emphasize the importance of gathering qualitative information. The DCED Standard includes an explicit point on gathering qualitative information on “various levels of the results chains.” I have found that gathering qualitative information is fundamental to enabling programme managers and staff to improve their understanding of dynamic systems and the extent to which they are influencing them.
- The Systemic M&E Synthesis encourages programmes to gather information not just on “impacts on poor people” but also on the deeper changes in systems that can lead to widespread and sustainable inclusion of and benefits for poor people over the long term. The DCED Standard explicitly promotes assessing results at all levels, not just impact on poor people. Assessing change at all levels is critical to improving programmes. In the past, programmes were often encouraged to only assess changes up to the output or outcome level. However, poor people are a part of the systems we aim to influence and, in fact, benefiting poor people is the rationale for most aid. It is only through assessing the extent to which poor people are or are not benefiting that we can understand if the system is changing in desired ways. I have been encouraged by programmes’ experiences with assessing changes at all levels including impacts on poor people. Programme managers and staff report that this depth of understanding helps them to design better interventions and modify those interventions to better influence systems and maximize benefits for poor people.
- The vision for a well-functioning market system articulated in the 2006 ILO Reader highlights the importance of resiliency and responsiveness. This point is as critical now as it was then. The Systemic M&E Synthesis encourages practitioners to think about sustainability as the adaptability and resilience of systems in ways that continue to include and benefit poor people. The DCED Standard explicitly includes assessing sustainability and encourages programmes to project their expected results to or beyond the end of interventions. Using these guidelines, programme managers and staff are assessing such issues as the changing incentives of various system stakeholders; market stakeholders’ intentions and actions to continue or adapt new business models; and changes and trends in relationships among market stakeholders. Programmes are also actively tracking changes in systems and using this information to project expected benefits for poor people beyond the end of the programme. This longer-term thinking helps programme managers to prioritize changes that will benefit poor people well into the future.
- The Systemic M&E Synthesis states that current M&E systems often focus exclusively or mostly on “direct” results and do not encourage understanding and monitoring of broader systems. However, the DCED Results Measurement Standard explicitly encourages “Capturing Wider Changes in the System or Market” as well as including broader systemic change in results chains and in reporting. I have found that programme managers who start to assess systemic change in a simple way adapt and develop tools to understand those changes more effectively over time.
- The Systemic M&E Synthesis highlights the difficulty of understanding causality or attribution in systemic programmes. The DCED Results Measurement Standard does explicitly include estimating attributable change, but it encourages a balanced and sensible approach to assessing causality. Too often in the past, I have seen programmes take credit for changes that they most likely had nothing to do with. At the same time, I have also seen programmes that probably did positively influence systems but did not assess whether and how they did so. Assessing the extent to which and how programmes influence systems, and then how those systems affect poor people, must be part of our thinking. A balanced and sensible approach to assessing causality encourages people – programme managers and staff, other practitioners and donors – to ask why things changed. Asking “why” is essential to improving our understanding.
- Owen Barder, a well-known proponent of complexity theory from the Center for Global Development, argues that recognizing complexity provides a powerful reason to improve results-based management. Both the DCED Results Measurement Standard and the Systemic M&E Initiative emphasize the importance of integrating results measurement into programme management. This is the crux of what makes MRM useful and meaningful for practitioners and donors – using our learning to maximize poverty reduction. The DCED Results Measurement Standard includes four points specifically on the involvement of staff and management in the results measurement system and the use of MRM to improve programmes. For example, one point is: “The programme has a clear system for results measurement that ensures that findings are used in programme management and decision-making.” Over the last several years, I have seen the development and implementation of concrete and practical systems that regularly feed information on results into decision-making. I have also seen programme managers and staff actively cultivating an organizational culture that supports learning and using lessons to improve programmes. These systems and this culture enable programmes to operationalize results-based management.
The practitioners involved in the SEEP Value Initiative reported that the MRM systems they established, based on the framework of the DCED Results Measurement Standard, were useful both for management and for articulating their results to stakeholders. Their MRM systems provided the essential feedback that enabled them to assess innovative interventions and either improve them if they showed promise or stop them if they did not. The DCED Results Measurement Standard is not a silver bullet that will enable programmes to effortlessly achieve effective MRM. However, it is a practical framework that embodies key basics and encourages programmes and donors to invest in MRM systems that are embedded in and support effective programme management. This approach can help programme managers and donors to improve programme design and implementation as well as to communicate their results (at various levels – not just impacts on poor people) more clearly. The DCED Results Measurement Standard also helps strike the balance mentioned in the conclusion to the Systemic M&E Synthesis paper between “accountability on one side and flexibility and quick response on the other.”
More thinking, dialogue, testing and improving on how to assess changes in systems and how to use that information effectively to improve programmes is certainly needed. Let’s ensure that this work builds on the successes and lessons already achieved.
Alexandra Miehlbradt is director of Miehlbradt Consulting Ltd. and has twenty years of experience in pro-poor enterprise and market development. Aly has provided technical assistance to a wide range of organizations in market research and program design, market facilitation and results assessment. She also has extensive experience as a trainer and facilitator. Over the last five years, Aly has been providing technical leadership in the on-going global effort to improve monitoring and results measurement in private sector development programmes. She has helped a number of organisations develop or improve internal monitoring and results measurement systems and is a lead technical consultant to the Donor Committee for Enterprise Development on the DCED Results Measurement Standard.