Refining the Complexity-Aware Theory of Change

When I wrote my last post about experimenting with new structures for a complexity-aware Theory of Change (ToC) in Myanmar, I had a few elements in place, but still some open questions. Going further back to an earlier post, I was clear that differentiating between clear causal links for complicated issues and unpredictable causalities for complex ones is critical. I have been thinking about that a lot, and last week, while teaching a session on monitoring in complex contexts, I think I found the final piece of the puzzle. Continue reading

Monitoring and Results Measurement: ideas for a new conceptual framework

In this post I want to share with you an idea for a new conceptual framework for monitoring and results measurement (MRM) in market systems development projects. To manage your expectations: I will not present a finished new framework, but a model I have been pondering for a while. The model is still at an early stage, and it would be great to harness your feedback to improve it further. Indeed, what is presented here is based on everything I have learned in recent years from the large number of practitioners who contributed to the discussion on Systemic M&E, from my work in the field, and particularly also from the guest contributions here on my blog by Aly Miehlbradt and Daniel Ticehurst and the intense discussions on MaFI. The ideas build strongly on the learning from the Systemic M&E Initiative and also apply the seven principles of Systemic M&E, although I do not make this explicit. Continue reading

Results Measurement and the DCED Standard: a commitment to move forward

We now need to start a constructive discussion on what a truly systemic Monitoring and Results Measurement (MRM) framework could look like (since evaluation does not play a big role in the current discussions, I am adopting the term MRM and avoiding M&E). In this post, I will take up the discussions on MRM and the DCED Standard for Results Measurement from the two guest posts by Aly Miehlbradt and Daniel Ticehurst, and add from a discussion that is running in parallel on the very active forum of the Market Facilitation Initiative (MaFI). I will also add my own perspective, suggesting that we need to find a new conceptual model to define causality in complex market systems. Based on that, in my next post, I will try to outline a possible new conceptual model for MRM. Continue reading

Guest Post: Daniel Ticehurst with a critical reply on the DCED Standard

After Daniel Ticehurst submitted a long comment in reply to Aly Miehlbradt’s post, I persuaded him to turn it into a guest post of its own. Daniel’s perspective on the DCED Standard nicely contrasts with the one put forward by Aly, and I invite you all to contribute your own experiences to the discussion. This was not originally planned as a debate with multiple guest posts, but we all adapt to changing circumstances, right?

Dear Marcus and Aly, many thanks for the interesting blog posts on monitoring and results measurement, the DCED Standard, and what it says in relation to the recent Synthesis Paper on monitoring and measuring changes in market systems.

Continue reading

Guest Post: Aly Miehlbradt on the DCED Standard and Systemic M&E

This is a guest post by Aly Miehlbradt. Aly is sharing her thoughts and experiences on monitoring and results measurement in market systems development projects. She highlights the Donor Committee for Enterprise Development (DCED) Standard for Results Measurement and its inception as a bottom-up process, and draws parallels between the Standard, her own experiences, and the recently published Synthesis Paper of the Systemic M&E Initiative.

In one of Marcus’s recent blog posts, he cites the SEEP Value Initiative paper, “Monitoring and Results Measurement in Value Chain Development: 10 Lessons from Experience” (download the paper here), as a good example of a bottom-up perspective that focuses on making results measurement more meaningful for programme managers and staff. Indeed, the SEEP Value Initiative was a great learning experience, and it is just one example of significant and ongoing work among practitioners and donors aimed at improving monitoring and results measurement (MRM) to make it more useful and meaningful. The DCED Results Measurement Standard draws on and embodies much of this work and also promotes it. In fact, the lessons in MRM that emerged from the SEEP Value Initiative came from applying the principles in the DCED Results Measurement Standard.

Continue reading

Synthesis paper out now: Monitoring and measuring change in market systems

I am really happy to announce the publication of the Synthesis Paper of the so-called ‘Systemic M&E’ initiative. The paper synthesizes conversations that started on MaFI in June 2010 and a series of online and in-person discussions that took place in the second half of 2012. It captures the voices of practitioners, academics, donors and entrepreneurs who are trying to find better ways to monitor and evaluate the influence of development projects on market systems and to learn more, better and faster from their interventions. Continue reading

Spotting ‘emerging patterns’ to report on changes

In a training on evaluating projects that I attended a while ago, a representative of the Swiss charity HEKS presented their results measurement (RM) system. The presentation immediately caught my attention, since HEKS is using principles of complexity theory as the basis for its RM framework. Based on this rather experimental framework, the organization published a first ‘effectiveness report’ in March 2011. I want to present some of the interesting features of the RM system, based on that report.

When building its RM framework, HEKS acknowledged that development takes place in complex and dynamic systems, with the consequence that the behavior of such systems is largely unpredictable and, thus, the effects of interventions are hard to predict as well.

This challenging perspective implies a different understanding of cause and effect. Connected to their environment, living systems do not react to a single chain of command, but to a web of influences.

As a consequence, HEKS does not base its projects on rigid impact logics and impact chains, but is conscious that

HEKS cannot always objectively trace the effects of its actions, but can make its intentions, input and observations transparent.

HEKS’ approach therefore focuses on the changes observed and experienced by the different stakeholders involved at the various levels of its projects.

The focus is more on the significance of such changes for the people who experience them than on their quantification. HEKS thereby takes a path that departs from strict measurement and hard data collection. Its aim is to grasp and understand the changes in the purpose, identity and dynamics that hold together and drive the systems it gets involved in – rather than to measure their ever-changing dimensions.

Accordingly, HEKS’ method is to adopt a bird’s-eye view, look for ‘emerging patterns’ and try to interpret them. Qualitative data is collected on three levels – the individual, the project and the programme level – through methods like ‘Most Significant Change’ stories, monthly newsletters and annual reports focusing on the observations of staff at different levels, as well as a two-day compilation workshop.

Nevertheless, HEKS defined ten key indicators that are collected in all countries it is active in, for example the number of beneficiaries, income increase and yield increase.
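
To make this two-strand setup more tangible, here is a minimal sketch of how such an RM system could be represented in code. This is purely my own illustration with hypothetical field names and made-up values – it is not HEKS’ actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class Observation:
        """One qualitative observation, e.g. a Most Significant Change story."""
        level: str       # "individual", "project" or "programme"
        source: str      # e.g. "MSC story", "monthly newsletter", "annual report"
        narrative: str   # the change as described by the observer
        patterns: list[str] = field(default_factory=list)  # emerging patterns it supports

    # The ten universal key indicators (only the three named in the post are
    # shown; the values are invented for illustration).
    key_indicators = {
        "number_of_beneficiaries": 12_000,
        "income_increase_pct": 15.0,
        "yield_increase_pct": 20.0,
    }

    story = Observation(
        level="individual",
        source="MSC story",
        narrative="A women's group started a savings scheme on its own initiative.",
        patterns=["Women become a driving force in community development"],
    )

The point of such a structure is that the qualitative strand is interpreted – observations are tagged with the patterns they seem to support – while the quantitative strand stays deliberately small and comparable across countries.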

For me, this is a very interesting approach, and it resonates very well with the discussion on ‘experiential knowledge and staff observation’ in the GROOVE network that I mentioned in my last post. The staff observations, too, have the implicit goal of grasping emerging patterns of positive change in the system the project tries to influence, in order to amplify that change.

Owen Barder, about whose presentation on evolution and development I wrote in my last post, asks for more rigorous evaluation of project impacts in order to see what works and what doesn’t. Is the RM framework proposed by HEKS rigorous enough to meet Owen’s demand? After all, HEKS’ approach does not use results chains at all, although they are one of the mainstays of results measurement – at least according to the DCED Standard for Results Measurement. Are the ten universal indicators enough? And what about the attribution of the changes and emerging patterns?

When I read through the four patterns described in the HEKS effectiveness report, I see that they are very much focused on the community level – naturally, since this is where the focus of the interventions lies. Here is an example:

Pattern 1: Sustainable development starts with the new ways in which people look at themselves. Women especially become a driving force in the development of their communities.

Or another one:

Pattern 2: People who are aware of their rights become players in their own development. They launch their initiatives beyond the scope of HEKS’ projects.

The question that immediately pops up in my mind is: what are the consequences of the projects’ actions for the wider system, beyond the community? What ripple effects do the successful projects have throughout the wider system, e.g. in the market system or the policy environment? Or, even more fundamentally: can we achieve changes in the wider system by focusing on the community level? What additional interventions are needed?

There are still many open questions, but for me, HEKS is taking a big and courageous step in the right direction.

Using principles from evolution in development

Recently, I listened to a presentation by Owen Barder titled “What can development policy learn from evolution”. I want to briefly summarize my main insights from the presentation and put down some thoughts.

Here are some insights from his presentation:

  • Experience tells us that a simplistic approach based on pre-canned policy recommendations derived from technical analyses and regressions simply doesn’t work. Reality is much more complex.
  • So-called "almost impossible problems" or "wicked problems", i.e. the problems we face in complex systems, are solved through evolution, not design.
  • For evolution to work, it requires a process of variation and selection (see the sketch after this list).
  • In development work today, there is a lot of proliferation without diversity, and certainly not enough selection.
  • What is especially missing are feedback loops to establish what works and replicate it, while scaling down the things that don’t work.
  • One especially important feedback signal is the needs, preferences and experiences of the actual beneficiaries. Because too little effort is spent on rigorous impact evaluation and too much on process and activity evaluations, this feedback loop often doesn’t work. The direct feedback from the citizens themselves should be taken into account better: “People care deeply about whether or not they get the services they should be getting.”
  • The establishment of better and more effective feedback loops is a crucial ingredient for improving programme effectiveness: “We have to be better in finding out what is working and what is not working.”
  • In evolutionary terms: we should not impose new designs, but rather try to build better feedback loops to spur selection and amplification.
  • As a direct consequence, we also need to acknowledge the things that don’t work, i.e. failures, and adopt and adapt what is working. At the international policy level, the necessary mechanisms to replicate successes or kill off failures are missing.
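
To make the variation-selection-feedback logic concrete, here is a toy sketch in Python. Everything in it is hypothetical: the ‘true effect’ of an intervention is a hidden number, and ‘feedback’ is just a noisy reading of it – the point is only to show how feedback-driven selection plus variation gradually amplifies what works.

    import random

    random.seed(42)

    def feedback(true_effect, noise=0.5):
        """A noisy signal of how well an intervention works,
        e.g. beneficiary feedback or staff observations."""
        return true_effect + random.gauss(0, noise)

    # Variation: start with a diverse portfolio of candidate interventions.
    portfolio = [random.uniform(0, 1) for _ in range(10)]  # hidden true effects

    for generation in range(5):
        # Feedback loop: observe a noisy measure of each intervention.
        observed = sorted(((feedback(e), e) for e in portfolio), reverse=True)
        # Selection: keep the half that appears to work best, drop the rest.
        survivors = [e for _, e in observed[:5]]
        # Amplification with variation: replicate survivors with small mutations.
        children = [max(0.0, s + random.gauss(0, 0.1)) for s in survivors]
        portfolio = survivors + children
        print(f"generation {generation}: mean true effect "
              f"{sum(portfolio) / len(portfolio):.2f}")

If you increase the noise parameter – a weak feedback loop – selection starts picking the wrong survivors and the portfolio stops improving, which is exactly Barder’s point about missing feedback loops.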

These insights remind me a lot of a discussion I was recently involved in with a group of international development organizations that work together in a network called GROOVE. The discussion was about ‘integrating experiential knowledge and staff observations in value chain monitoring and evaluation’. During the webinar in which it was held, two important insights were voiced that correspond with Owen’s points above:

  1. Staff observations can add a lot of value to M&E systems in terms of what works in the field and what doesn’t.
  2. There is a need for a culture of acknowledging and accepting failures in order to focus on successful interventions.

Now, what does this mean if we have – for example – to design a new project? Firstly, I think it is important that the project has an inception period during which a diversity of interventions can be tested. But we also need an effective mechanism to assess what impact these interventions have – if any. Here we run into the problem of time delays: often, the impact of an intervention is delayed and might become apparent too late, i.e. only after the inception period. Especially when we base our M&E on hard impact data alone, we might not be in a position to say which intervention was successful and which wasn’t. Therefore, we also need to rely on staff observations and the perceptions of the target beneficiaries, as the sketch below illustrates. Again, a very good understanding of the system is necessary in order to judge the changes that happen in it.
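
To illustrate the time-delay problem, here is a small, equally hypothetical sketch in the same spirit as the one above: hard impact data yields nothing during a six-month inception period, while staff observations provide an immediate but much noisier proxy signal. All numbers are made up.

    import random

    random.seed(1)

    INCEPTION_MONTHS = 6
    IMPACT_DELAY = 9  # months until hard impact data becomes measurable

    def hard_impact_data(true_effect, month):
        """Hard data carries no signal before the impact delay has passed."""
        if month < IMPACT_DELAY:
            return None  # too early: nothing measurable yet
        return true_effect + random.gauss(0, 0.1)

    def staff_observation(true_effect):
        """Immediate but noisy proxy, e.g. field staff perceptions."""
        return true_effect + random.gauss(0, 0.4)

    interventions = {"A": 0.8, "B": 0.2}  # hidden true effects

    for month in range(INCEPTION_MONTHS):
        for name, effect in interventions.items():
            hard = hard_impact_data(effect, month)
            proxy = staff_observation(effect)
            print(f"month {month}, intervention {name}: "
                  f"hard data = {hard}, staff observation = {proxy:.2f}")

    # Within the inception period, hard data is always None; only the noisy
    # staff observations can differentiate A from B in time to select.

Noisy proxies only help, of course, if we understand the system well enough to interpret them – hence the need for a very good understanding of the system.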

As Eric Beinhocker describes in his book “The Origin of Wealth”, evolution is a very powerful force in complex systems. Beinhocker treats the economy as a complex system when he writes: “We may not predict or direct economic evolution but we can design our institutions to be better or worse evolvers”. I think the same goes for our development systems. We cannot predict or direct evolution in developing countries, but we can support the poor to become better evolvers. This also has strong implications for our view on sustainability, but with that I am already sliding into the topic of another post.