We now need to start a constructive discussion on what a truly systemic Monitoring and Results Measurement (MRM) framework could look like (since evaluation does not play a big role in the current discussions, I am adopting the expression MRM and avoiding M&E). In this post, I will take up the discussions on MRM and the DCED Standard for Results Measurement from the two guest posts by Aly Miehlbradt and Daniel Ticehurst, and will add from a discussion running in parallel on the very active forum of the Market Facilitation Initiative (MaFI). I will also add my own perspective, suggesting that we need to find a new conceptual model to define causality in complex market systems. Based on that, in my next post, I will try to outline a possible new conceptual model for MRM.
We need a commitment to move forward together
I am fascinated by the level of engagement in the discussion around measuring results of market and private sector interventions. Discussions are going on both here on my blog and on MaFI. I feel, however, that the discussion is not advancing very much; it is stuck in documenting the virtues and shortcomings of the DCED Standard. Now it is time to move on and see how we can improve monitoring and results measurement in market systems development.
The DCED Standard is a good place to start
Reading again through Aly Miehlbradt’s post, it seems to me that the Standard in itself can cater to many of the principles outlined in the Systemic M&E Initiative. It encourages practitioners to develop a clear impact logic; it encourages managers to revisit this logic regularly and adapt it if necessary, so the logic can evolve over time; it explicitly promotes the collection of qualitative data and the assessment of results at all levels, not only at the level of the poor; it encourages capturing wider changes in the system or market; and it encourages a balanced and sensible approach to assessing causality and attribution (or, one could argue, contribution). There are indeed good examples of projects that base their MRM frameworks on the Standard and show innovative solutions to the challenges of measuring change in market systems. As can clearly be seen from some of the comments following Daniel Ticehurst’s guest post, practitioners appreciate the Standard. In his comment, Jim Tomecko for example states that the Standard “provides us a collection of best practices which can be applied in both large and small projects and saves us the task of reinventing the wheel in every new PSD project.”
The discussions that led to the Standard, such as the 2006 ILO Annual “BDS Seminar”, were in my opinion extremely important for the field. A discussion about a better way to measure results of systemic interventions is indeed needed. The DCED Standard is, however, not the final answer. As can be seen in the current discussions, many aspects of the Standard are still debated. Two aspects feature most prominently in these discussions. The first is the question of how to use impact logics. The Standard promotes results chains as its main method for designing impact logics, but results chains are not fundamentally different from logframes or other causal models, an opinion also voiced by Daniel Ticehurst in his guest post. In essence, an impact logic is a conceptual model of how we intend to achieve change in a given system. Ideally, it is developed in collaboration with local stakeholders to include their perspective. The second aspect of the Standard that is often criticized is its universal impact indicators.
A new perspective on causality is needed
Let me address the first point, the impact logic, and leave the discussion about the universal impact indicators for later. I think the discussion around impact logics is more crucial, and once it is resolved, it will also address much of the concern about the universal impact indicators. As the discussions during the Systemic M&E Initiative clearly showed, impact logics are appreciated by many practitioners. They are seen as a way to make explicit why we do what we do. They help us think through whether an intended intervention can actually lead to the intended outcomes. So why do I think that impact logics as they are currently applied are not working? There are two reasons.
Firstly, they are often not implemented very well. Daniel makes a compelling argument in saying that “many of the disadvantages of logframes had little to do with any intrinsic weakness, more a history of mis-use.” This is probably also true for impact logics in general. Experiences from the field show that many practitioners have problems formulating a cohesive results chain without huge gaps in the logic. In this sense, one might ask whether the voiced critique points less to a problem with the Standard as such and more to a lack of capacity to implement the Standard ‘correctly’, combined with the still dominant donor incentive structure that rewards quick and clearly attributable results. Jim Tanburn, the coordinator of the DCED, states in a post on MaFI that “Clearly there is a danger that the Standard encourages box-ticking, rather than serious thought. This is probably more a function of the existing incentives in the industry, than the Standard per se; the rewards go to those who report big numbers, rather than to those who approach their work in a spirit of honest enquiry, seeking to be as effective as possible.”
In my view, however, the problem runs deeper than the mis-application of the Standard. Hence, the second reason for my critical stance on the current application of impact logics is that the Standard has not really embraced how causality works in complex systems. The prevailing paradigm is to think that one event leads to another, like one billiard ball pushing another in a specific direction. If we look into the literature on complex systems research, however, we can see that this might not be the case for most of the applications relevant to us (see box below). Hence, building causal chains of boxes and arrows that illustrate how one change will automatically lead to another might not satisfactorily capture how change actually happens and proliferates in complex systems (and how we can support it). It is like trying to build a two-dimensional model to describe a three-dimensional world. Although this can be helpful initially, pretty soon we should consider adding the third dimension and all the nuances that come with it, like architects who always build a three-dimensional model before they build a house. The focus on this over-simplified model of causality mainly stems from our mechanistically trained minds (I am talking about what we could call the ‘western’ mind, in contrast to the ‘eastern’ mind, which seems to have a better grasp of complex interactions).
Models of linear progression of change will never represent reality in its entirety, so the control we seem to have over the succession of changes will remain an illusion, and we might miss important information that would help us achieve real change. Of course, by knowing the system well and having a lot of experience as market development practitioners, we are able to identify a number of stable and recurring cause-and-effect relationships. But even if we can possibly predict one or maybe two outcomes, we don’t know how the system itself might change; causalities can shift quickly in complex systems. Hence, with our limited model we are partly flying blind, seeing only the blueprint we developed at the beginning of the program. Opening our eyes, however, would enable us to see reality and adapt our interventions to it.
Many may argue that the chains are a simplification of reality, a simplification of the complexity in market systems, and that these simplifications are the only way to manage that complexity. There are two sorts of simplifications, though: those built on how our mechanistically trained mind imagines reality, and those built on a real picture of the complex interconnections of reality (for an example of the latter, have a look at this very short TED Talk by Eric Berlow). I know that simplifications are needed, but they should not contradict reality; they should just make it more graspable. Simplifications need to be on the right side of complexity. In the case of results chains or logframes, in my view they often do contradict reality by giving us an illusion of control over causality.
In the next post I will try to sketch out a proposal for an alternative approach to MRM that is compatible with the appreciated method of impact logics but also open to new insights on causality in complex systems.