Results Measurement and the DCED Standard: a commitment to move forward

We now need to start a constructive discussion on what a truly systemic Monitoring and Results Measurement (MRM) framework could look like (as Evaluation does not play a big role in the current discussions, I am adopting the expression MRM and avoiding the more common M&E). In this post, I will take up the discussions on MRM and the DCED Standard for Results Measurement from the two guest posts by Aly Miehlbradt and Daniel Ticehurst, and will draw on a parallel discussion running on the very active forum of the Market Facilitation Initiative (MaFI). I will also add my own perspective, suggesting that we need to find a new conceptual model to define causality in complex market systems. Based on that, in my next post, I will try to outline a possible new conceptual model for MRM.

We need a commitment to move forward together

I am fascinated by the level of engagement in the discussion around measuring results of market and private sector interventions. Discussions are going on both here on my blog and on MaFI. I feel, however, that the discussion is not advancing very much; it is stuck documenting the virtues and shortcomings of the DCED Standard. It is now time to move on and see how we can improve monitoring and results measurement in market systems development.

The DCED Standard is a good place to start

Reading again through Aly Miehlbradt’s post, it seems to me that the Standard in itself can cater to many of the principles outlined in the Systemic M&E Initiative. It encourages practitioners to develop a clear impact logic; it encourages managers to revisit this logic regularly and adapt it if necessary, so the logic can evolve over time; it explicitly promotes the collection of qualitative data and the assessment of results at all levels, not only at the level of the poor; it encourages capturing wider changes in the system or market; and it encourages a balanced and sensible approach to assessing causality and attribution (or, one could argue, contribution). There are indeed good examples of projects that base their MRM frameworks on the Standard and show innovative solutions to the challenges of measuring change in market systems. As some of the comments following Daniel Ticehurst’s guest post clearly show, practitioners appreciate the Standard. Jim Tomecko, for example, states in his comment that the Standard “provides us a collection of best practices which can be applied in both large and small projects and saves us the task of reinventing the wheel in every new PSD project.”

The discussions that led to the Standard, such as the 2006 ILO Annual “BDS Seminar”, were in my opinion extremely important to the field. A discussion about a better way to measure results of systemic interventions is indeed needed. The DCED Standard is, however, not the final answer. As can be seen in the current discussions, many aspects of the Standard are still debated. Two aspects feature most prominently. The first is the question of how to use impact logics. The Standard promotes results chains as the main method of designing impact logics, but results chains are not fundamentally different from logframes or other causal models, an opinion also voiced by Daniel Ticehurst in his guest post. Essentially, impact logics are a conceptual model of how we intend to achieve change in a given system. Ideally, the impact logic is developed in collaboration with local stakeholders to include their perspective. The second aspect often criticized in the Standard is the universal impact indicators.

A new perspective on causality is needed

Let me address the first point, the impact logic, and leave the discussion about the universal impact indicators for later. I think the discussion around impact logics is the more crucial one; once it is resolved, it will also inform the debate on the universal impact indicators. As the discussions during the Systemic M&E Initiative clearly showed, impact logics are appreciated by many practitioners. They are seen as a way to make explicit why we do what we are doing. They help us think through whether a planned intervention can actually lead to the intended outcomes. So why do I think that impact logics as they are currently applied are not working? There are two reasons.

Firstly, they are often not very well implemented. Daniel makes a compelling argument in saying that “many of the disadvantages of logframes had little to do with any intrinsic weakness, more a history of mis-use.” This is probably also true for impact logics in general. Experience from the field shows that many practitioners have problems formulating a cohesive results chain without huge gaps in the logic. In this sense, one might ask whether the voiced critique points less to a problem of the Standard as such and more to a lack of capacity to implement the Standard ‘correctly’, combined with donors’ still dominant incentives for quick and clearly attributable results. Jim Tanburn, the coordinator of the DCED, states in a post on MaFI that “Clearly there is a danger that the Standard encourages box-ticking, rather than serious thought. This is probably more a function of the existing incentives in the industry, than the Standard per se; the rewards go to those who report big numbers, rather than to those who approach their work in a spirit of honest enquiry, seeking to be as effective as possible.”

In my view, however, the problem runs deeper than the mis-application of the Standard. Hence, the second reason for my critical stance on the current application of impact logics is that the Standard has not really embraced how causality works in complex systems. The prevailing paradigm is to think that one event leads to another, like a billiard ball pushing another in a specific direction. The literature on complex systems research shows, however, that this might not be the case for most of the applications relevant to us (see box below). Hence, building causal chains with boxes and arrows illustrating how one change will automatically lead to another might not satisfactorily illustrate how change actually happens and propagates in complex systems (and how we can support this). It is like trying to build a two-dimensional model to describe a three-dimensional world. Although this can be helpful initially, pretty soon we should consider adding the third dimension and all the nuances that come with it, like architects who always build a three-dimensional model before they build a house. The focus on this over-simplified model of causality stems mainly from our mechanistically trained minds (I am talking about what we could call the ‘western’ mind, in contrast to the ‘eastern’ mind, which seems to have a better grasp of complex interactions).

Models of linear progression of change will never represent reality in its entirety, so the control we seem to have over the succession of change will remain an illusion, and we might miss important information that would help us achieve real change. Of course, by knowing the system well and having a lot of experience as market development practitioners, we are able to identify a number of stable and recurring cause-and-effect relationships. But even if we can possibly predict one or maybe two outcomes, we do not know how the system might change. Causalities can change quickly in complex systems. Hence, with our limited model we are partly flying blind, seeing only the blueprint we developed at the beginning of the program. Opening our eyes, however, would enable us to see reality and adapt our interventions to it.

Many may argue that the chains are a simplification of reality, a simplification of the complexity of market systems, and that such simplifications are the only way to manage the complexity of reality. There are two sorts of simplifications, though: those built on how our mechanistically trained minds imagine reality, and those built on a real picture of the complex interconnections of reality (for an example of the latter, have a look at this very short TED Talk by Eric Berlow). I know that simplifications are needed, but they should not contradict reality; they should just make it more graspable. Simplifications need to be on the right side of complexity. Results chains and logframes, in my view, often do contradict reality by providing us with an illusion of control over causality.

In the next post I will try to sketch out a proposal for an alternative approach to MRM that is both compatible with the appreciated method of impact logics and open to new insights on causality in complex systems.

Causality in Complex Systems

Alicia Juarrero writes in this paper that “when dealing with hierarchical systems that are self-referential and display inter-level effects, the notion of causality must be reconceptualized in terms other than that of the billiard ball, collision conception that is the legacy of mechanism.” She continues, arguing that since in complex systems the causation between system-level influences and individual behavior “is clearly not the gocart-like collisions of a mechanical universe, the causal mechanism at work between levels of hierarchical organization can better be understood as the operations of constraint.” These constraints can be context-sensitive, i.e. they co-evolve and change with the behavior of the individual actors. Context-sensitive constraints can, for example, be informal social rules that guide the behavior of businesses, or gender relations that do not allow women to travel to the marketplace or to keep the money they earn there. It must therefore be a program’s aim to influence the landscape of constraints and individual behaviors in a way that has an effect on the entire system, rendering it more favorable to achieving our development goals.

To use a metaphor introduced by Dave Snowden: a program modulates the influence of the constraints to shape the system, similar to magnets influencing a pile of iron filings (read Dave’s corresponding blog post). Because of the modulations that arise from the inherent dynamism of and co-evolution in the system (i.e. from magnets that someone else controls or that change at random), linear causation between our modulation and a particular outcome cannot be established. Hence, Snowden recommends that “one of the things you have to start to show donors is how their funding had a positive impact on the area but you can’t show actual attribution because it’s dependent on other things that are going on at the same time.” (quoted from the Systemic M&E Synthesis paper)
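To make this more concrete, below is a minimal toy simulation. It is my own illustrative sketch in Python, not part of the Standard nor of Juarrero’s or Snowden’s work, and all names and numbers in it are invented for illustration. A ‘program’ steadily modulates one constraint while a second, exogenous constraint drifts at random and co-evolves with the actors’ behavior. The same intervention then produces different system-level outcomes from run to run, which is exactly why a linear causal line from intervention to outcome cannot be drawn.

    import random

    def run_system(program_boost, steps=50, n_actors=100, seed=None):
        # Two context-sensitive constraints shape each actor's behaviour.
        # The program modulates one; the other drifts at random, standing
        # in for the "magnets that someone else controls".
        rng = random.Random(seed)
        program_constraint = 0.1   # e.g. access to a service the program supports
        social_constraint = 0.5    # e.g. informal social rules, outside our control
        adopters = 0
        for _ in range(steps):
            program_constraint = min(1.0, program_constraint + program_boost)
            social_constraint = min(1.0, max(0.0, social_constraint + rng.uniform(-0.1, 0.1)))
            # Constraints interact and co-evolve with behaviour: adoption so far
            # feeds back into the probability of further adoption.
            p_adopt = min(1.0, program_constraint * social_constraint * (1 + adopters / n_actors))
            adopters = sum(1 for _ in range(n_actors) if rng.random() < p_adopt)
        return adopters

    # The same intervention, run five times: outcomes diverge because of the
    # co-evolving constraint the program does not control.
    for run in range(5):
        print("run", run, "adopters:", run_system(program_boost=0.02, seed=run))

Averaged over many such runs, the intervention clearly contributes to the outcome; yet no single run supports a clean attribution claim, which mirrors Snowden’s point about showing contribution rather than attribution.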

2 thoughts on “Results Measurement and the DCED Standard: a commitment to move forward”

  1. Bart Doorneweert

    Hi Marcus,

    Great post, and I’m looking forward to the next one where you address causality. My experience is that we need some guidance to intuitively sense out causality. I see a lot of overstated impact logics, skipping through a few degrees of causality: “my words on paper -arrow-> game changing effect”. Attempting to separate between outputs, immediate and intermediate outcomes, impact, ultimate impact and such is pseudo-guidance. A tool that challenges the robustness of a causality logic is definitely something missing from the project designer’s toolbox.

    In addition to your discussion about the DCED framework, I would also like to add that logics and measurement should be geared to support a process of validating on the fly. All PSD projects on day 1 are faith-based initiatives that require experiments to figure out what works and what doesn’t. In the end, a PSD project should contribute to validating elements of new business models, or to creating new business models as a whole, in my opinion. The process of M&E should thus be geared to validation, and that means tailoring indicators to the specific problem that is being addressed with specific targeted segments of users. Abstracting to wider development objectives, like income and job creation, as the ultimate purpose of measurement will NOT improve business, which by its nature is an entity that is mainly good at economizing on addressing specifics. In other words, we should first validate the business model that will deliver impact, before we claim in our projects that x amount of impact will be delivered at the end.

    1. Marcus Jenal

      Hi Bart

      Thanks for your comment. I very much agree with what you say. The point you make in the second part about validation is crucial and has not been explicitly tackled very much in the debate so far. It goes in the direction of the first issue we identified in the Systemic M&E Synthesis paper: an excessive focus on our effects on the poor.

      Nevertheless, people want to see the ultimate impact at the level of the beneficiaries (the poor) and want to feel that our interventions are (at least partly) the cause. Can we deliver more than an illusion of causality and attribution (or contribution) there? For some a rhetorical question, I know.

