A bottom-up perspective on results measurement

Thanks to my engagement in the ‘Systemic M&E’ initiative of the SEEP Network (where M&E stands for monitoring and evaluation, though we have really been looking mainly into monitoring), I have been discussing monitoring and results measurement quite a bit with practitioners, and with it the question of how to make monitoring systems more systemic. For me this bottom-up perspective is extremely revealing: it shows how conscious these practitioners are of the complexities of the systems they work in, and how intuitively they come up with solutions that are in line with what we would propose based on complexity theory and systems thinking. Nevertheless, practitioners are often still strongly entangled in the overly formalistic and data-driven mindset of the results agenda. This mindset is based on a mechanistic view of systems with clear cause-and-effect relationships and a bias for ‘objectively’ obtained data that is stripped of its context and thereby rendered largely meaningless for improving implementation.

We will soon be releasing a paper that synthesizes four months of various discussion events around systemic monitoring and results measurement. But as this paper is not at hand yet, I will use another great publication, also published by the SEEP Network, to illustrate my point about the quality of bottom-up proposals for improved monitoring systems. The paper is titled “Monitoring and Results Measurement in Value Chain Development: 10 Lessons from Experience”.

As the title suggests, the paper presents 10 lessons from the experience of SEEP’s Value Initiative, which provides a 3-year grant and technical assistance to four demonstration programs in Kenya, India, Indonesia and Jamaica to advance and build capacity in urban value chain development. This also included the installation of strong monitoring and results measurement (MRM) systems. In the paper, the authors transform the 10 lessons into 10 tips for value chain practitioners. All 10 lessons essentially aim to make the MRM system more meaningful for project management and staff, and at the same time to involve all staff in MRM activities. The focus of the MRM system is clearly on learning and improving the project, and much less on generating data to satisfy donor reporting requirements. This is a huge step forward from the formalistic and rigid type of monitoring and evaluation that many donors still impose on their projects.

To bring MRM/M&E practice a step further still, we can use insights from the complexity sciences. I have a particularly ambivalent relationship with results chains and with the type of indicators used to define targets, both of which are part of what is currently praised as ‘good practice’ in MRM and are, for example, an integral part of the Donor Committee for Enterprise Development (DCED) Standard for Results Measurement.

What are good targets, what are good measures?

In order to measure results, projects often use quantitative indicators, especially at the impact level, i.e. the level where the ultimate impact is intended, which in most cases is the level of the poor. The DCED Standard, for example, defines three universal impact indicators: income increase, jobs created, and people reached. From systems thinking we know that setting or changing specific goals for a system is a powerful means of influencing it. Donella Meadows points out that “If the goal is defined badly, if it doesn’t measure what it’s supposed to measure, if it doesn’t reflect the real welfare of the system, then the system can’t possibly produce a desirable result.” (Meadows 2008, p. 138) Meadows uses the following example to illustrate her point: “If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending.” (ibid) Further, she cautions: “Be especially careful not to confuse effort with result or you will end up with a system that is producing effort, not result.” (ibid, p. 140) Dave Snowden often makes the same point by quoting Professor Marilyn Strathern’s variation of Charles Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.”

This means that our targets should not be confused with our measures, though they often are. Many a project’s target is increased incomes, a number of jobs created, and so on. These are measures that give us some information about the state of poverty. But they should not be a project’s targets. A project’s target should be to reduce poverty. In the current ‘good practice’ of results measurement, targets are defined on the basis of indicators, i.e. measures. That is a fundamental flaw in the way measurement frameworks are currently built, and it carries over into how projects are planned.
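
To make the distinction concrete, here is a deliberately artificial sketch in Python (all functions, names and numbers are my own invention, not from the paper or the DCED Standard). It contrasts a project that merely observes an income indicator with one that targets it, and shows how targeting the measure can inflate the indicator while real welfare falls, exactly the effort-instead-of-result trap Meadows warns about.

```python
# Toy illustration of Goodhart's law: once a measure becomes the
# target, optimizing the measure stops optimizing the real goal.
# All coefficients are invented for illustration only.
import random

random.seed(42)

def true_welfare(real_effort):
    # Hypothetical 'real' poverty reduction produced by genuine effort.
    return 10 * real_effort

def income_indicator(real_effort, gaming_effort):
    # The measured indicator picks up real change, but it can also be
    # inflated by effort aimed only at the measure (e.g. recruiting
    # already better-off participants), plus some measurement noise.
    return 10 * real_effort + 15 * gaming_effort + random.gauss(0, 1)

BUDGET = 1.0  # total effort a project can allocate

# Project A targets poverty reduction and merely observes the indicator.
# Project B targets the indicator itself, so effort drifts to whatever
# moves the measure fastest.
projects = [
    ("A (indicator observed)", 1.0, 0.0),
    ("B (indicator targeted)", 0.2, 0.8),
]

for name, real, gaming in projects:
    assert abs(real + gaming - BUDGET) < 1e-9
    print(f"{name}: indicator = {income_indicator(real, gaming):5.1f}, "
          f"true welfare = {true_welfare(real):5.1f}")
```

In this toy setup, project B ‘wins’ on the indicator while producing far less real change: the number ceased to be a good measure the moment it became the target.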

Our target is poverty reduction, and it should be defined through a bottom-up, contextual definition of poverty and of the changes needed to overcome it, rather than through universal indicators stripped of all contextual relevance. Our real targets should be changes in behavior, not changes in numbers. Numbers can then tell us whether these changes in behavior actually led to the type of poverty reduction we wanted to see (e.g. a reduction in income poverty), but these numbers must not be our prime targets. Maybe the system finds a way of reducing poverty that does not show up in our figures but leaves people better off all the same; or the changes in the numbers only show up after a longer period, when the project has already stopped measuring. Changes in the behavior and perceptions of the people themselves (captured, for example, through narrative methods) can tell us whether the project is on the right track and whether the achievements are worthwhile. Hence, impact indicators should be kept broad and open in order to capture unforeseen positive (or negative) changes in the quality of life of the target population.

Causality in complex systems

Causality in complex systems does not work in a linear way; it is often described in terms of “causal spread” or “context-sensitive constraints”. Juarrero (2010), for example, writes: “The connectivity and interaction required for complex systems to self-organize, and which provides them with their contextuality and causal efficacy, are best understood in terms of context-sensitive constraints not classical billiard-ball-like (efficient) causality.” This different kind of causality in complex adaptive systems deprives us of the ability to use simple linear cause-and-effect logic to attribute changes to project interventions. Although we might see changes in quantitative indicators along the results chain and at the level of the poor, we would still not be able to fully understand how the project interventions actually led to these changes, because the real underlying cause is more complex than a linear chain of events, and what we see is mere correlation rather than causation. Only if we understand how something works can we bring it to scale. Results chains give us the illusion of understanding just because some of the indicator values we defined along the chain change in the right direction.

The topic of causality in complex systems will surely lead to another few posts, as I have a list of articles on the topic waiting to be read.

References:

Alicia Juarrero (2010): Complex Dynamical Systems Theory. Cognitive Edge. http://cognitive-edge.com/uploads/articles/100608%20Complex_Dynamical_Systems_Theory.pdf [last accessed 24.01.2013]

Donella H. Meadows (2008): Thinking in Systems: A Primer. Edited by Diana Wright. Chelsea Green Publishing. White River Junction, Vermont.

4 thoughts on “A bottom-up perspective on results measurement”

  1. Bernard DuPasquier

    Definitely looking forward to reading the upcoming SEEP paper on systemic monitoring and results measurement. Methods like MSC (Most Significant Change), applied with various stakeholders at different levels of a project, are useful monitoring instruments that contribute to learning and improvement.

  2. Bart Doorneweert

    As Jeff Bezos says: be stubborn on your vision, and flexible in the details. I think we should try to understand the underlying model that will deliver the impact we desire, rather than focus blindly on delivering the impact. That will make space for the right kind of search questions. Doing both search and delivery at the same time is likely to jam the innovation process.
    Great post!
    Bart

    1. Marcus Jenal

      I totally agree with you. And the important conclusion here is that only if we understand how change happens can we bring it to scale. If we just produce impact, scaling will remain a challenge.

