
Three Considerations when Measuring “Success” in Development Cooperation: A Conversation with Zenebe Uraguchi

This post was written by Zenebe Uraguchi and originally published on the Helvetas Inclusive Systems Blog. It is reposted here with permission.

Two years ago, Zenebe Uraguchi of Helvetas had a conversation with Rubaiyath Sarwar of Innovision Consulting on how a fixation on chasing targets leads programmes in development cooperation to miss out on contributing to long-term and large-scale changes. In March 2019, Zenebe met Marcus Jenal in Moldova. Marcus thinks a lot about how complexity thinking can improve development.

This blog is a summary of their dialogue on three thoughts that development practitioners who apply a systemic approach need to consider when measuring success in terms of contributing to systemic change.

By systemic change, we mean changes in the dynamic structures of a system – rules, norms, customs, habits, relationships, sensemaking mechanisms or more generally: institutions and world views – that shape the behaviours or practices of people – individuals, businesses, public sector agencies, civil society organisations, etc.

ZU: Programmes that apply a systemic approach in development cooperation often struggle to measure the systemic change they effect. A couple of years ago, Michael Kleinman argued in The Guardian that “the obsession with measuring impact is paralysing [development practitioners].” Without being obsessed with measurement, I believe development programmes still require a measurement system that’s right-sized and appropriate in scope and timeframe to measure impacts effectively.

MJ: For me, the challenge is how to find a broader way of effectively showing successes when attempting to stimulate systemic changes. This means not reducing our success factors to the measures that we know how to measure. We need to keep an eye on how our role is contributing to broader change, for example, by using different methodologies and appreciating the perspectives these provide us with. This’ll, for sure, help in demonstrating how a programme contributed to meaningful change in a system. A programme will need to weave different sources and types of evidence into a coherent story. Of course, we need to also make it clear in the story we tell that there’re other factors that influence the change programmes have contributed to.

ZU: In my recent reflection about the Market Systems Symposium in Cape Town, I emphasised the concern that the evidence on the impact of programmes that apply a systemic approach is thin. One of the key challenges, among others, is the tension between short-term and long-term results. Can such a tension be managed or reconciled?

MJ: This tension exists in most programmes that apply a systemic approach. On the one hand, there’s a requirement to show results within a given time frame (e.g. partners have taken up new ways of working and are showing successes in terms of investment and job creation). This often pushes programmes to use incremental interventions with no transformational effect. On the other hand, programmes will also need to invest in more fundamental, long-term systemic changes (e.g. changes in how different institutions interact, improved participation in labour markets).

The key point here is that whenever we design interventions or prepare sector strategies, we need to pay attention to explaining how we expect changes to happen and in what sequence. In other words, we need to explicitly state which changes we expect in the short term, in the medium term and in the long term. By categorising the effects of our interventions in this way, I think it’s possible to come up with different types of indicators appropriate for the different stages. Indeed, in using such ways of measuring change, programmes should work with donors and their Head Offices to manage expectations and tell the narrative of how they expect changes to happen over time.

ZU: Many development programmes operate in complex and dynamic contexts. I’m aware that adaptive management can sometimes be viewed as an excuse for “making things up as programmes go along”. Yet, the point I’m trying to make is that the context can shift quickly, and strategies need to be adapted continuously. This means that monitoring and results measurement needs to adapt to such changes. For example, having access to reliable and timely information through an agile monitoring and results measurement system is crucial.

MJ: I agree with your point. Development practitioners are still evaluating programmes that work towards facilitating systemic changes by using methods that aren’t adjusted to the shift to a more systemic way of doing development. The evaluation methods follow a “traditional” model designed to show direct effects (clear, discernible cause-effect links, linear effects, no feedback loops). For me, this is to a certain extent unfair towards programmes that take a systemic approach. So, we need to ask ourselves two questions: “what does success mean” and “how do we measure success accordingly” for programmes that work towards systemic change. Only then is it reasonably possible to show whether an initiative has been successful or not. An immediate follow-up question then needs to be: how can this be done? There’re good examples of methodologies that’re able to capture systemic effects in the evaluation community and, to a certain extent, also in the social sciences.

ZU: Systemic approaches aren’t entirely new. The approach puts lessons from decades of development work into a set of principles and frameworks to guide development programmes in their design, implementation and results measurement. If this is the case, then why’re we still struggling to figure out how to effectively measure success (on a system level) in development cooperation? Or is it the case that “development isn’t a science and cannot be measured”?

MJ: As I said above, perhaps it isn’t due to a lack of ability to show these changes, but to a lack of adoption of the appropriate methods in our field. Oftentimes we start development initiatives with a good intention to change or improve a system. We’re then soon confronted with the question: “how are we going to measure such a change?” As we naturally default to the good practices and standards used in our field (or are even forced to use them by our donor), which’re still predominantly based on a linear logic, we automatically measure only the effects that can be captured by such methods: direct effects. This, in turn, again affects the way we design our interventions, or the way we work with partners to stimulate systemic change.

It’s a circular logic, you see: our focus will be on achieving the targets defined by the measures we intend to use to gauge success – and if these measures aren’t systemic, our focus will not be on systemic change. This is what I call “the self-fulfilling prophecy” of measuring changes in development cooperation. Let me give you an example:

ZU: Great points, Marcus. So, what do we make of our conversation? I see three key messages regarding the measurement of success: first, the measures we choose will define, or at least influence, the way we work; second, we need to choose ways of measuring success that are in line with the kind of approach we use; and third, we need to learn from recent experiences of evaluating success in the wider evaluation community.

MJ: That’s a good summary. Let me explain these three takeaways a bit more.  

A bottom-up perspective on results measurement

Thanks to my engagement in the ‘Systemic M&E’ initiative of the SEEP Network (where M&E stands for monitoring and evaluation, though we have really been looking mainly into monitoring), I have been discussing monitoring and results measurement quite a bit with practitioners, and how to make monitoring systems more systemic. For me this bottom-up perspective is extremely revealing: it shows how conscious these practitioners are of the complexities of the systems they work in, and how they intuitively come up with solutions that are in line with what we could propose based on complexity theory and systems thinking.

Nevertheless, practitioners are often still strongly entangled in the overly formalistic and data-driven mindset of the results agenda. This mindset is based on a mechanistic view of systems with clear cause-and-effect relationships and a bias for objectively obtained data that is stripped of its context and thereby rendered largely meaningless for improving implementation.

Flipping through my RSS feeds

After three weeks of more or less constant work, I’m finally having some time to have a look at my RSS feeds. After the first shock of seeing more than 3000 new entries, among them over 100 unread blog posts, I just started reading from the top. Here are a couple of things I found interesting (not related to any specific topic):

SciDevNet: App to help rice farmers be more productive – I don’t know about the Philippines, but I haven’t seen many rice farmers in Bangladesh carrying a smartphone (nor any extension workers for that matter).

Owen abroad: What is the results agenda? – An interesting post about the different meanings of following a ‘results agenda’ for different people, i.e., politicians, aid agency managers, practitioners, and (what I call) ‘complexity dudes’. I’m not very satisfied with Owen’s assessment, though, because I think he doesn’t give enough weight to the argument that results should be used to manage complexity. To manage complexity, we don’t need rigorous impact studies, but much more quality-focused results regarding the change we can achieve in a system and the direction in which our intervention moves the system.

xkcd: Backward in time – an all-time favorite cartoon of mine, here describing how to make long waits pass quickly.

Aid on the Edge: on state fragility as wicked problem and Facebook, social media and the complexity of influence – Ben Ramalingam seems to be back in the blogosphere with two posts on one of my favorite blogs on complexity science and international development. In the first post, he explores the notion of looking at fragile states as so-called ‘wicked problems’, i.e., problems that are ill-defined, highly interdependent and multi-causal, without any clear solution, etc. (see the definition in the blog post). Ben concludes that the way aid agencies work in fragile states needs to undergo fundamental change. He presents some principles on what this change could look like, drawn from a paper he published together with SFI’s Bill Frej last year.

In the second piece, Ben looks into the complex matter of how socioeconomic systems can be influenced, and how this can be measured, using the example of Facebook trying to calculate its influence on the European economy and why its calculations are flawed. The basic argument is that a person’s decision to do something is extremely difficult to analyze and even more difficult to trace back to an individual influencer. Our decisions and, indeed, our behavior are themselves complex systems. One of the interesting quotes from the post: “Influentials don’t govern person-to-person communication. We all do. If society is ready to embrace a trend, almost anyone can start one – and if it isn’t, then almost no one can.”

Now, to make the link back to Owen’s post mentioned above on rigorous impact analyses: how can we ever attribute impacts on a large scale to individual development programs or donors if we cannot measure the influentials’ impact on an individual’s behavior? I rather like to think of a development program as an agent poking into the right spots, the spots where the system is ready to embrace a – for us – favorable trend. But then to attribute all the change to the program would be preposterous.

Enough reading for today, even though there are still 86 unread blog posts in my RSS reader, not least 45 from the power bloggers Duncan Green and Chris Blattman. I’ll go and watch some videos now of the new class I recently started, Model Thinking, a free online class by Scott E Page, Professor of Complex Systems, Political Science, and Economics at the University of Michigan. Check it out: http://www.modelthinking-class.org/
For people with less time, a couple of participants are tweeting using #modelthinkingcourse.