Three Considerations when Measuring “Success” in Development Cooperation: A Conversation with Zenebe Uraguchi

This post was written by Zenebe Uraguchi and originally published on the Helvetas Inclusive Systems Blog. It is reposted here with permission.

Two years ago, Zenebe Uraguchi of Helvetas had a conversation with Rubaiyath Sarwar of Innovision Consulting on how a fixation on chasing targets leads programmes in development cooperation to miss out on contributing to long-term and large-scale changes. In March 2019, Zenebe met Marcus Jenal in Moldova. Marcus thinks a lot about how complexity thinking can improve development.

This blog is a summary of their dialogue on three thoughts that development practitioners who apply a systemic approach need to consider when measuring success in terms of contributing to systemic change.

By systemic change, we mean changes in the dynamic structures of a system – rules, norms, customs, habits, relationships, sensemaking mechanisms or more generally: institutions and world views – that shape the behaviours or practices of people – individuals, businesses, public sector agencies, civil society organisations, etc.

ZU: Programmes that apply a systemic approach in development cooperation often struggle to measure the systemic change they effect. A couple of years ago, Michael Kleinman wrote in The Guardian, arguing that, “the obsession with measuring impact is paralysing [development practitioners].” Without being obsessed with measurement, I believe development programmes will require a measurement system that’s right-sized and appropriate in scope and timeframe for effectively measuring impacts.

MJ: For me, the challenge is how to find a broader way of effectively showing success when attempting to stimulate systemic changes. This means not reducing our success factors to the measures that we know how to measure. We need to keep an eye on how our role contributes to broader change, for example by using different methodologies and appreciating the perspectives they provide us with. This will certainly help in demonstrating how a programme contributed to meaningful change in a system. A programme will need to weave different sources and types of evidence into a coherent story. Of course, we also need to make clear in the story we tell that there are other factors that influence the change programmes have contributed to.

ZU: In my recent reflection on the Market Systems Symposium in Cape Town, I emphasised the concern that the evidence on the impact of programmes that apply a systemic approach is thin. One of the key challenges is the tension between short-term and long-term results. Can such a tension be managed or reconciled?

MJ: This tension exists in most programmes that apply a systemic approach. On the one hand, there’s a requirement to show results within a given time frame (e.g. partners have taken up new ways of working and are showing successes in terms of investment and job creation). This often pushes programmes towards incremental interventions with no transformational effect. On the other hand, programmes will also need to invest in more fundamental, long-term systemic changes (e.g. changes in how different institutions interact, improved participation in labour markets).

The key point here is that whenever we design interventions or prepare sector strategies, we need to pay attention to explaining how we expect changes to happen and in what sequence. In other words, we need to explicitly state which changes we expect in the short term, in the medium term and in the long term. By categorising the effects of our interventions in this way, I think it’s possible to come up with different types of indicators appropriate for the different stages. In using such ways of measuring changes, programmes should work with donors and their head offices to manage expectations and tell the narrative of how they expect changes to happen over time.

ZU: Many development programmes operate in complex and dynamic contexts. I’m aware that adaptive management can sometimes be viewed as an excuse for “making things up as programmes go along”. Yet, the point I’m trying to make is that the context can shift quickly, and strategies need to be adapted continuously. This means that monitoring and results measurement needs to adapt to such changes. For example, having access to reliable and timely information through an agile monitoring and results measurement system is crucial.

MJ: I agree with your point. Development practitioners are still evaluating programmes that work towards facilitating systemic changes using methods that aren’t adjusted to the shift to a more systemic way of doing development. The evaluation methods follow a “traditional” model designed to show direct effects (clear, discernible cause-effect links, linear effects, no feedback loops). For me, this is to a certain extent unfair towards programmes that take a systemic approach. So, we need to ask ourselves two questions for programmes that work towards systemic change: “what does success mean?” and “how do we measure success accordingly?” Only then is it reasonably possible to show whether an initiative has been successful or not. An immediate follow-up question then needs to be: how can this be done? There are good examples of methodologies that are able to capture systemic effects in the evaluation community and, to a certain extent, also in the social sciences.

ZU: Systemic approaches aren’t entirely new. The approach puts lessons from decades of development work into a set of principles and frameworks to guide development programmes in their design, implementation and results measurement. If this is the case, then why are we still struggling to figure out how to effectively measure success (on a system level) in development cooperation? Or is it the case that “development isn’t a science and cannot be measured”?

MJ: As I said above, perhaps it isn’t a lack of ability to show these changes but a failure to adopt the appropriate methods in our field. Oftentimes we start development initiatives with a good intention to change or improve a system. We’re then soon confronted with the question: “how are we going to measure such a change?” As we naturally default to the good practices and standards used in our field (or are even forced to use them by our donors), which are still predominantly based on a linear logic, we automatically measure only the effects that such methods can capture: direct effects. This, in turn, affects the way we design our interventions, or the way we work with partners to stimulate systemic change.

It’s a circular logic, you see: our focus will be on achieving targets defined through the measures we intend to use to gauge success – and if these measures aren’t systemic, our focus will not be on systemic change. This is what I call “the self-fulfilling prophecy” of measuring changes in development cooperation.

ZU: Great points, Marcus. So, what do we make of our conversation? I see three key messages regarding the measurement of success: first, the measures we choose will define, or at least influence, the way we work; second, we need to choose ways of measuring success that are in line with the kind of approach we use; and third, we need to learn from recent experiences of evaluating success in the wider evaluation community.

MJ: That’s a good summary. Let me explain these three takeaways a bit more.

Should we fix wrong behaviour?

I have spent the last three days at the Market Systems Symposium 2019 in Cape Town. I really enjoyed the event. Besides meeting good friends, there was a great number of practitioners with astonishing accumulated experience of implementing Market Systems Development (MSD) projects. There was also a good number of donor staff, who provided their perspectives on the challenges of implementing adaptive and learning programmes – unfortunately, the European donors were largely absent (with the exception of one participant from the Swiss Agency for Development and Cooperation); most of the donor staff were from USAID. The majority of the active participants were those working on further developing the approach, innovating within their projects and generally trying to make market development more effective – it was really exciting to hear what they have figured out and what they are struggling with. There was good energy throughout the three days. But I also picked up some concerning trends, particularly around the growing intent of MSD projects to change market actors’ behaviours.

Continue reading

Advancing my systems change typology: considering scaling out, up and deep

Recently I started a series on the development of a typology of systems change (the two previous articles are here and here). In this post, I want to introduce the concepts of ‘scaling out’, ‘scaling up’ and ‘scaling deep’ developed by scholars of social innovation. I want to link these concepts to my earlier thinking around the systems change typology and update it based on the new insights from this literature. At the end, I will also offer a brief critique of innovation-focused approaches to systems change.

‘Scaling out’ refers to the most common way of attempting to get to scale with an innovation: reaching greater numbers through replication and dissemination. ‘Scaling up’ refers to the attempt to change institutions at the level of policy, rules and laws. Finally, ‘scaling deep’ refers to changing relationships, cultural values and beliefs.

Continue reading

Accompanied learning — an alternative to the ‘know-it-all’ consulting model

My company is a consulting firm and on my CV I call myself a consultant. Consultants are experts who are hired to bring solutions to a problem or improve the functioning of a mechanism, process or organisation. They are expected to have all the answers and are paid to give their clients the right answers to their questions or solutions to their problems.

When I work with organisations and teams on complex challenges, I often do not feel comfortable in this role as a consultant or expert. Too often, I do not know the answers or solutions. Too often, I have felt that moment of panic on the plane to a client, realising that I do not really know what to tell them, that I do not have the answers they are hoping to get from me. As I have said and written before, intervening in complex systems is not about fixing things, like fixing an engine. Complex systems are evolving, interconnected systems. Understanding these interconnections and shifting the context is a more appropriate approach to change. This always needs to be based on a deep understanding of the local context and continuous mutual learning.

Continue reading

There are no root causes in complexity

I have never been very comfortable with the concept of root causes. I do see the need to go below the surface and not just look at the ‘symptoms’. Yet, it seems to me that the concept of root causes – one problem causing one or a number of symptoms – is at odds with the idea of complex systems, where patterns emerge as a result of a number of different interconnected and interdependent elements and structures.

The idea of root causes is linked to linear cause-effect thinking. This often plays out as follows: development agents go into a country, observe an undesirable pattern or symptom, do some analysis to find a root cause, fix it, and assume the symptom will disappear – a linear causal chain is assumed from the root cause to the symptom. This is also why many projects use results chains – chains of boxes and arrows indicating the steps in a causal chain from the root cause to the symptom.

The problem with this type of thinking is that it does not reflect how the world really works. Still, this is how development generally approaches complex problems. Complexity thinking offers a different way of thinking about intractable or ‘messy’ issues such as building stronger and more inclusive economies. One concept in particular seems helpful for replacing the linear causal logic from root causes to symptoms: the concept of modulators.

Continue reading

Systemic change typology – some doubts

In my last post I shared my thinking around a possible typology of systemic change that could help us come to grips with the different concepts and ideas connected to systemic change. I have received some feedback on the post and have done some more thinking that I would like to share in this follow-up post.

Concretely, I want to share my thoughts on the usefulness of systemic change as a concept at a more fundamental level, based on the obvious fact that systems change anyway, even without purposeful interventions.

Continue reading

Attempt at a typology of systemic change

Systemic change has been a frequent topic on this blog – as it is in my work. After chasing the perfect conceptualisation of systemic change for many years, I have come to realise that there may be different ways to look at it – all correct in their own right. I have discussed systemic change with many colleagues and friends and have always tried to reconcile different views on the concept, only now realising that they might not be reconcilable. So here is an attempt at a typology of systemic change (initially differentiating two types) – nothing final, just an effort to put my thinking down in writing.

A warning in advance: this article is rather conceptual, and I’m introducing some models that might be new to my readers (but then again, I have done that before). I’m trying to sort through my recent reading to better understand the types of systemic change. This should not stop you from reading it, of course! I would be happy to discuss it with anybody!

Continue reading

Systemic change and system’s health – thinking out loud

As external development agents, we cannot create impacts with all the qualities we want them to have: sustainable, inclusive, gender-equal, etc. We can only work with and through the system, so that these qualities become an inherent part of how the system does things. Let’s say we call a system ‘healthy’ when it is creating the qualities we would like to see (although I’m not sure ‘healthy’ is the best term – it sounds a bit judgemental – it has been used by others before). The question is: what does a healthy system look like that is more likely to deliver impacts with the desired qualities? And how can we improve the health of a system?

There are various bodies of knowledge, all rooted in systems and complexity thinking, that give us some ideas to help answer this question. They all answer it from a different perspective, and some are clearly limited in scope while others claim universality. I want to introduce three sets of principles – or maybe sets of favourable behaviours – here.

Continue reading

Harnessing the power of complexity in development

This post first appeared on the BEAM Exchange blog.

Market systems are complex adaptive systems and market systems development is a complex task.

This abstract statement reflects the reality market systems practitioners encounter every day: market systems are dynamic, with rich interactions between a large number of diverse actors. Changes in these systems are difficult to predict, and development interventions often, if not always, lead to unintended consequences.

Continue reading

Want to learn about complexity? Join me for a unique expedition through complexity in development

I’m really excited to announce a new training course that I have put together with Tony Quinlan from Narrate, which will be starting in September. As readers of my blog know, I have applied concepts and principles of complexity to my work in international development for a long time. In this course, I will share these concepts, principles and the experiences I have gained, and accompany the participants as they make sense of their own experiences and create new ones in applying complexity concepts.

Here is the brief blurb for the course:

Harnessing the power of complexity in development – An extended, unique expedition through complexity approaches to enhance agile, adaptive and appropriate work in dynamic and uncertain development contexts.

This course gives you a unique opportunity to gain experience and expertise in complexity through a guided journey covering the fundamentals of the field. Projects taken from participants’ real-world situations (not hypothetical case studies) will be used to apply these principles, with teams mentored throughout by two expert practitioners.

Continue reading