Before venturing into an outline of a results measurement framework, I want to point out two blog posts by Duncan Green on aid in complex systems that are very well written and to the point.
The first one asks the question of “how to plan when you don’t know what is going to happen?” Its starting point is the obvious dichotomy that is also driving our discussion about measuring results:
The crucial point is that most political, social and economic systems [are complex]. Yet the aid business insists on pursuing a linear model of change, either explicitly, or implicitly because a ‘good’ funding application has a clear set of activities, outputs, outcomes and a MEL [Monitoring, Evaluation, and Learning] system that can attribute any change to the project’s activities – a highly linear approach.
Duncan then proposes a few bullet points that are crucial when planning in situations of uncertainty. In my view, the list gives a brief but very comprehensive idea of what planning in development projects should look like, with important consequences also for monitoring and evaluation. Besides the two aspects that I have mentioned before in my writing, i.e. fast feedback and an evolutionary approach, there is one aspect I want to highlight that appears in various forms in Duncan’s list:
Focus on problems, not solutions: The role of outsiders is to identify and amplify problems, but leave the search for solutions to local institutions.
This point became very obvious to me once again when I recently reviewed a project document (or ‘business plan’, as it is nowadays called for economic development programs). The document featured a very thorough analysis of a number of economic sectors. Based on that, it proposed a number of solutions to the identified constraints, mainly the introduction of new business models to overcome the exclusion of marginal farmers from markets. The focus was, on the one hand, on an improved input and knowledge transfer system and, on the other hand, on better linkages between the farmers and buyers. These solutions were planned in a very detailed fashion, with implementation spelled out to the last activity, starting with identifying private sector partners (lead firms) to champion the approaches.
So in effect, this is more or less the opposite of what planning in complex systems proposes. The role of a project would rather be to amplify the problem (the marginalization of small farmers from a market perspective – and the potential opportunities that are lost as a result). The project should then convene relevant stakeholders and set the stage for them to search for solutions that make sense in the local context. This is supported by another point in Duncan’s list:
Convening and Brokering: Get dissimilar local players together to find solutions – the outsiders’ job is to support that search.
Another perspective on this is to acknowledge that there are no ‘best practices’ for solving problems in complex systems. Instead, we should search for rules of thumb that can be applied in different contexts:
Rules of thumb, not best practice toolkits: I am told that the US marines do not go into combat brandishing Oxfam toolkits and online resources on best practice. They operate on rules of thumb – take the high ground, stay in communications and keep moving. They improvise the rest. Aid workers on the ground operate far more like this than our project reports admit. If we were honest about it, we could have a better discussion on how to improve those rules of thumb.
This is also reflected in my earlier writing on intuition and heuristics.
Finally, Duncan also touches on the results discussion:
The current approach to measuring results favours linearity. But rejecting results altogether is the wrong approach – both because even those who recognize the central role of complex systems still want to know if they’re doing any good, and because the results people control the cash. No results, no funding. We need to get much better at ‘counting what counts’, and reclaim the idea of ‘rigour’ for qualitative and other methods better suited to complex systems.
In the second post, Duncan writes about a meeting he attended where Matt Andrews presented his approach called Problem Driven Iterative Adaptation (PDIA). The approach focuses mainly on institutional reform. Again, the discussion touches on results:
There was a good discussion on what constitutes ‘results’. Good PDIA-type work in developing countries requires a rapid feedback loop of results, but of a different kind to those typically demanded by the aid business. Developing country politicians want to know what’s happening with their money, what has been learned, what has worked and what hasn’t, and how the project has responded. They don’t need the (often bogus) certainty and data demanded by aid planners.
This week, I am in Kenya looking at the Monitoring and Evaluation framework of a big market development program here. I hope to learn more about the practical application of some of the principles for systemic M&E, which will also feed into the promised post sketching out an MRM framework.