After submitting a long comment as a reply to Aly Miehlbradt’s post, I was able to persuade Daniel Ticehurst to write another guest post instead. Daniel’s perspective on the DCED Standard contrasts nicely with the one put forward by Aly, and I invite you all to contribute your own experiences to the discussion. This was not originally planned as a debate with multiple guest posts, but we all adapt to changing circumstances, right?
Dear Marcus and Aly, many thanks for the interesting blog posts on monitoring and results measurement, the DCED Standard and what it says in relation to the recent Synthesis on Monitoring and Measuring Changes in Market Systems.
My comments are based on seeing the bottom-line purpose of publicly funded aid (which is a short-term and minor contributor to development) as inducing processes within the invariably complex systems in which the poor live and on which their livelihoods depend.
They are also premised on the understanding that the main function of management is to deliver a package of support that stimulates systemic change. I accept the legitimacy of the Market Development approach as being based on the assumption that more accessible and competitive markets enable poor people to find their own way out of poverty by providing more real choices and opportunities. The way the approach positions itself towards directly supporting markets (as opposed to delivering services directly to the poor) has real consequences for the purpose of monitoring and its requirements for comparative analysis, and thus for the results agenda. Of primary importance is being able to understand how and to what extent the presence of the programme is stimulating lasting changes in and among market systems (ie, outcomes) – and that their services and products do become more responsive and attuned to the needs of the poor. The guidance on M&E for M4P programmes makes this clear: it cautions against prematurely attempting to assess higher level ‘impacts’, which would compromise the systemic rationale of a market development approach.
It follows, therefore, that the monitoring processes, and what is monitored, should focus on how and to what extent this is happening. It is not the responsibility of the manager to measure or test the hypothesis on which the investment was made: that markets that are more responsive, and whose services and products are better attuned to the needs of the poor (the systemic change), will make a meaningful and lasting contribution to poverty reduction (the developmental change). We all agree in principle that the skills and processes necessary for good monitoring should be an integral part of management. Based on this, the object of any standard or supporting guidance is to help those responsible for managing programmes to work together in delivering the objective of monitoring: to drive better, not just measure and report on, results. It is less that programme-based staff lack the skills to monitor impacts; it is more that it is not their responsibility in the first place.
Lessons in monitoring
If there have been any major lessons learnt over the last twenty-odd years in monitoring, I would argue these are:
- Often in development, and private sector or market development is typical in this respect, the most important things are unknown or unknowable. Given the complexity of systems and how they interact with each other (eg, market, household and government, including the donors lest we forget), unexpected outcomes, good or bad, can matter almost as much as what programmes themselves are intended to do or achieve. No results chain, theory of change or logframe (yes, sometimes folk, like DFID, have all three), however ‘well’ developed for aid programmes, can predict human behaviour and decisions. You could argue that indicators, in the context of monitoring, matter less than the assumptions we all make about the poor and the markets we endeavour to facilitate.
- Repeating history and re-inventing wheels is happening, as evidenced by the cockroach policies being re-adopted by many donors in their current quest for results, including impacts. By cockroach policies, I mean those that were flushed away 25 years ago but have returned: no discipline as to the real and practical differences between M and E (nb, their different objectives, primary users and responsibilities); how measuring results is a more involved and challenging task than delivering them; and how the naivety of those who accept payment to measure and report on impacts is surpassed only by those who believe the ‘reports’, especially before and/or at the end of typical implementation periods. Just as networks, not the facilitator, are the real drivers of systemic market changes, it is the enterprises they support that create the jobs and increase incomes. It is their result, not the aid programme’s. Why do we still think it is sensible and possible to measure and report on this whilst also claiming how complex and uncertain a place it is we work in?
- It is analytically impossible to establish causal relationships between, for example, agriculture-related aid and agricultural production/productivity and incomes by the end of most aid programmes’ implementation periods. Furthermore, such attempts are often associated with appreciable opportunity costs: they fail to take into account what motivates beneficiaries and what they value, and thus compromise the principle of downward accountability.
- The monitoring of results imposed by donors regarding impact variables emasculates the capacity of programme management and donor agency management to manage delivery, aggravates that of in-country partners, is resource-intensive and is likely to produce disappointing results.
- Performance is almost universally made worse by the specifications, regulations and targets with which organisations and programmes have been obliged to comply. Targets, for example, tend to create unintended but predictable perverse consequences, leading to the diversion of resources away from the real work in order to ‘game’ the targets. Monitoring of conditions does not in itself lead to better decisions or performance – for example, collecting data on farmer yields does not tell you how to do a better job of facilitating systemic change. What management decision will this inform? The test of a good measure is: can it help in understanding and improving performance, ie, making better decisions on who the programme supports and how?
With these in mind, I wanted to raise two points: 1) on the adequacy, cum advantage, of results chains, whose intrinsic features you and others claim make them better suited to dealing with uncertainty, complexity and systemic change than logical frameworks; and 2) on how ‘complying’ with the DCED Standard really will enhance prospects for managers to better stimulate, rather than just measure and report on, change – the purpose of monitoring.
The adequacy of Results Chains
I have read the arguments for having what are called Results Chains instead of, say, Logframes: they provide the necessary nuance and detail needed by managers; they are seldom linear; they allow stakeholders to think critically through and reflect on the intervention process; they clarify assumptions and produce a story that can be updated. I have also seen examples of what they look like (eg, the USAID training for vets in Cambodia, the Seed Programme in the DCED ‘Walk Through’ and the PrOpCom Tractor Leasing Project in Nigeria). I do not think the claims made about the supposed advantages of results chains over logframes hold for the reasons given. Read here an explanation from the mid-1990s of logframes and how they were put together by GTZ.
My view is that many of the disadvantages of logframes had little to do with any intrinsic weakness and more to do with a history of misuse. I know I am in the minority, yet I like the rigour of logframes: in the context of market systems development programmes, putting them together should aim to:
- make explicit the performance standards programme managers are accountable for to their clients (market systems), as defined by those clients, in delivering support (in qualitative and quantitative terms) that reflects their needs and circumstances (ie, output indicators);
- research, through dialogue with clients, the assumptions about their capacities to use and/or benefit from these products and services in their role of providing services to the poor (dubbed beneficiaries), most of which invariably relate to the state and influence of wider systems; and
- observe, listen to and diagnose how, why and to what extent these service providers – and others – respond to facilitation by becoming more responsive and attuned to the needs of the poor, and how this will inevitably vary (ie, the systemic changes).
I would argue that logframes, theories of change and results chains have more in common than many would admit, insofar as they share two challenges: 1) they attempt to predict how others will respond to and benefit from aid interventions in institutional environments that are complex and subject to many ‘jinks’ and ‘sways’; and, in doing so, 2) they need to wrestle with establishing a consensus among many different stakeholders (the donor, programme management, the facilitators, the markets and the poor) as to what this story looks like.
The focus and adequacy of the DCED Standard
It is telling that neither the DCED Standard nor the synthesis paper [Synthesis Paper of the Systemic M&E Initiative] differentiates between M and E. Moving on, and from a monitoring perspective, I have always been interested in finding out what kinds of decisions managers need to make to better stimulate systemic change. My guess is that not all of them necessarily know. However, listening to the views and perceptions of those they support (the system ‘players’), in order to understand them and the systems they work in better, must help improve the management job they do. If you are trying to help someone get somewhere, you had best understand where that someone is coming from.
On the focus of the standards, I have one query. Is the main test or objective sought by monitoring and reporting on ‘results’ really about focusing effort on understanding systemic changes in market ‘behaviours and relationships’? According to the DCED Standard it is not. This is where the standards appear to differ from the modest guidance on M (and E) produced by the Springfield Centre. I find this surprising, for two reasons: 1) Some of the folk at the Springfield Centre, I understand, contributed to the standards. 2) It appears that the standards are diluted and disappointingly come up with Universal Impact Indicators. That the guidance claims these will facilitate an adding up of impacts seems rather simplistic, and I question the validity and usefulness of trying to do so. But then perhaps this illustrates a difference between PSD and Market Development approaches? Indicators need to be specific to a given decision situation. Considerations of scale, aggregation, critical limits, thresholds, etc. will also be situation-dependent, and situations are complex and constantly changing.
Clearly, however, someone thought having a set of Impact Indicators more important and useful than, say, developing an equivalent list on systemic change. Reverting to the habits of typical results-based or even impact monitoring is, I believe, disappointing. It assumes managers need to make decisions based on the values of impact indicators and thus find information on beneficiary impacts useful. It also begs a broader question: is it the responsibility of those who manage PSD programmes to measure and report on such impacts? I thought it would have been to focus on, in the example of the seed programme in the ‘walk through’, assessing how the seed input companies and the seed retailers value the quality of the support offered by the programme and how they subsequently respond by working together to service and retain their farmer clients (and make money from doing so). For programme staff to then go ahead and measure the physical and financial yields based on some form of incremental gross margin analysis (a sketch of what such an analysis involves follows below) is particularly silly, sucks up enormous resources and is unnecessarily distracting. Such effort carries significant opportunity costs and typically divorces those responsible for monitoring from the main function of management. There is an ambiguity in the standards regarding who will benefit from having a system in place and how it passes some audit.
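To make concrete the data burden this implies, here is a minimal sketch of the kind of incremental gross margin arithmetic such measurement entails. All figures, crop-level parameters and the attribution share below are hypothetical and for illustration only; they are not drawn from the seed programme example or from the DCED guidance itself.

```python
# Hypothetical sketch of an incremental gross margin calculation of the kind
# programme staff would need to run to report farmer-level 'impact'.
# Every number below is illustrative, not taken from any real programme.

def gross_margin(yield_kg_per_ha, price_per_kg, variable_costs_per_ha):
    """Gross margin per hectare: revenue minus variable costs."""
    return yield_kg_per_ha * price_per_kg - variable_costs_per_ha

# Farmers using the improved seed (requires yield, price and input cost surveys)
gm_with = gross_margin(yield_kg_per_ha=2200, price_per_kg=0.30, variable_costs_per_ha=350)

# Comparison ('counterfactual') farmers (requires a second round of surveys)
gm_without = gross_margin(yield_kg_per_ha=1600, price_per_kg=0.30, variable_costs_per_ha=280)

incremental_gm_per_ha = gm_with - gm_without

# Scaling up to a programme-level figure multiplies every uncertainty above:
# outreach numbers, average cultivated area and the contested attribution share.
farmers_reached = 12_000       # eg, estimated from retailer sales records
avg_area_ha = 0.8
attribution_share = 0.6        # a judgement call, rarely defensible in practice

reported_impact = incremental_gm_per_ha * avg_area_ha * farmers_reached * attribution_share
print(f"Incremental gross margin per ha: {incremental_gm_per_ha:.2f}")
print(f"Reported additional net income: {reported_impact:,.0f}")
```

Each input to this arithmetic demands its own survey or record-keeping effort, which is precisely the opportunity cost referred to above.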
On the adequacy of the standards, a few things struck me on reading the ‘walk through’ and the associated template for the auditor’s report.
- Given that one purpose of the standards is to inform management decisions, these decisions need to be identified first; the design of indicators to reduce critical decision uncertainties can then follow.
- Among the 8 elements of what makes for a good ‘system’, I could find no reference to the need for managers to listen to the programme’s clients and/or their clients, the ultimate beneficiaries. Surely any system that sells itself on serving the information needs of managers needs to acknowledge the importance of the perspectives of those they are paid to support. Is there any other way of assessing the quality and relevance of the support? Experience in development aid has shown that many indicators people are currently using have no information value, and indicators that have high value are frequently not being measured. Surely the key immediate uncertainties facing programmes are the responses to the facilitation or the intervention. In this context, it would be useful to mention indicators that combine a quantitative (how much) and a qualitative (how well) assessment of the programme’s performance and the responses to this (the change process) among market systems.
- The ‘control’ and compliance points listed across the template for the auditor’s report miss the most important test of all: does compliance with the 8 elements make for improved results? The basis for whether a system meets the requirements assumes that having a results chain, measuring indicators and reporting on them is sufficient. On this point the guidance is not entirely clear: in one breath it talks about the need to reach and convince an external and sceptical audience; in another it says that the results measurement system should serve programme management. It seems to assume both users are interested in, and find useful, the same information.
- There is insufficient attention to and focus on assumptions throughout. The elements in the standards are driven solely by the need to measure and attribute change. This is despite the useful role monitoring can play in managing complexity by diagnosing the assumptions the results chain makes explicit.
- On the issue of attribution, the standard seems to carry much of the freight of impact evaluation: the difficulty of isolating indicators and dealing with confounding factors by establishing a valid counterfactual. Such requirements for comparative analysis sit uncomfortably with the brief of a management-focused system. This is more the domain of those responsible for evaluating impact, as opposed to those responsible for delivering systemic change with the help of a management-focused system. The latter’s main interest in making comparisons is different from impact evaluation’s: it is about understanding how the markets differ in their responses.
Daniel is a Project Director at HTSPE. He does evaluations, but his passion is in supporting others to set up balanced monitoring processes that enable managers to listen well to clients or beneficiaries and so inform the decisions they need to make to improve performance – the object of monitoring. While not a sector specialist, his work has focused on private sector development, agriculture, food and nutrition security, and some infrastructure and governance.
He has worked exclusively for 23 years in monitoring and evaluation at local, sector, national, global and corporate levels. Fourteen of those years have been spent on long-term assignments, mainly in African countries (Zimbabwe, Malawi, Uganda, Tanzania, Lesotho, South Africa and Ghana). During this time he has supported a variety of organisations in developing and assessing their portfolios: from donors such as the World Bank, IFC, EC, AusAID, DFID, USAID and Danida to national governments (including the UK’s former Department of Trade and Industry) and local and international NGOs.
His long-term experience has informed practical approaches to developing learning and accountability mechanisms for investors, funders and those they seek to support. Through these, he has played leading influencing and mentoring roles in developing strategies and in the design and implementation of impact and performance frameworks and methodologies. An important feature of his approach lies in helping others shape practical ways of interpreting and using the information they value to make decisions, in ways that do not aggravate their capacities for managing investments and delivering change. He works with and adapts various approaches such as activity-based costing, beneficiary assessments, client satisfaction surveys and balanced scorecards. He also likes to keep things simple, so he avoids using too many prefixes such as process, participatory and results-based to explain what makes for good monitoring processes.