Guest Post: Daniel Ticehurst with a critical reply on the DCED Standard

After submitting a long comment as a reply to Aly Miehlbradt’s post, I was able to persuade Daniel Ticehurst to write another guest post instead. Daniel’s perspective on the DCED Standard nicely contrasts with the one put forward by Aly, and I invite you all to contribute your own experiences to the discussion. This was not originally planned as a debate with multiple guest posts, but we all adapt to changing circumstances, right?

Dear Marcus and Aly, many thanks for the interesting blog posts on monitoring and results measurement, the DCED Standard and what it says in relation to the recent Synthesis on Monitoring and Measuring Changes in Market Systems.

My comments are based on seeing the bottom-line purpose of publicly funded aid (which is a short-term and minor contributor to development) as inducing processes among the invariably complex systems in which the poor live and on which their livelihoods depend.
They are also premised on the understanding that the main function of management is to deliver a package of support that stimulates systemic change. I get the legitimacy of the Market Development approach as being based on the assumption that more accessible and competitive markets enable poor people to find their own way out of poverty by providing more real choices and opportunities. The way the approach positions itself towards directly supporting markets (as opposed to delivering services directly to the poor) has real consequences for the purpose of monitoring and its requirements for comparative analysis, and thus for the results agenda. Of primary importance is being able to understand how and to what extent the presence of the programme is stimulating lasting changes in and among market systems (ie, outcomes) – and whether their services and products do become more responsive and attuned to the needs of the poor. The guidance on M&E for M4P programmes makes this clear: it cautions against prematurely attempting to assess higher-level ‘impacts’, which would compromise the systemic rationale of a market development approach.

It follows, therefore, that the monitoring processes and what is monitored focus on how and to what extent this is happening. It is not the responsibility of the manager to measure or test the hypothesis on which the investment was made: that markets which are more responsive and whose services and products are better attuned to the needs of the poor (the systemic change) will make for a meaningful and lasting contribution to poverty reduction (the developmental change). We all agree in principle that the skills and processes necessary for good monitoring should be an integral part of management. Based on this, the object of any standards or support is to help those responsible for managing programmes to work together in delivering the objective of monitoring: to drive better results, not just measure and report on them. It is less that programme-based staff do not have the skills to monitor impacts; it is more that it is not their responsibility in the first place.

Lessons in monitoring

If there have been any major lessons learnt over the last twenty-odd years in monitoring, I would argue they are these:

  1. Often in development, and private sector or market development is typical in this respect, the most important things are unknown or unknowable. Given the complexity of systems and how they interact with each other (eg, market, household and government, including the donors, lest we forget), unexpected outcomes, good or bad, can matter almost as much as what programmes themselves are intended to do or achieve. No results chain, theory of change or logframe (yes, sometimes folk, like DFID, have all three), however ‘well’ developed for aid programmes, can predict human behaviour and decisions. You could argue that indicators, in the context of monitoring, matter less than the assumptions we all make about the poor and the markets we endeavour to facilitate.
  2. Repeating history and re-inventing wheels is happening, as evidenced by the cockroach policies being re-adopted by many donors in their current quest for results, including impacts. By cockroach policies, I mean those that were flushed away 25 years ago but that have returned: no discipline as to the real and practical differences between M&E (nb, their different objectives, primary users and responsibilities); how measuring results is a more involved and challenging task than delivering them; and how the naivety of those who accept payment to measure and report on impacts is surpassed only by those who believe the ‘reports’, especially before and/or at the end of typical implementation periods. Just as it is the networks that are the real drivers of systemic market changes, not the facilitator, it is the enterprises they support that create the jobs and increase incomes. It is their result, not the aid programme’s. Why do we still think it is sensible and possible to measure and report on this whilst we also claim how complex and uncertain a place it is we work in?
  3. It is analytically impossible to establish causal relationships between, for example, agriculture-related aid and agricultural production/productivity and incomes by the end of most aid programmes’ implementation periods. Furthermore, such attempts are often associated with appreciable opportunity costs: they fail to take into account, or compromise the need to take into account, what motivates beneficiaries and what they value, and thus compromise the principle of downward accountability.
  4. The monitoring of results imposed by donors regarding impact variables emasculates the capacity of programme management and donor agency management to manage delivery, aggravates that of in-country partners, is resource-intensive and is likely to produce disappointing results.
  5. Performance is almost universally made worse by the specifications, regulations and targets with which organizations and programmes have been obliged to comply. Targets, for example, tend to create unintended but predictable perverse consequences, leading to a diversion of resources away from the real work in order to ‘game’ the targets. Monitoring of conditions in itself does not lead to better decisions or performance – for example, collecting data on farmer yields does not tell you how to do a better job in facilitating systemic change. What management decision will this inform? The test of a good measure is: can it help in understanding and improving performance, ie, making better decisions on who the programme supports and how?

With these in mind, I wanted to raise two points: 1) on the adequacy cum advantage of results chains, whose intrinsic features you and others claim make them better suited to dealing with uncertainty, complexity and systemic change than logical frameworks; and 2) on whether ‘complying’ with the DCED Standard really will enhance prospects for managers to better stimulate, rather than just measure and report on, change – the purpose of monitoring.

The adequacy of Results Chains

I have read the arguments for having what are called Results Chains instead of, say, Logframes: they provide the necessary nuance and detail needed by managers; they are seldom linear; they allow stakeholders to critically think through and reflect on the intervention process; they clarify assumptions and produce a story that can be updated. I have also seen examples of what they look like (eg, the USAID training for vets in Cambodia, the Seed Programme in the DCED ‘Walk Through’ and the PrOpCom Tractor Leasing Project in Nigeria). I do not think the claims made about the supposed advantages of results chains over logframes hold for the reasons given. Read here an explanation from the mid-1990s of logframes and how they were put together by GTZ.

My view is that many of the disadvantages of logframes had little to do with any intrinsic weakness and more to do with a history of misuse. I know I am in the minority, yet I like the rigour of logframes: putting them together should aim, in the context of market systems development programmes, to:

  • make explicit the performance standards that programme managers are accountable for to their clients (market systems), as defined by those clients, in delivering support (in qualitative and quantitative terms) that reflects their needs and circumstances (ie, output indicators);
  • research, through dialogue with clients, the assumptions about their capacities to use and/or benefit from these products and services in their role of providing services to the poor (dubbed beneficiaries), most of which invariably relate to the state and influence of wider systems; and
  • observe, listen to and diagnose how, why and to what extent these service providers – and others – respond to facilitation by becoming more responsive and attuned to the needs of the poor, and how this will inevitably vary (ie, the systemic changes).

I would argue that logframes, theories of change and results chains have more in common than many would admit, insofar as they share two challenges: 1) they attempt to predict how others will respond to and benefit from aid interventions in institutional environments that are complex and subject to many ‘jinks’ and ‘sways’; and, in doing so, 2) they need to wrestle with establishing a consensus among many different stakeholders (the donor, programme management, the facilitators, the markets and the poor) as to what this story looks like.

The focus and adequacy of the DCED Standard

It is telling that neither the DCED standard nor the synthesis paper [Synthesis Paper of the Systemic M&E Initiative] differentiate between M and E. Moving on, and from a monitoring perspective, I have always been interested in finding out how managers make decisions on better stimulating systemic change, and what kinds of decisions they need to make. My guess is that not all of them necessarily know. However, listening to the views and perceptions of those they support (the system ‘players’), in order to better understand them and the systems they work in, must surely help improve the management job they do. If you’re trying to help someone get somewhere, you had best understand where that someone is coming from.

On the focus of the standards, I have one query. Is the main test or objective sought by monitoring and reporting on ‘results’ really about focussing effort on understanding systemic changes in market ‘behaviours and relationships’? According to the DCED standard it is not. This is where the standards appear to differ from the modest guidance on M (and E) produced by the Springfield Centre. I find this surprising, for two reasons: 1) some of the folk at the Springfield Centre, I understand, contributed to the standards; and 2) it appears that the standards are diluted and disappointingly come up with Universal Impact Indicators. That the guidance claims these will facilitate an adding up of impacts seems rather simplistic, and I question the validity and usefulness of trying to do so. But then perhaps this illustrates a difference between PSD and Market Development approaches? Indicators need to be specific to a given decision situation. Considerations of scale, aggregation, critical limits, thresholds and so on will also be situation-dependent, and situations are complex and constantly changing.
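To make the aggregation concern concrete, here is a stylised illustration (all figures invented; none are drawn from the Standard, its guidance or any real programme) of what a simple adding-up of headline impact indicators hides:

```python
# Illustrative only: made-up figures showing why adding up "universal impact
# indicators" across unlike programmes can mislead more than it informs.
programmes = [
    # name, outreach (enterprises reached), avg. income gain (USD/yr), context
    ("Programme A (Bangladesh)", 50_000, 30, "dense, commercial input markets"),
    ("Programme B (Timor-Leste)", 2_000, 150, "thin, remote markets"),
]

total_outreach = sum(p[1] for p in programmes)
total_income = sum(p[1] * p[2] for p in programmes)

print(f"Aggregate outreach: {total_outreach:,} enterprises")
print(f"Aggregate additional income: USD {total_income:,}")
# The headline "52,000 enterprises, USD 1.8m additional income" erases the very
# things a manager needs to know: context, attribution confidence, cost per
# result, and whether either change is systemic or likely to last.
```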

Clearly, however, someone thought having a set of Universal Impact Indicators more important and useful than, say, developing an equivalent list on systemic change. Reverting to the habits of typical results-based or even impact monitoring is, I believe, disappointing. It assumes managers need to make decisions based on the values of impact indicators and thus find information on beneficiary impacts useful. It also begs a broader question: is it the responsibility of those who manage PSD programmes to measure and report on such impacts? I thought it would have been to focus on, in the example of the seed programme in the ‘walk through’, the assessment of the seed input companies and the seed retailers: how they value the quality of the support offered by the programme and how they subsequently responded in working together to service and retain their farmer clients (and make money from doing so). For programme staff to then go ahead and measure the physical and financial yields based on some form of incremental gross margin analysis is particularly silly, sucks up enormous resources and is unnecessarily distracting. Such effort is associated with significant opportunity costs and typically divorces those responsible for monitoring from the main function of management. There is an ambiguity in the standards regarding who will benefit from having a system in place and from it passing some audit.
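For readers unfamiliar with the term, the arithmetic of an incremental gross margin analysis is itself trivial. A minimal sketch with invented figures (none taken from the seed programme or the ‘walk through’) shows that the burden lies not in the calculation but in collecting credible yield, price and input-cost data from enough farmers, with and without the programme’s support:

```python
# Minimal sketch of the kind of incremental gross margin calculation referred
# to above. All figures are hypothetical; the arithmetic is trivial -- the cost
# lies in collecting credible yield, price and input data from enough farmers,
# before and after, with and without the programme.
def gross_margin(yield_kg_per_ha, price_per_kg, input_costs_per_ha):
    """Revenue minus variable input costs, per hectare."""
    return yield_kg_per_ha * price_per_kg - input_costs_per_ha

baseline = gross_margin(yield_kg_per_ha=1_200, price_per_kg=0.25, input_costs_per_ha=80)
with_seed = gross_margin(yield_kg_per_ha=1_500, price_per_kg=0.25, input_costs_per_ha=110)

incremental_margin = with_seed - baseline   # attributed change per farmer, per hectare
print(f"Incremental gross margin: USD {incremental_margin:.2f} per ha")
# 1,200*0.25 - 80 = 220; 1,500*0.25 - 110 = 265; increment = 45 per ha.
# Multiplied across an estimated number of 'reached' farmers, this becomes the
# headline impact figure -- the step argued above to be outside programme staff's remit.
```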

On the adequacy of the standards, a few things struck me on reading the ‘walk through’ and the associated template for the auditor’s report.

  • Given that one purpose of the standards is to inform management decisions, these decisions need to be identified first; the design of indicators to reduce critical decision uncertainties can then follow.
  • Among the 8 elements of what makes for a good ‘system’, I could find no reference to the need for managers to listen to the programme’s clients and/or their clients, the ultimate beneficiaries. Surely any system that sells itself on serving the information needs of managers needs to acknowledge the importance of the perspectives of those they are paid to support. Is there any other way of assessing the quality and relevance of the support? Experience in development aid has shown that many indicators that people are currently using have no information value, while indicators that have high value are frequently not being measured. Surely the key immediate uncertainties facing programmes are the responses to the facilitation or the intervention. In this context, it would be useful to mention indicators that combine both a quantitative (how much) and a qualitative (how well) assessment of the programme’s performance and of the responses to this (the change process) among market systems.
  • The ‘control’ and compliance points listed across the template for the auditor’s report miss the most important test of all: does compliance with the 8 elements make for improved results? The basis for whether a system meets the requirements assumes that having a results chain, measuring indicators and reporting on them is sufficient. On this point the guidance is not entirely clear: in one breath it talks about the need to reach and convince an external and sceptical audience; in another it says the results measurement system should serve programme management. It seems to assume that both users are interested in, and find useful, the same information.
  • There is insufficient attention to and focus on assumptions throughout. The elements in the standards are driven solely by the need for measuring and attributing change. This is despite the useful ‘role’ monitoring can have in managing complexity through diagnosing assumptions the results chain makes explicit.
  • On the issue of attribution, the standard seems to carry much of the freight of impact evaluation: the difficulty of isolating indicators and dealing with confounding factors through establishing a valid counterfactual. Such requirements for comparative analysis sit uncomfortably with the brief of a management-focused system. This is more the domain of those responsible for evaluating impact, as opposed to those responsible for delivering systemic change with the help of a management-focused system. Their main interest in making comparisons is different from that of impact evaluation: it is about understanding how markets differ in their responses.

Daniel is a Project Director at HTSPE. He does evaluations, but his passion is in supporting others to set up balanced monitoring processes that enable managers to listen well to clients or beneficiaries to inform the decisions they need to make so as to improve performance – the object of monitoring. While no sector specialist, his work has focused on private sector development, agriculture, food and nutrition security and, to some extent, on infrastructure and governance.

He has worked exclusively for 23 years in Monitoring and Evaluation at local, sector, national, global and corporate levels. Fourteen of these years have been on long-term assignments, mainly in African countries (Zimbabwe, Malawi, Uganda, Tanzania, Lesotho, South Africa and Ghana). During this time he has supported a variety of organisations in developing and assessing their portfolios: from donors such as the World Bank, IFC, EC, AusAID, DFID, USAID and Danida to national governments (including the UK’s former Department of Trade and Industry) and local and international NGOs.

His long-term experience has informed practical approaches to developing learning and accountability mechanisms, and communications, for investors, funders and those they seek to support. Through these, he has played leading influencing and mentoring roles in developing strategies and in the design and implementation of impact and performance frameworks and methodologies. An important feature of his approach lies in helping others shape practical ways of interpreting and using the information they value to make decisions, in ways that do not aggravate their capacities for managing investments and delivering change. He works with and adapts various approaches such as activity-based costing, beneficiary assessments, client satisfaction surveys and balanced scorecards. He also likes to keep things simple, so he avoids using too many prefixes, such as process, participatory or results-based, to explain what makes for good monitoring processes.

7 thoughts on “Guest Post: Daniel Ticehurst with a critical reply on the DCED Standard”

  1. Murray

    Brilliant commentary from Daniel. The more we can grasp the difference between monitoring and evaluation, the better development will be able to achieve results. The key to it all is summed up in one sentence: “how measuring results is a more involved and challenging task than delivering them”. We very much have the cart before the horse.

  2. Jim Tomecko

    I read Aly’s piece and Daniel’s reply and, without going into long orations, I think that he is “shortchanging” the standard. I agree that development is complex and that it is hard to ‘sort out’ causalities, but whether you use logframes, results chains or theory of change methods, it is the job of the development practitioner to determine which applications are most likely to have the impact to which we are being asked to contribute.

    The standard is simply a tool to help us save time and energy. It provides us with a collection of best practices which can be applied in both large and small projects and saves us the task of reinventing the wheel in every new PSD project. Of course you can apply the tool with varying degrees of competence, and this is the logic behind having some form of external audit as a means to provide objective feedback on the quality of application.

    While I would agree that the standard may not be perfect and that some may misuse it, it is nevertheless the best thing out there for helping projects to measure their results in a cost effective and efficient way.

  3. Nabanita Sen

    Dear Daniel,

    Enjoyed reading your post.

    I think though that you might have misunderstood some critical points regarding the Standard and I would like to try and clarify those points, while making reference to the sections referring to the points I disagree with.

    1. ‘It is telling that neither the DCED standard nor the synthesis paper [Synthesis Paper of the Systemic M&E Initiative] differentiate between M and E…’: I disagree with this statement; in fact it was quite the contrary. The concept of making a Standard to list the minimum elements needed for credible results measurement was conceived in 2008 because, amongst other reasons, practitioners felt that they needed a channel to measure their results credibly and communicate them. It was felt that external consultants who are flown in for a limited time often struggled to fully understand what was going on and thus missed essential things to measure, given the complexity of programmes. Thus the idea was also that monitoring using some best practices, as listed in the Standard, would help in doing the groundwork to later facilitate an evaluation. It does not replace evaluation, which also sometimes asks broader questions. (Please refer to the 2011 Reader for results measurement: http://www.enterprisedevelopment.org/page/download?id=1734)

    2. ‘Given that one purpose of the standards is to inform management decisions, these decisions need to be identified first; the design of indicators to reduce critical decision uncertainties can then follow…’ I agree with this and thus would like to clarify that while ‘Managing the system for results measurement’ comes as the eighth element in the Standard, in practice all programmes working with the Standard use it as a managerial tool to design interventions and manage them, so management decisions are made while designing results chains, defining what’s important to measure and how to do so. In other words, when a programme is drafting the results chain, it is already thinking through the steps of change that need to occur in order to get the desired results. Thus it becomes a review tool through which the programme can track progress and use the findings to feed back into decision making to improve implementation and ensure desired results. For example, a programme might have some headline indicators, such as increased yield or income of beneficiaries, to report on in its logframe. However, the Standard asks for a programme to articulate and measure the steps of change that might happen in between, and this is where it helps inform management decisions: Are behavior changes taking place? Are people adopting new services introduced by the programme? Would these changes be sustainable?

    3. ‘…In this context, it would be useful to mention indicators that combine both a quantitative (how much) and a qualitative (how well) assessment of the programme’s performance and of the responses to this (the change process) among market systems.’ I think somehow you have the perception that the Standard doesn’t wholeheartedly encourage the gathering of qualitative data. In fact it is the opposite, which is why in Version VI of the Standard it is labeled as a Must to gather qualitative information, because indeed qualitative information is essential to explaining why change has taken place, the nature of change, the sustainability aspect, etc.

    I agree with you on how it might be dangerous to attempt simple aggregation of universal impact indicators (or other such common indicators) across programmes. It indeed poses the danger of being wrongly interpreted (volumes cannot be expected to be the same across Timor Leste and Bangladesh!). However, I believe one of the ideas behind coming up with these three indicators was the fact that the PSD community has increasingly been facing pressure for accountability from donors, tax-payers, parliaments and the media. So the Standard was designed to help managers set up a system that provides information for management and also helps them meet their accountability requirements by having some quantifiable headline indicators. I will share a quote from David Cameron, UK Prime Minister, 2011: “Without being hard-hearted, we will also be hard-headed, and make sure our aid money is directed at those things which are quantifiable and measureable … so we really know we are getting results… That quantifiable, measurable outcome shows people back in Britain the true value of our aid commitment…”

    Best regards,
    Nabanita Sen.

  4. Mary McVay

    Thanks, Marcus, for hosting and Aly, Daniel and the rest for your posts/comments. As Director for the Value Initiative, and with 25 years of M&E experience, I have struggled with the issues raised here, including:
    -Balancing monitoring and evaluation goals, workload, costs: balancing the drive to “get on with the job” with the need to know and to prove to outsiders that the job was worth the money taxpayers are “investing”
    -Which tools (results chains, logframes, etc) help achieve the goals.
    -The high demand among practitioners and donors for common methods, despite the diversity of programs and contexts.
    -The practical need for “good enough” information, while managing that internal voice that wonders: does this information mean anything, or are we pretending?
    -How to compare results and therefore learn objectively from different programs around the world.

    The standards do not resolve all of these issues, but the exercise – and the DCED system that the global network has produced under Aly’s leadership – is an enormous leap forward over what we were doing before. At international conferences highlighting best practices, I would wonder out loud: is this “best practice” or just the “best told story”? Given the work we do, we will never achieve the clarity of MFI performance measurement standards, but the DCED standards and the “system” of auditors go a very, very long way in the direction of practical, useful, accurate results assessment that meets the goals of multiple stakeholders. Many of the issues Daniel raises remain challenges that need to be addressed as the system evolves and is implemented. But the DCED standards are a great foundation. The development of the system is modeled on and effectively resembles similar processes in the microfinance and social enterprise fields: MFI performance standards (used on the Microfinance Information Exchange), poverty outreach tools, social performance standards, and social return on investment measurement tools.

    What would I change? To me the biggest gap in the standards is the measurement of poverty outreach. Poverty outreach tools, such as the Progress Out Of Poverty Index (PPI, Grameen Foundation), have become fairly simple and are increasingly part of the microfinance industry. They are somewhat effective in assessing the portion of people benefitting who are “poor” by established global standards, and are being applied in some situations to assess the extent to which people are moving out of poverty. As our industry is called “making markets work for the poor,” we are in danger of being unable to back up this claim without poverty measurement. The MFI industry has suffered very public critique because of a) not measuring and, then, b) once they started measuring, not actually reaching the “poor” or the “very poor.” Just as the DCED standards are not perfect, the PPI is not perfect, but it is potentially “good enough.”

    Glad to be having the conversation!

    Best wishes,

    Mary McVay
    Consultant, Enterprise Development Kiosk

  5. Daniel

    Dear Jim, Nabanita and Mary,
    Many thanks for your comments on my response to Aly’s blog. Let me respond to the points you raise. I did not mean to shortchange the standards. Of course, I agree with you that discretion should be given on whether it is a logframe, a results chain or a theory of change. The point I made is that the standards are clear on how results chains are a better bet than logframes.
    Both Mary and Jim claim that the standards are the best there are and represent a big leap forward compared to what was there before. I interpret this as a sign of just how fractured M&E is across different sectors and how short a memory many donors have. Much of the guidance on the standards (ie, the steps) offers little added value over that produced in 1995 by GTZ on logframes and in the 2004 World Bank publication “Ten Steps to a Results-Based Monitoring and Evaluation System” by Jody Zall Kusek and Ray C. Rist. Both are remarkably similar in purpose and nature to the guidance produced by the DCED and its 8 elements. I am not saying you inadvertently re-invented wheels; rather, it raised eyebrows for me regarding the efficiency of effort: donors have paid for similar guidance before.
    Nabanita disagrees with me on three points.
    The first of these is on clearly differentiating M&E. Sorry, but I’ve re-read the guidance and still hold by what I said earlier. There is no mention, thus no discipline, in the guidance as to how and on what basis Monitoring is different from Evaluation regarding their objectives, responsibilities and requirements for comparative analysis. It just wanders around generic references to monitoring. Your response? Monitoring throws up some information useful for evaluation. Is that it?
    The next one concerns how the standard generates information that is useful. I agree with you, and it is a great point that the steps cum processes leading up to yield increases are of more immediate use from a management perspective. However, the standards float around what I understand to be where they (through step 5) really could add value: the ‘impact’ of the programme in how and to what extent it has induced change in the system’s relationships, dynamics and structures. It is probably me misunderstanding, but I thought the strategy was about systemic change. Based on this, I got confused reading through steps 2, 3 and 4 and then reading about capturing wider changes in step 5. The centrality of these (system) changes should be woven into section 2 (such changes are a type of result – and here are some outcome indicators), section 3 (and here are some ways to measure these) and section 4 (and this is how you can overcome some of the factors that confound ways to estimate how the programme contributed to these system changes, given that the approach actively seeks ‘contamination’ through spill-over effects).
    The issue on quantitative and qualitative was a misunderstanding. I did try to place this in the context of learning about the quantitative and qualitative dimensions of the programme’s performance (not change per se) in supporting its clients (be they those who provide services to farmers and their families and/or farmers themselves). Given how step 8 is about a process or system that generates useful information for “day-to-day decision-making”, I was surprised to see no mention of how useful it would be to learn how programme clients view and value the support. You and others who penned the standards are making heroic assumptions. How is it possible to describe and explain steps 1-8 without mentioning information that comes from listening to clients (people who provide services and/or those who make and enforce regulations) or beneficiaries (farmers) about their views of the relevance of the support and how well it was provided?
    I thought Mary’s point on poverty outreach was excellent – more effort is needed in testing the overarching assumption that changes induced by facilitating markets, and those who make and enforce regulations, will necessarily result in markets that work for the poor. The M&E synthesis talks to this issue as one of its principles. Often in development the intended beneficiaries are not always those who benefit from changes in the systems that are ‘facilitated’ and encouraged to be responsive and attuned to their needs. We can and should test this assumption by, for example, enquiring how the systemic changes are associated with equivalent changes in providers’ client bases. Are the poor able to respond to markets some think to have become more inclusive? Or are service providers simply benefitting from such changes by providing better services to their existing client base? It is these ‘changes’ in and among market system processes that we should concern ourselves with. Measuring yields and resulting net changes in incomes among the poor is compelling. However, encouraging managers to do so flies in the face of lessons hard learnt some 30 years ago.
    In sum, my words and your responses to them capture the constraints cum tensions we are all under: the wrestle over how some M&E ‘experts’ ply their art in the face of the crude political demands politicians and civil servants place on programmes. These are most often based on assumptions they make, or want to believe, about what their constituents in Europe and North America want to hear. It is possible to combine the objectives of learning and accountability within monitoring processes. However, the DCED standards fall short, and deliberately so: they are driven by upward accountability (not downward accountability, which begins to broach learning) and fall prey to unfettered demands for evidence of poverty impacts from those who provide aid. This is made patently clear by Nabanita (in supplying the quote from the current UK Prime Minister). It is the privilege of consultants to challenge and contest such naivety. Failing to do so can inadvertently undermine, rather than strengthen, the case for aid, including enterprise development.

    1. Mary McVay

      Firstly, I want to say thanks to Daniel for engaging in this dialogue. As you can see, there are some core “defenders” of the standard on this forum, and I am one of them, so I appreciate you treading into these waters. Aly is probably best placed to explain how some decisions have been made in balancing “quality” of data/process against “efficiency” of data gathering and reporting, so as not to burden programs with too costly M&E. A few points from my experience working with Aly and 4 value chain development partners to implement the standards:

      1) Differentiating M&E … when working at the “systemic” level, in practice what happens in most programs is that there is a “pilot” stage of any intervention (a process improvement technology, a “package” of inputs/services/market linkages delivered to farmers via informal traders and lead farmers, etc.); value chain development programs need to judge the value of the pilot initiative, including which technology or which package creates the most impact/value for the target enterprise. With impact data in hand about pilot initiatives, the program then invests in stimulating widespread market “uptake” of the innovation. At that point, the standard recommends a focus on measuring scale and measuring impact only on a sample of clients. So, one reason the typical “Evaluation” function of measuring impact gets mixed up with “monitoring” in the standards is that, in practice, assessing short-term impact is critical to managing good market development. The standards could be clearer about the rationale – indeed the standards are around “results”, so an explanation of how this is different from typical M&E and how they link would be helpful. If anything, the standard errs on the side of providing information to help make decisions. Finally, in terms of “burdening” market development programs with “evaluation” … let’s remember that – unlike MFIs – market development programs’ main function is to create impact and they do not have a mandate – typically – of becoming financially viable.

      2) Leap forward: indeed, I have felt comfortable with the standards because they do build upon existing practice. The “leap” is multiple donors and practitioners agreeing on standards. Common standards have been critical to MFI and other industry advancement and sorely lacking in our “field.”

      3) The challenge of measuring systemic change is significant. The standards start with a focus on scale: let us at least COUNT the enterprises benefitting from BOTH direct and indirect (system-level) change in a standard way … i.e. not family members, etc. Let us account for employment in a standard way. It is a start … your points about qualitative analysis are important. What I can say is that in our work we complemented the quantitative data, which is more easily standardized, with qualitative data. I am in favor of the standards including more of this kind of analysis, although it is hard to “audit” practices in this realm. Suggestions would be welcome, for me at least!

      Thanks,

      Mary McVay
      Enterprise Development Kiosk

  6. Daniel

    Dear Mary,

    Thanks. The purpose of my engaging was that Marcus thought it would provoke a useful dialogue. In many ways I think this thread is great and I am learning a lot from reading your responses together with those of the others. I have recently read on MaFI an interesting thread on what responses Market Development Programmes can lever or catalyse among different stakeholders, including the poor. Let me try and ground my thoughts on this.

    Some of the basic questions that I find useful, which we developed at a meeting some time back and which monitoring can help managers answer, included:

    @ What are the key factors that determine clients’ decisions to use or not to use interventions?

    @ Are there any elements of the service delivery that contribute to the adoption and retention of the support on offer?

    @ Do service providers identify and respond to the different and changing needs among their client group?

    @ How do users and non-users of interventions perceive the choices available to them – are they aware that choices exist?

    @ To what extent does your ‘system’ provide space and discretion to listen and take action on how your clients value your service?

    The discussion was about how a donor programme can induce upfront investments into the supply and selling of validated R&D products (in this case apple tree saplings). Following an analysis of the value chain, the programme identifies where, among whom and how current behaviours (among growing and supplier enterprises and those responsible for overseeing regulations) and lack of knowledge (sellers’ awareness of the demand for saplings, growers’ awareness of their rights and market intelligence) constrain the participation of the poor in, and what they get out of, this market. And let’s say that the clients of those who provide the service of selling the saplings are grower co-ops.

    I would argue that those responsible for managing the programme would need to make decisions with information that answers the following broad inter-related questions:

    1) Of the members of the cooperatives and other players in the value chain identified as champions and/or blockers, to what extent do their opinions of the programme’s performance in supporting them vary – how relevant is the support and how well is it provided?

    2) How do those mentioned above vary in their responses to the presence of the programme – are new and/or different types of relationships emerging, are individuals or groups behaving in ways expected and not expected, is trust being developed, are people now listening to each other, for example?

    3) In what ways do these changes benefit them as individuals – what are the prospects for these processes to outlast facilitation, or are the incentives robust enough to be sustained?

    4) How many saplings are being sold, to how many buyers, who are they (client profile), where are they, and how does this vary over time and by outlet (assuming there is more than one)?

    5) How do the cooperatives who buy from the outlets rate their performance on three counts: quality of service, reliability and quality of product, and price?

    The above should generate a rich mix of quantitative and qualitative data. In my experience, ‘day-to-day decision making’ is rarely informed by numbers that simply describe change or rank performance.
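    By way of illustration only, here is a minimal sketch of how such a mix might be recorded against questions 1-5 above; every field name and value is hypothetical and not drawn from any actual programme or from the DCED guidance:

    ```python
    # Minimal, hypothetical sketch of recording the mix of data behind questions 1-5.
    from dataclasses import dataclass

    @dataclass
    class OutletRecord:
        outlet: str
        period: str
        saplings_sold: int          # Q4: how many, where, over time
        buyer_profile: str          # Q4: who the clients are
        coop_rating_service: int    # Q5: 1-5 rating of quality of service
        coop_rating_product: int    # Q5: 1-5 rating of reliability/quality of product
        coop_rating_price: int      # Q5: 1-5 rating of price
        response_notes: str         # Q1-3: qualitative notes on relationships, trust,
                                    #       incentives and prospects beyond facilitation

    records = [
        OutletRecord("Outlet North", "2013-Q2", 4_300, "mostly co-op members, few women",
                     4, 3, 2, "co-op now negotiating forward orders; trust improving"),
        OutletRecord("Outlet South", "2013-Q2", 1_100, "mainly existing large growers",
                     3, 4, 3, "little change in who is served; poorer growers still absent"),
    ]

    # The numbers describe scale; the notes carry the 'how well' and 'why' that
    # day-to-day decisions actually turn on.
    for r in records:
        print(r.outlet, r.saplings_sold, "sold;", r.response_notes)
    ```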

    I know the above is a bit crude, but hope it helps.

    Keep going?

