I had the honour and pleasure of speaking on a panel last week at the Royal Society of the Arts (RSA) in London. The event was titled “Refreshing Development: Making the Case for New Economic Thought in Global Development Policy”. It was part of a series of events marking 10 years since the economic crash of 2007, supported by the alumni network of the Hertie School of Governance in Berlin.
My presentation was about transformational economic change. In it, I tried to convey the basic ideas of how change happens in the economy, wrapped in the example of the “Growing Rubber Opportunities” (GRO) Project I have been supporting in Myanmar over the last four years. Continue reading
I blogged before about the systemic change work I did last year. Recently, I have been reading up a bit on resilience and resilience thinking, and I was struck by the similarity between the thinking in that field and what we have come up with as a way to see systemic change in market systems. Continue reading
Over the course of 2016, Shawn and I worked on a piece of research on systemic change in market systems development, funded by the BEAM Exchange. In this work, we question the utility of the concept of systemic change as it is currently used in market systems development (a question that is equally valid in the wider field of economic development) and suggest a rethink. To do so, we went back in search of a fundamental understanding of economic change. This is what we found.
Following up on my last blog post on a new framework for systemic change, I would like to present here the main methodology we used to measure whether there have been transformations in the attitudes of farmers. The approach we used was Cognitive Edge’s SenseMaker®, which allowed us to scan deeply for changes in attitudes and beliefs, beyond mere observation of changed behaviours. Continue reading
Over the last year or so, I was hired by a large market systems development programme in Bangladesh to develop a new framework for assessing systemic change. We did an initial feasibility study and then a larger pilot study. The report of the pilot study has now been published. Rather than bore you with the whole report, I would like to share the conceptual thinking behind the framework, and the framework itself, in this post. In a later post, I will share the methodology. This is neither the end of all wisdom nor the silver-bullet framework everybody has been looking for. For me, it is an important step in bringing my work and thinking of the last couple of years together into something practically applicable. But the work is not done, as I am embarking on a longer research project on systemic change. So there is more learning to come, and with it more development of this tool. Please share your thoughts; they would help me to further improve the framework. Continue reading
After Daniel Ticehurst submitted a long comment in reply to Aly Miehlbradt’s post, I was able to persuade him to write another guest post instead. Daniel’s perspective on the DCED Standard nicely contrasts with the one put forward by Aly, and I invite you all to contribute your own experiences to the discussion. This was not originally planned as a debate with multiple guest posts, but we all adapt to changing circumstances, right?
Dear Marcus and Aly, many thanks for the interesting blog posts on monitoring and results measurement, the DCED Standard, and what it says in relation to the recent Synthesis Paper on monitoring and measuring changes in market systems.
This is a guest post by Aly Miehlbradt. Aly is sharing her thoughts and experiences on monitoring and results measurement in market systems development projects. She highlights the Donor Committee for Enterprise Development (DCED) Standard for Results Measurement and its inception as a bottom-up process and draws parallels between the Standard, her own experiences, and the recently published Synthesis Paper of the Systemic M&E Initiative.
In one of Marcus’s recent blog posts, he cites the SEEP Value Initiative paper, “Monitoring and Results Measurement in Value Chain Development: 10 Lessons from Experience” (download the paper here), as a good example of a bottom-up perspective that focuses on making results measurement more meaningful for programme managers and staff. Indeed, the SEEP Value Initiative was a great learning experience, and it is just one example of significant and ongoing work among practitioners and donors aimed at improving monitoring and results measurement (MRM) to make it more useful and meaningful. The DCED Results Measurement Standard draws on and embodies much of this work, and also promotes it. In fact, the lessons in MRM that emerged from the SEEP Value Initiative came from applying the principles in the DCED Results Measurement Standard.