What better to do on an early Sunday morning than listen to some baroque music and catch up on some blogs? Here are a couple of things I found interesting from a complexity perspective.
Mehnaz Safavian writes on the World Bank's private sector development blog about a study conducted in Pakistan showing that the microfinance sector is not actually helping women entrepreneurs. Rather, the findings show that
most women borrowers are actually acting as loan conduits for the men in their family, that much of the sector is engaging in de facto discriminatory practices, and that women who are actually running businesses in Pakistan have little interest in using microfinance products, because the products offered are unsuitable for their business needs
Now how long did it take the Bank to come to this conclusion? Too long, in my view. I read this as a classic case of program implementation following a linear and overly simplistic logic, neatly packaged in a logframe, that has nothing to do with reality. Monitoring systems reported on the indicators in the logframe and could not make out the emerging pattern described above. Please, someone correct me if I'm wrong. For the same reason, I also disagree with Mehnaz's first conclusion in the post:
We also need to hold ourselves to a high standard when it comes to the results agenda – up until now, most of us have assumed that our work had important payoffs for women. We no longer have the luxury of assumptions on this, and need to be more consistent and rigorous in measuring the impacts of our investments.
It is not about measuring the same things more rigorously, but about measuring differently. Complexity teaches us that we cannot trust our logic models; we constantly need to look out for patterns emerging from our intervention, recognize them early, adapt our mental models, and react accordingly. Measuring the indicators tied to a static logic model will at best tell us that the model is flawed, but not what actually happened.
I do, however, partly agree with Mehnaz’s second conclusion.
Secondly, I believe we may really need to re-think our focus on replicability and scale. The more we try to standardize and scale up our approaches, the more we may be missing the target in terms of impact. I’ve personally always been a huge believer in the power of scale and sustainability. But, given what we’ve seen happen at the field level, not only in our work, but in the work supported by others, I’m no longer convinced scale and replicability are the way to really change the trajectory for women entrepreneurs. I’m getting more and more convinced that quality and customization may be critical factors for us to have an impact on women-run businesses – especially for businesswomen who are working in challenging environments, and where women are particularly disempowered.
Only partly, because I think Mehnaz does not go far enough. We not only need to refocus our interventions in every context; we need to design our projects so that logic models are treated as hypotheses to be tested against reality through safe-to-fail experiments, while staying attentive to how the system reacts, which patterns emerge, and how we can reinforce favorable patterns and dampen unfavorable ones.
Frustratingly, Mehnaz's final conclusion already builds a new logic model that might be enshrined in the next generation of program logframes, only for us to find out decades later that this, too, was not the solution.
Lastly, I’ve been struck lately by the lack of evidence to suggest that a single instrument, in the form of access to finance or business training, can be effective on its own. What struggling businesswomen may need is a more customized package of support, one that focuses less on a single-bullet solution and more on creating an ecosystem that allows businesses with growth potential to push the productivity frontier. For example, access to networks and mentoring are in high demand by women, and may be an important key to unlocking growth potential.
Instead of big interventions covering whole countries based on mental logic models, we need projects that can constantly learn and adapt to their complex environment. By starting with small safe-to-fail interventions and keeping our sensors open for emerging patterns, we can recognize early on whether a project actually helps women or makes their situation worse.
Talking about women as a specific target group of development initiatives brings me to another blog post I read. Shawn Cunningham has it spot on when he writes about our misguided focus on small enterprises in economic development: another of these detached-from-reality logics that might work in some settings but cannot be taken as a silver bullet. And to make the link back to gender: it is not only small enterprises, but specific target groups in general, that we have to be wary of. We need to be more aware that optimizing some function for a specific actor in the system, such as 'small enterprises' or 'women,' is not always optimal for the whole system, and vice versa. In development, we are often obsessed with optimizing returns for our target group, forgetting that this might not lead to sustainable improvements in the overall system.
On another World Bank blog, Duncan Green (who normally writes the 'From Poverty to Power' blog at Oxfam: fp2p) discusses the findings of the five-year Africa Power and Politics Programme (APPP). No nasty comments from my side here, just the observation that Duncan's conclusions do make sense from a complexity perspective.
This seems to be heading towards some kind of ‘participatory institutional appraisal’ approach, where development actors specialize in convening discussions of local players to get over these logjams in ways that reflect and adapt local traditions and values. This runs up against the way aid agencies currently work: high staff turnover, massive pressure to dole out funds in large amounts, demands to show ‘value for money’ via an increasingly demanding and imposed system of governance, monitoring, evaluation etc etc.