Tag Archives: complexity

What ants can teach us about the market

I recently stumbled upon a blog called Complexity Finance by a company called Rational Investment. A series of three posts I liked is called ‘What ants can teach us about the market’. In part one, the author describes a phenomenon in which ants, given two identical and steadily replenished food sources, do not split 50/50 between the two, but rather 80/20:

Alan Kirman found some interesting behavior in the foraging activities of ants. He starts his account by citing the results of an experiment by Deneubourg et al. (1987a) and Pasteels et al. (1987) where two identical food sources were offered to ants. They were replenished so that they remained identical. Ants, after a period of time, were found not to be split 50/50 as common sense would conclude, but rather 80/20. Kirman further noted that this 80/20 split would often reverse inexplicably.[1] This phenomenon is mirrored in studies by Becker where only one of two similar restaurants on opposite sides of the street tends to attract long lines of customers.

Apparently, this behavior is also mirrored by investors in a market.
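To make the mechanism a bit more tangible, here is a minimal sketch of a Kirman-style recruitment model: each ant either switches food sources spontaneously (rarely) or is recruited by an ant it happens to meet. The parameter values are my own illustrative assumptions, not taken from the cited experiments.

```python
import random

def kirman_ants(n_ants=100, steps=20000, eps=0.002, delta=0.7, seed=1):
    """Sketch of a Kirman-style recruitment model (illustrative parameters).

    k ants exploit source A, the rest source B. At each step one randomly
    picked ant either switches spontaneously (probability eps) or meets a
    random other ant and, if that ant uses the other source, is recruited
    with probability delta. Returns the share at source A over time.
    """
    random.seed(seed)
    k = n_ants // 2                        # start with a 50/50 split
    shares = []
    for _ in range(steps):
        ant_at_a = random.random() < k / n_ants
        if random.random() < eps:          # spontaneous switch
            k += -1 if ant_at_a else 1
        else:                              # recruitment by a randomly met ant
            other_at_a = random.random() < k / n_ants
            if ant_at_a != other_at_a and random.random() < delta:
                k += -1 if ant_at_a else 1
        k = max(0, min(n_ants, k))
        shares.append(k / n_ants)
    return shares

shares = kirman_ants()
print(f"share at source A: min {min(shares):.2f}, max {max(shares):.2f}")
```

With a small spontaneous-switching rate, the simulated colony spends long stretches in lopsided splits such as 80/20 and occasionally flips to the opposite extreme, much like the reversals Kirman noted.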

In part two of the series, the author introduces Melanie Mitchell’s book on complexity and especially what she writes on ants. I introduced the book in an earlier post.

In part three, another interesting concept is introduced: herding. Herding was identified as a common behavior in markets, responsible for creating trends.

Described as “History’s Hidden Engine”, socionomics posits that large trends in society and the market are driven by social mood. If society at large is feeling positive, constructive behavior ensues, e.g. cooperation between governments, a rising stock market, an expanding economy, box-shaped cars, and brighter fashion tones. A negative mood will lead society to war, cause the stock market to decline, bring on a recession or depression, and favor rounder-shaped cars and darker fashion tones.

Socionomics is counter-intuitive in that most people believe events cause social mood: the stock market goes up and investors feel happy. Socionomics holds that a society that feels happy, for whatever prior cause, will as a result buy stocks. It is the mood that causes the event. This mood is generated and reinforced through the herding mechanism.

Herding behavior is simply acting the way others do. It is a type of sampling heuristic and, like cognitive biases, is triggered in times of uncertainty. When uncertain about what to do, most will default to following the actions of others. The socionomic model of herding describes it as “a model of unconscious, prerational herding behavior that posits endogenous dynamics that have evolved in homogenous groups of humans in contexts of uncertainty, while eschewing the traditional economic assumptions of equilibrium and utility-maximization.”

I wonder how this herding behavior could be used in the work of developing markets for the poor in developing countries. I do recognize one type of herding in these contexts that I often don’t see as particularly helpful, though it is a very understandable behavior: all people in a region, market, village, etc. do the same thing, regardless of whether it is particularly beneficial or profitable. In general, diversification would not only lead to higher profits by tapping new markets, but also to a higher degree of resilience by not depending on only one product. A negative instance of herding?

Maybe the increasing interest of companies (and investors?) in social business can be seen as a positive type of herding that needs to be better exploited.

Other ideas?

How to plan in uncertainty?

I haven’t been writing here very actively recently. I think I got trapped in the question ‘what could I write that is meaningful and not already out there?’. Well, anyway, just a short post today with some thoughts that I have carried around for a while. I just came across a post on the Aid on the Edge blog that talks about South Africa and the uncertainty of its future:

We human beings do not like uncertainty. We seek to understand what events portend, taking comfort in coming up with an answer. (…) Yet sometimes there is more wisdom, and more comfort to be taken, in acknowledging a more humbling truth – that which of many alternative futures (including ones we cannot imagine) will come to pass is unknowable, is a product of decisions and actions that have not yet been made. This understanding of change as something ‘emergent’, evolving, which can unfold in far-reaching yet ex ante unpredictable directions, is the key insight of ‘complexity theory’ – an insight which can offer a useful dose of humility to governance prognosticators.

The question that comes to my mind when reading this is how to handle the tension between the uncertainty of the future and the deeply institutionalized need for planning in development institutions.

I have worked with a systems dynamics approach combining causal loop diagrams with a method called sensitivity analysis. It helps us determine the relative importance of impact factors in a system and characterize them as active, critical, passive, or buffering. Together, these two tools allow us to select impact factors that could be targeted by development agencies in future projects.
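For readers who wonder what such a classification looks like in practice, here is a minimal sketch of the underlying logic: each factor gets an ‘active sum’ (how strongly it influences the others) and a ‘passive sum’ (how strongly it is influenced by them), and the combination of the two yields the four categories. The factor names, influence scores, and median thresholds below are purely illustrative assumptions, not results from an actual analysis.

```python
factors = ["herd size", "pasture quality", "household income", "market access"]

# impact[i][j] = how strongly factor i influences factor j (0 = none, 3 = strong)
impact = [
    [0, 3, 2, 1],   # herd size
    [1, 0, 3, 0],   # pasture quality
    [0, 0, 0, 1],   # household income
    [0, 0, 3, 0],   # market access
]

def classify(impact, factors):
    n = len(factors)
    active_sum = [sum(impact[i]) for i in range(n)]                          # row sums
    passive_sum = [sum(impact[i][j] for i in range(n)) for j in range(n)]    # column sums
    a_cut = sorted(active_sum)[n // 2]      # simple median-style thresholds
    p_cut = sorted(passive_sum)[n // 2]
    roles = {}
    for i, name in enumerate(factors):
        high_a, high_p = active_sum[i] >= a_cut, passive_sum[i] >= p_cut
        if high_a and high_p:
            roles[name] = "critical"        # strong lever, but risky to push
        elif high_a:
            roles[name] = "active"          # promising entry point for interventions
        elif high_p:
            roles[name] = "passive"         # good indicator of change, weak lever
        else:
            roles[name] = "buffering"       # absorbs change, low priority
    return roles

for name, role in classify(impact, factors).items():
    print(f"{name}: {role}")
```

The point is not the numbers but the ranking: it gives a transparent, discussable basis for choosing which factors to target.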

Now, what is the value of causal loop diagrams? Some people say that they are no more than an improved version of linear causal chains, still not able to reflect ‘real’ complexity, i.e., the unpredictability of complex systems. Loop diagrams still work with cause-and-effect relationships between pairs of factors, which are then connected with further factors to eventually form loops. Yet complexity science tells us that in complex systems cause and effect are hard to determine, so why bother?

I think that the causal loop analysis and the sensitivity analysis allow us to evaluate the factors that are most relevant and focus on them. They further illustrate some of the more prominent feedback mechanisms of the system that could amplify or hamper our interventions, or that we could even use to change some of the dynamics of the system in our favor. They also cater to the need for planning, or at least establish a rational basis for planning.

But yeah, we have to avoid falling back into the ‘we can predict the future’ trap, trying to build a perfect model of a system (just remember, models of complex systems have to be as complex as the real thing to accurately simulate it). Complex systems remain inherently unpredictable and our actions need to be tuned to the reactions of the system to any intervention. The above-mentioned tools help us to make sense of the dynamics of a system and to select the more promising interventions. They do not, however, release us from the need for an experimental (or may I call it evolutionary) approach to solving real problems in real systems.

I would appreciate any thoughts on that in the comments!

Eric Berlow: How Complexity Leads to Simplicity

One of my favorite TED talks on complexity is the one by Eric Berlow, who explains in three minutes ‘how complexity leads to simplicity’. Of course, he leaves out many steps, but I think he gets two important points across to the audience:

1. We need new tools, such as causal loop diagrams, to explain interactions and make sense of emergence in complex systems.

2. Solutions to complex problems can actually be surprisingly simple, but they must be based on an understanding of the behavior of complex systems and not on simple and obvious cause-effect relationships.

We don’t yet have these tools available in a ready-made form for our work in development. That’s certainly a field we need to work in and that’s where I want to put a lot of my attention and time in the future.

Here is the link to Eric Berlow’s TED talk.

Melanie Mitchell: Complexity – A Guided Tour

The latest book I finished reading on complexity is Melanie Mitchell’s ‘Complexity – A Guided Tour’. The book goes through the very basics of what is colloquially known as complexity science, a mix of scientific disciplines in search of a common theory that applies to all complex systems, from human genomes to artificial intelligence and from the evolution of species to the economy.

Mitchell starts off her journey by mentioning a number of complex systems, e.g. ant colonies, the brain and the immune system, economies, and the World Wide Web, directly putting forward the questions ‘Are there common properties in complex systems?’ and ‘How can complexity be measured?’

The first question she answers directly with three very generic properties that are inherent to all complex systems: complex collective behavior, signaling and information processing, and adaptation. For the second question she proposes a couple of measures, but when concluding the book she makes it clear that there is no commonly agreed-upon measure of complexity.

In part one of the book, Mitchell comprehensively describes the background and history of complexity, including the fields of information, computation (herself being a computer scientist), evolution, and genetics. In part two she focuses on life and evolution in computers to further deepen the topic of computation in part three. Part four explores the realms of network thinking, leading to a more ‘complex’ view on evolution, before concluding the book in part five.

From this very interesting foundation of ‘complexity science’, drawn from physics, mathematics, computer science, biology, etc., and for which I had to dig deep into my knowledge from university, I distilled some takeaways from the book that I think are particularly relevant for my work:

– One of the basic properties of complex systems is that they are extremely dependent on their initial conditions. Even with a very ‘simple’, completely deterministic complex system (e.g. the logistic map), we are not able to predict its behavior without knowing the exact initial parameters (‘exact’ meaning that even changes in the tenth or later decimal place of a parameter can have a significant impact). Now, the systems we work in in development are much more complex than the logistic map: they are hardly deterministic from the point of view we look at them from (since we work with humans, it is impossible to model their decisions), and, secondly, we are never able to gather all the data necessary to determine the initial conditions for a model to run. This insight strengthens my belief that we should concentrate our use of sense-making tools on qualitative ones, since quantitative modeling can hardly predict the behavior of a system and, hence, the outcome of an intervention.
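To illustrate the point about initial conditions, here is a tiny experiment with the logistic map in its chaotic regime: two runs whose starting values differ only in the tenth decimal place quickly end up in completely different places.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) in the chaotic regime (r = 4).
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)   # differs only in the tenth decimal place

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (difference {abs(a[n] - b[n]):.2e})")
```

After a few dozen iterations the two completely deterministic trajectories have diverged entirely, so any long-term prediction would require knowing the initial condition with a precision we never have in practice.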

– Knowing how information flows through a system is crucial to determining how it works and to being able to influence it, the reason being that these processes are also energy intensive, i.e. they follow the laws of thermodynamics. I honestly never gave that a thought before reading Mitchell’s chapter on information entropy, or the so-called ‘Shannon entropy’ (named after Claude Shannon, whose work stood at the beginning of what is now called information theory). The takeaway for me here is to focus our analysis more on information flows and how a system manages these flows, not only on flows of goods and money. To give a relatively simple example: in order to understand how an ant colony works, and specifically how an ant colony takes decisions, we need to know how information is collected, communicated, and processed.
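For the curious: Shannon entropy itself is a very small formula, the average information, in bits, carried by a message from a source. A quick sketch with made-up message probabilities shows why a predictable source carries little information.

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits per message."""
    return -sum(p * log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # four equally likely signals
skewed  = [0.90, 0.05, 0.03, 0.02]   # one signal dominates

print(f"uniform source: {shannon_entropy(uniform):.2f} bits per message")
print(f"skewed source:  {shannon_entropy(skewed):.2f} bits per message")
```

Two bits versus roughly 0.6 bits: the skewed source is far more predictable, so each message it sends tells us less.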

– Building on the question of how systems compute information, Mitchell describes how decisions taken by agents in complex systems are mostly based on feedback from the agent’s direct environment, on samples, and on statistical probabilities. To go back to the example of ants: every individual ant makes decisions based on the frequency of feedback from the ants it meets or on the intensity of pheromones on a particular trail towards a possible food source. The same holds for the systems we work in in development: actors take decisions mainly based on information from their direct environment. Hence, if we analyze causal loops in a system, we should focus on the feedback that comes from the direct environment of our target group.
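As a toy illustration of such a local, probabilistic decision rule: an ant picks a trail with probability proportional to the pheromone it senses there, without ever comparing the food sources globally. The pheromone levels below are arbitrary assumptions.

```python
import random

pheromone = {"trail to source A": 8.0, "trail to source B": 2.0}   # arbitrary levels

def choose_trail(pheromone):
    """Pick a trail with probability proportional to its pheromone level."""
    trails = list(pheromone)
    weights = [pheromone[t] for t in trails]
    return random.choices(trails, weights=weights, k=1)[0]

picks = [choose_trail(pheromone) for _ in range(10_000)]
share_a = picks.count("trail to source A") / len(picks)
print(f"share of ants choosing trail A: {share_a:.2f}")   # roughly 0.80
```

Each choice then reinforces the chosen trail, which is how a small initial difference in local feedback can lock in a lopsided split.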

– At one point in the book, Mitchell talks about models for simulating reality. Specifically, she mentions so-called ‘idea models’ as being “relatively simple models meant to gain insight into a general concept without the necessity of making detailed predictions (…)”. The exploration of such idea models has been a major thrust of complex systems research. Mitchell describes idea models as ‘intuition pumps’: thought experiments to prime our intuitions about complex phenomena. Although Mitchell’s idea models are rather general concepts such as the prisoner’s dilemma, I think that qualitative causal loop models of the specific systems we work in can also be seen as idea models and used as intuition pumps. Working in complex systems such as markets in developing countries, we also have to prime our intuition about how these systems work in order to understand them and be able to work with them to bring about change. This brings me back to my point on focusing on qualitative, sense-making models. We are hardly able to gather enough data to run satisfactory simulations of market systems, so we have to work more with ‘idea models’ of the systems and base our decisions on intuition and experience.

– Finally, Mitchell confirms in the conclusion of the book that so-called ‘complexity science’ is not one coherent science, as the term would suggest. Many different disciplines are working with complex systems, and thanks to places like the Santa Fe Institute the different scientists also work together and exchange their insights. Nevertheless, there is not yet one coherent vocabulary for this field, nor are there any general theories that can be applied in all fields. Furthermore, there is still criticism of the field, mainly stating that nothing significant has come out of it so far. To quote Mitchell on that: “As you can glean from the wide variety of topics I have covered in this book, what we might call modern complex systems science is, like its forebears [Mitchell mentions ‘cybernetics’ and the so called ‘General Systems Theory’], still not a unified whole but rather a collection of disparate parts with some overlapping concepts. What currently unifies different efforts under this rubric are common questions, methods, and the desire to make rigorous mathematical and experimental contributions that go beyond the less rigorous analogies characteristic of these earlier fields.”

The same is also true for people who work for a better use of the insights of this fragmented ‘complexity theory’ in development projects. We lack the necessary vocabulary, and not only that – we also lack a general understanding of how to better embrace complexity in what we do and how to avoid falling back into a mode of coming up with ‘engineering solutions’ based on simple cause-and-effect models. There is now a group of people who want to take on this challenge and do the work necessary to develop a common vocabulary and toolkits to better harvest the insights of the ‘complexity school’. Let’s keep the train moving!

I enjoyed reading Mitchell’s book very much. It is well written and gives a solid background on the scientific concept of complexity. I think, though, that you need to be a person who enjoys science, especially the natural and computer sciences, to really enjoy the book. Mitchell writes about the logistic map, cellular automata, Gödel’s theorem, the Turing machine, fractals, etc., etc. If you are interested in complexity and have the nerve to go through theoretical scientific concepts like a self-replicating computer program or genetic algorithms, then you really should read the book.

PS on a humorous note: One part that really caught my attention was when Mitchell wrote about research on computation in natural systems and the work of Stephen Wolfram. He has done research on cellular automata and how they can compute information (cellular automata are simple lines or grids of cells that change their state [usually on or off] following very simple rules based on information from their neighboring cells). Wolfram’s thesis, put forward in his 2002 book ‘A New Kind of Science’ and rendered here in very simple words as I, a layman, understood it, is that since cellular automata can do universal computation (the term ‘universal computation’ refers to any method or process capable of computing anything that can be computed), presumably most natural systems are able to do universal computation, too. Where am I going with that? Well, the notion that presumably many natural systems can do universal computation really got me thinking about what Douglas Adams wrote in his book ‘The Hitch Hiker’s Guide to the Galaxy’ about the earth being a computer designed to find the question to which the answer was 42. We really should start asking questions to those white mice …
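For anyone who has never seen a cellular automaton run, here is a minimal sketch of an elementary one: a row of on/off cells, each updated from its own state and its two neighbors’ states via a fixed rule table. Rule 110, used below, is the elementary rule that was proven capable of universal computation (by Matthew Cook, in the context of Wolfram’s work).

```python
RULE = 110            # the update table, encoded in the bits of the number 110
WIDTH, STEPS = 64, 32

def step(cells, rule=RULE):
    """Apply the rule once to every cell (wrapping around at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # a value from 0 to 7
        new.append((rule >> neighbourhood) & 1)               # look up the rule bit
    return new

cells = [0] * WIDTH
cells[WIDTH // 2] = 1           # start from a single 'on' cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Three lines of update logic, and the printed pattern already shows the intricate, hard-to-predict structures that make these systems so interesting.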

Presentation on System Dynamics at the SEEP Annual Conference 2011

I was very lucky to be invited to present my work using the Systems Dynamics Analysis methodology at the SEEP Network‘s annual conference last week in Arlington, VA.

I presented the methodology based on the work I did in Mongolia on the problem of pastoralism and pastureland degradation. Based on my presentation and my script, I prepared a special version of the Prezi I used, with more text, so that it can also be understood without me presenting. This extended presentation can be accessed here.

The main goal of the presentation was to show the participants a concrete and practical tool that improves our way of looking at systems and their dynamics. In particular, I presented the loop analysis as an alternative to the widely used tools based on linear causal chains.

My presentation of the Systems Dynamics Analysis was well framed by Lucho Osorio from Practical Action, who set the scene by introducing the concept of working in complex realities and showing how Practical Action uses participatory market mapping to better reflect reality, and by Tjip Walker from USAID, who gave an insider’s perspective on how USAID is approaching the issue (he was actually also involved in organizing the event on complexity within USAID that I wrote about here). This short article gives an idea of the whole session.

The feedback I and my fellow panelists received was very encouraging. Many practitioners approached us with a clear message: the concepts of complexity and the system dynamics analysis are seen as having great potential to better reflect the realities in the field and to improve not only our ability to plan better and more effective interventions, but also our ability to show and report the more intangible changes at the system level.

This positive feedback gives us enormous motivation to go ahead with our work on how to better embrace complexity in our work in international development and beyond. I am keen to report in this blog how this work is progressing.

What is complexity? II

One concept I like when I’m thinking about complexity is the Cynefin framework developed by Dave Snowden (see the picture on the right). I already mentioned the framework in one of my answers to the comments on the last post, ‘What is complexity?’.

The beauty of the framework is that it helps you to categorize problems into simple, complicated, complex, and chaotic. Furthermore, it gives you a strategy for each of these domains for how to approach a solution. For complicated problems, for example, the strategy would be ‘sense – analyze – respond’, meaning that first you have to sense the problem, then analyze the system (or call in experts who know the system), and respond based on the analysis.

I do think that it makes sense to differentiate between the four domains. The problem really is that in the past we treated many problems that are actually complex as merely complicated or even simple problems – also in international development. In order to categorize these problems as actually being complex, we need this sort of framework and guidance on how to approach them.

I realize that I use the word categories here. Now, if you listen to the video on YouTube where Dave Snowden introduces the Cynefin framework, he makes it quite clear that this is not a categorization model but a sense-making model. A categorization model, in his explanation, is a model where the framework precedes the data. That means the data can be filled quickly into the existing model – with the risk of losing out on the subtleties. A sense-making model, on the other hand, is one where the data precede the framework. Here, “the pattern of the framework emerges from the data in a social process”, as Dave Snowden puts it.

But I think it is easiest if I let Dave Snowden introduce the framework himself. Have a look at the YouTube video here.

For more information, there is also a Wikipedia page on the Cynefin Framework.

USAID event on complexity

I had the privilege to participate in part of an event organized by USAID on embracing complexity and what this means for the agency. I participated by webinar, which unfortunately only covered the first half of the day. However, Ben Ramalingam, one of the speakers at the event, posted a summary of the day on his blog. I highly recommend reading his post here.

What is complexity?

At the moment, I am reading and thinking a lot about complexity and how it could be applied to development and enrich the Systems Dynamics Analysis I am using in my work. Today, I read an article by David J. Snowden and Mary E. Boone titled “A Leader’s Framework for Decision Making”, published in the Harvard Business Review back in November 2007. Snowden and Boone added a box to their article in which they describe the main characteristics of complex systems. I found this to be a very comprehensive and yet understandable description, and that’s why I want to share it here.

Here you go:

  • It [a complex system] involves large numbers of interacting elements.
  • The interactions are nonlinear, and minor changes can produce disproportionately major consequences.
  • The system is dynamic, the whole is greater than the sum of its parts, and solutions can’t be imposed; rather, they arise from the circumstances. This is frequently referred to as emergence.
  • The system has a history, and the past is integrated with the present; the elements evolve with one another and with the environment; and evolution is irreversible.
  • Though a complex system may, in retrospect, appear to be ordered and predictable, hindsight does not lead to foresight because the external conditions and systems constantly change.
  • Unlike in ordered systems (where the system constrains the agents), or chaotic systems (where there are no constraints), in a complex system the agents and the system constrain one another, especially over time. This means that we cannot forecast or predict what will happen.

Moreover, Snowden and Boone differentiate between two types of complex systems. In the first type, the individual actors or ‘agents’ in the system strictly follow predefined, simple rules, such as birds flying in a flock or ants in an ant colony. In the second type, however, the individual agents are not animals but humans and, hence, follow their own reasoning according to the relevant context and situation.

Consider the following ways in which humans are distinct from other animals:

  • They have multiple identities and can fluidly switch between them without conscious thought. (For example, a person can be a respected member of the community as well as a terrorist.)
  • They make decisions based on past patterns of success and failure, rather than on logical, definable rules.
  • They can, in certain circumstances, purposefully change the systems in which they operate to equilibrium states (think of a Six Sigma project) in order to create predictable outcomes.

So where does this lead us in our everyday work? Snowden and Boone also offer a number of tools to manage complex situations, out of which I want to pick two that I find relevant for the work in development projects:

  • Open up the discussion. Complex contexts require more interactive communication than any of the other domains. Large group methods (LGMs), for instance, are efficient approaches to initiating democratic, interactive, multidirectional discussion sessions. Here, people generate innovative ideas that help leaders with development and execution of complex decisions and strategies. (…)
  • Stimulate attractors. Attractors are phenomena that arise when small stimuli and probes (whether from leaders or others) resonate with people. As attractors gain momentum, they provide structure and coherence. (…)

The first point clearly shows that participation is still a very important part of every development project that really wants to make a difference. In the end, we have to be aware that it is not us who change the system; we are merely working to enable the system to move itself towards a more favorable state (who defines whether this state is more favorable remains another point to discuss, and it strongly influences whether the system actually moves in that direction).

The second point is a reminder that we always have to look for things that work, or start small pilots, see whether they work, and amplify them. This is essentially the recognition that change to a system happens from within the system.

I will continue blogging about complexity; many things are going on in that field. So stay tuned.

Blog-Posts I liked

Here are some blog posts I’ve read recently and liked. You’ll find the links to the blogs also in my blogroll on the right.

Back to output-only reporting? Duncan Green is writing on results measurement: Can we demonstrate effectiveness without bankrupting our NGO and/or becoming a randomista?

A post also related to measuring results of development interventions by Ben Ramalingam, which dates back a bit longer: Results 2.0: Towards a portfolio-based approach

And here a controversial post by Owen Barder where he argues that it is not measuring the results that is the real problem, but the overambitious goals that we are setting for our aid initiatives, i.e., that our aid money should lead to long-term economic growth: MEASURING AID EFFECTIVENESS EFFECTIVELY: BEING CLEAR ABOUT OBJECTIVES

On another topic: Shawn Cunningham has posted a whole series on innovation systems that is definitely worth reading for anyone working in private sector and local economic development.

Always good for a laugh: xkcd on file transfers

And last but not least an older post by Duncan Green on using games for learning and improved decision-making in complex systems using evolutionary principles: Playing games with the climate – a great way to explore difficult choices in complex systems

Spotting ’emerging patterns’ to report on changes

In a training on evaluating projects I attended a while ago, a representative of the Swiss charity HEKS presented their results measurement (RM) system. The presentation immediately caught my attention and interest since HEKS is using principles of complexity theory as a basis for their RM framework. Based on this rather experimental framework, the organization published a first ‘effectiveness report’ in March 2011. I want to present some of the interesting features of the RM system, based on that effectiveness report.

When building their RM framework, HEKS acknowledged that development takes place in complex and dynamic systems, with the consequence that the behavior of such systems is largely unpredictable and, thus, the effects of interventions are also hard to predict.

This challenging perspective implies a different understanding of cause and effect. Connected to their environment, living systems do not react to a single chain of command, but to a web of influences.

As a consequence, HEKS does not base its projects on rigid impact logics and impact chains, but is conscious that

HEKS cannot always objectively trace the effects of its actions, but can make its intentions, input and observations transparent.

HEKS’ particular approach therefore focuses on the changes observed and experienced by the different stakeholders involved at the various levels of its projects.

The focus is more on the significance of such changes for the people who experience them than on their quantification. HEKS herewith takes a path different from strict measurement and hard data collection. Its aim is to grasp and understand the changes in the purpose, identity and dynamics that hold and drive the systems it gets involved in – rather than to measure their ever-changing dimensions.

Subsequently, HEKS’ method is to adopt a bird’s-eye view, look for ‘emerging patterns’ and try to interpret them. Qualitative data is collected on three levels, i.e., the individual, project, and programme levels, through methods like ‘Most Significant Changes’, monthly newsletters and annual reports focusing on the observations of staff at different levels, as well as a two-day workshop for compilation.

Nevertheless, HEKS defined 10 key indicators that are collected for all countries it is active in. These indicators include, for example, the number of beneficiaries, income increase, and yield increase.

For me, this is a very interesting approach and it resonates very well with the discussion on ‘experiential knowledge and staff observation’ of the GROOVE network that I mentioned in my last post. The staff observations also have as an implicit goal to grasp emerging patterns of positive change in the system the project tries to influence, in order to amplify this change.

Owen Barder, on whose presentation on evolution and development I wrote in my last post, is asking for more rigorous evaluation of project impacts in order to be able to see what works and what doesn’t. Is the RM framework proposed by HEKS rigorous enough to comply with Owen’s demand? After all, HEKS’ approach is not using result chains at all, although they are one of the mainstays of results measurement – at least according to the DCED Standard on Results Measurement. Are the 10 universal indicators enough? And what about the attribution of the changes and emerging patterns?

When I read through the four patterns described in the HEKS effectiveness report, I see that they are very much focused on the community level – naturally, since this is also where the focus of the interventions lies. Here is an example:

Pattern 1: Sustainable development starts with the new ways in which people look at themselves. Women especially become a driving force in the development of their communities.

Or another one:

Pattern 2: People who are aware of their rights become players in their own development. They launch their initiatives beyond the scope of HEKS’ projects.

The question that immediately pops up in my mind is: What are the consequences of the projects’ actions on the wider system, beyond the community? What are the ripples that the successful projects have throughout the wider system, e.g. in the market system or the policy environment? Or even more fundamentally: Can we achieve changes in the wider system by focusing on the community level? What additional interventions are needed?

There are still many open questions, but for me, HEKS is making a huge and courageous step in the right direction.