Monitoring in complex systems: from spotlight and indicators to sensor networks and landscapes

I have been thinking a lot about what a monitoring framework that takes into account the quirks of a complex system could look like. One of the central things, I believe, is that we need to move away from measuring only where we believe change will happen and towards an approach that gathers data more broadly.

Where do I see problems with how we measure change currently?

Hypotheses/Theories of Change

We implicitly or explicitly have a hypothesis about how we (a project, a change initiative, etc.) want to change the system (a market system, a community, policy, etc.). This usually takes the form of a simple explanation based on a causal logic of the type “if we do this, then that will happen, which will lead to this and finally lead to the [poverty reduction] effect that we want to see.” The trend in “good practice” monitoring is to make these hypotheses more explicit. This is good. But in many cases, the trend also goes in the direction of overdoing it: creating intricate theories of change packed into large flow diagrams with many causal connections, and then believing that this is how the system really works. The hypothesis has made the step to becoming a theory. These diagrams then become the basis of our monitoring efforts, the blueprint for our monitoring frameworks; we need to fill the cells of a spreadsheet in order to calculate the impact of the project.

Measuring where the light is

I think everybody knows the story of the man searching for his keys under a street light in the middle of the night. When a policeman asks whether he really lost his keys there, he answers: “No, but this is where the light is.” I often feel that current monitoring systems do something similar. We develop intricate theories of change and pack them with hundreds of indicators. Then we go out, start our interventions and measure our indicators. When we are asked why we measure where we measure, we answer: because this is where our theory points us; this is where the light is. Or we simply measure there because things are measurable.

Confirmation bias

It is widely known and scientifically well established that we humans have a significant confirmation bias. In combination with a strong theory of change, we become receptive to facts that confirm our theory and easily ignore facts that disprove it or that we cannot explain. Human nature – something we need to work hard to overcome.

This is certainly not an exhaustive list of the problems I have with current practice. These things are further aggravated by the three issues with current monitoring practice that a large number of practitioners identified during the Systemic M&E Initiative, particularly the first two:

  1. Excessive focus on our direct effect on the poor [this is why we need pseudo-causal chains that link our inputs to the impacts].
  2. Excessive focus on extracting information for accountability to the donors [this is why we are forced to do what we do instead of asking what information we need to manage our projects more effectively].
  3. Sustainability understood as longevity of our legacy.

Somehow, the pressure to report numbers and impact (to generate “evidence”) forces us to find a relatively obvious way to monitor that is relatively simple to explain: define a causal logic of change that everyone can follow with some common sense, and measure that. Whether this gives us the data we need to successfully manage a project apparently does not really matter. In effect, many projects fly blind, as the paths illuminated by their theories of change often don’t hold and they have to change course – into the dark.

But what is the alternative?

The alternative, I believe, is something Dave Snowden calls a human sensor network. Sensor networks gather data that we can use to see changes in the system – not at specific points, with specific indicators, but broadly. We can subsequently use this data to identify patterns of change and to make sense of the situation at hand. Effectively, this helps managers make strategic decisions about the direction of their projects. We humans are bad at acknowledging facts that go against our theories, but we are good at spotting patterns.

In other words, we are shifting monitoring from being an inductive task (proving a hypothesis) to being an abductive task (making sense of observations). From “this is happening, so my intervention must have worked” to “there are positive patterns of change in the system, let’s try to reinforce them.” Abductive research is fundamentally linked to our ability to connect apparently unconnected aspects of our observations and make sense of them. Human sensor networks are not necessarily a replacement for the indicator-based monitoring done at the moment; they can be an addition to it, helping us spot diffuse change happening at the fringe of our project’s influence or detect unintended consequences – things that happen outside the spotlight but are nonetheless important to the project. These things might turn out to be crucial for the systems change process but too weak and unrelated to be picked up by a hypothesis- and indicator-based monitoring system.

Of course, abductive research is equally in danger of falling prey to our confirmation bias. Humans are in fact so good at spotting patterns that we often see patterns where there are none. We are in danger of connecting observations into patterns that confirm our hypothesis. But as I said above, this is something we have to actively work against: firstly, by making our own hypotheses explicit so we can try to become aware of the confirmation bias, and secondly, by trying not to converge on a common hypothesis (or theory of change) but to retain variety and dissent as long as possible. Furthermore, it should always be our aim to disprove a hypothesis rather than confirm it: disproof requires only a single counterexample, and rather than letting us see only the things that confirm our view and ignore the rest, it makes us actively seek contradicting information.

Landscapes, Found Data, and Sensor Networks

[Figure: A landscape constructed from people’s stories using SenseMaker]

I like the metaphor of looking at complex systems in the form of a landscape. Scott Page uses the idea of a dancing fitness landscape to describe complex systems. The dancing indicates the constant change of the fitness landscape through mutualistic feedback: the actors continuously adapt to each other, which in turn changes the fitness landscape. For Alicia Juarrero, too, complex systems – or, to be more precise, the phase space of complex systems – can be depicted as landscapes, with dips and valleys being places where the system is more likely to be, and ridges and peaks being places the system shies away from. Using landscapes, and showing how they change over time, helps us visualize change in a system.

How do we construct these landscapes? We need a large number of data points with at least three dimensions, i.e. three parameters (obviously, we can visualize at most three dimensions at a time in the form of a landscape). There are various ways to get this data; many are currently being explored in the large field of ‘big data’ research. Using big data does, however, also have its caveats, of which we should be aware – particularly when using what Tim Harford calls ‘found data’: data that we do not collect purposefully but that is collected for other purposes and that we repurpose for our own (more on this in the excellent article by Tim Harford).
Apart from found data with its restrictions, we can of course collect data ourselves. The problem is that the number of data points needed to cover a sensible breadth of the system can become quite large. Many traditional survey-based approaches to data collection become quite resource-intensive when brought to scale. So we need to find ways to collect ‘big data’ in resource-saving ways.
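
To make this more concrete, here is a minimal sketch of how such a landscape could be constructed: place each observation on two signifier scales and take the density of observations as the landscape height. This is my own illustration, not part of any particular tool; the beta-distributed placeholder data simply stands in for real signifier scores.

```python
# A minimal sketch, assuming each observation was placed on two
# signifier scales (x, y in [0, 1]); the landscape height is the
# density of observations, so peaks show where responses cluster.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
# Hypothetical placeholder data standing in for real signifier scores.
points = rng.beta(a=[2.0, 5.0], b=[5.0, 2.0], size=(500, 2))

kde = gaussian_kde(points.T)  # kernel density estimate over the 2-D space
xs, ys = np.mgrid[0:1:100j, 0:1:100j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

plt.contourf(xs, ys, density, levels=20)
plt.xlabel("signifier scale 1")
plt.ylabel("signifier scale 2")
plt.colorbar(label="density of observations")
plt.title("Density landscape of self-signified observations")
plt.show()
```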

Another aspect of using landscapes to monitor change in complex systems is that we need a continuous input of real-time data into the landscape in order to see changes over time and spot weak signals of emerging change or problems.
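
Building on the same assumptions as the sketch above, a continuously fed landscape could be differenced across collection periods, flagging grid cells whose density shifts markedly as candidate weak signals worth a qualitative follow-up. Again, this is a hypothetical sketch, not an established method:

```python
# Sketch: compare two time windows of the same landscape to surface
# weak signals of change. Assumes `period_a` and `period_b` are
# (n, 2) arrays of signifier scores in [0, 1] from consecutive periods.
import numpy as np
from scipy.stats import gaussian_kde

def density_grid(points: np.ndarray, res: int = 50) -> np.ndarray:
    """Estimate observation density on a res x res grid over [0, 1]^2."""
    kde = gaussian_kde(points.T)
    xs, ys = np.mgrid[0:1:res * 1j, 0:1:res * 1j]
    return kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(res, res)

def shift_map(period_a: np.ndarray, period_b: np.ndarray, res: int = 50) -> np.ndarray:
    """Positive cells: regions gaining observations; negative: regions emptying out."""
    return density_grid(period_b, res) - density_grid(period_a, res)

def weak_signals(shift: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return grid coordinates whose density changed more than the threshold."""
    return np.argwhere(np.abs(shift) > threshold)
```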

This is where sensor networks come into play. We need to place sensors throughout the system that continuously submit data to us. Since in development we are mostly working with social systems, these sensors are most probably going to be people – hence the term human sensor networks. The advantage of using humans as sensors is that we can not only ask them about their experiences but also let them make sense of their own experiences, and use this to add additional layers of data.

Here is a definition of a human sensor network that I found on the Cognitive Edge website, written by Michael Cheveldave:

“…when you engage a significant percentage of employees, customers, or citizens in the continuous process of recording not only observations and experiences, but also the meaning and influences that such observations and experiences have on them, you have in effect created a human sensor network.”

Collecting narratives to build landscapes with SenseMaker

There are multiple ways to collect enough data points to construct a landscape. One of the most advanced methods, already deployed in projects, is SenseMaker, an approach and accompanying software developed by Cognitive Edge and Dave Snowden. The SenseMaker approach is based on the collection of narrative fragments, or other fragments taken from people’s lives. Added to that is a layer of metadata: data collected through the self-interpretation of the fragments by the people who shared them.

Instead of relatively arbitrary data from ‘big data’ sources, SenseMaker uses narratives because they are one of the most fundamental ways for humans to share knowledge and experience. It adds meaning to the narratives through self-interpretation by the people who shared them, i.e. at the point of origin, instead of interpretation by experts.
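
As a purely illustrative sketch of what one record in such a setup might look like – the field names below are hypothetical, not the actual SenseMaker schema – a narrative fragment could be bundled with its self-signification like this:

```python
# Hypothetical record structure for a self-signified narrative fragment.
# Field names are illustrative; this is not the SenseMaker data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SignifiedFragment:
    narrative: str  # the raw story, in the storyteller's own words
    # Triad responses: a point in a triangle between three labelled poles,
    # stored as three weights that sum to 1.0.
    triads: dict[str, tuple[float, float, float]]
    # Dyad responses: a slider position between two poles, in [0, 1].
    dyads: dict[str, float]
    tags: list[str] = field(default_factory=list)
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: one fragment with its self-interpretation at the point of origin.
fragment = SignifiedFragment(
    narrative="Since the new market opened, I sell my vegetables twice a week...",
    triads={"story_is_about": (0.6, 0.3, 0.1)},  # e.g. livelihood/community/policy
    dyads={"outlook": 0.8},  # e.g. pessimistic (0) to optimistic (1)
)
```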

There are, of course, also other ways to collect data for landscapes. Some market development projects are experimenting with SMS surveys. I’m curious to see the results of these experiments.

I am currently trying to set up a number of experiments using SenseMaker, so hopefully I will soon be able to share how this worked and whether I see the approach as suitable for monitoring development initiatives.

4 thoughts on “Monitoring in complex systems: from spotlight and indicators to sensor networks and landscapes”

  1. Adinda

    Hi, I found this a very interesting article, and am using (and forwarding) it actively, so many thanks for this! Just want to flag that there are various types of ToC approaches out there; the one you criticize is only one particular type. Second, you give the impression that SenseMaker will solve all the problems that come with a classic M&E and ToC approach, while in fact it is not a planning framework or a hypothesis but a data collection and analysis tool – thus quite a different animal after all! As you say yourself later on, one still needs a hypothesis whose assumptions are made explicit, which is to say: a ToC. Confirmation bias cannot be overcome by SenseMaker or any other method, since it is part of the design and management of a project/program or evaluation or monitoring activity, and not inherent to a single methodology or framework. I think what is still missing is a critical reflection and analysis of the real added value of SenseMaker compared to other methodologies, in terms of the type and quality of knowledge it generates (which sorts of biases it helps overcome, and therefore which weaknesses of other methods it can compensate for), and in which types of planning approaches and frameworks it best fits. I have some thoughts and ideas about what this could be, but feel this should come from the core group of SenseMaker practitioners. It would require some study of the merits of other methodologies and frameworks, of course, to obtain a better understanding of the broader field and the innovative thinking emerging from it. But to conclude: I think this is an excellent blog, and very useful for colleagues who are eager to learn about new methodologies and acknowledge the need for more complexity-sensitive approaches.

    1. Marcus Jenal

      Dear Adinda. Thank you very much for your comments and the positive assessment of the blog post. I totally agree with you that SenseMaker is not the solution to all our problems, alas! But I think it can add a dimension to our monitoring tools and management data that we could not add before. So I see it as an important tool in our arsenal to tackle complex change!

  2. Charles

    Hi Marcus, interesting article. I am a bit of a novice in the complexity field, but you might be able to answer a question for me. Are different research and monitoring methods more suitable for different systems? This is a question to which I cannot seem to get a clear answer. Taking the Cynefin model, for instance: if we are in a simple or complicated system, are more traditional methods for measurement and prediction more appropriate? Why then are they not appropriate for complex systems? Are traditional research and monitoring methods at higher risk of design failure because of the way they engage a complex system? Cheers, Charles

    1. Marcus Jenal

      Charles, good question. I think the answer is yes, different methods are suited to more or less ordered systems. Traditional logframe and results-chain thinking is suitable for ordered systems, as causal relationships can be predicted and are stable, so you can build an if-this-and-this-then-that logic. In unordered space, that does not work. There you need to work with shorter-term expectations and hypotheses and build up an understanding of the system by doing things. Human sensor networks can help you keep track of what is going on and what is changing.

