The latest book I finished reading on complexity is Melanie Mitchell’s ‘Complexity – A Guided Tour’. The book goes through the very basics of what is colloquially known as complexity science, a mix of scientific disciplines in search of a common theory that applies to all complex systems, from human genomes to artificial intelligence and from the evolution of species to the economy.
Mitchell starts off her journey by mentioning a number of complex systems, such as ant colonies, the brain, the immune system, economies, and the world wide web, directly putting forward the questions ‘Are there common properties in complex systems?‘ and ‘How can complexity be measured?‘
The first question she answers directly with three very generic properties that are inherent to all complex systems: complex collective behavior, signaling and information processing, and adaptation. For the second question she proposes a couple of measures, but in concluding the book she makes it clear that there is no commonly agreed-upon measure of complexity.
In part one of the book, Mitchell comprehensively describes the background and history of complexity, including the fields of information, computation (she herself being a computer scientist), evolution, and genetics. In part two she focuses on life and evolution in computers, before deepening the topic of computation in part three. Part four explores the realms of network thinking, leading to a more ‘complex’ view on evolution, before she concludes the book in part five.
From this very interesting basis of ‘complexity science’, drawn from physics, mathematics, computer science, biology, and so on, which I could only follow by digging deep into my knowledge from university, I distilled some takes from the book that I think are particularly relevant for my work:
– One of the basic properties of complex systems is that they are extremely dependent on their initial conditions. Even in a very ‘simple’, completely deterministic system (e.g. the logistic map), we are not able to predict the behavior without knowing the exact initial parameters (‘exact’ meaning that even a change in the tenth or a later decimal place of a parameter can have a significant impact). Now, the systems in which we work in development are much more complex than the logistic map. Firstly, they are hardly deterministic from the point of view we take on them (since we work with humans, it is impossible to model their decisions). Secondly, we are never able to gather all the data necessary to determine the initial conditions for a model to run. This insight strengthens my belief that we should concentrate on qualitative tools to make sense of these systems, since quantitative modeling can hardly predict the behavior of a system and, hence, the outcome of an intervention.
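This sensitivity is easy to demonstrate in a few lines of code. Below is a minimal sketch of the logistic map in its chaotic regime; the parameter r = 4.0 and the two starting values are my own illustrative choices, not taken from the book:

```python
# The logistic map: x(t+1) = r * x(t) * (1 - x(t)).
# Two runs that start a mere 1e-10 apart end up completely different.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a change in the tenth decimal place

# The gap between the trajectories grows until they are fully decorrelated.
for step in (0, 10, 20, 30, 40, 50):
    print(step, abs(a[step] - b[step]))
```

Despite the model being fully deterministic, after a few dozen iterations the two runs bear no resemblance to each other.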
– Knowing how information flows through a system is crucial to determining how it works and to being able to influence it, not least because these processes are energy intensive, i.e. they follow the laws of thermodynamics. I honestly never gave that a thought before reading Mitchell’s chapter on information entropy, the so-called ‘Shannon entropy’ (named after Claude Shannon, whose work stood at the beginning of what is now called information theory). The take for me here is to focus our analysis more on information flows and how a system manages these flows, instead of concentrating only on flows of goods and money. To give a relatively simple example: in order to understand how an ant colony works, and specifically how an ant colony takes decisions, we need to know how information is collected, communicated, and processed.
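Shannon entropy itself is straightforward to compute: it measures the average information content of a message in bits per symbol. A minimal sketch (the example strings are my own):

```python
from collections import Counter
from math import log2

def shannon_entropy(message):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A uniform four-symbol source needs two full bits per symbol ...
print(shannon_entropy("ABCD"))  # 2.0
# ... while a skewed source carries less information per symbol,
# because its messages are more predictable.
print(shannon_entropy("AAAB"))  # ~0.81
```

The less predictable a source is, the more information each symbol carries; that is the sense in which ‘information’ becomes a measurable quantity.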
– Building on the question of how systems compute information, Mitchell describes that decisions taken by agents in complex systems are mostly based on feedback from the agent’s direct environment, i.e. on samples and statistical probabilities. To go back to the example of the ants: every individual ant makes decisions based on the frequency of feedback from the ants it meets or the intensity of pheromones on a particular track towards a possible food source. The same holds in the systems we work in in development: actors take decisions mainly based on information from their direct environment. Hence, if we analyze causal loops in a system, we should focus on the feedback that comes from the direct environment of our target group.
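A toy version of this kind of local, probabilistic decision-making can be sketched in code. The trail names, pheromone levels, and the proportional-choice rule below are my own illustration, not taken from the book:

```python
import random
from collections import Counter

# An ant at a junction senses only the local pheromone intensity on each
# trail and picks a trail with probability proportional to that intensity.

def choose_trail(pheromones, rng):
    """Pick a trail with probability proportional to its pheromone level."""
    trails = list(pheromones)
    r = rng.uniform(0, sum(pheromones.values()))
    for trail in trails:
        r -= pheromones[trail]
        if r <= 0:
            return trail
    return trails[-1]

rng = random.Random(1)
pheromones = {"short_path": 3.0, "long_path": 1.0}

# 1000 independent ants, each deciding only on local feedback, collectively
# concentrate on the stronger trail (roughly 3:1 here).
counts = Counter(choose_trail(pheromones, rng) for _ in range(1000))
print(counts)
```

No individual ant knows which path is shorter; the colony-level preference emerges purely from many local, sampled decisions.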
– At one point in the book, Mitchell talks about models to simulate reality. Specifically, she mentions so-called ‘idea models’ as being “relatively simple models meant to gain insight into a general concept without the necessity of making detailed predictions (…)”. The exploration of such idea models has been a major thrust of complex systems research. Mitchell describes idea models as ‘intuition pumps’: thought experiments to prime our intuitions about complex phenomena. Although Mitchell’s idea models are rather general concepts such as the prisoner’s dilemma, I think that qualitative causal loop models of the specific systems we work in can also be seen as idea models and used as intuition pumps. Working in complex systems such as markets in developing countries, we also have to prime our intuition about how these systems work in order to understand them and be able to work with them to bring about change. This brings me back to my point about focusing on qualitative, sense-making models. We are hardly able to gather enough data to run satisfactory simulations of market systems, so we have to work more with ‘idea models’ of these systems and base our decisions on intuition and experience.
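To make the ‘intuition pump’ notion concrete, here is the prisoner’s dilemma, one of the idea models Mitchell refers to, as a few lines of code. The payoff numbers are the standard textbook values; only their ordering matters:

```python
# One-shot prisoner's dilemma: each player's payoff depends on both moves.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a given move."""
    return max(("cooperate", "defect"),
               key=lambda my: PAYOFFS[(my, their_move)])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

Whatever the other player does, defecting pays more, yet mutual defection (1, 1) leaves both worse off than mutual cooperation (3, 3); holding both facts in mind at once is exactly what the model ‘pumps’ our intuition with.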
– Finally, Mitchell confirms in the conclusion of the book that the so-called ‘complexity science’ is not one coherent science, as the term would suggest. Many different disciplines work with complex systems, and thanks to places like the Santa Fe Institute the different scientists also work together and exchange their insights. Nevertheless, there is not yet one coherent vocabulary for the field, nor are there any general theories that can be applied across all fields. Furthermore, there is still criticism of the field, mainly stating that nothing significant has come out of it so far. To quote Mitchell on that: “As you can glean from the wide variety of topics I have covered in this book, what we might call modern complex systems science is, like its forebears [Mitchell mentions ‘cybernetics’ and the so-called ‘General Systems Theory’], still not a unified whole but rather a collection of disparate parts with some overlapping concepts. What currently unifies different efforts under this rubric are common questions, methods, and the desire to make rigorous mathematical and experimental contributions that go beyond the less rigorous analogies characteristic of these earlier fields.”
The same is also true for people who work to make better use of the insights of this fragmented ‘complexity theory’ in development projects. We lack the necessary vocabulary, and not only that: we also lack a general understanding of how to go about the challenge of better embracing complexity in what we do while avoiding falling back into a mode of coming up with ‘engineering solutions’ based on simple cause-and-effect models. There is now a group of people who want to take on this challenge and do the work necessary to develop a common vocabulary and toolkits to better harvest the insights of the ‘complexity school’. Let’s keep the train moving!
I enjoyed reading Mitchell’s book very much. It is well written and gives a solid background on the scientific concept of complexity. I think, though, that you need to be a person who enjoys science, especially the natural and computer sciences, to really enjoy the book. Mitchell writes about the logistic map, cellular automata, Gödel’s theorem, the Turing machine, fractals, and so on. If you are interested in complexity and have the nerve to go through theoretical scientific concepts like a self-replicating computer program or genetic algorithms, then you really should read the book.
PS, on a humorous note: one part that really caught my attention was when Mitchell wrote about the research on computation in natural systems and the work of Stephen Wolfram. He has done research on cellular automata and how they can compute information (cellular automata are simple lines or grids of cells that change their state [usually on or off] following very simple rules based on information from their neighboring cells). Wolfram’s thesis, which he put forward in his 2002 book ‘A New Kind of Science’ (in very simple words, as I as a layman understood it), is that since cellular automata can do universal computation (the term ‘universal computation’ refers to any method or process capable of computing anything that can be computed), presumably most natural systems are able to do universal computation, too. Where am I going with this? Well, the notion that presumably many natural systems can do universal computation really got me thinking about what Douglas Adams wrote in his book ‘The Hitch Hiker’s Guide to the Galaxy’ about the earth being a computer designed to find the question to which the answer was 42. We really should start asking questions to those white mice …
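For anyone curious what such a cellular automaton looks like in practice, here is a minimal sketch of a one-dimensional elementary automaton running rule 110, the very rule later proven (by Matthew Cook) to be capable of universal computation. The grid width, starting pattern, and number of steps are arbitrary choices of mine:

```python
# An elementary cellular automaton: one row of on/off cells, where each
# cell's next state depends only on itself and its two direct neighbors.
# The rule number's binary digits encode the next state for each of the
# eight possible three-cell neighborhoods.

RULE = 110

def step(cells):
    """Apply the rule to every cell simultaneously (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single 'on' cell and watch the pattern unfold line by line.
row = [0] * 31 + [1]
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Even from these few lines a surprisingly intricate, non-repeating pattern emerges from trivially simple local rules, which is exactly the phenomenon behind Wolfram’s thesis.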