I had another trip to Amsterdam this month to attend Dave Snowden's course: Cynefin and Sense-Making. I'll be making a series of posts about what I learnt. I'll start by comparing systems thinking and complexity thinking and by giving an introduction to the field of study that Dave is calling Anthro-Complexity.
In systems thinking, analysis of the system identifies leverage points. These leverage points can then be used to create strategies which aim to bring the system closer to an ideal future state.
In theory, a deep enough analysis of the system should reveal patterns in behaviour which allow root causes to surface. A causal chain links a root cause with its symptoms, and leverage points along that chain can be used to influence change. So-called 'high' leverage points must be identified to tackle a root cause. Leveraging 'low' points will produce only superficial change, as only intermediary causes are affected. Failing to identify the correct leverage point can produce unintended consequences and may introduce new problems into the system.
Systems thinking is about closing the gap between an ideal future state and the present by focusing on improving individual components or dynamics.
Complexity thinking first attempts to understand the present and then makes small changes towards an unanticipated future state which is both sustainable and resilient.
Unlike systems thinking, complexity thinking argues that complex adaptive systems are dispositional in nature, not causal. The implication is that a system may not exhibit a root cause for certain symptoms, and therefore those symptoms cannot be resolved by 'leveraging' the system.
Instead of attempting to find root causes, multiple safe-to-fail experiments are run in parallel to test the evolutionary potential of the system. If an experiment fails, it has done so safely and so has had minimal effect on the wider system. If an experiment succeeds, it has the potential to form the basis of a change which can be made collectively within the current context. It is important to note here that an experiment which fails in one context may be successful in a future context (and vice versa).
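As a toy illustration (my own sketch, not anything presented on the course), the parallel-probes idea can be caricatured in a few lines of Python: each probe gets a small budget so failure is cheap, probes showing no signal are dampened (stopped early), and probes showing signal are amplified (given more trials). The names, budgets, and success probabilities below are all invented for the example.

```python
import random

def run_probes(probes, budget_per_probe=10, seed=42):
    """Run several safe-to-fail probes in parallel (toy model).

    `probes` is a list of (name, success_probability) pairs; the
    probability stands in for a hidden disposition of the system and
    is never read directly by the strategy — only trial outcomes are.
    """
    rng = random.Random(seed)
    results = {}
    for name, p in probes:
        spent, successes = 0, 0
        budget = budget_per_probe
        while spent < budget:
            spent += 1
            if rng.random() < p:
                successes += 1
                # Amplify: a working probe earns extra trials, capped
                # so no single probe can dominate the portfolio.
                budget = min(budget + 1, 3 * budget_per_probe)
            elif successes == 0 and spent >= 3:
                # Dampen: stop a probe showing no signal early,
                # so its failure stays cheap and contained.
                break
        results[name] = {"trials": spent, "successes": successes}
    return results
```

The point of the caricature is the asymmetry: a dead-end probe costs three trials, while a promising one earns a larger budget, without anyone needing a causal model of why it works.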
Things get even more interesting when we consider a human system. Humans are also complex adaptive systems. However, unlike artificial components and most other organisms, we exhibit high intelligence, have a strong sense of identity and are capable of having intent.
These three Is cannot be modelled accurately. Our intellect allows us to reason about abstract ideas and use metaphors as descriptors. This is not (currently) possible for a machine. A person's sense of identity changes over time, and several identities may be present at once. Think about yourself; you may have a different identity at work to the one you have at home or when taking part in a hobby. Your identity may also change rapidly: suppose you meet a friend in passing whilst travelling between workplaces, and for a moment you are a friend rather than a colleague. Finally, one individual's intent may differ wildly from another's, making it unwise to simplify a group of people to a one-person model. This area of study, in which complexity thinking is applied to human systems, is what Dave has coined Anthro-Complexity.
That was a light introduction. I will be revisiting and adding to these ideas over the coming weeks as I explore more of what I learnt in Amsterdam.