Systems Thinking, Complexity Thinking and Anthro-Complexity

I had another trip to Amsterdam this month to attend Dave Snowden’s course: Cynefin and Sense-Making. I’ll be making a series of posts about what I learnt. I’ll start by comparing systems thinking and complexity thinking and by giving an introduction to the field of study that Dave is calling Anthro-Complexity.

In systems thinking, analysis of the system leads to the discovery of leverage points, which can then be used to create strategies that aim to bring the system closer to an ideal future state.

In theory, a deep enough analysis of the system will reveal patterns in behaviour which allow root causes to surface. A causal chain links a root cause to its symptoms, and leverage points along that chain can be used to influence change. So-called ‘high’ leverage points must be identified to tackle a root cause; acting on ‘low’ leverage points produces only superficial change, since only intermediary causes are affected. Failing to identify the correct leverage point can produce unintended consequences and may introduce new problems into the system.
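
To make the causal-chain picture concrete, here is a toy numerical sketch (my own illustration, not something from the course) of a three-link chain in which a root cause drives an intermediary cause, which in turn drives a symptom. Suppressing the intermediary (a ‘low’ leverage point) only dampens the symptom while the root keeps regenerating it; removing the root cause (a ‘high’ leverage point) lets the symptom die away. All update rules and numbers are invented.

```python
# Toy causal chain: root cause -> intermediary cause -> symptom.
# The update rules and numbers are invented purely for illustration.

def step(root, intermediary, symptom):
    """Propagate one time step down the chain."""
    intermediary = 0.9 * intermediary + root   # driven by the root cause
    symptom = 0.9 * symptom + intermediary     # driven by the intermediary
    return intermediary, symptom

def run(fix_root=False, fix_intermediary=False, steps=50):
    root, intermediary, symptom = 1.0, 0.0, 0.0
    for _ in range(steps):
        if fix_root:
            root = 0.0           # 'high' leverage: tackle the root cause
        if fix_intermediary:
            intermediary = 0.0   # 'low' leverage: suppress an intermediary cause
        intermediary, symptom = step(root, intermediary, symptom)
    return symptom

print(f"no intervention:    {run():.1f}")                      # settles at a high level
print(f"treat intermediary: {run(fix_intermediary=True):.1f}") # reduced, but the root regenerates it
print(f"treat root cause:   {run(fix_root=True):.1f}")         # symptom removed at the source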

Systems thinking is about closing the gap between an ideal future state and the present by focusing on improving individual components or dynamics.

Complexity thinking, by contrast, first attempts to understand the present and then makes small changes towards an unanticipated future state which is both sustainable and resilient.

Unlike systems thinking, complexity thinking argues that complex adaptive systems are dispositional in nature, not causal. This implies that a system may have no identifiable root cause for certain symptoms, and therefore those symptoms cannot be resolved by ‘leveraging’ the system.

Instead of attempting to find root causes, multiple safe-to-fail experiments are run in parallel to test the evolutionary potential of the system. If an experiment fails, it has done so safely and so has had minimal effect on the wider system. If an experiment succeeds, it has the potential to form the basis of a change which can be made collectively within the current context. It is important to note that an experiment which fails in one context may succeed in a future context (and vice versa).
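
One way to picture this is as a simple portfolio loop: run several small probes in parallel, dampen the ones that fail and amplify the ones that succeed. The sketch below is my own toy rendering of that loop under invented assumptions (random outcomes standing in for the system’s response, a fixed budget per probe); it is not an implementation of any Cynefin tool.

```python
import random

# Toy portfolio of safe-to-fail probes. Random outcomes stand in for
# "how the system responds"; every name and number here is invented.

def run_probes(n_probes=5, budget_per_probe=1.0, seed=0):
    rng = random.Random(seed)              # the seed stands in for the current context
    amplify, dampen = [], []
    for probe in range(n_probes):
        outcome = rng.uniform(-1.0, 1.0)   # observed effect of this probe
        if outcome > 0:
            amplify.append(probe)          # promising: amplify, invest further
        else:
            dampen.append(probe)           # failed safely: dampen and stop
    # Each failed probe only ever risked its own small budget.
    bounded_loss = len(dampen) * budget_per_probe
    return amplify, dampen, bounded_loss

# The same portfolio in two different 'contexts' (seeds): a probe that
# fails in one context can succeed in the other, and vice versa.
for context in (0, 1):
    print(run_probes(seed=context))
```

Changing the seed here is a crude stand-in for a change of context: the same probe can land on the other side of the threshold, which echoes the point that success and failure are context-dependent.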

Things get even more interesting when we consider a human system. Humans are also complex adaptive systems. However, unlike artificial components and most other organisms, we exhibit high intelligence, have a strong sense of identity, and are capable of intent.

These three I’s (intelligence, identity and intent) cannot be modelled accurately. Our intellect allows us to reason about abstract ideas and to use metaphors as descriptors; this is not (currently) possible for a machine. A person’s sense of identity changes over time, and several identities may be present at once. Think about yourself: you may have one identity at work and another at home or while taking part in a hobby. Your identity may also change rapidly; suppose you meet a friend in passing whilst travelling between workplaces, and for a moment you are a friend rather than a colleague. Finally, one individual’s intent may differ wildly from another’s, making it unwise to simplify a group of people to a one-person model. This area of study, in which complexity thinking is applied to human systems, is what Dave has coined Anthro-Complexity.

That was a light introduction. I will be revisiting and adding to these ideas over the coming weeks as I explore more of what I learnt in Amsterdam.

You can read Dave’s own blog or follow him on Twitter.

4 thoughts on “Systems Thinking, Complexity Thinking and Anthro-Complexity”

  1. “Unanticipated future state”: is it really an unanticipated state? This makes it sound like we are blindly going into the future. I think the state is anticipated, but in a directional sense only: we know we want to build this type of system, but let’s let the finer detail emerge.

    1. In a complex adaptive system, I think we ARE going blindly into the future. If a system is dispositional then we can’t know exactly how a change (even a small one) will affect the system. Suppose the system is in a particular state and you make a move towards another state that feels preferable at the time. The act of moving will itself cause changes in the system, and your destination after the small shift may or may not be closer to the state you had anticipated.

      1. We never blindly blunder forward; somebody somewhere has to have an idea, a vision of where we are going. Take the analogy of a journey: you need to work out where you are first, but you still have an idea of your destination. Yes, you may aim for Manchester and the journey may end in Liverpool, but you still set out with an aim in mind. How else do you have coherent safe-to-fail experiments, and what are they actually safe to fail against? Think of adjacent possibles: you aim for these using vector targets, not blindly stumble onto them in the dark.

  2. We’re saying the same thing 🙂 I agree it’s not “blundering”, and you may have an idea of your destination, but as you say, the journey may not end where you planned it to.
