The Cynefin Framework in Action: The Gimli Glider

Recently I watched a National Geographic program – “Air Crash Investigation” – which focused on the near disaster of the aircraft nicknamed “The Gimli Glider”. Watching the program, I saw that this was a good example of how the Cynefin Framework can be seen as a dynamic, and not just a static, decision support tool. So…. I thought it would be interesting to map out the events leading up to and following the Air Canada Boeing 767 running out of fuel at 41,000 ft halfway through its flight from Montreal to Edmonton via Ottawa, and making an emergency landing at Gimli Industrial Park, a former Canadian Air Force base in Gimli, Manitoba.

First, if you have time (7:30), have a look at this video taken from YouTube that summarises what happened:

Gimli Glider re-enactment (condensed)

OK, now having seen what happened in this incident, to make sense of it we need to start at the beginning (account sourced from Wikipedia):

On 22 July 1983, this nearly new Boeing 767 flew from Toronto to Edmonton, where it underwent routine checks. The amount of fuel in the tanks of a Boeing 767 is computed by the Fuel Quantity Indicator System (FQIS) and displayed in the cockpit. The FQIS has built-in redundancy, with two channels measuring and monitoring the fuel levels independently and cross-checking with each other, so that in the event one failed, the other could continue to operate. However, to continue flying the aircraft with only one channel operational also required the fuel quantity to be checked against a floatstick measure before departure. If both channels were non-operational, the aircraft would be deemed unserviceable and taken out of service.

There had been reported inconsistencies with the FQIS in other 767s, and a service bulletin was issued by Boeing to check the system as part of routine maintenance. An engineer in Edmonton undertook these checks and, in the course of doing so, found that the FQIS failed and the fuel gauges went blank. This engineer had encountered the same problem on the same aircraft earlier in the month and had found that by disabling the second channel by pulling the circuit breaker, the FQIS was restored to operation, albeit with only one working channel. Not having the correct spares for the FQIS, he repeated this fix by pulling and tagging the circuit breaker.

On the day of the incident, the plane flew from Edmonton to Montreal. Before it departed, the engineer informed the pilot of the problem and confirmed that the tanks would have to be verified with a floatstick. However, the pilot misunderstood what the engineer had told him and believed that the plane had been flown with the fault from Toronto the previous day without incident, operating the FQIS on one channel. In Montreal, there was a crew change for the return flight to Edmonton. The outgoing pilot informed Captain Pearson and First Officer Quintal of the problem with the FQIS and passed along his belief that the aircraft had been flown the previous day with this problem. In a further misunderstanding, Captain Pearson believed that he was being told that the FQIS had been totally unserviceable since then.

While the aircraft was being prepared for its return to Edmonton, a maintenance worker decided to investigate the problem with the faulty FQIS. In order to test the system he re-enabled the second channel, at which point the fuel gauges in the cockpit went blank. He was called away to perform a floatstick measurement of fuel remaining in the tanks. Distracted, he failed to disable the second channel, leaving the circuit breaker tagged (which masked the fact that it was no longer pulled). The FQIS was now completely unserviceable and the fuel gauges were blank.

On entering the cockpit, Captain Pearson saw what he was expecting to see: blank fuel gauges and a tagged circuit breaker. He consulted the aircraft’s Minimum Equipment List (MEL), which told him that the aircraft could not be flown in this condition. However, the 767 was still a very new aircraft, having flown its maiden flight in September 1981. This was the 47th Boeing 767 off the production line, delivered to Air Canada less than four months previously. In that time there had been 55 changes to the MEL, and some pages were still blank pending development of procedures.

As a result of this unreliability, it had become practice for flights to be authorised by maintenance personnel. Adding to Captain Pearson’s misconceptions about the condition in which the aircraft had been flying since the previous day, reinforced by what he saw in the cockpit, he now had a signed-off maintenance log, which it had become custom to trust over the Minimum Equipment List.

At this stage in the Cynefin framework, operations are occurring in the SIMPLE domain (the relationship between cause and effect is obvious to all, the approach is to Sense – Categorise – Respond and we can apply best practice), though we can see the activities moving towards the COMPLICATED domain (where the relationship between cause and effect requires analysis or some other form of investigation and/or the application of expert knowledge, the approach is to Sense – Analyze – Respond and we can apply good practice).

At the time of the incident, Canada was converting to the metric system. As part of this process, the new 767s being acquired by Air Canada were the first to be calibrated for the new system, using litres and kilograms instead of gallons and pounds. All other aircraft were still operating in Imperial units (gallons and pounds). For the trip to Edmonton, the pilot calculated a fuel requirement of 22,300 kilograms (49,000 lb). A dripstick check indicated that there were 7,682 litres (1,690 imp gal; 2,029 US gal) already in the tanks. To calculate how much more fuel had to be added, the crew needed to convert the quantity in the tanks to a weight, subtract that figure from 22,300 kg, and convert the result back into a volume. (Previously, this task would have been completed by a flight engineer, but the 767 was the first of a new generation of airliners that made this position redundant.)

A litre of jet fuel weighs 0.803 kg, so the correct calculation was:

7682 litres × 0.803 = 6169 kg
22300 kg − 6169 kg = 16131 kg
16131 kg ÷ 0.803 = 20088 litres of fuel to be transferred

Between the ground crew and flight crew, however, an incorrect conversion factor of 1.77 was used – the weight of a litre of fuel in pounds. This was the conversion factor provided on the refueller’s paperwork, and the one that had always been used for the rest of the airline’s Imperial-calibrated fleet. Their calculation produced:

7682 litres × 1.77 = 13597 (the result is in pounds, though the crew took it to be kilograms)
22300 − 13597 = 8703
8703 ÷ 1.77 = 4916 litres of fuel to be transferred

Instead of 22,300 kg of fuel, they had 22,300 pounds on board — only a little over 10,000 kg, or less than half the amount required to reach their destination. Knowing the problems with the FQIS, Captain Pearson double-checked their calculations but was given the same incorrect conversion factor. He checked their arithmetic, inevitably coming up with the same erroneous figures.
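
To make the arithmetic concrete, here is a minimal sketch (Python, purely illustrative – the variable names are mine) of the correct and erroneous calculations side by side, using only the figures quoted above:

```python
# Illustrative sketch of the Gimli Glider fuel calculation.
# All figures are those quoted in the text above.

KG_PER_LITRE = 0.803   # density of jet fuel: kilograms per litre
LB_PER_LITRE = 1.77    # factor actually used: pounds per litre

required_kg = 22_300     # fuel required for the trip, in kilograms
on_board_litres = 7_682  # dripstick reading of fuel already in the tanks

# Correct calculation: litres -> kilograms, subtract, kilograms -> litres
on_board_kg = on_board_litres * KG_PER_LITRE                 # ~6,169 kg
uplift_litres = (required_kg - on_board_kg) / KG_PER_LITRE   # ~20,088 L

# Erroneous calculation: 1.77 converts litres to POUNDS, not kilograms
on_board_lb = on_board_litres * LB_PER_LITRE                 # ~13,597 lb, taken as kg
uplift_wrong = (required_kg - on_board_lb) / LB_PER_LITRE    # ~4,916 L

# What was actually on board after the erroneous uplift
total_kg = (on_board_litres + uplift_wrong) * KG_PER_LITRE   # ~10,117 kg

print(f"correct uplift: {uplift_litres:,.0f} L; actual uplift: {uplift_wrong:,.0f} L")
print(f"on board: {total_kg:,.0f} kg of the {required_kg:,} kg required")
# Small differences from the figures in the text are due to intermediate rounding.
```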

The Flight Management Computer (FMC) measures fuel consumption, allowing the crew to keep track of fuel burned as the flight progresses. It is normally updated automatically by the FQIS, but in the absence of that facility it can be updated manually. Believing that he had 22,300 kg of fuel on board, the captain entered this figure.

Because the FMC would reset during the stopover in Ottawa, the captain had the fuel tanks measured again with the dripstick while there. In converting the quantity to kilograms, the same incorrect conversion factor was used, leading him to believe he now had 20,400 kg of fuel; in reality, he had less than half the required amount.

At 41,000 feet (12,500 m), over Red Lake, Ontario, the aircraft’s cockpit warning system sounded, indicating a fuel pressure problem on the aircraft’s left side. Assuming a fuel pump had failed, the pilots turned it off, since gravity would still feed fuel to the aircraft’s two engines. The aircraft’s fuel gauges were inoperative. The flight management computer indicated that there was still sufficient fuel for the flight but, as the pilots subsequently realised, its figure rested on the incorrect fuel load entered earlier. A few moments later, a second fuel pressure alarm sounded, prompting the pilots to divert to Winnipeg. Within seconds, the left engine failed and they began preparing for a single-engine landing.

As they communicated their intentions to controllers in Winnipeg and tried to restart the left engine, the cockpit warning system sounded again, this time with a long “bong” that no one present could recall having heard before. This was the “all engines out” sound, an event that had never been simulated during training. Seconds later, most of the instrument panels in the cockpit went blank as the right-side engine also stopped and the 767 lost all power.

At this stage, the flight crew had been operating in the COMPLICATED domain (where the relationship between cause and effect requires analysis or some other form of investigation and/or the application of expert knowledge, the approach is to Sense – Analyze – Respond and we can apply good practice), but with the range and number of events occurring, the situation was rapidly heading for a deep dive into CHAOS (where there is no relationship between cause and effect at the systems level, the approach is to Act – Sense – Respond and we can discover novel practice).

The 767 was one of the first airliners to include an Electronic Flight Instrument System (EFIS), which required the electricity generated by the aircraft’s jet engines in order to operate. With both engines stopped, the system went dead, leaving only a few basic battery-powered emergency flight instruments. While these provided basic but sufficient information with which to land the aircraft, a vertical speed indicator – which would have indicated the rate at which the aircraft was descending, and therefore how far it could glide unpowered – was not among them.

In airliners the size of the 767, the engines also supply power for the hydraulic systems without which the aircraft cannot be controlled. Such aircraft are therefore required to accommodate this kind of power failure. As with the 767, this is usually achieved through the automated deployment of a ram air turbine, a generator driven by a small propeller, which in turn is driven by the forward motion of the aircraft. As the Gimli pilots were to experience on their landing approach, a decrease in this forward speed means a decrease in the power available to control the aircraft.

In line with their planned diversion to Winnipeg, the pilots were already descending through 35,000 feet (11,000 m) when the second engine shut down. They immediately searched their emergency checklist for the section on flying the aircraft with both engines out, only to find that no such section existed.

So now, with events unfolding rapidly, the crew were firmly in the domain of CHAOS (where there is no relationship between cause and effect at the systems level, the approach is to Act – Sense – Respond and we can discover novel practice). However, they needed to exit this domain quickly if the aircraft was to land successfully and all passengers were to survive.

Captain Pearson, however, was an experienced glider pilot, which gave him familiarity with some flying techniques almost never used by commercial pilots. In order to have the maximum range and therefore the largest choice of possible landing sites, he needed to fly the 767 at the “best glide ratio speed”. Making his best guess as to this speed for the 767, he flew the aircraft at 220 knots (410 km/h; 250 mph). First Officer Maurice Quintal began making calculations to see if they could reach Winnipeg. He used the altitude from one of the mechanical backup instruments, while the distance traveled was supplied by the air traffic controllers in Winnipeg, measuring the distance the aircraft’s echo moved on their radar screens. The aircraft had lost 5,000 feet (1,500 m) in 10 nautical miles (19 km; 12 mi), giving a glide ratio of approximately 12:1. The controllers and Quintal both calculated that Flight 143 would not make it to Winnipeg.
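
As a rough sanity check on that glide ratio (a sketch only, using the figures from the paragraph above and the standard 6,076 feet per nautical mile):

```python
# Rough check of the reported glide ratio: 5,000 ft lost over 10 nautical miles.
FT_PER_NMI = 6076.12  # feet per nautical mile

altitude_lost_ft = 5_000
distance_flown_nmi = 10

glide_ratio = (distance_flown_nmi * FT_PER_NMI) / altitude_lost_ft
print(f"glide ratio ≈ {glide_ratio:.1f}:1")  # ≈ 12.2:1, consistent with ~12:1
```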

At this point, Quintal proposed landing at the former RCAF Station Gimli, a closed air force base where he had once served as a Canadian Air Force pilot. Unknown to him, however, part of the facility had been converted to a race track complex, now known as Gimli Motorsports Park. It includes a road race course, a go-kart track, and a dragstrip. Furthermore, a CASC amateur sports car race was underway that day and the area around the decommissioned runway was full of cars and campers. Part of the decommissioned runway itself was being used to stage the race, and two boys were on the runway itself, near the point where the aircraft would eventually touch down.

Without power, the pilots had to try lowering the aircraft’s main landing gear via a gravity drop, but, due to the airflow, the nose wheel failed to lock into position. The decreasing forward motion of the aircraft also reduced the effectiveness of the Ram Air Turbine, making the aircraft increasingly difficult to control because of the reduced power being generated.

As the runway drew nearer, it became apparent that the aircraft was too high and too fast, raising the danger of running off the runway before it could be stopped. The lack of hydraulic pressure prevented flap/slat extension; these devices are used under normal landing conditions to reduce the stall speed of the aircraft for a safe landing. The pilots briefly considered executing a 360-degree turn to reduce speed and altitude, but decided that they did not have enough altitude for the maneuver. Pearson decided to execute a forward slip to increase drag and lose altitude. This maneuver is commonly used with gliders and light aircraft to descend more quickly without gaining forward speed.

As soon as the wheels touched the runway, Pearson “stood on the brakes”, blowing out two of the aircraft’s tires. The unlocked nose wheel collapsed and was forced back into its well, causing the aircraft’s nose to scrape along the ground. The plane also slammed into the guard rail now separating the strip, which helped slow it down.

None of the 61 passengers was seriously hurt. A minor fire in the nose area was extinguished by racers and course workers armed with fire extinguishers. As the aircraft’s nose had collapsed onto the ground, its tail was elevated and there were some minor injuries when passengers exited the aircraft via the rear slides, which were not long enough to accommodate the increased height. These were treated by a doctor who had been about to take off in an aircraft on Gimli’s remaining runway.

During these events, the flight crew were acting in the COMPLEX domain (where the relationship between cause and effect can only be perceived in retrospect, but not in advance, the approach is to Probe – Sense – Respond and we can sense emergent practice), trying things to enable them to safely land the aircraft without loss of life.

An Air Canada investigation concluded that the pilots and mechanics were at fault, although the Aviation Safety Board of Canada (predecessor of the modern Transportation Safety Board of Canada) found the airline at fault.

The safety board reported that Air Canada management was responsible for “corporate and equipment deficiencies”. The report praised the flight and cabin crews for their “professionalism and skill”. It noted that Air Canada “neglected to assign clearly and specifically the responsibility for calculating the fuel load in an abnormal situation”, finding that the airline had failed to reallocate the task of checking fuel load that had been the responsibility of the flight engineer on older (three-crew) aircraft.

Subsequent to this near disaster, the airline undertook a full review of maintenance and fuelling procedures as well as streamlining changes to the Minimum Equipment List.

This is, of course, my interpretation of how the events that occurred fit into the Cynefin framework. There is no right or wrong, but the beauty is that by comparing our own framings of a key issue or problem, we can engage in more meaningful conversations and reach a shared perspective on not only what makes up the issue or problem, but more importantly what you are going to do about it.

Making sense of your networks

I have just come across a neat little app from LinkedIn that allows you to visualise your network of LinkedIn contacts. As they say, a picture tells a thousand words. Your network map is delivered to you unlabelled, but coloured. It is up to you to attribute meaning to the colours and add your own labels. This is where I think it is really powerful, as YOU attribute the meaning to your network, not an expert providing interpretation. In this way it will have more meaning, and also allow you to reflect on whether you think your network is “healthy” or could do with some remedial care!

In my case, it reflects stages of my career – around working in the consulting industry, in knowledge management, business development and more recently as part of an independent business. What it also tells me is that there are significant opportunities to connect others in my network – and potentially share new and exciting perspectives.

So – I had better get busy…..

Outputs or outcomes?

Organisations are under increasing pressure to produce results – especially in the uncertain times we now live in. There is a general recognition that a focus on outcomes is important for effective and responsive management. However, to date, implementing an outcome-orientated approach has proved to be difficult.

One of the challenges in implementing an outcomes focus is that it represents a significant shift in mindset, requiring different ways of thinking, acting and managing – moving from a focus on “process” to a focus on “benefits”. Doing so also means a time shift from the “here and now” to a longer timeframe.

Bottom-up and Top-down

Our experience has shown that to get an outcomes-based approach agreed upon, implemented and seen through to its conclusion requires support not only from the executive at the top of the organisation, but from staff as well. Support from staff is critical – if they don’t see the benefits to themselves as individuals and to their immediate workgroups, then the chances are they will view this as another administrative burden mandated from “leadership” and probably will not give it the time it really deserves.

We have just finished the first data collection for a Workplace Environment Assessment at a major utility, where the project team has had to invest significant effort in getting team leaders to encourage their staff to participate. Why? Because there was seen to be a strong disconnect between the benefits participation would bring to teams and the time and effort required to share their workplace experiences. The next step is to brief each team on the results over the coming weeks – this will help make the next round of data collection easier, as there will be a tangible link between participation and benefits.

Your thoughts?

The Promise & Perils of Narrative Research

I have just returned from the International Organisation Development Association (IODA) conference, held this year in Melbourne, where we presented a session on The Promise and Perils of Narrative Research, which showcased a recently completed two-year project on the Impact Evaluation of Executive Education.

What was refreshing to see at the three-day conference was the large international contingent that travelled to Australia, and the great networking that occurred among the 80+ participants, with everyone willing to share what they were doing in the OD space – both successes and failures.

Narrative and story – the same beast?

I recently came across an article in strategy+business titled “The Art of Business Narrative”, in which Hollywood producer Peter Guber makes the case that telling purposeful stories is an essential skill for leaders.

What immediately stood out for me was the words “narrative” and “story” being used interchangeably. One might think that I am merely dealing in semantics, but I think not! There is a difference between “story” and “narrative”.

Stories are artificial constructs told with a purpose in mind – they are constructed to deliver a message, normally with a beginning, middle and end. Narrative, on the other hand, is naturally occurring – those small fragments of conversation you have over the desk in the office, in the tea room, at the pub. Sometimes people refer to these as anecdotes. Indeed, we try to avoid using the word “story” with our clients and tend to refer to “experiences”, because people start to roll their eyes and say that telling a story is too much work – having to think about how to introduce it, how to tell it and how to provide a conclusion…..

Whilst I may be preaching to the converted, it is an important difference that bears consideration. This is especially true in the world of sense-making and narrative research, where context is key. Why did the blogosphere take off, and why does it continue to grow in popularity to this day? Because blog posts are narrative fragments, and by reading many different perspectives on an issue, we can blend them and come up with our own interpretation and context. Why do we hear organisations bemoaning the difficulty of developing a knowledge-sharing culture? Because they tend to try to capture information, which without context is not knowledge – and where does that context come from? People. And how do people share their context? Through the use of narrative fragments.

You’re the consultant – tell me the answer….

It is interesting to still see today the desire of some groups to have the consultant provide “the answer” to the issue they have on the table. There is a desire for the one outcome that will solve their problem – however, in reality we all know that there can never be one correct answer when dealing with “messy” complex issues.

Discussing this with prospective clients often leads to a degree of discomfort when we tell them that we can work alongside them to gain insight into the problem and explore interventions that could influence a resolution of the issue, but that we cannot provide “the answer”. This discomfort, I think, can partly be attributed to entrained thinking that consultants are paid to provide “the answer”, partially absolving the client of risk and responsibility for the outcomes that might occur. When advised that they need to participate actively, the response is often “no problem”…. until they are required to get involved!

I acknowledge that the statements I am making are broad generalisations and that not all business is like this. However, if you are trying to get stakeholders to buy into a program of activities that revolves around dealing with a complex issue, then getting them to understand the need for active participation, and that the outcomes will be emergent, is critical.

Some of the ways we endeavour to get a clear understanding are:

  • Defining what a complex issue is – we often use the Cynefin framework as a way of doing so
  • Using the Cynefin framework to articulate whether the issue is indeed complex, or whether there are specific aspects of the problem that reside in the Complex space
  • Asking who they are willing to involve in working through the issue at hand – if they are only willing to allow a select group of people, then one must ask whether they really want to gain insight, or rather want to control a predetermined outcome
  • Determining what they want to do with any insight into the issue on the table – if they don’t have a mandate to take insight into action, this can dilute the value of the project in the first place. Insight can be nice, but it does not address the “so what”
  • Finding out who is going to be managing the project – is it someone who has direct contact with / reports to the sponsor, or a mid-level manager who might have other vested / conflicting interests?
From our experience, the less the client wants to engage in answering these questions, the greater the probability that the project outcomes will become “problematic”, with both the client and consultant being frustrated.

Some thoughts from past experience!

Chris

Triads in SenseMaker

Triads are one of the visual methods used to signify the experiences provided, encouraging participants to consider the dynamics that exist between three competing but linked aspects of an issue being explored.

To respond, the participant needs to consider each aspect, moving the ball within the triad to the point they think best reflects the degree to which that aspect is dominant.

Whilst this may seem simple enough, it is much harder to design effective triads than meets the eye! In the first instance, I mentioned that the aspects of the theme being explored need to be competing and in tension. The way I often describe this is to consider the ball in the middle being held in place by a rubber band from each of the three apexes, so that as you move the ball, it moves in relation to all three aspects being considered. This is important because if you have three totally unrelated ideas on the apexes, you will have trouble interpreting the meaning of clustering in the aggregated “heat map”.

To ensure that the elements you have on the apexes are in tension, I get the project team to imagine the ball being placed halfway between apexes A and B (A being at the top, B at the bottom left and C at the bottom right) and to interpret what that would mean. I do the same for A and C, and for B and C. If no one can provide an explanation for the ball placement at these points, then the items on the apexes are not in tension and related. It can be hard work to get this right – in fact, with one client we spent a whole day trying to develop a number of triads, but in the end realised that triads were not going to work as part of the signifier set.

Triads are a very useful way of signifying experiences – especially if you are looking to undertake longitudinal comparisons of the data. However, there is also a health warning that comes with the use of triads – you cannot apply the statistical suite of tools that comes with SenseMaker to them. The reason for this is:

Think of the entire space within the triad as 100 points, so that wherever the ball is moved, there is an allocation of points between the three apex elements.

If the ball were placed on Apex A, then 100 points would be allocated to A, zero to B and zero to C.

If the ball were moved halfway between A and B, then 50 points would be allocated to A, 50 points to B and zero points to C.

So what happens if the ball is placed in the centre? Then 33 points are allocated to each of A, B and C. Statistically, this reads as all three elements being equally low in relevance or importance – but we do not know that for certain. What if the respondent thought that all three were equally high in relevance or importance? The only way to know would be to look at the responses themselves.
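
Mathematically, this points allocation is a set of barycentric coordinates, which always sum to a constant – which is exactly why the centre is ambiguous. Here is a minimal sketch (Python, illustrative only – not SenseMaker’s actual implementation) that converts a ball position into the three apex scores:

```python
# Barycentric ("points") allocation for a triad response.
# Illustrative sketch only - not SenseMaker's actual code.
import math

# Apex coordinates of an equilateral triangle: A at top, B bottom-left, C bottom-right
A = (0.5, math.sqrt(3) / 2)
B = (0.0, 0.0)
C = (1.0, 0.0)

def triad_points(x: float, y: float) -> tuple[float, float, float]:
    """Convert a ball position (x, y) into points for A, B, C summing to 100."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (x - cx) + (cx - bx) * (y - cy)) / denom
    wb = ((cy - ay) * (x - cx) + (ax - cx) * (y - cy)) / denom
    wc = 1.0 - wa - wb
    return (100 * wa, 100 * wb, 100 * wc)

print(triad_points(*A))                      # (100, 0, 0): ball on apex A
print(triad_points(0.25, math.sqrt(3) / 4))  # (50, 50, 0): halfway between A and B
centre = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
print(triad_points(*centre))                 # (33.3, 33.3, 33.3): the ambiguous centre
```

Because the three scores always sum to 100, they are not independent variables – “all equally high” and “all equally low” both land the ball in the centre, which is why standard statistical inference cannot be applied to them.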

Thus the health warning – you cannot use statistical inference with triads. However, they are a very powerful way of signifying respondent experiences as long as one is aware of their limitations.