"Clearly, there should have been a red flag ..."
"This should never have happened."
"MI5 let terrorists slip through the net."
"There was a failure to connect the dots."
I've been thinking a lot about this, as it is very much a feature of our times. It seems to be something most people accept is of limited use, or even a waste of time, but which is somehow required until we come up with something better.
I am talking about the process that occurs after a disaster or near-disaster whereby a lot of criticism and analysis goes on, some blame is apportioned and lessons are supposedly learned for the future.
I am not talking about systemic problems where blame is warranted, such as cases of bureaucratic malaise, bad leadership, corporate wrongdoing or horrible incompetence. I am talking about those disasters or events which everyone secretly suspects could not have been prevented but which in hindsight look like they could have been, and therefore everyone agrees to agree that they SHOULD have been prevented - ergo, the fact that they weren't means someone must be to blame.
We can all think of examples -
- the US media and politicians (especially the Senate) castigating the CIA and FBI after any attempted terrorist attack, as if it should be possible to thwart every single one;
- some of the post-GFC analysis castigating the behaviour of investors or investment banks for believing house prices/stock prices "would keep rising forever" (which, by the way, no one believed. Everyone knew the reckoning was coming. But no one knew quite when...);
- to some extent, the analysis following the Victorian bushfires of Black Saturday, February 2009. A horrific event, and yes, there do appear to have been some leadership and process problems, but perhaps also a once-in-30-years full-scale disaster which no one could have managed effectively.
On a smaller scale, this happens in the corporate world too, driven I think by the growth of this process in the media. It's made everyone terrified of not being "proactive" or of being "complacent".
At my workplace, after every loss or "near loss" (being the ones we quantify as such), we are required to produce a "post-mortem", a process about which I am ambivalent. I don't completely oppose them, but I think most of them are misguided, deliver no real benefit and give a false sense of security that we are bolstering ourselves against future errors. This is because, where day-to-day control management is very strong and effective (as ours is), the events that cause grief tend to be "out of the box" zingers or once-in-a-year process failures. You can't really "learn from" them, because (a) they are one-off events which could not be foreseen and are unlikely to recur, or (b) humans tend to act erratically under stress, and while it's always possible to improve process and management style, you can't prevent mistakes.
However, it's hard to argue the "anti" case without sounding like a slacker or a whinger. This is because there are always one or two points that come out of the post-mortem that seem reasonable and where it feels like a general lesson can be applied, even if you don't accept that the error could have been prevented.
The problem is, there is no practical way to prevent the kind of errors that occur. They are not systemic failures but moments where individuals or teams have "failed" to act a certain way in a time of extreme pressure or stress, or where information was flying in from ten different directions.
Sure, it's always possible to point to a behavioural or procedural improvement - such as, "when something new is happening and the stakes are high, we should pull back for ten minutes and round-table with other teams to ensure everyone is operating under the right assumption/same strategy/etc". No one can disagree with that. Nor can anyone argue that we actually did this, or that it would have been wrong to do it - so it seems like this is a lesson we should learn, and that we should accept the validity of the post-mortem exercise.
But the difficulties in applying this in future to prevent any similar problems lie in (1) recognising the event in the moment it's happening [as next time won't look like this time] and (2) knowing whether acting now or pulling back is the more appropriate course [as the answer won't be known until we get it wrong].
In the absence of systemic problems, there are three general problems with the "post-mortem":
1 - No matter how tight a ship you run or how good your operations and controls are, at some point someone will stuff up. Analysing who and how afterwards does not guarantee anything for the future, because next time it will be something and someone else. "The human element", in other words; there will always be an element of error.
2 - Hindsight is 20/20. Something happens. Something else happens. A lot of other things are happening at the same time around these two things. The two things are related - one leads to the other. Some of the other things appear related but are not, or simply add noise at the time. Afterwards it may appear the development was "obvious", that there were "red flags" and a "failure to connect the dots" - but however obvious it looks afterwards, there are some events that just do not stand out at the time. Even the participants can forget this afterwards. (The rough arithmetic sketch after this list puts some numbers on why flags drown in noise.)
3 - Unexpected things can happen which were not possible to predict or guard against, such as an out-of-the-box system bug. As these events are "one-offs", the lessons you learn from them are not applicable to future events - other than as a general lesson to beware of one-off events. (And that is always in mind during a project anyway.)
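
As a back-of-the-envelope illustration of that noise problem (and of the false-positives theme in the Vedantam post below), here is a rough sketch in Python. All the numbers are invented for illustration; the point is only the shape of the arithmetic:

# Hypothetical numbers: a rare event, screened by a very accurate detector.
base_rate = 1 / 10_000       # assume 1 in 10,000 cases is a genuine threat
sensitivity = 0.99           # assume the screen flags 99% of genuine threats
false_positive_rate = 0.01   # assume it also flags 1% of harmless cases

# Bayes' rule: probability that a flagged case is a genuine threat
flagged_genuine = sensitivity * base_rate
flagged_harmless = false_positive_rate * (1 - base_rate)
posterior = flagged_genuine / (flagged_genuine + flagged_harmless)

print(f"Chance a flagged case is genuine: {posterior:.2%}")  # ~0.98%

Even with a 99%-accurate screen, roughly a hundred flags are false for every genuine one - which is why the "red flag" only looks red after the event.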
Some good commentary on this same theme can be found from:
- Shankar Vedantam, commenting on "intellectual dishonesty" in a post called "False Positives vs False Negatives" on his blog The Hidden Brain;
- Malcolm Gladwell, in the chapter "Connecting the Dots" in his book What the Dog Saw.