The term ‘normalisation of deviance’ refers to situations where employees become accustomed to deviations from standards or designs in engineering and industrial settings, without recognising that these deviations can be precursors to major incidents.

This was a contributory factor in the loss of the Challenger and Columbia space shuttles, the Buncefield explosion, the loss of an RAF Nimrod MR2 and many other major incidents.

Can normalisation of deviance occur with data?

I think the answer to this is a resounding YES – in fact, ‘normalisation of deviance’ is arguably one of the main causes of data quality issues. But first, let’s explore some of the reasons for ‘normalisation of deviance’ in an engineering/industrial context. These include:

  • Ignoring the signs – Prior to each shuttle disaster there had been earlier indications of the eventual failure: partial burn-through of the booster O-rings and damaged insulation tiles at critical points. Repeated observation makes people think this is normal behaviour rather than a risk. Similarly, the loss of the RAF Nimrod was caused by fuel leaking onto a hot exhaust pipe – a risk that had been identified on a number of occasions, but because the planes had flown without incident for 30 years it was assumed to be low
  • Following the rules – Prior to the Challenger launch, the NASA staff involved believed they had followed all the correct engineering and organisational procedures for the launch decision – and indeed they had. However, key concerns raised by some engineers were never communicated to those who could have halted the launch
  • Passing the buck – Concerns about risks raised by junior staff may be repeatedly ignored by more senior staff who are focusing on ‘bigger’ issues such as programme over-runs, finance and strategy. Experts and junior staff have sometimes been crying out for their concerns to be addressed, with little effect
  • Living with non-conformance – At Buncefield, poor adherence to procedures and working with faulty measuring instruments were accepted as normal. The staff running the site did not appreciate the risk of failure
  • Belief that controls are not needed – The original designers of a system will have designed it for particular circumstances. Over time they move to different roles, and new staff may not check deviations against the design/operating manual, so the magnitude of the risks is not apparent and the organisation forgets why something is important
  • Fix the people – There can be a mistaken view that all that is needed is to fix the immediate issue and move or retrain the individuals involved. This does not correct the underlying organisational problem of poor procedures and processes, which will fail to address future technical issues correctly

So how does ‘normalisation of deviance’ apply to data? Consider the following cases:

  • Incomplete data returns – Staff (and managers) become used to accepting incomplete data returns for work activities because the requirement to supply the data is not understood by local staff and there is no negative response from the organisation. In the worst case, staff supply only the data that is enforced by system validation rules
  • Default data values – Staff not bothering to change the values of fields that are pre-populated with a default
  • Entering ‘good enough’ data – If staff cannot find the exact code they require, they choose a close one. Alternatively, they may develop a local convention of using codes in a consistent, but non-compliant way
  • Local process deviations – Staff developing and following local deviations to standard processes in order to complete work activities
  • Use of local systems – Data Anarchists (see the Data Zoo) create and lovingly maintain their own systems outside the main corporate ones
  • Forgetting the reasons for data – A coding system may exist for particular entities, but knowledge of how the coding was developed and applied may have been lost, leading to a loss of meaning and value in the data
  • Loss of system knowledge – For older systems, the original developers and maintainers may no longer be available. If applications and code are poorly documented, then data errors may arise through lack of knowledge of how the system manipulates data
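Several of the patterns above – default values left untouched and incomplete returns tolerated – can be surfaced with simple profiling checks before they become ‘normal’. The sketch below is purely illustrative: the field names, default values and record structure are assumptions, not taken from any real system.

```python
# Hypothetical sketch: profiling a batch of work-activity records for two of the
# deviance patterns above -- fields left at their pre-populated default, and
# required fields left blank because no validation rule enforces them.
from collections import Counter

DEFAULTS = {"activity_code": "UNKNOWN", "site": "HQ"}        # assumed pre-populated defaults
REQUIRED = ["activity_code", "site", "hours", "completed_by"]  # assumed required fields

def profile(records):
    """Count, per field, how often the default was left in place or the value is missing."""
    left_at_default = Counter()
    missing = Counter()
    for rec in records:
        for field in REQUIRED:
            value = rec.get(field)
            if value in (None, ""):
                missing[field] += 1
            elif DEFAULTS.get(field) == value:
                left_at_default[field] += 1
    return left_at_default, missing

records = [
    {"activity_code": "UNKNOWN", "site": "HQ", "hours": 7.5, "completed_by": ""},
    {"activity_code": "A100", "site": "HQ", "hours": None, "completed_by": "j.smith"},
]
defaults, gaps = profile(records)
print(defaults)  # fields suspiciously often left at their default value
print(gaps)      # required fields routinely left blank
```

Tracked over time, rising counts for a field are an early warning that a deviation is being normalised rather than a one-off slip.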

You should be able to identify the similarities between these actions and the syndrome of ‘normalisation of deviance’. You may also recognise these actions in your organisation.

Is there ‘normalisation of data deviance’ in your organisation?

How do you prevent it?
