Regular readers of our blog will recall that we periodically compare how people treat data with how they treat real-life situations. Two examples of this approach are “Would you allow this?” and “The Data Accident Investigation Board”.
Taking a similar approach, this post compares data accident reporting with health and safety reporting.
Health and safety practice increasingly recognises the concept of the safety triangle (see below), which draws on widely recognised statistics about the proportions of major incidents, minor incidents, no-injury accidents and near misses (often termed near hits).
In safety-conscious industries there is growing encouragement to report all near misses. Analysing and correcting the root causes of near misses aims to shrink the overall safety triangle and therefore reduce the risk of fatalities or major injuries. For this approach to work, senior management must not only encourage such reporting but also ensure that anyone reporting a near miss is treated non-judgmentally and that the organisation genuinely learns from reported incidents.
An organisation with low levels of reported near misses, particularly relative to its statistics for “lost time” accidents, may believe that it is operating safely. The reality, however, may be that many unsafe practices exist but most go unreported.
So how does this compare with approaches to data? A data accident could be data recorded incorrectly (or not at all), an inappropriate piece of data analysis, or a data compliance or security breach.
Most data-intensive organisations keep an “issues log” to record identified data issues and track them through to resolution. A key question to ask yourself is “Are all our data accidents being reported?”
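As a sketch of what such a log might capture, here is a minimal issues-log entry in Python. The field names and severity categories are illustrative assumptions, not a standard; the point is that each accident or near miss gets recorded, investigated for a root cause, and tracked to resolution:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DataIncident:
    """One entry in a hypothetical data issues log (field names are illustrative)."""
    incident_id: int
    reported_on: date
    description: str
    severity: str                      # e.g. "near miss", "minor", "major"
    root_cause: Optional[str] = None   # filled in after investigation
    resolved_on: Optional[date] = None

    @property
    def is_open(self) -> bool:
        # An incident stays open until a resolution date is recorded
        return self.resolved_on is None

# Record a near miss, then track it through investigation to resolution
log = [DataIncident(1, date(2024, 3, 1), "Customer postcode missing", "near miss")]
log[0].root_cause = "unclear data requirements"
log[0].resolved_on = date(2024, 3, 8)
```

Even a structure this simple lets you ask the key question: does the volume and severity mix of entries look like the whole triangle, or only its tip?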
If you are only being informed of major data issues that cannot easily be ignored, then you may be unaware of the number of “near misses” occurring on a daily basis. This is a direct analogue of the safety triangle, and it can similarly mask poor underlying behaviours towards data.
So what can be done about it?
- Encourage reporting of all data accidents and near misses
- Report the number of data accidents per period, and question whether all such problems are being recorded if the reported numbers fall
- Ensure that there is no blame ascribed for reported data accidents (but you may wish to consider a different approach for data accidents that were not reported!)
- Ensure that data accidents are investigated to determine their root causes – human error, poor process, unclear data requirements, lack of system support etc.
- Look for common root causes that could be resolved by organisation wide approaches, for example process changes
- Use audits to confirm whether reported levels of data accidents are correct
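The reporting and root-cause steps above can be sketched as a simple aggregation over such an issues log. The entries and cause categories below are invented for illustration; the pattern is counting accidents per period (a drop may signal under-reporting rather than improvement) and tallying root causes to find organisation-wide fixes:

```python
from collections import Counter
from datetime import date

# Illustrative log entries: (reported_on, severity, root_cause)
incidents = [
    (date(2024, 1, 10), "near miss", "human error"),
    (date(2024, 1, 22), "minor",     "poor process"),
    (date(2024, 2, 3),  "near miss", "poor process"),
    (date(2024, 2, 15), "near miss", "unclear data requirements"),
    (date(2024, 3, 5),  "minor",     "poor process"),
]

# Accidents reported per month: a falling count warrants the question
# "are all problems still being recorded?" rather than a celebration
by_month = Counter(d.strftime("%Y-%m") for d, _, _ in incidents)

# Root causes ranked by frequency: common causes are candidates for
# organisation-wide fixes such as process changes
by_cause = Counter(cause for _, _, cause in incidents)
most_common_cause, occurrences = by_cause.most_common(1)[0]
```

A periodic audit can then compare these reported counts against an independent sample of actual data problems.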
What other steps have you found successful?