A recent talk by Professor Graham Braithwaite of Cranfield University, on the role and work of the UK Air Accidents Investigation Branch (AAIB) and the approaches taken to investigating an accident, made me think about whether similar approaches might work for data.

This did not immediately become a blog post, but having read Jim Harris’s blog post on the “The Poor Data Quality Jar” and the subsequent discussion of this post, I resurrected this idea.

Do you think there is a place for the Data Accident Investigation Board?

The process for the investigation of air accidents is specified in Annex 13 of the Convention on International Civil Aviation. The objective of this process is stated as:

The sole objective of the investigation of an accident or incident shall be the prevention of accidents and incidents. It is not the purpose of this activity to apportion blame or liability.

Because of this objective, people are less likely to lie about or hide their part in an accident, allowing investigators to better understand the root causes and contributory factors. The outcomes are therefore system-based, identifying why the safety system failed, rather than individually based, stating who may have been to blame.

This has a twofold outcome:

  1. If system failures are identified, then this allows changes to be made to improve the overall safety of the system
  2. It avoids what may happen if an individual were blamed, namely that the accident could be written off as the failing of a single person, which may result in no changes to the safety systems.

In Jim’s post about “The Poor Data Quality Jar”, he and the commenters generally take a light-hearted approach to the subject, including suggesting public humiliation or electric shocks for staff who are discovered creating data errors. This may be fun for those not involved, and distinctly uncomfortable for those who are caught, but it is likely to improve data quality only marginally. People work within the context of an organisation and are affected by its rules, procedures, standards, culture and behaviours, so it is unlikely that a single individual will be truly to blame.

A “No Blame” approach to the investigation of “Data Accidents” should focus on finding, and correcting, the root causes of data problems.

  • Would this approach result in more Data Accidents being declared?
  • Would this approach provide a more effective means for removing the root causes of data problems?
  • Would this improve overall data quality?

4 thoughts on “The Data Accident Investigation Board”

  • 11th May 2010 at 14:49

    Great post Julian,

    I definitely agree that a “no blame” approach using a data accident investigation board would result in more data quality issues being reported, as well as leading to the true root causes of those problems being identified.

    It is the latter aspect that I think needs to be emphasized. I have seen many organizations adopt a blameless approach; however, they didn’t investigate the root causes of data quality issues because they felt the investigation would reveal who was responsible, even if that person or business process wasn’t “officially” blamed.

    Therefore, without the root cause analysis, both the data cleansing and defect prevention measures put in place were not properly targeting the real underlying issues.

    Organizations need to embrace both aspects of the Data Accident Investigation Board that you have done an excellent job describing in order to truly improve their overall data quality.

    Best Regards,


    • 11th May 2010 at 15:47


      Thanks for the insightful comment.

      The other bonus of determining the root cause of a data accident is that you can then backtrack to similar accidents.

      A few years ago, one of the big data stores I was steward for had numerous data issues, one set of which was caused by a surveyor who had misinterpreted the data hierarchy guidelines. Fortunately, they were very consistent in the way they had classified the data, so once you had spotted their ‘fingerprint’ on the data, it became a much easier task to correct the errors.
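      That kind of ‘fingerprint’ correction can be sketched in a few lines. This is a minimal, hypothetical illustration: the field names, categories and corrective mapping are all invented for the example, not taken from the original data store.

      ```python
      # Hypothetical records, each tagged with who entered them.
      records = [
          {"id": 1, "entered_by": "surveyor_a", "category": "pipeline/branch"},
          {"id": 2, "entered_by": "surveyor_a", "category": "pipeline/stub"},
          {"id": 3, "entered_by": "surveyor_b", "category": "pipeline/main"},
      ]

      # The surveyor's consistent misreading of the hierarchy guidelines,
      # expressed as a wrong -> right mapping (their 'fingerprint').
      fingerprint_fix = {
          "pipeline/branch": "pipeline/main",
          "pipeline/stub": "pipeline/branch",
      }

      def correct(records, who, mapping):
          """Apply the corrective mapping only to records entered by `who`."""
          for record in records:
              if record["entered_by"] == who and record["category"] in mapping:
                  record["category"] = mapping[record["category"]]
          return records

      corrected = correct(records, "surveyor_a", fingerprint_fix)
      ```

      The point of the sketch is that once the root cause is known to be one consistent contributor, the fix becomes a single remapping applied to their records only, rather than a record-by-record hunt across the whole store.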


  • 23rd May 2010 at 21:34

    Sorry for the late response, but I still want to leave my thoughts, as I really like the “no blame” approach.

    I agree that the “Data Accident Investigation Board” is a useful (dare I say “right”) approach to looking into data quality problems. However, I see one problem: when does it convene?

    For air traffic accidents, the “trigger” is usually hard to miss: a plane has come down, or a serious problem was narrowly averted. For data accidents, it is less obvious, as there are more ways to cover things up, and usually not enough people are involved for everyone to be open about their roles. Usually, people try to cover up their mistakes or rely on manual correction of other people’s mistakes.

    I find it very hard to get people to admit they have a problem. I guess most organizations are still in the denial stage.

    I look forward to reading more of your posts!

    • 24th May 2010 at 06:54


      I agree that finding the “trigger” could be a problem, as in many cases it will rely on staff admitting to errors that they have found and were possibly created by them or their colleagues. In a culture where there is potential disciplinary action for such transgressions, people will stay quiet as much as they can.

      If an organisation has a culture where such admissions are treated as learning events and where staff are only disciplined for fraud or gross negligence, then staff will be more likely to admit to such errors.

      A radical thought could be to reward staff financially for each new type of data error they find, so long as it was not created by them or their associates. This would truly impart the view that data was important to the organisation.


