I recently came across two examples of how to handle poor quality supplier data, one good and one bad, in the same business unit of a large organisation.
The organisation concerned relies on contractors supplying accurate data for the work they have undertaken. Due to the complexity of the data entries, in particular where different ‘layers’ of data interact, the likelihood of data error is high. Physically checking the data would carry cost and safety implications, and the data itself has cost and possible long-term safety implications if incorrect, so it is essential that the data supplied for input is correct.
One team member demonstrated a very poor approach when receiving data that they believed to be incorrect – they entered data into the system to represent what they thought the correct data should be! This is decidedly risky: they may actually be making the error worse; by changing data supplied by the contractor they take over liability for the accuracy of that data; and the contractor will tend to repeat the errors in future, as they know the data is likely to be checked and corrected for them. If the data had been entered as supplied, this team member would have retained some liability, since they knew it to be wrong, but the contractor would have retained the majority of the liability.
A different team operated the correct process for addressing data that was supplied with errors – they rejected the batch of data, stating that it contained errors but not the nature of those errors. In this case the supplier had to ensure they understood the data requirements in order to correct the data, which in turn would lead to better quality data in future. Additionally, the team receiving the data did not pick up any liability for the data errors.
The second approach takes less effort for less risk, whereas the first took more effort and assumed much of the risk – so why did they do it?