How to define your data quality problems: getting started

Data Mining  |  Published October 9, 2017

To tackle any problem in a systematic and effective way, you must be able to break it down into parts. After all, understanding the problem is the first step to finding the solution.  From there, you can develop a strategic battle plan. With data quality, the same applies: every initiative features many stages and many different angles of attack.

When starting a data quality improvement program, it’s not enough to count the number of incorrect or duplicated records in your database. Quantity only goes so far. You also need to know what kinds of errors exist so you can allocate the right resources.

In an insightful blog post, Jim Barker breaks data quality problems down into two types. We’ll look closely at defining these ‘types’, and at how we can use the distinction to our advantage when developing a budget.

Types of data quality problems

Jim Barker – known as ‘Dr Data’ to some – has borrowed a simple medical concept to define data quality problems. His blog explains just how these two types fit together and will be of interest to anyone who has struggled to find the data quality gremlins in their machine.

On the one hand, there’s the Type I data quality problem: things we can detect using automated tools. On the other hand, Type II is more enigmatic. You know the problem is there, but it’s harder to pin down, because the data has to be understood in context before the error becomes visible.

The key differences can be simply and quickly defined:

  • Type I data quality problems require “know what” to identify: completeness, consistency, uniqueness, and validity. These attributes can be picked up using data quality software, or even manually. You don’t need a lot of background knowledge, or a track record of working with that data. It’s there, it’s wrong, and you can track it down. For example, if we insert a 3 into a gender field, we can be sure that it is not a valid entry (see the sketch after this list).
  • Type II data quality problems require “know how” to detect: the timeliness, congruence and accuracy attributes. They call for research, insight and experience, and are not as simple or straightforward to find. These datasets may appear free of problems, at least on the surface. The devil is in the detail, and it takes time to correct. Jim’s example is an employee record for someone who has retired: without knowing the date of retirement, the record appears to be correct.
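
To make the “know what” idea concrete, here is a minimal sketch of an automated Type I check in Python. The field names and the set of valid codes are assumptions for illustration, not part of Jim’s article:

```python
# A minimal sketch of an automated Type I validity check.
# Field names and the VALID_GENDERS set are hypothetical examples.

VALID_GENDERS = {"M", "F"}  # assumed set of valid codes for this field

def find_type1_problems(record: dict) -> list:
    """Return rule violations that need no business context to spot."""
    problems = []
    if not record.get("email"):                    # completeness
        problems.append("missing email")
    if record.get("gender") not in VALID_GENDERS:  # validity
        problems.append(f"invalid gender: {record.get('gender')!r}")
    return problems

print(find_type1_problems({"email": "", "gender": "3"}))
# ['missing email', "invalid gender: '3'"]
```

Because the rules need no context about the individual record, a tool can run them across millions of rows at negligible cost.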

The key takeaway is that data quality problems require a complex, strategic approach that is not uniform across a database. Once we break the data down, we start to see that it requires human and automated intervention – a dual attack.

Cost to fix

So, how do we deal with Type I and Type II data quality problems? Are the costs comparable, or are they different beasts entirely?

The important thing to remember is that a Type I data validation or verification problem can be logically defined, and that means we can write software to find it and display it. Automated fixes are fast, inexpensive and can be completed with only occasional manual review. Think of Type I data quality problems as form field validation. Once valid, the problem disappears.
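
As one hypothetical example of what such software does, the sketch below flags duplicate records by a normalised email key – a uniqueness check of the Type I kind. The rows and field names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical customer rows; in practice these would come from the database.
rows = [
    {"id": 1, "email": "Ann@Example.com"},
    {"id": 2, "email": "ann@example.com "},
    {"id": 3, "email": "bob@example.com"},
]

# Group rows by a normalised key; any group larger than one is a duplicate set.
groups = defaultdict(list)
for row in rows:
    groups[row["email"].strip().lower()].append(row["id"])

duplicates = {key: ids for key, ids in groups.items() if len(ids) > 1}
print(duplicates)  # {'ann@example.com': [1, 2]}
```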

We could estimate that Type I issues represent 80 percent of our data quality problems, yet consume only 20 percent of our budget.

Type II data needs the input of multiple parties before it can be discovered, flagged and eradicated. While every customer in our CRM may have a date of purchase, that purchase date may be incorrect, or may not tally with an invoice or shipping manifest. Only specialists will be able to weed out such problems and manually improve the CRM by carefully verifying its contents.
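
A Type II check, by contrast, only works when you can bring a second source into play. The sketch below – the systems and field names are assumptions for illustration – cross-references CRM purchase dates against invoice dates and flags mismatches for a specialist to review, rather than auto-correcting them:

```python
from datetime import date

# Hypothetical extracts from two systems that should agree.
crm = {"cust-42": date(2017, 3, 1)}        # purchase date held in the CRM
invoices = {"cust-42": date(2017, 3, 15)}  # date on the matching invoice

# A mismatch is not automatically an error in either system: it is flagged
# for a subject matter expert, not silently "fixed".
for cust_id, crm_date in crm.items():
    invoice_date = invoices.get(cust_id)
    if invoice_date is None:
        print(f"{cust_id}: no invoice found to verify against")
    elif invoice_date != crm_date:
        print(f"{cust_id}: CRM says {crm_date}, invoice says {invoice_date} - review")
```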

Often, businesses find it difficult to allocate the necessary resources – particularly if they have grown rapidly, or have high employee churn. While these Type II problems are fewer – perhaps the remaining 20 percent of the database – they could require 80 percent of our data quality budget or more. If you continually lose the staff who hold that knowledge, and fail to retain any of it over time, you will find Type II data much more difficult to deal with, because the human detection element is lost.

Improving accuracy

In order to improve data accuracy, we must work on Type I and Type II data as separate but conjoined problems. Fixing Type I data quality challenges can deliver quick wins, while Type II presents a challenge that only human expertise can solve.

Over time, a database will always drift out of date, and countering this requires ongoing, sustained effort. Data can be cleansed in situ, or validated at the point of entry, but Type I errors will still creep in for a number of reasons: import/export, corruption, manual edits, human error. Type II problems, meanwhile, occur naturally, of their own accord: data that validates and looks correct may now be wrong, simply because someone’s circumstances have changed.
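
One way to keep that drift visible – a sketch under assumed field names, with an arbitrary threshold – is to record when each entry was last verified and flag anything stale for human review:

```python
from datetime import date, timedelta

REVERIFY_AFTER = timedelta(days=365)  # arbitrary staleness threshold, for illustration

records = [
    {"id": "cust-42", "last_verified": date(2016, 6, 1)},
    {"id": "cust-43", "last_verified": date(2017, 9, 1)},
]

today = date(2017, 10, 9)
stale = [r["id"] for r in records if today - r["last_verified"] > REVERIFY_AFTER]
print(stale)  # ['cust-42']: still validates, but may no longer reflect reality
```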

Ensuring data integrity

Data informs business decisions and helps us get a clear picture of the world. Detecting Type I data quality problems is simple, inexpensive and quick. If your business has not yet adopted some kind of data quality software, it should implement one to avoid waste, brand damage and inaccuracy.

As for Type II, the key is to understand that it exists and to implement processes that prevent it from occurring. Workarounds and employee deviations from business processes will drag the data down. A failure to allocate subject matter experts could increase the amount of Type II data over time. And as the proportion increases, so does the price of fixing it, because you need expert eyes on the data to weed it out. See the 1:10:100 Rule article: as a rule of thumb, a record that costs $1 to verify at the point of entry costs $10 to cleanse later, and $100 if left to cause a downstream failure.
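
The rule’s arithmetic is worth spelling out; the record count and dollar figures below are illustrative, with the rule’s standard tenfold step at each stage:

```python
# The 1:10:100 Rule: a bad record costs roughly $1 to prevent at entry,
# $10 to cleanse once it is in the database, and $100 if it is left
# to cause a downstream failure. Figures are illustrative.
bad_records = 1_000

for stage, unit_cost in [("prevent at entry", 1),
                         ("cleanse in place", 10),
                         ("fail downstream", 100)]:
    print(f"{stage}: ${bad_records * unit_cost:,}")
# prevent at entry: $1,000
# cleanse in place: $10,000
# fail downstream: $100,000
```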

Neither type of problem is impossible to detect and eradicate; one is simply easier than the other. Data quality vendors are continually looking at new ways to make high-quality data simpler to achieve.

This article originally appeared here. Republished with permission.