Processing Missing Values with Hadoop

Missing values are just part of life in the data processing world. In most cases you cannot simply ignore them, because they may adversely affect whatever analytic processing you are going to do. Broadly speaking, handling missing data consists of two steps: gaining some insight into the missing fields in the data, and then taking action based on that insight. In this post the focus will be primarily on the first step.

The Hadoop based implementation is available in my OSS project chombo on github. In the future I will provide a Spark port of the implementation.

Missing Data Mechanism

There are three underlying mechanisms behind missing data, listed below. Knowing the underlying mechanism helps in choosing the missing value processing techniques to use.

  1. Missing Completely at Random (MCAR): Missing values for a field in a record are completely random, and the probability of a value being missing depends neither on the known values of other fields nor on the missing value itself.
  2. Missing at Random (MAR): Missing values are random, but the probability of a missing value in a field depends on the values of other fields.
  3. Missing Not at Random (MNAR): The probability of a missing value in a field depends on the value of the missing field itself.

With MCAR, the probability of a missing value for a field is the same for all records, i.e., it is completely random.

With MAR, the probability of a missing value in a field depends on the known values of other fields in the same record. For example, in a survey, if the education level is college, the probability of the earning field being missing is higher. The probability of a missing value may also depend on a hidden variable that's not part of the record.

With MNAR, the probability of a missing value depends on the missing value itself. For example, people with higher income are more likely not to provide their income information.
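The three mechanisms are easy to see in simulation. The sketch below uses hypothetical data and field names (not part of chombo) and blanks out an income field under each mechanism: a constant probability for MCAR, a probability driven by the observed education field for MAR, and a probability driven by the income value itself for MNAR.

```python
import random

random.seed(42)

def make_records(n=1000):
    """Generate synthetic survey records with education and income fields."""
    recs = []
    for _ in range(n):
        edu = random.randint(8, 20)
        income = 20000 + 4000 * edu + random.randint(-10000, 10000)
        recs.append({"education": edu, "income": income})
    return recs

def apply_missingness(recs, mechanism):
    """Blank out the income field according to the chosen mechanism."""
    out = []
    for r in recs:
        r = dict(r)
        if mechanism == "MCAR":
            # probability of missing is constant for every record
            p = 0.2
        elif mechanism == "MAR":
            # probability depends on another, observed field (education)
            p = 0.4 if r["education"] < 12 else 0.1
        elif mechanism == "MNAR":
            # probability depends on the missing value itself (income)
            p = 0.4 if r["income"] > 80000 else 0.1
        if random.random() < p:
            r["income"] = None
        out.append(r)
    return out
```

Running the MAR case and grouping records by education level shows a clearly higher missing rate for the low-education group, which is exactly what distinguishes MAR from MCAR.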

Handling Missing Values

Unless having missing values has no adverse consequence, which is rare, you have to do something about them. Here is a workflow that could be used to handle missing values.

  1. Do exploratory analysis to get statistics on missing values, row wise and column wise. The insight gained can be used in the next steps.
  2. Filter out rows and columns with too many missing values, based on the results obtained in the previous step.
  3. Accept the data obtained from step 2 with some missing values. Depending on the further analysis to be performed, this may be a viable approach.
  4. If missing values are not acceptable, replace them with values predicted through various statistical techniques. These techniques are known as missing value imputation.

For our use case, we will follow steps 1, 2 and 4 of the workflow above. The focus of this post is on steps 1 and 2. Missing value imputation, as in step 4, will be covered in a future post. There are two drawbacks to filtering:

  1. If the underlying mechanism for missing values is MAR or MNAR, the data resulting from the filtering may not conform to the underlying probability distribution of the data, i.e., it becomes biased.
  2. If there are too many missing fields, there may not be enough data left after filtering, and what remains may not be useful for the analysis to be performed on it.
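The first drawback is easy to demonstrate. In the hypothetical sketch below, income values are missing more often when they are high (MNAR), so dropping the affected records shifts the mean of the surviving data downward, away from the true population mean:

```python
import random

random.seed(7)

# population of 10,000 synthetic incomes
incomes = [random.gauss(60000, 15000) for _ in range(10000)]

# MNAR: high incomes go missing 60% of the time; filtering drops those rows
observed = [x for x in incomes
            if not (x > 70000 and random.random() < 0.6)]

true_mean = sum(incomes) / len(incomes)
filtered_mean = sum(observed) / len(observed)
# the filtered data under-represents high incomes, so its mean is biased low
```

This is only an illustration with made-up numbers, but the effect is general: whenever missingness is correlated with the values themselves, the filtered data no longer follows the original distribution.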

Customer Service Survey

We will use customer service survey data along with some data collected from customer calls as the use case. The fields in the survey data are as follows.

  1. Customer ID
  2. Customer type
  3. Call hold time
  4. Number of previous calls on the same issue
  5. Number of customer service re-routes in the call
  6. Score for customer service friendliness
  7. Score for customer identity verification process
  8. Whether issue got resolved
  9. Customer service representative satisfaction score
  10. Overall satisfaction score

Here is some sample input data.


Missing Value Statistics

Missing value statistics are calculated using the Map Reduce class MissingValueCounter. It operates in three different modes, based on the parameter mvc.counting.operation.

When set to row, it calculates row wise missing field counts as below. The output contains each row's unique key followed by its count of missing fields, with the counts in descending order.


When set to col, it calculates column wise missing field counts as below. It shows each column index followed by its count of missing fields, with the counts in descending order.


There is also a third setting, distr, which gives you the distribution of missing field counts, row wise and column wise. Here is some output.


The second row in the output tells us that there are 48 rows with 2 missing fields. Similarly, the fifth record tells us that there is 1 column with 52 missing fields.
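The actual chombo implementation is a Java Map Reduce job; as a rough illustration of what the three modes compute, here is a minimal local Python sketch. The comma delimiter, the key in the first field, and the empty-string convention for a missing field are all assumptions for the sake of the example:

```python
from collections import Counter

def missing_stats(lines, mode, key_index=0, delim=","):
    """Compute missing field statistics analogous to the three modes
    of MissingValueCounter (mvc.counting.operation = row | col | distr).
    A field is considered missing when it is the empty string."""
    rows = [line.split(delim) for line in lines]
    row_counts = {r[key_index]: sum(1 for f in r if f == "") for r in rows}
    col_counts = Counter()
    for r in rows:
        for i, f in enumerate(r):
            if f == "":
                col_counts[i] += 1
    if mode == "row":
        # (row key, missing count), counts in descending order
        return sorted(row_counts.items(), key=lambda kv: -kv[1])
    if mode == "col":
        # (column index, missing count), counts in descending order
        return sorted(col_counts.items(), key=lambda kv: -kv[1])
    if mode == "distr":
        # how many rows / columns have k missing fields
        row_distr = Counter(row_counts.values())
        col_distr = Counter(col_counts[i] for i in range(len(rows[0])))
        return row_distr, col_distr
```

On a real cluster the row counts would come out of the mappers keyed by row, and the column counts and distributions out of the reducers, but the statistics themselves are the ones shown above.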

Filtering Out Rows and Columns

The next step is to filter out rows and columns from the data, using the output of the last Map Reduce as a guide. For filtering we will use a Map Reduce class called Projection, which is essentially an implementation of an SQL select query. You can set the projection fields and the predicates for the select clause. It also supports some UDFs for predicates.

Based on the findings from the previous Map Reduce job, this is how we are going to filter the data:

  1. Filter out rows with more than 2 missing fields
  2. Filter out any row with the last field missing
  3. Filter out the 7th column, which has most of its values missing

The overall customer satisfaction score is mandatory. That's why we filter out any row that has this field missing, as indicated by the 2nd filter condition. Here is some sample output from the filtering operation.


The output will still have some missing values. However, the most offending rows and columns have been removed.
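The three filter rules can be sketched locally in a few lines of Python. This is only an illustration of the logic, not the chombo Projection implementation; the comma delimiter, the empty-string missing convention, and passing the dropped column's index as a parameter are assumptions:

```python
def filter_data(lines, drop_col, max_missing=2, delim=","):
    """Apply the three filter rules from the post:
      1. drop rows with more than max_missing missing fields
      2. drop rows whose last field (overall satisfaction) is missing
      3. project away the column at index drop_col
    A field is considered missing when it is the empty string."""
    kept = []
    for line in lines:
        fields = line.split(delim)
        if sum(1 for f in fields if f == "") > max_missing:
            continue
        if fields[-1] == "":
            continue
        projected = [f for i, f in enumerate(fields) if i != drop_col]
        kept.append(delim.join(projected))
    return kept
```

In the real job the first two rules would be select predicates and the third the projection field list, but the surviving records are the same.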

Missing Value Imputation

Imputation is essentially prediction of missing values. Available data can be used to predict the missing values using various statistical techniques.

Any missing field in the data after the filter operation can be filled in with imputed values. This will be the topic of a future post.

Imputation carries the same risk as filtering. After imputation, the data may no longer correspond to the underlying probability distribution that generated the data set.
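As a minimal preview of the idea (not necessarily the technique the future post will use), here is simple mean imputation for one numeric column. Note that it ignores any MAR or MNAR structure, which is exactly how the distribution can get distorted:

```python
def impute_mean(rows, col):
    """Replace missing values (None) in numeric column col with the
    mean computed from the observed values of that column."""
    observed = [r[col] for r in rows if r[col] is not None]
    mean = sum(observed) / len(observed)
    return [
        r[:col] + [mean] + r[col + 1:] if r[col] is None else r
        for r in rows
    ]
```

More sophisticated techniques predict each missing value from the other fields of the same record, which respects an MAR mechanism far better than a single column-wide constant.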

Summing Up

In this post we have gone through a Hadoop based solution for handling missing values. The steps to execute this use case can be found in this tutorial document.


For commercial support for any solution in my github repositories, please talk to ThirdEye Data Science Services. Support is available for Hadoop or Spark deployment on the cloud, including installation, configuration and testing.


About Pranab

I am Pranab Ghosh, a software professional in the San Francisco Bay area. I manipulate bits and bytes for the good of living beings and the planet. I have worked with myriad of technologies and platforms in various business domains for early stage startups, large corporations and anything in between. I am an active blogger and open source project owner. I am passionate about technology and green and sustainable living. My technical interest areas are Big Data, Distributed Processing, NOSQL databases, Machine Learning and Programming languages. I am fascinated by problems that don't have neat closed form solution.
This entry was posted in Big Data, Data Profiling, Data Science, ETL, Hadoop and Map Reduce.
