Machine Learning Model Interpretation and Prescriptive Analytics with LIME

Machine learning model interpretability is the degree to which a human can comprehend the reasons behind a prediction made by a model. Interpretability may be required for various reasons, e.g. meeting compliance requirements or gaining insight in high-stakes situations such as medical diagnosis. In this post we will show how to use the lime Python library to interpret a Random Forest based loan approval machine learning predictive model.

The implementation is available in my open source project avenir. To make it easier, a wrapper class around lime has been created, so that lime can be used without any Python coding by defining all relevant parameters in a properties configuration file.

Model Interpretation

There is an inverse relationship between model interpretability and complexity. Some machine learning models with low complexity are inherently interpretable, e.g. linear regression, logistic regression and decision trees. These models are intrinsically interpretable, and no additional steps are necessary for interpretation.

Unfortunately, most machine learning models are complex, e.g. deep learning models, and are not directly interpretable. For these complex models the only recourse is black box, or post hoc, techniques, which require additional analysis after the model is trained. There are various black box techniques.

Our focus is on a black box technique called Local Interpretable Model-Agnostic Explanation (LIME). It is local in the sense that the interpretations are created around a specified data point. This is how LIME works:

  • Perturbs the provided data point to create new data points
  • Uses the generated data points and the actual provided predictive model to get predictions, i.e. generate labels
  • Repeats the steps many times and, using the generated data set as training data, builds a simpler, interpretable local model, e.g. logistic regression or a decision tree
  • Uses the provided data point and the new local model to generate explanations

As with other black box techniques, LIME is agnostic to the actual machine learning model. You just have to provide a call back function to LIME; the function makes predictions based on the actual model.

Loan Approval Data

We will use loan approval data as an illustrative example. With LIME we will be able to learn the influencing features, along with predicates involving those features, that cause a loan application to be approved or rejected. Here are the various fields in the loan approval data. The data is artificially created using ancestral sampling.

  • loan ID (ignored)
  • marital status
  • number of children
  • education
  • employment status (employee, self employed)
  • income
  • years of experience
  • number of years in current job
  • outstanding debt
  • loan amount
  • loan term
  • credit score
  • approval status (target or class variable)

Training the Classification Model

We will be using Random Forest for the actual model. The first step is to train the actual classification model. You can manually tune the model by following the instructions in the tutorial document.

The training and validation can be performed without writing any Python code. The Random Forest implementation in scikit is wrapped with another Python class. All relevant parameters are provided through a properties configuration file. Here is the output from training and validation.

running mode: trainValidate
...building random forest model and kfold cross validating model
average error with k fold cross validation 0.047
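Under the hood, the training and k-fold validation performed by the wrapper can be approximated with plain scikit-learn. The snippet below is a sketch with placeholder data and hyperparameters, not the actual wrapped implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# placeholder data standing in for the loan approval data set
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# hyperparameters here are illustrative; they would come from the
# properties configuration file in the wrapped implementation
rf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=42)

# k-fold cross validation (k = 5); default scoring is accuracy
scores = cross_val_score(rf, X, y, cv=5)
avg_error = 1.0 - scores.mean()
print(f"average error with k fold cross validation {avg_error:.3f}")
```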

Alternatively, you can use Hyperopt based AutoML to train the model, as explained in depth in my earlier post. If using AutoML, it will also select the right classification model for the data set, along with optimum parameters, and that model may not be Random Forest.

Once the model is tuned, it should be trained and the trained model saved as described in the tutorial. When LIME calls the call back function for prediction, the saved trained model is used for the predictions.
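A minimal sketch of this save-then-predict flow is shown below, assuming joblib for model serialization; the file name and the `predict_fn` wrapper are hypothetical, not the actual avenir code.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# train and save the tuned model (data and file name are illustrative)
X, y = make_classification(n_samples=200, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
joblib.dump(model, "loan_model.sav")

# later, inside the LIME call back, the saved model is reloaded and used
saved = joblib.load("loan_model.sav")

def predict_fn(samples):
    """Call back invoked by LIME with perturbed samples; returns class probabilities."""
    return saved.predict_proba(samples)
```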

Interpreting the Model

A Python class wraps lime, enabling it to be used without writing Python code. Only a properties configuration file containing all the parameters needs to be created.

When getting interpretations, the data point of interest is provided as the last argument, as you can see in the tutorial document. We will analyze two opposite cases in detail. In one case, the outcome is positive, i.e. the loan application is approved. In the other case, the outcome is negative, i.e. the loan application is rejected.

For the positive outcome case, here is the data point and the explanations. The data point excludes the first field (loan ID) and the last field (target variable). Each explanation includes a predicate involving a feature and an associated score. The score indicates the degree of influence of the feature under consideration.


model explanation
('18.00 < debt <= 25.00', 0.13919279603120172)
('2.43 < current job <= 3.12', 0.09016956212983421)
('income > 122.00', 0.07651169041465362)
('loan amount <= 285.00', 0.05923023734962852)
('11.85 < work experience <= 15.69', 0.057867014493570776)

As gleaned from the explanations, the reasons for the approval are as follows. Intuitively, they all make sense.

  • Debt below a level
  • Number of years in current job above a level
  • Income above a level
  • Loan amount below a level
  • Number of years of work experience above a level

Now we will focus on the negative case, i.e. the loan application is rejected. Here is the data point and the corresponding explanations.


model explanation
('debt > 55.00', -0.1912247331605266)
('current job <= 1.05', -0.15997418168086547)
('work experience <= 5.36', -0.07838758199402282)
('53.00 < income <= 101.00', -0.05215212991337308)
('0.00 < employment <= 1.00', -0.02748299140505971)

Here is a summary of the explanations for the rejection. They all make sense.

  • Debt above a level
  • Number of years in current job below a level
  • Number of years of work experience below a level
  • Income below a level
  • Self employed status

Prescriptive Analytics

How is machine learning model interpretation related to prescriptive analytics? The goal of prescriptive analytics is data-driven decision making. Prescriptive analytics should generate a set of recommendations that, when executed, will produce a certain desired outcome.

To relate this to the loan approval case, consider a case where the loan application is rejected. The loan officer wants to make a set of recommendations so that, if they are followed, the outcome could become positive at some time in the future.

The relationship between model interpretation and prescriptive analytics is revealed when we make the following key observation: if we invert the predicates we got from the model explanation, the outcome is also likely to be inverted, i.e. it becomes positive.

In other words, the inverted predicates become the recommendations. Here is the result of the inversion. Not all the recommendations will be practical or viable to implement in real life.

  • Bring debt below 55K. The applicant could pay off some of the debt.
  • Make the number of years in the current job more than 1.05. There is no immediate way to act on this.
  • Make the number of years of work experience greater than 5.36. Same comment as above.
  • Make income above 53K. The applicant could try to get a raise or change jobs.
  • Become an employee instead of being self employed.
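One way to mechanize the inversion is to parse the predicate strings LIME emits and flip the relational operators. The helper below is a sketch; the convention of inverting a bounded range by moving past its upper bound is one simple choice among several, and the function assumes the three predicate forms seen in the outputs above.

```python
import re

def invert_predicate(pred):
    """Invert a LIME predicate string so it points toward the opposite outcome.
    Handles three forms: 'feature <= v', 'feature > v' and 'lo < feature <= hi'."""
    range_form = re.match(r"(.+) < (.+) <= (.+)", pred)
    if range_form:
        lo, feature, hi = range_form.groups()
        # invert a bounded range by moving past its upper bound (one convention)
        return f"{feature.strip()} > {hi.strip()}"
    if "<=" in pred:
        feature, value = pred.split("<=")
        return f"{feature.strip()} > {value.strip()}"
    if ">" in pred:
        feature, value = pred.split(">")
        return f"{feature.strip()} <= {value.strip()}"
    return pred

print(invert_predicate("debt > 55.00"))        # debt <= 55.00
print(invert_predicate("current job <= 1.05")) # current job > 1.05
```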

As for the fourth recommendation, the applicant could get a raise in the current job or get a higher paying job elsewhere. However, getting a different job has a caveat: it will result in a drop in the number of years in the current job, which will negatively impact any future loan application.

The recommendations are helpful but somewhat deficient. The deficiencies are described below along with ways to overcome them.

  • Inverting the predicates may be just enough for the data point to cross over the class boundary. For example, instead of recommending bringing the debt below 55K, the solution should specify a level to which the debt should be brought down, so that we have some minimum odds of the outcome being positive.
  • It may not be necessary to invert all the predicates to get a positive outcome. The top few might be adequate. The minimum number of predicate inversions necessary could be found by iteration.
  • Following and executing the recommendations does not require the same level of effort for each one. The level of effort should be taken into account. For example, it might be easier to lower the current debt than to get a new higher paying job.
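The second deficiency, finding the minimum number of predicate inversions, can be sketched as a simple iteration: apply the inverted predicates one at a time, most influential first, and stop as soon as the saved model flips its prediction. The helper below is a hypothetical sketch against a synthetic model; the `changes` used here are illustrative, not derived from real loan data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for the loan data and trained model
X, y = make_classification(n_samples=500, n_features=5, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

def minimal_inversions(model, point, changes, positive_label=1):
    """Apply explanation-ranked changes one at a time until the prediction
    flips to the positive label. `changes` is a list of (feature_index,
    new_value) pairs ordered by explanation score, most influential first.
    Returns the number of changes needed, or None if none suffice."""
    candidate = point.copy()
    for k, (idx, value) in enumerate(changes, start=1):
        candidate[idx] = value
        if model.predict(candidate.reshape(1, -1))[0] == positive_label:
            return k
    return None

# illustrative changes: push each feature two standard deviations up
changes = [(i, X[:, i].mean() + 2 * X[:, i].std()) for i in range(X.shape[1])]
k = minimal_inversions(model, X[0], changes)
```

In practice the changes would come from the inverted LIME predicates, and the saved trained model would be used for the predictions.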

Generating more precise and least cost or least effort recommendations with prescriptive analytics is a complex optimization problem. I will address it in a later post.

Summing Up

We have gone through a technique for machine learning model interpretation using the lime Python library. Along the way we have seen how model interpretation can be leveraged for prescriptive analytics. Please follow the tutorial for the details of the execution steps.

About Pranab

I am Pranab Ghosh, a software professional in the San Francisco Bay area. I manipulate bits and bytes for the good of living beings and the planet. I have worked with a myriad of technologies and platforms in various business domains, for early stage startups, large corporations and everything in between. I am an active blogger and open source project owner. I am passionate about technology and green and sustainable living. My technical interest areas are Big Data, distributed processing, NoSQL databases, machine learning and programming languages. I am fascinated by problems that don't have neat closed form solutions.
This entry was posted in Data Science, Machine Learning, Python.
