Measuring Performance of Classifiers


In the next few posts we will focus on building and evaluating predictive models for categorical responses. In this post we review the basics of measuring the performance of classifiers. We will use the learnings from this post to evaluate the performance of the different classifiers we discuss in future posts. The contents of this post are largely influenced by my learning from Max Kuhn’s book Applied Predictive Modeling.

Classification models usually generate two types of outputs:

  • A continuous number, usually in the form of a probability
  • A discrete value that shows which category a point belongs to (the predicted class).

In practical applications we are usually interested in the discrete value. The probability is still important because it tells us how confident the model is in the predicted category. There are also practical applications for the predicted probability itself; for example, it can be used to calculate customer lifetime value (CLV).

We first discuss the performance of models based on predicted classes (discrete values) and then turn to evaluating performance based on predicted probabilities.

1. Evaluating Predicted Classes

A confusion matrix is a common method for describing the performance of classifiers. It is a simple cross tabulation of predicted classes vs. observed classes. The table below shows the confusion matrix for a logistic regression model on the Titanic dataset. The code for creating the model and the confusion matrix is available at the end of this post. We trained the model on 70% of the samples and tested it on the remaining 30%. The confusion matrix shown below is based on the model's predictions on the test data with a threshold of 50%:
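For readers who want to follow along, here is a minimal sketch of how such a model and confusion matrix could be produced in R. It is not the exact code used in this post (that is linked at the end); it assumes the Titanic data is available through the titanic package and uses an arbitrary set of predictors.

```r
# A minimal sketch: fit a logistic regression on a 70/30 split of the
# Titanic data and cross-tabulate predicted vs. observed classes at a
# 50% probability threshold.
library(titanic)   # assumed source of the Titanic data (titanic_train)

set.seed(123)
vars <- c("Survived", "Pclass", "Sex", "Age")
df   <- titanic_train[complete.cases(titanic_train[, vars]), vars]

train_idx <- sample(nrow(df), size = floor(0.7 * nrow(df)))
train <- df[train_idx, ]
test  <- df[-train_idx, ]

fit  <- glm(Survived ~ Pclass + Sex + Age, data = train, family = binomial)
prob <- predict(fit, newdata = test, type = "response")   # predicted probabilities
pred <- ifelse(prob > 0.5, 1, 0)                          # predicted classes

# Confusion matrix: predicted vs. observed classes
table(Predicted = pred, Observed = test$Survived)
```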

1.1 Overall Accuracy & Kappa Statistic

The simplest measure of accuracy is called overall accuracy, which is simply the percentage of samples for which the model predicted the correct class. For the example above, the overall accuracy is (150 + 95)/313 = 78.3%.

Although overall accuracy is simple and interpretable, there are at least two major problems with this measure:

  • Overall accuracy ignores the natural frequencies of the classes. For example, if we build a model to classify credit card transactions as fraudulent or good, we can achieve very high overall accuracy simply by predicting that all transactions are good, because fraudulent transactions account for a small fraction of all transactions.
  • Overall accuracy treats all classes the same. Consider a scenario such as classifying emails as spam or good. In this scenario, classifying a good email as spam and deleting it has a negative impact on the user experience and a higher cost than misclassifying a spam email as good. Overall accuracy does not distinguish between a model that misclassifies good emails and one that misclassifies spam emails.

The overall accuracy measure helps us understand whether a model passes the minimum requirements. The overall accuracy needs to be higher than the no-information rate for the model to even be considered. For example, in simple binary classification the no-information rate based on pure randomness is 50%: if we randomly assign classes to each observation, then with a large enough sample we will get roughly 50% accuracy. So any model with an overall accuracy below 50% in binary classification, or below 1/C when there are C classes, is unacceptable. In the case of imbalanced classes, the no-information rate based on randomness is not a good baseline because it does not account for relative frequencies; in that case the relative frequency of the largest class is a better baseline.

An alternative to the no-information rate is the Kappa statistic. This statistic measures the agreement between two raters (here, the observed and the predicted classes) while correcting for agreement that would occur by chance. It can take values between -1 and 1: one indicates complete agreement, zero indicates agreement no better than chance, and negative values indicate agreement worse than chance. Depending on the application, a Kappa statistic above 0.3 to 0.5 is considered acceptable. The Kappa statistic is calculated using the formula below:

$latex k=\frac{P_{o}-P_{e}}{1-P_{e}} $

where Po is the observed agreement and Pe is the agreement expected by chance. Here is the calculation of the Kappa statistic for the confusion matrix of the Titanic dataset presented above:

There were 150 samples that survived for which the model predicted survival, and 95 samples that didn't survive for which the model predicted non-survival, out of 313 samples in total:

Po = (150 + 95)/313 = 0.783

To calculate Pe (the probability of random agreement) we note that:

  • The observed values show 169 survived and 144 didn't survive, so the observed probability of survival is 169/313 = 0.539
  • The predicted values show 199 survived and 114 didn't survive, so the predicted probability of survival is 199/313 = 0.635

Therefore, the probability that both the observed and predicted values show survival is 0.539*0.635 = 0.343, and the probability that both show non-survival is 0.461*0.365 = 0.167. As a result, the probability of random agreement is Pe = 0.343 + 0.167 = 0.51.

Finally the Kappa Statistic is (0.783 – 0.51)/(1-0.51) = 0.557
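As a quick check of the arithmetic, here is a short sketch that reproduces this calculation in R. The off-diagonal counts (49 and 19) are derived from the marginal totals above; caret's confusionMatrix() reports the same statistic directly.

```r
# Kappa computed from the confusion matrix counts of the logistic regression
# model (rows = predicted class, columns = observed class).
cm <- matrix(c(150, 49,    # predicted survived:     150 observed survived, 49 observed not
                19, 95),   # predicted not survived:  19 observed survived, 95 observed not
             nrow = 2, byrow = TRUE)

n   <- sum(cm)
p_o <- sum(diag(cm)) / n                      # observed agreement, ~0.783
p_e <- sum(rowSums(cm) * colSums(cm)) / n^2   # chance agreement,   ~0.51
(p_o - p_e) / (1 - p_e)                       # Kappa, ~0.557
```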

1.2 Sensitivity Vs. Specificity

Now that we have discussed the overall accuracy of the model, we turn to more detailed measures that help us better understand the strengths and weaknesses of a classifier. We will mainly focus on sensitivity and specificity (a short sketch of computing both from the confusion matrix follows the definitions):

  • Sensitivity (a.k.a. true positive rate or recall): measures the proportion of positives that are correctly identified as such (e.g., the percentage of sick people who are correctly identified as having the condition).
  • Specificity (a.k.a. true negative rate): measures the proportion of negatives that are correctly identified as such (e.g., the percentage of healthy people who are correctly identified as not having the condition).
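Here is that sketch, using the counts from the logistic regression confusion matrix and treating survival as the positive class. The off-diagonal counts are again derived from the marginal totals given earlier.

```r
# Sensitivity and specificity from the logistic regression confusion matrix,
# with "survived" as the positive class.
TP <- 150   # survived, predicted survived
FN <- 19    # survived, predicted not survived
TN <- 95    # not survived, predicted not survived
FP <- 49    # not survived, predicted survived

TP / (TP + FN)   # sensitivity: 150/169 ~ 0.89
TN / (TN + FP)   # specificity:  95/144 ~ 0.66
```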

Now let's go back to the Titanic example and see how sensitivity and specificity help us better understand the difference between models. We presented the confusion matrix for the logistic regression model in table 1. The table below shows the confusion matrix for a random forest model with a threshold of 50%:

 

The table below shows a comparison of different metrics for those two models:

 

As you can see, the overall accuracies of the two models are very close. However, when we look at sensitivity and specificity, it is clear that the random forest model does a better job of predicting the cases whose outcome is false (not survived). So, depending on the situation and the problem at hand, either model could be preferred over the other.

1.3 Youden's J Index

J = Sensitivity + Specificity − 1

The index was suggested by W.J. Youden in 1950 as a way of summarising the performance of a diagnostic test. Its value ranges from 0 to 1, and it is zero when a diagnostic test gives the same proportion of positive results for groups with and without the disease, i.e. the test is useless. A value of 1 indicates that there are no false positives or false negatives, i.e. the test is perfect. The index gives equal weight to false positive and false negative values, so all tests with the same value of the index give the same proportion of total misclassified results.

This index, as well as other measures such as the F-score, is used in conjunction with ROC curves to identify the best probability cut-off threshold for predicting classes. We will discuss ROC curves and AUC in the next section.
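As an illustration of how the index can be used to pick a cut-off, here is a small sketch that scans candidate thresholds and keeps the one with the highest J. It continues from the logistic regression sketch above (the prob and test objects are assumed to exist).

```r
# Scan candidate thresholds and pick the one that maximizes Youden's J
# (sensitivity + specificity - 1) on the test set.
thresholds <- seq(0.05, 0.95, by = 0.01)

j_values <- sapply(thresholds, function(t) {
  pred <- as.integer(prob > t)
  sens <- sum(pred == 1 & test$Survived == 1) / sum(test$Survived == 1)
  spec <- sum(pred == 0 & test$Survived == 0) / sum(test$Survived == 0)
  sens + spec - 1
})

thresholds[which.max(j_values)]   # threshold with the highest J
```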

2. Evaluating Predicted Probabilities

Class probabilities offer more information about model predictions than the simple class value. In this section we discuss several approaches for using the probabilities to compare models.

2.1 Calibration Plot

Before we jump into the ROC/AUC discussion, let's look at methods for evaluating the quality of the probabilities predicted by a model. Ideally the model predicts probabilities that are close to the actual probabilities. One common tool for this is the calibration plot. One approach to creating a calibration plot is to partition the predicted probabilities on the test set into bins and then calculate the ratio of observed events among the samples that fall in each bin. Plotting the midpoint of each bin against the event ratio in that bin should produce a 45-degree line for well-calibrated probabilities. The figures below show calibration plots for the logistic regression and random forest models discussed in the previous section.
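Here is a minimal sketch of that binning approach, continuing from the logistic regression sketch above (prob and test assumed). caret's calibration() function produces a similar plot directly.

```r
# Bin the predicted probabilities, compute the observed event rate per bin,
# and plot bin midpoints against observed rates with a 45-degree reference.
bins      <- cut(prob, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)
midpoints <- seq(0.05, 0.95, by = 0.1)
obs_rate  <- tapply(test$Survived, bins, mean)   # NA for empty bins

keep <- !is.na(obs_rate)
plot(midpoints[keep], obs_rate[keep],
     xlim = c(0, 1), ylim = c(0, 1),
     xlab = "Midpoint of predicted probability bin",
     ylab = "Observed event rate")
abline(0, 1, lty = 2)   # perfectly calibrated probabilities fall on this line
```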

 

Logistic Regression Calibration Plot

Random Forest Calibration Plot

As you can see in the calibration plots, the probabilities predicted by the logistic regression model are much closer to the actual probabilities. The random forest model, on the other hand, performs poorly when probabilities are very low or very high: it underestimated the probabilities when they were very low and overestimated them when they were close to 1. There are several approaches for calibrating the probabilities generated by a model, which we will not discuss here.

2.2 Receiver Operating Characteristic (ROC) Curves

ROC curves were designed as a general method that, given a collection of continuous data points, determines an effective threshold such that values above the threshold are indicative of a specific event. The ROC curve is created by evaluating the class probabilities of the model across a continuum of thresholds. For each candidate threshold, the resulting true positive rate (sensitivity) and false positive rate (1 - specificity) are plotted against each other.

The plot is a helpful tool for choosing a threshold that appropriately balances the trade-off between sensitivity and specificity. The ROC curve can also be used for a quantitative assessment of the model. A perfect model that completely separates the two classes would rise as a single step from (0,0) to (0,1) and then remain constant out to (1,1); the area under the curve for such a model would be 1. On the other hand, a completely ineffective model produces an ROC curve that closely follows the 45-degree line and has an area under the curve of about 0.5. The better the model, the more its curve is shifted toward the upper-left corner of the plot.
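For reference, here is a small sketch of how the ROC curve and its area could be computed with the pROC package, continuing from the logistic regression sketch above (prob and test assumed).

```r
# Build the ROC curve from the test-set probabilities and compute its AUC.
library(pROC)

roc_obj <- roc(response = test$Survived, predictor = prob)
auc(roc_obj)    # area under the ROC curve
plot(roc_obj)   # sensitivity vs. specificity across all thresholds
```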

The figure below shows a comparison of different ROC curves.

Accuracy is measured by the area under the ROC curve. An area of 1 represents a perfect test; an area of 0.5 represents a worthless test. A rough guide for classifying the accuracy of a diagnostic test is the traditional academic point system:

  • 0.90 – 1 = excellent (A)
  • 0.80 – 0.90 = good (B)
  • 0.70 – 0.80 = fair (C)
  • 0.60 – 0.70 = poor (D)
  • 0.50 – 0.60 = fail (F)
ROC Curve

2.3 Lift Charts

Lift charts are a visualization tool for assessing the ability of a model to detect events in a dataset with two classes. Lift is a measure of the effectiveness of a predictive model, calculated as the ratio between the results obtained with and without the model. The greater the area between the lift curve and the baseline, the better the model. The steps to create a lift chart are listed below (a short sketch follows the list):

  1. Predict the probabilities for the test dataset
  2. Determine the baseline event rate, i.e. the percentage of events in the entire dataset
  3. Order the data by the predicted probabilities from step 1
  4. For each unique probability value, calculate the percentage of true events among all samples with a predicted probability at or above that value
  5. Divide the event percentage at each probability threshold by the baseline event rate
The charts below show lift charts for the logistic regression and random forest models on the Titanic dataset:

Logistic Regression Lift Chart

Random Forest Lift Chart

As you can see, both models perform better than random selection. The charts tell us that if we wanted to choose a sample of Titanic passengers (for example, for a rescue operation), we would be better off selecting the passengers for whom either model predicts a higher probability of survival rather than selecting passengers at random.

Finally, here is the R code I used for generating data and charts in different sections of the post.
