ROC curve and AUC: a comprehensive overview


The ROC curve is an often-used performance metric for classification problems. In this article, we familiarize ourselves with this evaluation method from scratch: what a curve means in this context, the definition of the ROC curve and of the Area Under the ROC curve (AUC), and finally its variants.

The Curve

In the scope of classification performance measurement, a curve is a line graph showing how a model performs at varied thresholds.

Being a line graph, it is composed of 2 axes and, of course, a line running in-between. The 2 axes usually represent simple ratios derived from the confusion matrix (e.g. Precision, Recall, and False Positive Rate), while the line is plotted by stitching different points together.

For simplicity, we only examine the use of curves when working with binary labels. In the case of multi-class classification, a curve should be made for each individual label.

Example prototypes of curves for performance evaluation.

Note that:

  • The curves should only be drawn if the model outputs a score or ranking. If the predictions are already hard 0 or 1 (Negative or Positive), there is no point in drawing a curve, since we cannot vary the threshold.
  • By threshold, we mean the cut-off point such that all outputs smaller than the threshold are interpreted as Negative, while outputs larger than or equal to the threshold are marked Positive.
    For example, using Logistic Regression, the predicted value of any sample lies in the range (0, 1), so we often take 0.5 as the threshold: samples predicted with values less than 0.5 are converted to 0 (meaning they are predicted to be Negative), while the others are converted to 1 (meaning Positive). However, we can choose different thresholds, e.g. 0.3, 0.6 or even 0.9 (as sketched right below this list). The curves are formed by calculating the x-criterion and y-criterion at each of these thresholds.
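
To make this concrete, here is a minimal sketch (with hypothetical scores) of converting the same predicted values into hard labels at a few different thresholds:

import numpy as np

# Hypothetical predicted probabilities, e.g. from Logistic Regression.
y_score = np.array([0.12, 0.35, 0.48, 0.52, 0.61, 0.87])

for threshold in [0.3, 0.5, 0.9]:
    # Scores >= threshold are interpreted as Positive (1), the rest as Negative (0).
    y_pred = (y_score >= threshold).astype(int)
    print(f"threshold={threshold}: predictions={y_pred.tolist()}")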

There are many types of curves, of which the most popular are the ROC and the Precision-Recall curves. They usually differ only in which criteria are chosen for the 2 axes.

The ROC curve

The ROC (Receiver Operating Characteristic) curve is the curve with its 2-axes being the True Positive Rate (TPR) and False Positive Rate (FPR).

Be reminded that:

TPR = \frac{TP}{TP+FN}

FPR = \frac{FP}{FP+TN}
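
As a quick sanity check, both rates can be computed from the confusion matrix in a few lines of Python (the labels below are made up for illustration):

import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and thresholded predictions.
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 0])

# For binary labels, sklearn lays out the confusion matrix as [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

tpr = tp / (tp + fn)  # True Positive Rate
fpr = fp / (fp + tn)  # False Positive Rate
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")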

Example of bad, moderate and good ROC curves.

A ROC curve always has 2 ends at (0, 0) and (1, 1).

  • Normally, a curve that runs close to the diagonal straight line from (0, 0) to (1, 1) represents a bad model whose output is no better than flipping a coin (50/50).
  • This is not an absolute rule, but, in general, the closer a curve gets to the point (0, 1), the better the performance it expresses (the purple one in the figure above).

To draw a ROC curve in Python:

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

# y_true: ground-truth binary labels, y_score: predicted scores/probabilities.
fpr, tpr, thresholds = roc_curve(y_true, y_score)

plt.plot(fpr, tpr)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.show()

The Area Under the ROC curve

As comparing models by eyeballing their curves is vague and inefficient, we need another measure that makes comparisons simpler and clearer. The answer, intuitively, is the area under those curves.

A ROC curve and its corresponding area.

The Area Under the ROC curve (AUC) is a quantitative measurement of model performance. It is a condensed version of the ROC curve itself and is often used for model comparison. An AUC value of 1 implies a perfect model, while a value close to 0.5 means the model is hardly better than random guessing.

From another point of view, the AUC equals the probability that a randomly chosen Positive sample is ranked higher than a randomly chosen Negative one, as shown in the 1966 work of David M. Green and John A. Swets.
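
This probabilistic interpretation is easy to verify numerically. The sketch below (with synthetic labels and scores) counts the fraction of (Positive, Negative) pairs in which the Positive sample gets the higher score and compares it with sklearn's AUC:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic labels and scores; Positives tend to score higher.
y_true = rng.integers(0, 2, size=200)
y_score = rng.random(200) + 0.5 * y_true

pos = y_score[y_true == 1]
neg = y_score[y_true == 0]
# Fraction of (Positive, Negative) pairs ranked correctly (ties count as half).
pairwise = (pos[:, None] > neg[None, :]).mean() + 0.5 * (pos[:, None] == neg[None, :]).mean()

print(pairwise, roc_auc_score(y_true, y_score))  # the two numbers should agree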

In Python, to get the AUC given the actual labels and the predicted scores, we can use the function below from sklearn. For more parameters, refer to its documentation.

from sklearn.metrics import roc_auc_score

# y_true: ground-truth binary labels, y_score: predicted scores/probabilities.
roc_auc_score(y_true, y_score)

Properties and Analysis

A curve:

  • is a visual interpretation of how a model works. (And we love visualization.)
  • helps us see the performance on a 2-dimensional space, thus being more comprehensive than just a single measurement (like the Accuracy).
  • helps us find the most promising threshold by looking at the graph.
  • is insensitive to the threshold (since we don’t need to set a threshold at all), which helps reduce over-fitting.
  • is scale-invariant, meaning only the rank of the values is compared, not their absolute magnitude (see the sketch after this list).
  • is usually not used directly for model performance comparison.
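
The scale-invariance mentioned above is easy to check: any rank-preserving transformation of the scores leaves the AUC unchanged, as the small sketch below (with made-up labels and scores) illustrates.

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels and scores.
y_true = np.array([0, 1, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.2, 0.6, 0.3, 0.8, 0.55, 0.1, 0.9, 0.4])

# All three calls print the same AUC, since only the ranking of the scores matters.
print(roc_auc_score(y_true, y_score))
print(roc_auc_score(y_true, 100 * y_score - 7))   # linear rescaling
print(roc_auc_score(y_true, np.log(y_score)))     # monotonic non-linear transform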

The ROC curve:

  • ROC means Receiver Operating Characteristic; the name stems from World War II and has almost nothing to do with the curve’s characteristics now, so it is safe to ignore its literal meaning.
  • is insensitive to class-imbalance, which means resampling the positive or negative class does not affect the ROC curve. Note that this may be either an advantage or disadvantage depending on our problem.
  • cares equally about the Positive and Negative classes. Again, this property may be beneficial or harmful in different situations.

The AUC, as compared to the original ROC curve:

  • is quantitative, which makes it practical to compare models.
  • depends on regions of the ROC space that we probably do not care about, for example, the far right where the FPR is very high.
  • being a single number, cannot account for the distribution of the errors, concealing whether they are spread homogeneously or concentrated in some specific ranges.

An important note about the ROC curve (and hence also the AUC), worth pointing out separately, is that it is weak when the dataset is highly imbalanced in favor of the Negative samples. Such cases are quite common in practice, e.g. cancer detection, spam filtering.

If the proportion of Negative samples is huge, the number of True Negatives will also be much larger than the number of False Positives. Since the True Negatives dominate the denominator, the False Positive Rate FPR = \frac{FP}{FP+TN} stays deceptively small even when the absolute number of False Positives is large, thus distorting the whole ROC space.

For the above situation, using the F-score or the Precision-Recall curve would be a much better choice.
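
As a rough sketch of this alternative (using synthetic, heavily imbalanced data), the Precision-Recall curve and its single-number summary, the average precision, can be obtained from sklearn as follows:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic, heavily imbalanced data: roughly 2% Positive samples.
y_true = (rng.random(5000) < 0.02).astype(int)
y_score = rng.random(5000) + 0.7 * y_true  # Positives tend to score higher.

# Precision and Recall at every possible threshold.
precision, recall, _ = precision_recall_curve(y_true, y_score)
plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.show()

# The ROC AUC can look comfortable even when the area under the PR curve
# (the average precision) tells a much harsher story.
print("ROC AUC          :", roc_auc_score(y_true, y_score))
print("Average precision:", average_precision_score(y_true, y_score))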

Variants of the ROC

  • Concentrated ROC (CROC). The ROC curve, and especially the AUC, pays too much attention to regions such as the extreme right of the graph where the FPR is very high, even though this region is usually irrelevant in practice. The CROC addresses this issue by magnifying the left-hand side while compressing the right-hand side of the graph, using an exponential transform with parameter \alpha (see the sketch after this list). For example, setting \alpha = 7 transforms FPRs [0.0, 0.5, 1.0] into [0.0, 0.971, 1.0], which in turn gives a clearer view of the FPR range that actually matters.
  • Cost curve (CC). The 2 dimensions of a CC are the probability of a sample being Positive (x-axis) and the Error rate (y-axis). When the distribution of labels in the training data differs from the real distribution, or when the real distribution changes over time, we cannot assume the deployment distribution (the distribution of labels when we deploy our model, after training and testing) to be similar to the training distribution, which makes evaluating model performance difficult. The CC, by dedicating its x-axis to the variability of the class distribution, helps handle this problem. For example, look at the CC below, which plots the error rate of 2 models over varying class distributions (a minimal sketch of how such a plot can be produced appears at the end of this section). If we can somehow approximate the real distribution around the time we need to give predictions, we are able to choose the better model accordingly.
A Cost Curve showing the performance of 2 different models. The picture was taken from Chris Drummond and Robert Holte's paper.
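
For the CROC, the exponential magnification consistent with the example above has the form f(x) = \frac{1 - e^{-\alpha x}}{1 - e^{-\alpha}}. The short sketch below (the function name is ours) reproduces the \alpha = 7 example:

import numpy as np

def croc_transform(fpr, alpha=7.0):
    # Exponential magnification: expands small FPR values, compresses large ones.
    fpr = np.asarray(fpr, dtype=float)
    return (1.0 - np.exp(-alpha * fpr)) / (1.0 - np.exp(-alpha))

print(croc_transform([0.0, 0.5, 1.0]))  # -> approximately [0.0, 0.971, 1.0]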

Note that both CROC and CC are insensitive to class imbalance.
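
To illustrate how such a Cost curve can be drawn, the minimal sketch below plots the error rate of 2 hypothetical models (described only by their FPR and FNR) over the full range of class distributions, using the simple relation error(p) = FNR \cdot p + FPR \cdot (1 - p):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical operating points (FPR, FNR) for two models.
models = {"model A": (0.10, 0.30), "model B": (0.25, 0.10)}

p_positive = np.linspace(0.0, 1.0, 101)
for name, (fpr, fnr) in models.items():
    # Expected error rate as a function of the probability of a Positive sample.
    error_rate = fnr * p_positive + fpr * (1.0 - p_positive)
    plt.plot(p_positive, error_rate, label=name)

plt.xlabel("Probability of Positive")
plt.ylabel("Error rate")
plt.legend()
plt.show()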


References:

  • Wikipedia’s page about ROC curve: link
  • Jorge M. Lobo et al.’s paper on the weaknesses of AUC: link
  • Tom Fawcett’s An introduction to ROC analysis: link
  • Signal Detection Theory and Psychophysics by David M. Green and John A. Swets: link
  • Takaya Saito and Marc Rehmsmeier’s research on different metrics for imbalanced datasets: link
  • Chris Drummond and Robert Holte’s detailed paper on Cost curves: link
