diff --git a/.RData b/.RData
new file mode 100644
index 00000000..abd32871
Binary files /dev/null and b/.RData differ
diff --git a/CourseSessions/ClassificationProcessCreditCardDefault.Rmd b/CourseSessions/ClassificationProcessCreditCardDefault.Rmd
index bdfc3f70..cefce75b 100644
--- a/CourseSessions/ClassificationProcessCreditCardDefault.Rmd
+++ b/CourseSessions/ClassificationProcessCreditCardDefault.Rmd
@@ -563,7 +563,7 @@ df.all <- do.call(rbind, lapply(list(df1, df2, df3), function(df) {
   colnames(df)[1] <- "False Positive rate"
   df
 }))
-ggplot(df.all, aes(x=`False Positive rate`, y=value, colour=variable)) + geom_line() + ylab("True Positive rate") + geom_abline(intercept = 0, slope = 1,linetype="dotted",colour="green")
+ggplot(df.all, aes(x=`False Positive rate`, y=value, colour="red")) + geom_line() + ylab("True Positive rate") + geom_abline(intercept = 0, slope = 1,linetype="dotted",colour="green")
 ```
 
 How should a good ROC curve look like? A rule of thumb in assessing ROC curves is that the "higher" the curve (i.e., the closer it gets to the point with coordinates (0,1)), hence the larger the area under the curve, the better. You may also select one point on the ROC curve (the "best one" for our purpose) and use that false positive/false negative performances (and corresponding threshold for P(1)) to assess your model.
@@ -640,7 +640,7 @@ df.all <- do.call(rbind, lapply(list(frame1, frame2, frame3), function(df) {
   colnames(df)[1] <- "% of validation data selected"
   df
 }))
-ggplot(df.all, aes(x=`% of validation data selected`, y=value, colour=variable)) + geom_line() + ylab("% of class 1 captured") + geom_abline(intercept = 0, slope = 1,linetype="dotted",colour="green")
+ggplot(df.all, aes(x=`% of validation data selected`, y=value, colour="red")) + geom_line() + ylab("% of class 1 captured") + geom_abline(intercept = 0, slope = 1,linetype="dotted", colour="green")
 ```
 
 Notice that if we were to examine cases selecting them at random, instead of selecting the "best" ones using an informed classifier, the "random prediction" gains chart would be a straight 45-degree line.
@@ -749,7 +749,7 @@ df.all <- do.call(rbind, lapply(list(frame1, frame2, frame3), function(df) {
   colnames(df)[1] <- "% of validation data selected"
   df
 }))
-ggplot(df.all, aes(x=`% of validation data selected`, y=value, colour=variable)) + geom_line() + ylab("Estimated profit")
+ggplot(df.all, aes(x=`% of validation data selected`, y=value, colour="red")) + geom_line() + ylab("Estimated profit")
 ```
 
 We can then select the percentage of selected cases that corresponds to the maximum estimated profit (or minimum loss, if necessary).
@@ -830,7 +830,7 @@ df.all <- do.call(rbind, lapply(list(df1, df2, df3), function(df) {
   colnames(df)[1] <- "False Positive rate"
   df
 }))
-ggplot(df.all, aes(x=`False Positive rate`, y=value, colour=variable)) + geom_line() + ylab("True Positive rate") + geom_abline(intercept = 0, slope = 1,linetype="dotted",colour="green")
+ggplot(df.all, aes(x=`False Positive rate`, y=value, colour="red")) + geom_line() + ylab("True Positive rate") + geom_abline(intercept = 0, slope = 1,linetype="dotted",colour="green")
 ```
 
 Gains chart for the test data:
@@ -881,7 +881,7 @@ df.all <- do.call(rbind, lapply(list(frame1, frame2, frame3), function(df) {
   colnames(df)[1] <- "% of test data selected"
   df
 }))
-ggplot(df.all, aes(x=`% of test data selected`, y=value, colour=variable)) + geom_line() + ylab("% of class 1 captured") + geom_abline(intercept = 0, slope = 1,linetype="dotted",colour="green")
+ggplot(df.all, aes(x=`% of test data selected`, y=value, colour="red")) + geom_line() + ylab("% of class 1 captured") + geom_abline(intercept = 0, slope = 1,linetype="dotted",colour="green")
 ```
 
 Finally the profit curves for the test data, using the same profit/cost estimates as above:
@@ -946,7 +946,7 @@ df.all <- do.call(rbind, lapply(list(frame1, frame2, frame3), function(df) {
   colnames(df)[1] <- "% of test data selected"
   df
 }))
-ggplot(df.all, aes(x=`% of test data selected`, y=value, colour=variable)) + geom_line() + ylab("Estimated profit")
+ggplot(df.all, aes(x=`% of test data selected`, y=value, colour="red")) + geom_line() + ylab("Estimated profit")
 ```
 
 **Questions:**
diff --git a/CourseSessions/ClassificationProcessCreditCardDefault.html b/CourseSessions/ClassificationProcessCreditCardDefault.html
index d16b7736..8c39b5c4 100644
--- a/CourseSessions/ClassificationProcessCreditCardDefault.html
+++ b/CourseSessions/ClassificationProcessCreditCardDefault.html
[hunks in the regenerated HTML head (pandoc metadata, bundled scripts and styles) omitted]
Classification for Credit Card Default

Theos Evgeniou, Spyros Zoumpoulis
The Business Context

A Taiwan-based credit card issuer wants to better predict the likelihood of default for its customers, as well as identify the key drivers that determine this likelihood. This would inform the issuer’s decisions on who to give a credit card to and what credit limit to provide. It would also help the issuer have a better understanding of their current and potential customers, which would inform their future strategy, including their planning of offering targeted credit products to their customers.



The Data

[rendered data table ("The Data"): the 24 variables (ID, LIMIT_BAL, SEX, EDUCATION, MARRIAGE, AGE, PAY_0-PAY_6, BILL_AMT1-BILL_AMT6, PAY_AMT1-PAY_AMT6) shown for a few sample observations; the displayed sample values differ between the two renders.]

A Process for Classification

Step 2: Set up the dependent variable

Estimation data:

                     Class 1   Class 0
-# of Observations      5337     18663
+# of Observations      5370     18630

Validation data:

                     Class 1   Class 0
-# of Observations       646      2354
+# of Observations       603      2397

Step 3: Simple Analysis

[two summary-statistics tables (min, 25th percentile, median, and related statistics) of the 24 independent variables, one table per class; the statistics shift slightly between the two renders.]

Even though each independent variable may not differ across classes, classification may still be feasible: a (linear or nonlinear) combination of independent variables may still be discriminatory.

A simple visualization tool for assessing the discriminatory power of the independent variables is the box plot. A box plot visually indicates simple summary statistics of an independent variable (e.g., median, top and bottom quartiles, min, max). For example, consider the box plots for our estimation data for the repayment status variables, for class 1

[box plots: repayment status variables, class 1]

and class 0:

[box plots: repayment status variables, class 0]

Questions:

  1. Draw the box plots for class 1 and class 0 for another set of independent variables of your choice.
  2.

    Step 4: Classification and Interpretation

[logistic regression estimates: Estimate, Std. Error, z value, and p-value for the intercept and the 24 independent variables; the coefficients are essentially unchanged between the two renders, while the z-values shift slightly (PAY_0 remains by far the strongest predictor, z ≈ 29 in both).]

    One of the biggest risks when developing classification models is overfitting: while it is always trivial to develop a model (e.g., a tree) that classifies any (estimation) dataset with no misclassification error at all, there is no guarantee that the quality of a classifier in out-of-sample data (e.g., in the validation data) will be close to that in the estimation data. Striking the right balance between “overfitting” and “underfitting” is one of the most important aspects in data analytics. While there are a number of statistical techniques to help us strike this balance - including the use of validation data - it is largely a combination of good statistical analysis and qualitative criteria (such as the interpretability and simplicity of the estimated models) that leads to classification models that work well in practice.

Running a basic CART model with complexity control cp=0.0025 leads to the following tree (NOTE: for better readability of the tree figures below, we will rename the independent variables as IV1 to IV24 when using CART):

[CART tree, cp=0.0025]

The leaves of the tree indicate, for each class, the number of estimation-data observations that “reach that leaf”. A perfect classification would only have data from one class in each of the tree leaves. However, such a perfect classification of the estimation data would most likely not classify out-of-sample data well, due to overfitting of the estimation data.

One can estimate larger trees by changing the tree’s complexity control parameter (in this case the rpart.control argument cp). For example, this is how the tree would look if we set cp=0.00068:

[CART tree, cp=0.00068]
## Warning: labs do not fit even at cex 0.15, there may be some overplotting

    One can also use the percentage of data in each leaf of the tree to get an estimate of the probability that an observation (e.g., customer) belongs to a given class. The purity of the leaf can indicate the probability that an observation that “reaches that leaf” belongs to a class. In our case, the probability our validation data belong to class 1 (i.e., a customer’s likelihood of default) for the first few validation observations, using the first CART above, is:

[table: actual class, predicted class, and estimated probability of class 1 for the first 10 validation observations under the first CART; the probabilities differ between the two renders.]
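The leaf-purity rule just described is easy to state in code. A minimal base-R sketch with made-up leaf counts (illustrative only, not the course's data or code):

```r
# Each leaf of the tree collects some estimation observations of each class.
# The estimated P(class 1) for any observation reaching a leaf is simply the
# share of class-1 observations in that leaf. Counts below are illustrative.
leaf_counts <- data.frame(
  leaf   = c("A", "B", "C"),
  class0 = c(180, 40, 10),
  class1 = c( 20, 60, 90)
)

leaf_counts$p1 <- with(leaf_counts, class1 / (class0 + class1))
leaf_counts$p1  # 0.1 0.6 0.9: the purer the leaf, the more confident the tree
```

A leaf with a 50/50 split therefore gives P(1) = 0.5, which is why a coarser tree (larger cp) produces fewer distinct probability values.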
[the corresponding table for the second CART: actual class, predicted class, and estimated probability of class 1 for the same 10 observations; in the newer render most probabilities are 0.14, with a few at 0.71.]
[table: relative importance of each variable in the logistic regression and the two CARTs, scaled so that the largest value is 1.00; PAY_0 is the top variable (1.00) in all three models.]

    Step 5: Validation accuracy

    Selecting the probability threshold based on which we predict the class of an observation is a decision the user needs to make. While in some cases a reasonable probability threshold is 50%, in other cases it may be 99.9% or 0.1%.

Question:

-Can you think of such a scenario?
+  1. Can you think of such a scenario?
+  2. What probability threshold do you think would make sense in this case of credit card default classification?

Answer:

  •

    For different choices of the probability threshold, one can measure a number of classification performance metrics, which are outlined next.

1. Hit ratio

Estimation data:

                      Hit Ratio
-Logistic Regression   81.86667
+Logistic Regression   82.50000
-First CART            82.36667
+First CART            83.63333
-Second CART           82.53333
+Second CART           83.40000

Validation data:

                      Hit Ratio
-Logistic Regression   81.02083
+Logistic Regression   80.8750
-First CART            81.95000
+First CART            81.9625
-Second CART           83.13750
+Second CART           83.1000
-A simple benchmark to compare the hit ratio performance of a classification model against is the Maximum Chance Criterion. This measures the proportion of the class with the largest size. For our validation data the largest group is customers who do not default (2354 out of 3000 customers). Clearly, if we classified all individuals into the largest group, we could get a hit ratio of 78.47% without doing any work. One should have a hit rate of at least as much as the Maximum Chance Criterion rate, although as we discuss next there are more performance criteria to consider.
+A simple benchmark to compare the hit ratio performance of a classification model against is the Maximum Chance Criterion. This measures the proportion of the class with the largest size. For our validation data the largest group is customers who do not default (2397 out of 3000 customers). Clearly, if we classified all individuals into the largest group, we could get a hit ratio of 79.9% without doing any work. One should have a hit rate of at least as much as the Maximum Chance Criterion rate, although as we discuss next there are more performance criteria to consider.
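The Maximum Chance Criterion figure follows directly from the validation class counts. A minimal sketch (counts as reported in the tables above):

```r
# Maximum Chance Criterion: the hit ratio obtained by always predicting the
# largest class. Validation class sizes as reported above.
n_class1 <- 603    # defaulters
n_class0 <- 2397   # non-defaulters

mcc <- 100 * max(n_class1, n_class0) / (n_class1 + n_class0)
round(mcc, 1)  # 79.9

# For comparison, a model's hit ratio from actual/predicted 0-1 vectors:
hit_ratio <- function(actual, predicted) 100 * mean(actual == predicted)
```

Any classifier whose hit ratio falls below this number is doing worse than a constant prediction.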

2. Confusion matrix

                        Predicted 1 (default)   Predicted 0 (no default)
-Actual 1 (default)            38.39                    61.61
+Actual 1 (default)            34.66                    65.34
-Actual 0 (no default)          5.35                    94.65
+Actual 0 (no default)          4.05                    95.95

    3. ROC curve

    Remember that each observation is classified by our model according to the probabilities Pr(0) and Pr(1) and a chosen probability threshold. Typically we set the probability threshold to 0.5 - so that observations for which Pr(1) > 0.5 are classified as 1’s. However, we can vary this threshold, for example if we are interested in correctly predicting all 1’s but do not mind missing some 0’s (and vice-versa).

    When we change the probability threshold we get different values of hit rate, false positive and false negative rates, or any other performance metric. We can plot for example how the false positive versus true positive rates change as we alter the probability threshold, and generate the so called ROC curve.

    The ROC curves for the validation data for the logistic regression as well as both the CARTs above are as follows:

[ROC curves: validation data, three classifiers]
What should a good ROC curve look like? A rule of thumb in assessing ROC curves is that the “higher” the curve (i.e., the closer it gets to the point with coordinates (0,1)), hence the larger the area under the curve, the better. You may also select one point on the ROC curve (the “best one” for our purpose) and use that point’s false positive/false negative performance (and the corresponding threshold for P(1)) to assess your model.
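The threshold-sweeping construction behind the ROC curve can be sketched in a few lines of base R (toy probabilities for illustration, not the course's ggplot code):

```r
# Sweep the probability threshold: each threshold yields one (FPR, TPR)
# point, and the collection of points traces the ROC curve. Toy data.
actual <- c(1, 1, 0, 1, 0, 0, 0, 1, 0, 0)
p1     <- c(0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2, 0.1)

roc_point <- function(threshold) {
  pred <- as.integer(p1 > threshold)
  c(fpr = sum(pred == 1 & actual == 0) / sum(actual == 0),  # false positive rate
    tpr = sum(pred == 1 & actual == 1) / sum(actual == 1))  # true positive rate
}

roc <- t(sapply(seq(0, 1, by = 0.05), roc_point))
# At threshold 1 nothing is predicted 1, giving (0, 0); at threshold 0
# everything is, giving (1, 1); a good classifier bulges toward (0, 1).
```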

    Questions:


-4. Lift curve
+4. Gains chart

-The lift curve is a popular technique in certain applications, such as direct marketing or credit risk.
+The gains chart is a popular technique in certain applications, such as direct marketing or credit risk.

 For a concrete example, consider the case of a direct marketing mailing campaign. Say we have a classifier that attempts to identify the likely responders by assigning each case a probability of response. We may want to select as few cases as possible and still capture the maximum number of responders possible.

-We can measure the percentage of all responses the classifier captures if we only select, say, x% of cases: the top x% in terms of the probability of response assigned by our classifier. For each percentage of cases we select (x), we can plot the following point: the x-coordinate will be the percentage of all cases that were selected, while the y-coordinate will be the percentage of all class 1 cases that were captured within the selected cases (i.e., the ratio true positives/positives of the classifier, assuming the classifier predicts class 1 for all the selected cases, and predicts class 0 for all the remaining cases). If we plot these points while we change the percentage of cases we select (x) (i.e., while we change the probability threshold of the classifier), we get a curve that is called the lift curve.
+We can measure the percentage of all responses the classifier captures if we only select, say, x% of cases: the top x% in terms of the probability of response assigned by our classifier. For each percentage of cases we select (x), we can plot the following point: the x-coordinate will be the percentage of all cases that were selected, while the y-coordinate will be the percentage of all class 1 cases that were captured within the selected cases (i.e., the ratio true positives/positives of the classifier, assuming the classifier predicts class 1 for all the selected cases, and predicts class 0 for all the remaining cases). If we plot these points while we change the percentage of cases we select (x) (i.e., while we change the probability threshold of the classifier), we get a chart that is called the gains chart.

-In the credit card default case we are studying, the lift curves for the validation data for our three classifiers are the following:
+In the credit card default case we are studying, the gains charts for the validation data for our three classifiers are the following:

[gains charts: validation data, three classifiers]

-Notice that if we were to examine cases selecting them at random, instead of selecting the “best” ones using an informed classifier, the “random prediction” lift curve would be a straight 45-degree line.
+Notice that if we were to examine cases selecting them at random, instead of selecting the “best” ones using an informed classifier, the “random prediction” gains chart would be a straight 45-degree line.

Question:

Why?

Answer:

-So how should a good lift curve look like? The further above this 45-degree reference line our lift curve is, the better the “lift”. Moreover, much like for the ROC curve, one can select the percentage of all cases examined appropriately so that any point of the lift curve is selected.
+So how should a good gains chart look like? The further above this 45-degree reference line our gains curve is, the better the “gains”. Moreover, much like for the ROC curve, one can select the percentage of all cases examined appropriately so that any point of the gains curve is selected.

Question:

-Which point on the lift curve should we select in practice?
+Which point on the gains curve should we select in practice?

Answer:

  •
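The gains-chart construction described above amounts to sorting by predicted probability and accumulating the captured 1's. A minimal sketch with toy data (not the course's plotting code):

```r
# Sort cases by P(1), take the top x%, and record the share of all class-1
# cases captured. Plotting captured against selected gives the gains chart;
# a random ordering gives the 45-degree reference line. Toy data.
actual <- c(1, 1, 0, 1, 0, 0, 0, 1, 0, 0)
p1     <- c(0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2, 0.1)

ord      <- order(p1, decreasing = TRUE)       # most likely 1's first
captured <- cumsum(actual[ord]) / sum(actual)  # y: % of class 1 captured
selected <- seq_along(ord) / length(ord)       # x: % of cases selected

# Here, selecting the top 40% of cases already captures 75% of the 1's:
round(cbind(selected, captured), 2)
```

Under a random ordering, the expected value of `captured` equals `selected` at every point, which is exactly the 45-degree line.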


      5. Profit curve

Finally, we can generate the so-called profit curve, which we often use to make our final decisions. The intuition is as follows. Consider a direct marketing campaign, and suppose it costs $1 to send an advertisement, and the expected profit from a person who responds positively is $45. Suppose you have a database of 1 million people to whom you could potentially send the promotion. Typical response rates are 0.05%. What fraction of the 1 million people should you send the promotion to?

-To answer this type of questions, we need to create the profit curve. We can measure some measure of profit if we only select the top cases in terms of the probability of response assigned by our classifier. We can plot the profit curve by changing, as we did for the lift curve, the percentage of cases we select, and calculating the corresponding total estimated profit (or loss) we would generate. This is simply equal to:
+To answer this type of questions, we need to create the profit curve. We can measure some measure of profit if we only select the top cases in terms of the probability of response assigned by our classifier. We can plot the profit curve by changing, as we did for the gains chart, the percentage of cases we select, and calculating the corresponding total estimated profit (or loss) we would generate. This is simply equal to:

      Total Estimated Profit = (% of 1’s correctly predicted)x(value of capturing a 1) + (% of 0’s correctly predicted)x(value of capturing a 0) + (% of 1’s incorrectly predicted as 0)x(cost of missing a 1) + (% of 0’s incorrectly predicted as 1)x(cost of missing a 0)

Calculating the expected profit requires an estimate of the four costs/values: the value of capturing a 1 or a 0, and the cost of misclassifying a 1 into a 0 or vice versa.
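Given those four estimates, the Total Estimated Profit formula above is a single weighted sum. A minimal sketch: the profit/cost matrix mirrors the issuer's estimates used in this case, while the cell percentages are made up for illustration:

```r
# Total estimated profit = sum over the four confusion-matrix cells of
# (% of cases in cell) x (profit/cost of that cell).
# Rows: actual 1, actual 0. Columns: predicted 1, predicted 0.
cell_pct <- matrix(c(7, 13,     # illustrative %: actual 1 split 7 / 13
                     4, 76),    # illustrative %: actual 0 split 4 / 76
                   nrow = 2, byrow = TRUE)
values   <- matrix(c(0, -100000,  # value of capturing a 1 / cost of missing a 1
                     0,   20000), # cost of missing a 0 / value of capturing a 0
                   nrow = 2, byrow = TRUE)

total_profit <- sum(cell_pct / 100 * values)
total_profit  # expected profit per 100 customers at these percentages
```

Recomputing `total_profit` as the selection percentage (and hence `cell_pct`) varies is what traces out the profit curve.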

-Given the values and costs of correct classifications and misclassifications, we can plot the total estimated profit (or loss) as we change the percentage of cases we select, i.e., the probability threshold of the classifier, like we did for the ROC and the lift curves.
+Given the values and costs of correct classifications and misclassifications, we can plot the total estimated profit (or loss) as we change the percentage of cases we select, i.e., the probability threshold of the classifier, like we did for the ROC and the gains chart.

      In our credit card default case, we consider the following business profit and loss to the credit card issuer for the correctly classified and misclassified customers:

                        Predict 1 (default)   Predict 0 (no default)
Actual 1 (default)              0                   -1e+05
Actual 0 (no default)           0                    2e+04

      Based on these profit and cost estimates, the profit curves for the validation data for the three classifiers are:

[profit curves: validation data, three classifiers]

      We can then select the percentage of selected cases that corresponds to the maximum estimated profit (or minimum loss, if necessary).

      Question:

      Which point on the profit curve would you select in practice?


      Step 6. Test Accuracy

      Having iterated steps 2-5 until we are satisfied with the performance of our selected model on the validation data, in this step the performance analysis outlined in step 5 needs to be done with the test sample. This is the performance that best mimics what one should expect in practice upon deployment of the classification solution, assuming (as always) that the data used for this performance analysis are representative of the situation in which the solution will be deployed.

-Let’s see in our case how the hit ratio, confusion matrix, ROC curve, lift curve, and profit curve look like for our test data. For the hit ratio and the confusion matrix we use 50% as the probability threshold for classification.
+Let’s see in our case how the hit ratio, confusion matrix, ROC curve, gains chart, and profit curve look like for our test data. For the hit ratio and the confusion matrix we use 50% as the probability threshold for classification.

                      Hit Ratio
-Logistic Regression   81.23333
+Logistic Regression   81.16667
-First CART            82.16667
+First CART            82.43333
 Second CART           82.70000

                        Predicted 1 (default)   Predicted 0 (no default)
-Actual 1 (default)            37.52                    62.48
+Actual 1 (default)            34.69                    65.31
-Actual 0 (no default)          4.73                    95.27
+Actual 0 (no default)          3.68                    96.32

ROC curves for the test data:

[ROC curves: test data, three classifiers]

-Lift curves for the test data:
+Gains chart for the test data:

[gains charts: test data, three classifiers]

-Finally the profit curves for the test data, using the same profit/cost estimates as we did above:
+Finally the profit curves for the test data, using the same profit/cost estimates as above:

[profit curves: test data, three classifiers]

      Questions:

      1. Is the performance in the test data similar to the performance in the validation data above? Should we expect the performance of our classification model to be close to that in our test data when we deploy the model in practice? Why or why not? What should we do if they are different?
  2.

[the remaining hunks touch only the generated HTML's embedded scripts and styles, e.g.:]

-        $('tr.header').parent('thead').parent('table').addClass('table table-condensed');
+        $('tr.odd').parent('tbody').parent('table').addClass('table table-condensed');