{
"cells": [
{
"cell_type": "code",
"source": [
"# IGNORE THIS CELL WHICH CUSTOMIZES LAYOUT AND STYLING OF THE NOTEBOOK !\n",
"%matplotlib inline\n",
"# `sklearn.tree.plot_tree` does not work well with \"retina\" backend - use \"svg\" instead\n",
"%config InlineBackend.figure_format = \"svg\"\n",
"\n",
"import warnings\n",
"\n",
"import matplotlib.pyplot as plt\n",
"\n",
"warnings.filterwarnings(\"ignore\", category=FutureWarning)\n",
"warnings.filterwarnings = lambda *a, **kw: None\n",
"from IPython.core.display import HTML\n",
"\n",
"HTML(open(\"custom.html\", \"r\").read())"
"source": [
"# Chapter 6: An overview of classifiers, Part 2\n",
"\n",
"<span style=\"font-size: 150%;\">Decision trees, ensemble methods and summary</span>"
"source": [
"Let's repeat our helper functions from previous part:"
"metadata": {},
"outputs": [],
"source": [
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"\n",
"def samples_color(ilabels, colors=[\"steelblue\", \"chocolate\"]):\n",
" \"\"\"Return colors list from labels list given as indices.\"\"\"\n",
" return [colors[int(i)] for i in ilabels]\n",
"\n",
"\n",
"def plot_decision_surface(\n",
" features_2d,\n",
" labels,\n",
" classifier,\n",
" preprocessing=None,\n",
" plt=plt,\n",
" marker=\".\",\n",
" N=100,\n",
" alpha=0.2,\n",
" colors=[\"steelblue\", \"chocolate\"],\n",
" title=None,\n",
" test_features_2d=None,\n",
" test_labels=None,\n",
" test_s=60,\n",
"):\n",
" \"\"\"Plot a 2D decision surface for an already trained classifier.\"\"\"\n",
"\n",
" # sanity check\n",
" assert len(features_2d.columns) == 2\n",
"\n",
" # pandas to numpy array; get min/max values\n",
" xy = np.array(features_2d)\n",
" min_x, min_y = xy.min(axis=0)\n",
" max_x, max_y = xy.max(axis=0)\n",
"\n",
" # create mesh of NxN points (the imaginary step `N*1j` makes `np.mgrid` include the max value)\n",
" XX, YY = np.mgrid[min_x : max_x : N * 1j, min_y : max_y : N * 1j]\n",
" points = np.c_[XX.ravel(), YY.ravel()] # shape: (N*N)x2\n",
"\n",
" # apply scikit-learn API preprocessing\n",
" if preprocessing is not None:\n",
" points = preprocessing.transform(points)\n",
"\n",
" # classify grid points\n",
" classes = classifier.predict(points)\n",
"\n",
" # plot classes color mesh\n",
" ZZ = classes.reshape(XX.shape) # shape: NxN\n",
" plt.pcolormesh(\n",
" XX,\n",
" YY,\n",
" ZZ,\n",
" alpha=alpha,\n",
" cmap=matplotlib.colors.ListedColormap(colors),\n",
" shading=\"auto\",\n",
" )\n",
" # plot points\n",
" plt.scatter(\n",
" xy[:, 0],\n",
" xy[:, 1],\n",
" marker=marker,\n",
" color=samples_color(labels, colors=colors),\n",
" )\n",
" # set title\n",
" if title:\n",
" if hasattr(plt, \"set_title\"):\n",
" plt.set_title(title)\n",
" else:\n",
" plt.title(title)\n",
" # plot test points\n",
" if test_features_2d is not None:\n",
" assert test_labels is not None\n",
" assert len(test_features_2d.columns) == 2\n",
" test_xy = np.array(test_features_2d)\n",
" plt.scatter(\n",
" test_xy[:, 0],\n",
" test_xy[:, 1],\n",
" s=test_s,\n",
" facecolors=\"none\",\n",
" linewidths=2,\n",
" color=samples_color(test_labels),\n",
" );"
},
{
"cell_type": "markdown",
"source": [
"## Decision trees\n",
"\n",
"Let's see what a decision tree is by looking at an (artificial) example: \n",
"\n",
"<table>\n",
" <tr><td><img src=\"./images/decision_tree-work.png\" width=600px></td></tr>\n",
"</table>"
},
{
"cell_type": "markdown",
"source": [
"### How are the decision tree splits selected?\n",
"\n",
"Starting from the top the decision tree is build by selecting **best split of the dataset using a single feature**. Best feature and its split value are ones that make the resulting **subsets more pure** in terms of variety of classes they contain (i.e. that minimize misclassification error, or Gini index/impurity, or maximize entropy/information gain).\n",
"\n",
"<table>\n",
" <tr><td><img src=\"./images/decision_tree-split.png\" width=600px></td></tr>\n",
"</table>\n",
"\n",
"Features can repeat within a sub-tree (and there is no way to control it in scikit-learn), but usualy categorical features appear at most once on each path. They do, however, repeat across different tree branches."
},
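{
"cell_type": "markdown",
"source": [
"To make the purity criterion concrete, here is a minimal sketch (added for illustration; the labels are made up) that computes the Gini impurity of a parent node and the weighted impurity of the two subsets produced by a candidate split - the larger the impurity decrease, the better the split:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"\n",
"def gini(labels):\n",
"    \"\"\"Gini impurity: 1 minus the sum of squared class proportions.\"\"\"\n",
"    _, counts = np.unique(labels, return_counts=True)\n",
"    p = counts / counts.sum()\n",
"    return 1.0 - np.sum(p**2)\n",
"\n",
"\n",
"# hypothetical labels of the samples falling left/right of a candidate split\n",
"parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])\n",
"left, right = parent[:3], parent[3:]  # e.g. split on \"feature <= threshold\"\n",
"\n",
"weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(parent)\n",
"print(f\"parent gini: {gini(parent):.3f}\")\n",
"print(f\"weighted gini after split: {weighted:.3f}\")\n",
"print(f\"impurity decrease: {gini(parent) - weighted:.3f}\")"
]
},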
{
"cell_type": "markdown",
"source": [
"### XOR decision tree\n",
"\n",
"Let's try out decision trees with the XOR dataset, in which samples have class `True` when the two coordinates `x` and `y` have different sign, otherwise they have class `False`."
"source": [
"import pandas as pd\n",
"\n",
"df = pd.read_csv(\"data/xor.csv\")\n",
"features_2d = df.loc[:, (\"x\", \"y\")]\n",
"labelv = df[\"label\"]\n",
"\n",
"plt.figure(figsize=(5, 5))\n",
"plt.xlabel(\"x\")\n",
"plt.ylabel(\"y\")\n",
"plt.title(\"Orange is True, blue is False\")\n",
"plt.scatter(features_2d.iloc[:, 0], features_2d.iloc[:, 1], color=samples_color(labelv));"
},
{
"cell_type": "markdown",
"source": [
"Decision trees live in the `sklearn.tree` module."
"source": [
"from sklearn.model_selection import train_test_split\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"# Note: split randomness picked manually for educational purpose\n",
"X_train, X_test, y_train, y_test = train_test_split(\n",
" features_2d, labelv, random_state=10\n",
")\n",
"# Note: features are permuted reandomly in case equally good splits are found\n",
"# fix randomization for reproduciblity\n",
"classifier = DecisionTreeClassifier(random_state=0)\n",
"classifier.fit(X_train, y_train)\n",
"\n",
"print(\"train score: {:.2f}%\".format(100 * classifier.score(X_train, y_train)))\n",
"print(\"test score: {:.2f}%\".format(100 * classifier.score(X_test, y_test)))\n",
"\n",
"plt.figure(figsize=(5, 5))\n",
"plot_decision_surface(\n",
" features_2d,\n",
" labelv,\n",
" classifier,\n",
" test_features_2d=X_test,\n",
" test_labels=y_test,\n",
")"
},
{
"cell_type": "markdown",
"About the plot: **the points surrounded with a circle are from the test data set** (not used for learning), all other points belong to the training data.\n",
"\n",
"This surface seems a bit rough on edges. One of the biggest advantages of the decision trees is interpretability of the model. Let's **inspect the model by looking at the tree that was built**:"
"from sklearn.tree import plot_tree\n",
"\n",
"fig = plt.figure(figsize=(12, 8))\n",
"fig.suptitle(\"XOR Decision Tree\")\n",
"plot_tree(classifier, feature_names=[\"x\", \"y\"], class_names=[\"False\", \"True\"]);"
},
{
"cell_type": "markdown",
"source": [
"<span style=\"font-size: 150%\">Whoaaa .. what happened here?</span>\n",
"\n",
"XOR is the **anti-example** for DTs: they cannot make the \"natural\" split at value `0` because splits are selected to promote more pure sub-nodes. We're fitting data representation noise here.\n",
"\n",
"Moreover, the tree is quite deep because, by default, it is built until all nodes are \"pure\" (`gini = 0.0`). This tree is **overfitted**."
},
{
"cell_type": "markdown",
"source": [
"### How to avoid overfitting?\n",
"\n",
"There is no regularization penalty like in logistic regression or SVM methods when bulding a decision tree. Instead we can set learning hyperparameters such as:\n",
"* tree pruning (based on minimal cost-complexity; `ccp_alpha`) - this is actually done only after the tree has been built, or\n",
"* maximum tree depth (`max_depth`), or\n",
"* a minimum number of samples required at a node or at a leaf node (`min_samples_split`, `min_samples_leaf`), or\n",
"* an early stopping criteria based on minumum value of impurity or on minimum decrease in impurity (`min_impurity_split`, `min_impurity_decrease`),\n",
"* ... and few more - see `DecisionTreeClassifier` docs.\n"
},
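{
"cell_type": "markdown",
"source": [
"As a minimal sketch of the pruning option (added for illustration; it reuses the `data/xor.csv` file loaded above): `cost_complexity_pruning_path` returns the effective `ccp_alpha` values at which the full tree would be pruned, and each of them can then be tried out via the `ccp_alpha` hyperparameter:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"df = pd.read_csv(\"data/xor.csv\")\n",
"features_2d = df.loc[:, (\"x\", \"y\")]\n",
"labelv = df[\"label\"]\n",
"X_train, X_test, y_train, y_test = train_test_split(features_2d, labelv, random_state=10)\n",
"\n",
"# candidate pruning strengths, computed from the full (unpruned) tree\n",
"path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)\n",
"\n",
"for ccp_alpha in path.ccp_alphas[::5]:  # every 5th value, to keep the output short\n",
"    classifier = DecisionTreeClassifier(ccp_alpha=ccp_alpha, random_state=0)\n",
"    classifier.fit(X_train, y_train)\n",
"    print(\n",
"        f\"ccp_alpha={ccp_alpha:.4f}  #leaves={classifier.tree_.n_leaves}  \"\n",
"        f\"test score: {100 * classifier.score(X_test, y_test):.2f}%\"\n",
"    )"
]
},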
{
"cell_type": "markdown",
"source": [
"### Exercise section\n",
"\n",
"1. In theory for the XOR dataset it should suffice to use each feature exactly once with splits at `0`, but the decision tree learning algorithm is unable to find such a solution. Play around with `max_depth` to get a smaller but similarly performing decision tree for the XOR dataset.<br/>\n",
" Bonus question: which other hyperparameter you could have used to get the same result?\n",
"2. Build a decision tree for the beers dataset. Use maximum depth and tree pruning strategies to get a much smaller tree that performs as well as the default tree.<br/>\n",
" Note: the `classifier.tree_` instance has attributes such as `max_depth`, `node_count`, or `n_leaves`, which measure the size of the tree."
"metadata": {},
"outputs": [],
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.tree import DecisionTreeClassifier, plot_tree\n",
"\n",
"\n",
"df = pd.read_csv(\"data/xor.csv\")\n",
"features_2d = df.loc[:, (\"x\", \"y\")]\n",
"labelv = df[\"label\"]\n",
"\n",
"max_depths = [2, 3, 4]\n",
"# ..."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.tree import DecisionTreeClassifier, plot_tree\n",
"\n",
"\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"# ..."
]
},
{
"cell_type": "code",
"metadata": {
"tags": [
"solution"
]
},
"source": [
"# SOLUTION 1\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.tree import DecisionTreeClassifier, plot_tree\n",
"\n",
"\n",
"df = pd.read_csv(\"data/xor.csv\")\n",
"features_2d = df.loc[:, (\"x\", \"y\")]\n",
"labelv = df[\"label\"]\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(\n",
" features_2d, labelv, random_state=10\n",
")\n",
"\n",
"n_params = len(max_depths)\n",
"fig, ax_arr = plt.subplots(ncols=n_params, nrows=2, figsize=(7 * n_params, 7 * 2))\n",
"fig.suptitle(\"smaller XOR Decision Trees\")\n",
"for i, max_depth in enumerate(max_depths):\n",
" classifier = DecisionTreeClassifier(\n",
" max_depth=max_depth,\n",
" random_state=0,\n",
" )\n",
" classifier.fit(X_train, y_train)\n",
"\n",
" ax = ax_arr[0][i]\n",
" plot_tree(\n",
" classifier,\n",
" feature_names=features_2d.columns.values,\n",
" class_names=[\"False\", \"True\"],\n",
" ax=ax,\n",
" fontsize=7,\n",
" )\n",
" ax.set_title(\n",
" (\n",
" f\"max depth = {max_depth}\\n\"\n",
" f\"train score: {100 * classifier.score(X_train, y_train):.2f}%\\n\"\n",
" f\"test score: {100 * classifier.score(X_test, y_test):.2f}%\"\n",
" )\n",
" )\n",
" ax = ax_arr[1][i]\n",
" plot_decision_surface(\n",
" features_2d,\n",
" labelv,\n",
" classifier,\n",
" test_features_2d=X_test,\n",
" test_labels=y_test,\n",
" plt=ax,\n",
" )\n",
"\n",
"# We could have used equivalently `min_impurity_split` early stopping criterium with any (gini) value between 0.15 and 0.4"
"# SOLUTION 2\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.tree import DecisionTreeClassifier, plot_tree\n",
"\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"print(df.head(2))\n",
"\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(features, labelv, random_state=10)\n",
"# default\n",
"classifier = DecisionTreeClassifier(random_state=0)\n",
"pipeline = make_pipeline(StandardScaler(), classifier)\n",
"pipeline.fit(X_train, y_train)\n",
"print(\"#### default Beers Decision Tree\")\n",
"print(\n",
" f\"depth: {classifier.tree_.max_depth}, \",\n",
" f\"#nodes: {classifier.tree_.node_count}, \",\n",
" f\"#leaves: {classifier.tree_.n_leaves}\",\n",
")\n",
"print(f\"train score: {100 * pipeline.score(X_train, y_train):.2f}%\")\n",
"print(f\" test score: {100 * pipeline.score(X_test, y_test):.2f}%\")\n",
"# smaller\n",
"classifier = DecisionTreeClassifier(max_depth=4, ccp_alpha=0.02, random_state=0)\n",
"pipeline = make_pipeline(StandardScaler(), classifier)\n",
"pipeline.fit(X_train, y_train)\n",
"print(\"#### smaller Beers Decision Tree\")\n",
"print(\n",
" f\"depth: {classifier.tree_.max_depth}, \",\n",
" f\"#nodes: {classifier.tree_.node_count}, \",\n",
" f\"#leaves: {classifier.tree_.n_leaves}\",\n",
")\n",
"print(f\"train score: {100 * pipeline.score(X_train, y_train):.2f}%\")\n",
"print(f\" test score: {100 * pipeline.score(X_test, y_test):.2f}%\")\n",
"fig = plt.figure(figsize=(10, 6))\n",
"plot_tree(classifier, feature_names=features.columns.values);"
]
},
{
"cell_type": "markdown",
"source": [
"One **issue with decision trees is their instability** - a small changes in the training data usually results in a completely different order of splits (different tree structure)."
},
{
"cell_type": "markdown",
"source": [
"## Ensemble Averaging: Random Forests\n",
"\n",
"The idea of Random Forest method is to generate **ensemble of many \"weak\" decision trees** and by **averaging out their probabilistic predictions**. (The original Random Forests method used voting.)\n",
"\n",
"\n",
"Weak classifier here are **shallow trees with feature-splits picked only out of random subsets of features** (*features bagging*). Random subset of features is selected per each split, not for the whole classifier.\n",
"\n",
"<table>\n",
" <tr><td><img src=\"./images/random_forest.png\" width=800px></td></tr>\n",
" <tr><td><center><sub>Source: <a href=\"https://towardsdatascience.com/random-forests-and-decision-trees-from-scratch-in-python-3e4fa5ae4249\">https://towardsdatascience.com/random-forests-and-decision-trees-from-scratch-in-python-3e4fa5ae4249</a></sub></center></td></tr>\n",
"</table>\n"
},
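{
"cell_type": "markdown",
"source": [
"A quick numerical check of the averaging (added for illustration, on a small synthetic dataset from `make_classification`): in scikit-learn, the forest's probabilistic prediction is simply the mean of the per-tree probabilities available under `.estimators_`, so the comparison below should print `True`:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.datasets import make_classification\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"\n",
"# small synthetic two-class dataset, just for this check\n",
"X, y = make_classification(n_samples=200, n_features=5, random_state=0)\n",
"\n",
"forest = RandomForestClassifier(n_estimators=10, max_depth=2, random_state=0)\n",
"forest.fit(X, y)\n",
"\n",
"# average the probabilistic predictions of the individual trees ...\n",
"tree_probas = np.mean([tree.predict_proba(X) for tree in forest.estimators_], axis=0)\n",
"# ... and compare with what the forest itself reports\n",
"print(np.allclose(tree_probas, forest.predict_proba(X)))"
]
},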
{
"cell_type": "markdown",
"### Demonstration\n",
"\n",
"You will find Random Forest method implementation in the `sklearn.ensemble` module.\n",
"\n",
"The main parameters are:\n",
"* number of trees (`n_estimators`),\n",
"* each tree max. depth 2 (`max_depth`), and\n",
"* max. number of randomly selected features to pick from when building each tree (`max_features`).\n",
"\n",
"Let's build a small Random Forest and have a look at its trees, available under `.estimators_` property."
"source": [
"import pandas as pd\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.tree import plot_tree\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"print(df.head(2))\n",
"\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(features, labelv, random_state=10)\n",
"# 4 shallow (depth 2) trees, each using only 3 randomly selected features\n",
"# total: up to 4*3 decision nodes, up to 4*4 class nodes\n",
"n_trees = 4\n",
"classifier = RandomForestClassifier(\n",
" max_depth=2,\n",
" n_estimators=n_trees,\n",
" max_features=3,\n",
" random_state=0,\n",
")\n",
"pipeline = make_pipeline(StandardScaler(), classifier)\n",
"pipeline.fit(X_train, y_train)\n",
"print(\"#### Random Forest\")\n",
"print(f\"train score: {100 * pipeline.score(X_train, y_train):.2f}%\")\n",
"print(f\" test score: {100 * pipeline.score(X_test, y_test):.2f}%\")\n",
"\n",
"# to evaluate ensemble estimators, we need to use transformed data\n",
"X_train_trans = pipeline[:-1].transform(X_train)\n",
"X_test_trans = pipeline[:-1].transform(X_test)\n",
"\n",
"fig, ax_arr = plt.subplots(ncols=n_trees, nrows=1, figsize=(7 * n_trees, 5))\n",
"for i, internal_classifier in enumerate(classifier.estimators_):\n",
" ax = ax_arr[i]\n",
" plot_tree(internal_classifier, feature_names=features.columns.values, ax=ax)\n",
" ax.set_title(\n",
" (\n",
" f\"Tree #{i}\\n\"\n",
" f\"train score: {100 * internal_classifier.score(X_train_trans, y_train):.2f}%\\n\"\n",
" f\" test score: {100 * internal_classifier.score(X_test_trans, y_test):.2f}%\"\n",
},
{
"cell_type": "markdown",
"source": [
"Random forests are fast and shine with high dimensional data (many features).\n",
"\n",
"<div class=\"alert alert-block alert-info\">\n",
" <p><i class=\"fa fa-info-circle\"></i>\n",
" Random Forest can estimate <em>out-of-bag error</em> (OOB) while learning; set <code>oob_score=True</code>. (The out-of-bag (OOB) error is the average error for each data sample, calculated using predictions from the trees that do not contain that sample in their respective bootstrap samples.)\n",
" OOB is a generalisation/predictive error that, together with <code>warm_start=True</code>, can be used for efficient search for a good-enough number of trees, i.e. the <code>n_estimators</code> hyperparameter value (see: <a href=https://scikit-learn.org/stable/auto_examples/ensemble/plot_ensemble_oob.html>OOB Errors for Random Forests</a>).\n",
" </p>\n",
"</div>"
},
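{
"cell_type": "markdown",
"source": [
"A minimal sketch of the OOB-based search (added for illustration; it reuses the beers data, and the grid of `n_estimators` values is arbitrary) - with `warm_start=True`, each refit only adds trees instead of rebuilding the whole forest:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"\n",
"classifier = RandomForestClassifier(\n",
"    oob_score=True, warm_start=True, max_depth=4, random_state=0\n",
")\n",
"for n_trees in [25, 50, 100, 200]:\n",
"    # with warm_start=True, only the newly requested trees are fitted on each iteration\n",
"    classifier.set_params(n_estimators=n_trees)\n",
"    classifier.fit(features, labelv)\n",
"    print(f\"n_estimators={n_trees:3d}  OOB score: {classifier.oob_score_:.3f}\")"
]
},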
{
"cell_type": "markdown",
"source": [
"## Boosting: AdaBoost\n",
"\n",
"<span style=\"font-size: 125%;\">What is it?</span>\n",
"\n",
"Boosting is another sub-type of ensemble learning. Same as in averaging, the idea is to generate many **weak classifiers to create a single strong classifier**, but in contrast to averaging, the classifiers are learnt **iteratively**.\n",
"\n",
"<span style=\"font-size: 125%;\">How does it work?</span>\n",
"Each iteration focuses more on **previously misclassified samples**. To that end, **data samples are weighted**, and after each learning iteration the data weights are readjusted.\n",
"\n",
"<table>\n",
" <tr><td><img src=\"./images/AdaBoost.png\" width=800px></td></tr>\n",
" <tr><td><center><sub>Source: Marsh, B., (2016), <em>Multivariate Analysis of the Vector Boson Fusion Higgs Boson</em>.</sub></center></td></tr>\n",
"</table>\n",
"\n",
"The final prediction is a weighted majority vote or weighted sum of predictions of the weighted weak classifiers.\n",
"\n",
"Boosting works very well out of the box. There is usually no need to fine tune method hyperparameters to get good performance.\n",
"\n",
"<span style=\"font-size: 125%;\">Where do i start?</span>\n",
"\n",
"**AdaBoost (“Adaptive Boosting”) is a baseline boosting algorithm** that originally used decisoin trees as weak classifiers, but, in principle, works with any classification method (`base_estimator` parameter).\n",
"\n",
"In each AdaBoost learning iteration, additionally to samples weights, the **weak classifiers are weighted**. Their weights are readjusted, such that **the more accurate a weak classifier is, the larger its weight is**.\n"
},
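{
"cell_type": "markdown",
"source": [
"As a minimal sketch of swapping in a different weak classifier (added for illustration; the base estimator only has to support sample weights in its `fit` method, which `LogisticRegression` does). Note that the parameter is called `base_estimator` in the scikit-learn version this notebook targets; newer releases rename it to `estimator`:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sklearn.ensemble import AdaBoostClassifier\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"X_train, X_test, y_train, y_test = train_test_split(features, labelv, random_state=10)\n",
"\n",
"# boost a non-tree weak learner (`base_estimator` is renamed `estimator` in scikit-learn >= 1.2)\n",
"classifier = AdaBoostClassifier(\n",
"    base_estimator=LogisticRegression(),\n",
"    n_estimators=20,\n",
"    algorithm=\"SAMME\",\n",
"    random_state=0,\n",
")\n",
"pipeline = make_pipeline(StandardScaler(), classifier)\n",
"pipeline.fit(X_train, y_train)\n",
"print(f\"train score: {100 * pipeline.score(X_train, y_train):.2f}%\")\n",
"print(f\" test score: {100 * pipeline.score(X_test, y_test):.2f}%\")"
]
},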
{
"cell_type": "markdown",
"### Demonstration\n",
"\n",
"You will find AdaBoost algorithm implementation in the `sklearn.ensemble` module.\n",
"\n",
"We'll use `n_estimators` parameter to determine number of weak classifiers. These by default are single node decision trees (`base_estimator = DecisionTreeClassifier(max_depth=1)`). We can examine them via `.estimators_` property of a trained method.\n",
"\n",
"For presentation, in order to weight the classifiers, we will use the original discrete AdaBoost learning method (`algorithm=\"SAMME\"`). Because the classifiers learn iteratively on differently weighted samples, to understand the weights we have to look at internal train errors and not at the final scores on the training data."
"from math import ceil, floor\n",
"\n",
"import pandas as pd\n",
"from sklearn.ensemble import AdaBoostClassifier\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.tree import plot_tree\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"print(df.head(2))\n",
"\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(features, labelv, random_state=10)\n",
"# 9 single node decision trees\n",
"# total: 9*1 decision nodes, 9*2 class nodes\n",
"# (Note: with default real AdaBoost \"SAMME.R\" algorithm all weights are 1 at the end)\n",
"n_trees = 9\n",
"classifier = AdaBoostClassifier(n_estimators=n_trees, algorithm=\"SAMME\", random_state=0)\n",
"pipeline = make_pipeline(StandardScaler(), classifier)\n",
"pipeline.fit(X_train, y_train)\n",
"print(f\"train score: {100 * pipeline.score(X_train, y_train):.2f}%\")\n",
"print(f\"test score: {100 * pipeline.score(X_test, y_test):.2f}%\")\n",
"\n",
"# to evaluate ensemble estimators, we need to use transformed data\n",
"X_train_trans = pipeline[:-1].transform(X_train)\n",
"X_test_trans = pipeline[:-1].transform(X_test)\n",
"\n",
"fig, ax_arr = plt.subplots(ncols=n_trees, nrows=1, figsize=(5 * n_trees, 4))\n",
"for i, internal_classifier in enumerate(classifier.estimators_):\n",
" ax = ax_arr[i]\n",
" plot_tree(internal_classifier, feature_names=features.columns.values, ax=ax)\n",
" ax.set_title(\n",
" (\n",
" f\"Tree #{i}, weight: {classifier.estimator_weights_[i]:.2f}\\n\"\n",
" f\"train error: {classifier.estimator_errors_[i]:.2f}\\n\"\n",
" f\"(train score: {100 * internal_classifier.score(X_train_trans, y_train):.2f}%)\\n\"\n",
" f\"test score: {100 * internal_classifier.score(X_test_trans, y_test):.2f}%\"\n",
},
{
"cell_type": "markdown",
"In practice you will mostly want to use other than AdaBoost methods for boosting.\n",
"#### Gradient Tree Boosting (GTB)\n",
"\n",
"It re-formulates boosting problem as an optimization problem which is solved with efficient Stochastic Gradient Descent optimization method (more on that in the neuronal networks script).\n",
"\n",
"In contrast to AdaBoost, GTB relies on using decision trees.\n",
"\n",
"In particular, try out [XGboost](https://xgboost.readthedocs.io/en/latest/); it's a package that won many competitions, cf. [XGboost@Kaggle](https://www.kaggle.com/dansbecker/xgboost). It is not part of scikit-learn, but it offers a `scikit-learn` API (see https://www.kaggle.com/stuarthallows/using-xgboost-with-scikit-learn ); a `scikit-learn` equivalent is [`GradientBoostingClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html).\n",
"\n",
"#### Histogram-based Gradient Boosting Classification Tree.\n",
"\n",
"A new `scikit-learn` implementation of boosting based on decision trees is [`HistGradientBoostingClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.HistGradientBoostingClassifier.html). It is much faster then `GradientBoostingClassifier` for big datasets (`n_samples >= 10 000`).\n",
{
"cell_type": "markdown",
"source": [
"## Ensemble Stacking: a honorary mention\n",
"\n",
"Stacking is used often in case of different types of base models, when it's not clear which type of model will perform best.\n",
"\n",
"**The base models learn in parallel and their (cross-validated) predictions are used to train a meta-model** (as opposed e.g. to selecting only one model or doing a naive voting). The meta-model (called also combiner, blender, or generalizer), never \"sees\" the input data.\n",
"\n",
"<table>\n",
" <tr><td><img src=\"./images/ensemble-learning-stacking.png\" width=\"400px\"></td></tr>\n",
" <tr><td><center><sub><a href=\"https://data-science-blog.com/blog/2017/12/03/ensemble-learning/\">https://data-science-blog.com/blog/2017/12/03/ensemble-learning/</a></sub></center></td></tr>\n",
"</table>\n",
"\n",
"Stacking combines strengths of different models and usually slightly outperforms best individual model. In practice often multiple stacking layers are used with groups of different but repeating types of classifiers.\n",
"\n",
"<table>\n",
" <tr><td><center><img src=\"./images/ensemble-learning-stacking-kdd_2015_winner.png\" width=\"800px\"></center></td></tr>\n",
" <tr><td><center><sub>KDD Cup 2015 winner</sub></center></td></tr>\n",
" <tr><td><center><sub>GBM: Gradient Boosting Machine; NN: Neural Network; FM: Factorization Machine; LR: Logistic Regression; KRR: Kernel Ridge Regression; ET: Extra Trees; RF: Random Forests; KNN: K-Nearest Neighbors</sub></center></td></tr>\n",
" <tr><td><center><sub><a href=\"https://www.slideshare.net/jeongyoonlee/winning-data-science-competitions-74391113\"> Jeong-Yoon Lee, <em>Winning Data Science Competitions</em>, Apr 2017</a></sub></center></td></tr>\n",
"</table>\n",
"\n",
"In the `sklearn.ensemble` the stacking is implemented by `StackingClassifier` and `StackingRegressor`."
"## Why does ensemble learning work?\n",
"* Probability of making an error by majority of the classifiers in the ensemble is much lower then error that each of the weak classifiers makes alone.\n",
"* An ensemble classifier is more roboust (has lower variance) with respect to the training data.\n",
Mikolaj Rybinski
committed
"* The weak classifiers are small, fast to learn, and, in case of averaging, they can be learnt in parallel.\n",
"\n",
"In general, **usually ensemble classifier performs better than any of the weak classifiers in the ensemble**."
},
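{
"cell_type": "markdown",
"source": [
"To make the first point concrete, here is a small back-of-the-envelope calculation (added for illustration, assuming - unrealistically - fully independent weak classifiers): if each of 11 classifiers errs with probability 0.3, the majority vote errs only when at least 6 of them err at the same time, which happens far less often."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"from math import comb\n",
"\n",
"n, p = 11, 0.3  # 11 independent weak classifiers, each with a 30% error rate\n",
"\n",
"# the majority vote is wrong only when 6 or more of the 11 classifiers are wrong\n",
"majority_error = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(6, n + 1))\n",
"print(f\"single weak classifier error: {p:.3f}\")\n",
"print(f\"majority vote error:          {majority_error:.3f}\")"
]
},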
{
"cell_type": "markdown",
"source": [
"## Coding session\n",
"For the beers data compare mean cross validation accuracy, precision, recall and f1 scores for all classifiers shown so far. Try to squeeze better than default performance out of the classifiers by tuning their hyperparameters. Which ones perform best?"
]
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import cross_val_score\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.svm import SVC, LinearSVC\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"\n",
"# pipeline = make_pipeline(StandardScaler(), classifier)\n",
"# scores = cross_val_score(pipeline, features, labelv, scoring=\"f1\", cv=5)\n",
"# ..."
]
"metadata": {
"tags": [
"solution"
]
},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import cross_val_score\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.svm import SVC, LinearSVC\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"classifiers = [\n",
" LogisticRegression(C=100),\n",
" LinearSVC(C=10, max_iter=30000),\n",
" SVC(C=30, gamma=0.1),\n",
" DecisionTreeClassifier(max_depth=7, random_state=0),\n",
" RandomForestClassifier(\n",
" max_depth=4,\n",
" n_estimators=10,\n",
" max_features=2,\n",
" random_state=0,\n",
" ),\n",
" AdaBoostClassifier(n_estimators=20, random_state=0),\n",
"]\n",
"\n",
"df = pd.read_csv(\"data/beers.csv\")\n",
"features = df.iloc[:, :-1]\n",
"labelv = df.iloc[:, -1]\n",
"\n",
"for classifier in classifiers:\n",
" print(classifier.__class__.__name__)\n",
" pipeline = make_pipeline(StandardScaler(), classifier)\n",
" for scoring in [\"accuracy\", \"precision\", \"recall\", \"f1\"]:\n",
" scores = cross_val_score(pipeline, features, labelv, scoring=scoring, cv=5)\n",
" print(f\"\\t5-fold CV mean {scoring}: {scores.mean():.2f} +/- {scores.std():.2f}\")\n",
" print()"
},
{
"cell_type": "markdown",
},
{
"cell_type": "markdown",
"source": [
"Below you will find a table with some guidelines, as well as pros and cons of different classication methods available in scikit-learn.\n",
"\n",
"<div class=\"alert alert-block alert-warning\">\n",
" <p><i class=\"fa fa-warning\"></i> <strong>Summary table</strong></p>\n",
"\n",
"<p>\n",
"<em>Disclaimer</em>: this table is neither a single source of truth nor complete - it's intended only to provide some first considerations when starting out. At the end of the day, you have to try and pick a method that works for your problem/data.\n",
"</p>\n",
"\n",
"<table>\n",
"<thead>\n",
"<tr>\n",
"<th style=\"text-align: center;\">Classifier type</th>\n",
"<th style=\"text-align: center;\">When?</th>\n",
"<th style=\"text-align: center;\">Advantages</th>\n",
"<th style=\"text-align: center;\">Disadvantages</th>\n",
"</tr>\n",
"</thead>\n",
"<tbody>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Nearest Neighbors<br><br><code>KNeighborsClassifier</code></td>\n",
"<td style=\"text-align: left;\">- numeric data<br> - when (fast) linear classifiers do not work</td>\n",
"<td style=\"text-align: left;\">- simple (not many parameters to tweak), hence, a good baseline classifier</td>\n",
"<td style=\"text-align: left;\">- known not to work well for many dimensions (20 or even less features)</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Logistic Regression<br><br><code>LogisticRegression</code></td>\n",
"<td style=\"text-align: left;\">- high-dimensional data<br> - a lot of data</td>\n",
"<td style=\"text-align: left;\">- fast, also in high dimensions<br> - weights can be interpreted</td>\n",
"<td style=\"text-align: left;\">- data has to be linearly separable (happens often in higher dimensions)<br> - not very efficient with large number of samples</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Linear SVM<br><br><code>LinearSVC</code></td>\n",
"<td style=\"text-align: left;\">same as above but might be better for text analysis (many features)</td>\n",
"<td style=\"text-align: left;\">same as above but might be better with very large number of features</td>\n",
"<td style=\"text-align: left;\">same as above but possibly a bit better with large number of samples</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Kernel SVM<br><br><code>SVC</code></td>\n",
"<td style=\"text-align: left;\">same as above but when linear SVM does not work<br>- not too many data points</td>\n",
"<td style=\"text-align: left;\">same as above but learns non-linear boundaries</td>\n",
"<td style=\"text-align: left;\">same as above but much slower and requires data scaling<br>- model is not easily interpretable</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Decision Tree<br><br><code>DecisionTreeClassifier</code></td>\n",
"<td style=\"text-align: left;\">- for illustration/insight<br> - with multi-class problems <br> - with categorical or mixed categorical and numerical data</td>\n",
"<td style=\"text-align: left;\">- simple to interpret<br> - good classification speed and performance</td>\n",
"<td style=\"text-align: left;\">- prone to overfitting<br> - unstable: small change in the training data can give very different model</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Ensemble Averaging<br><br><code>RandomForestClassifier</code></td>\n",
"<td style=\"text-align: left;\">- when decision tree would be used but for performance</td>\n",
"<td style=\"text-align: left;\">- fixes decision tree issues: does not overfit easily and is stable with respect to training data<br> - takes into account features dependencies<br> - can compute predicitve error when learning<br> ...</td>\n",
"<td style=\"text-align: left;\">- harder to interpret than a single decision tree</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Boosting<br><br><code>AdaBoostClassifier</code> (<code>XGBClassifier</code>, <code>HistGradientBoostingClassifier</code>)</td>\n",
"<td style=\"text-align: left;\">same as above</td>\n",
"<td style=\"text-align: left;\">- works very well out-of-the-box<br>- better performance and more interpretable than random forest when using depth 1 trees</td>\n",
"<td style=\"text-align: left;\">- more prone to overfitting than random forest</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Stacking<br><br><code>StackingClassifier</code></td>\n",
"<td style=\"text-align: left;\">- when having multiple various learners (with different weaknesses)<br>- when not having enough data to use neuronal networks</td>\n",
"<td style=\"text-align: left;\">- works well out-of-the-box<br>- improves performance of even already good learners</td>\n",
"<td style=\"text-align: left;\">- complicates interpretability of results<br>- takes time to train and to build a multi-layer architecture (if enough data, it's easier to use neuronal networks)</td>\n",
"</tr>\n",
"<tr style=\"border-bottom:1px solid black\">\n",
" <td colspan=\"100%\"></td>\n",
"</tr>\n",
"<tr>\n",
"<td colspan=\"100%\" style=\"text-align: center;\"><em>[not shown here]</em></td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Naive Bayes<br><br><code>ComplementNB</code>, ...</td>\n",
"<td style=\"text-align: left;\">- with text data</td>\n",
"<td style=\"text-align: left;\">...</td>\n",
"<td style=\"text-align: left;\">...</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Stochastic Gradient<br><br><code>SGDClassifier</code></td>\n",
"<td style=\"text-align: left;\">- with really big data</td>\n",
"<td style=\"text-align: left;\">...</td>\n",
"<td style=\"text-align: left;\">...</td>\n",
"</tr>\n",
"<tr>\n",
"<td style=\"text-align: left;\">Kernel Approximation<br><br>pipeline: <code>RBFSampler</code> or <code>Nystroem</code> + <code>LinearSVC</code></td>\n",
"<td style=\"text-align: left;\">- with really big data and on-line training</td>\n",
"<td style=\"text-align: left;\">...</td>\n",
"<td style=\"text-align: left;\">...</td>\n",
"</tr>\n",
"</tbody>\n",
"</table>\n",
"\n",
"</div>"
},
{
"cell_type": "markdown",
"source": [
"You should be able now to understand better the classification part of the [\"Choosing the right estimator\" scikit-learn chart ](https://scikit-learn.org/stable/tutorial/machine_learning_map/):\n",
"\n",
"\n",
"<table>\n",
" <tr><td><img src=\"./images/scikit-learn_ml_map-classification.png\" width=800px></td></tr>\n",
" <tr><td><center><sub>Source: <a href=\"https://scikit-learn.org/stable/tutorial/machine_learning_map/\">https://scikit-learn.org/stable/tutorial/machine_learning_map/</a></sub></center></td></tr>\n",
"</table>"
},
{
"cell_type": "markdown",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"nbformat_minor": 4