From b43df15f53ce719a153b9e8162a11b54f59931d0 Mon Sep 17 00:00:00 2001
From: Mikolaj Rybinski <mikolaj.rybinski@id.ethz.ch>
Date: Wed, 3 Mar 2021 16:11:50 +0100
Subject: [PATCH] In classifiers: bonus Q&A for first exercise

---
 06_classifiers_overview-part_2.ipynb | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/06_classifiers_overview-part_2.ipynb b/06_classifiers_overview-part_2.ipynb
index ad72235..2b34ccf 100644
--- a/06_classifiers_overview-part_2.ipynb
+++ b/06_classifiers_overview-part_2.ipynb
@@ -69049,7 +69049,8 @@
    "source": [
     "### Exercise section\n",
     "\n",
-    "1. In theory for XOR dataset it should suffice to use each feature exactly once with splits at `0`, but the decision tree learning algorithm is unable to find such a solution. Play around with `max_depth` to get a smaller but similarly performing decision tree for the XOR dataset.\n",
+    "1. In theory for XOR dataset it should suffice to use each feature exactly once with splits at `0`, but the decision tree learning algorithm is unable to find such a solution. Play around with `max_depth` to get a smaller but similarly performing decision tree for the XOR dataset.<br/>\n",
+    "  Bonus question: which other hyperparameter you could have used to get the same result?\n",
     "\n",
     "2. Build a decision tree for the `\"data/beers.csv\"` dataset. Use maximum depth and tree pruning strategies to get a much smaller tree that performs as well as the default tree.<br/>\n",
     "  Note: `classifier.tree_` instance has attributes such as `max_depth`, `node_count`, or `n_leaves`, which measure size of the tree."
@@ -259985,7 +259986,16 @@
     "        test_features_2d=X_test,\n",
     "        test_labels=y_test,\n",
     "        plt=ax,\n",
-    "    )"
+    "    )\n",
+    "\n",
+    "# We could have used equivalently `min_impurity_split` early stopping criterium with any (gini) value between 0.15 and 0.4"
    ]
   },
   {
-- 
GitLab