diff --git a/06_classifiers_overview-part_2.ipynb b/06_classifiers_overview-part_2.ipynb
index ad722359bcef8e2cb87cefe18f5dcdd5e835783a..2b34ccf99eb61b5b4536bda61158037a650066b2 100644
--- a/06_classifiers_overview-part_2.ipynb
+++ b/06_classifiers_overview-part_2.ipynb
@@ -69049,7 +69049,8 @@
    "source": [
     "### Exercise section\n",
     "\n",
-    "1. In theory for XOR dataset it should suffice to use each feature exactly once with splits at `0`, but the decision tree learning algorithm is unable to find such a solution. Play around with `max_depth` to get a smaller but similarly performing decision tree for the XOR dataset.\n",
+    "1. In theory, for the XOR dataset it should suffice to use each feature exactly once with splits at `0`, but the decision tree learning algorithm is unable to find such a solution. Play around with `max_depth` to get a smaller but similarly performing decision tree for the XOR dataset.<br/>\n",
+    "  Bonus question: which other hyperparameter could you have used to get the same result?\n",
     "\n",
     "2. Build a decision tree for the `\"data/beers.csv\"` dataset. Use maximum depth and tree pruning strategies to get a much smaller tree that performs as well as the default tree.<br/>\n",
     "  Note: `classifier.tree_` instance has attributes such as `max_depth`, `node_count`, or `n_leaves`, which measure size of the tree."
@@ -259985,7 +259986,11 @@
     "        test_features_2d=X_test,\n",
     "        test_labels=y_test,\n",
     "        plt=ax,\n",
-    "    )"
+    "    )\n",
+    "\n",
+    "# Equivalently, we could have used the `min_impurity_split` early-stopping criterion\n",
+    "# (deprecated in newer scikit-learn versions in favor of `min_impurity_decrease`)\n",
+    "# with any (gini) impurity threshold between 0.15 and 0.4"
    ]
   },
   {