diff --git a/04_measuring_quality_of_a_classifier.ipynb b/04_measuring_quality_of_a_classifier.ipynb
index 8597d27909b00bd5ea108c66d4e31953edb5284a..3380a0845c7d6093710879c5142704cefe220902 100644
--- a/04_measuring_quality_of_a_classifier.ipynb
+++ b/04_measuring_quality_of_a_classifier.ipynb
@@ -312,7 +312,7 @@
    "source": [
     "## Exercise block 1\n",
     "\n",
-    "1.1 A classifier predicts labels `[0, 0, 1, 1, 1, 0, 1, 1]` whereas true labels are `[0, 1, 0, 1, 1, 0, 1, 0]`. First write these values as a two columned table using pen & paper and assign `FP`, `TP`, ... to each row. Now create the confusion matrix and compute accuracy.\n",
+    "1.1 A classifier predicts labels `[0, 1, 0, 1, 1, 0, 1, 0]` whereas the true labels are `[0, 0, 1, 1, 1, 0, 1, 1]`. First write these values as a two-column table using pen & paper and assign `FP`, `TP`, ... to each row. Now create the confusion matrix and compute accuracy.\n",
     "\n",
    "1.2 A random classifier just assigns a randomly chosen label `0` or `1` for a given feature. What is the average accuracy of such a classifier?"
    ]
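As a side note on the updated exercise cell above: a worked sketch of 1.1 in plain Python (not part of the notebook diff itself; the variable names `true`/`pred` are illustrative) tallies `TP`, `TN`, `FP`, `FN` by pairing the two label lists and then computes accuracy as the fraction of correct predictions.

```python
# Labels from the corrected exercise 1.1 (predicted vs. true).
true = [0, 0, 1, 1, 1, 0, 1, 1]
pred = [0, 1, 0, 1, 1, 0, 1, 0]

# Count each cell of the confusion matrix by comparing pairs.
tp = sum(1 for t, p in zip(true, pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(true, pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(true, pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(true, pred) if t == 1 and p == 0)

# Accuracy = correct predictions / all predictions.
accuracy = (tp + tn) / len(true)
print(tp, tn, fp, fn, accuracy)  # 3 2 1 2 0.625
```

For 1.2, each random guess matches the true label with probability 1/2 regardless of the label distribution, so the expected accuracy of such a classifier is 0.5.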
@@ -809,6 +809,7 @@
   }
  ],
  "metadata": {
+  "celltoolbar": "Tags",
   "kernelspec": {
    "display_name": "Python 3",
    "language": "python",
@@ -824,7 +825,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.2"
+   "version": "3.7.3"
   },
   "latex_envs": {
    "LaTeX_envs_menu_present": true,