diff --git a/02_classification.ipynb b/02_classification.ipynb
index af4939395a1fbede6da5270bf7d17a0425d2d5ad..613d93a546a24ef7bb5d4ebad38afee6dd2f1825 100644
--- a/02_classification.ipynb
+++ b/02_classification.ipynb
@@ -233,7 +233,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Consequence: A simple classifier could use these scores and use a threshold around 3.5 to assign a class label."
+    "Consequence: A simple classifier could use these scores and apply a threshold around 3.5 to assign a class label:"
    ]
   },
   {
@@ -280,7 +280,7 @@
    "\n",
    "Modify the weights in the beer classifiers and check if you can improve separation in the histogram.\n",
    "\n",
-    "In `scikit-lear` the weights of a trained linear are availble via the `coef_` attribute. Extract the weights from the `LogisticRegression` classifier example from the last script and try them."
+    "In `scikit-learn` the weights of a trained linear classifier are available via the `coef_` attribute. Extract the weights from the `LogisticRegression` classifier example from the last script and try them."
    ]
   },
   {
@@ -289,7 +289,7 @@
    "source": [
    "## Geometrical interpretation of feature vectors\n",
    "\n",
-    "If you take the values of a input-feature vector you can imagine this as a point in a d-dimensional space.\n",
+    "If you take the values of an input feature vector you can imagine it as a point in an n-dimensional space.\n",
    "\n",
    "\n",
    "E.g. if a data set consists of feature vectors of length 2, you can interpret the first feature value as a x-coordinate and the second value as a y-coordinate.\n",
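For the exercise text touched in the second hunk, a minimal sketch of the `coef_` extraction it refers to is shown below. It is not part of the patch: the data set is a synthetic placeholder generated with `make_classification`, standing in for the notebook's actual beer features and the `LogisticRegression` example from the previous script.

# Sketch (not part of the patch above): after fitting, LogisticRegression
# exposes one learned weight per input feature via coef_ and the bias via
# intercept_. The synthetic data is a placeholder for the notebook's beer data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

clf = LogisticRegression()
clf.fit(X, y)

print(clf.coef_)       # shape (1, n_features): one weight per input feature
print(clf.intercept_)  # the bias term

These weights could then be plugged into the hand-written beer classifier from the exercise to compare the score histograms.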