diff --git a/01_introduction.ipynb b/01_introduction.ipynb
index 29df5c9c4d577ac372984341aace97efd2c5a5fb..1583f89af9173904297cd29bcd3df0df4e27aa35 100644
--- a/01_introduction.ipynb
+++ b/01_introduction.ipynb
@@ -20,9 +20,9 @@
    "source": [
     "- Discipline in the overlap of computer science and statistics\n",
     "- Subset of Artificial Intelligence (AI)\n",
-    "- Learn models from data\n",
     "- Term \"Machine Learning\" was first used in 1959 by AI pioneer Arthur Samuel\n",
-    " "
+    "\n",
+    "- **Learn models from data**\n"
    ]
   },
   {
@@ -36,11 +36,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## About models\n",
+    "## About  models\n",
     "\n",
     "Model examples: \n",
     "\n",
-    "   1. Will the sun shine tomorrow ?\n",
+    "   1. Where will my car stop when I break now ?\n",
     "   2. Where on the night sky will I see the moon tonight ?\n",
     "   2. Is the email I received spam ? \n",
     "   4. What article X should I recommend to my customers Y ?\n",
@@ -65,6 +65,11 @@
     "**In such cases machine learning offers approaches to build models based on example data.**\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -80,7 +85,7 @@
     "    1969: Book \"Perceptrons\": Limitations of Neural Networks\n",
     "    1986: Backpropagation to learn neural networks\n",
     "    1995: Randomized Forests and Support Vector Machines\n",
-    "    1998: Naive Bayes Classifier for Spam detection\n",
+    "    1998: Application of naive Bayes Classifier for Spam detection\n",
     "    2000+: Deep learning"
    ]
   },
@@ -88,19 +93,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Features\n",
-    "\n",
-    "(Almost) all machine learning algorithms require that your data is numerical. In some applications it is not obvious how to transform data to a numerical presentation.\n",
+    "## What are features ?\n",
     "\n",
-    "In most cases we can arange our data as a matrix:\n",
-    "- every row of such a matrix is called a **sample** or **feature vector**. \n",
-    "- every column name is called a **feature name** or **attribute**.\n",
-    "- the cells are **feature values**."
+    "In most cases we can arange data used for machine learning as a matrix:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 20,
+   "execution_count": 3,
    "metadata": {},
    "outputs": [
     {
@@ -185,7 +185,7 @@
        "4         4.148710    0.570586  1.461568    0.260218         0"
       ]
      },
-     "execution_count": 20,
+     "execution_count": 3,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -197,6 +197,17 @@
     "features.head()"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "\n",
+    "- every row of such a matrix is called a **sample** or **feature vector**. \n",
+    "- every column name is called a **feature name** or **attribute**.\n",
+    "- the cells are **feature values**."
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -210,7 +221,18 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Images -> Feature vectors\n",
+    "(Almost) all machine learning algorithms require that your data is numerical and/or categorial. In some applications it is not obvious how to transform data to a numerical presentation.\n",
+    "\n",
+    "Definition:\n",
+    "\n",
+    "*Categorical data*: data which has only a limited set of allowed values. A `taste` feature could only allow values `sour`, `bitter`, `sweet`, `salty`."
+   ]
+  },
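+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a minimal sketch (the `taste` column below is a made-up example and not part of our data sets), such categorical values can be translated to numbers, e.g. by one-hot encoding with `pandas`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import pandas as pd\n",
+    "\n",
+    "# hypothetical categorical feature with a limited set of allowed values\n",
+    "samples = pd.DataFrame({\"taste\": [\"sour\", \"sweet\", \"salty\", \"sweet\"]})\n",
+    "\n",
+    "# one-hot encoding: one 0/1 column per allowed value\n",
+    "print(pd.get_dummies(samples[\"taste\"]))"
+   ]
+  },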
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### How to represent images as  feature vectors\n",
     "\n",
     "Computers represent images as matrices. Every cell in the matrix represents one pixel, and the value in the matrix cell its color.\n",
     "\n",
@@ -219,7 +241,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 23,
+   "execution_count": 4,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -230,7 +252,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 24,
+   "execution_count": 5,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -246,7 +268,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 25,
+   "execution_count": 6,
    "metadata": {},
    "outputs": [
     {
@@ -279,7 +301,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 28,
+   "execution_count": 7,
    "metadata": {},
    "outputs": [
     {
@@ -337,7 +359,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Textual data -> Feature vector"
+    "### How to present textual data as feature vectors ?"
    ]
   },
   {
@@ -360,7 +382,7 @@
     "\n",
     "E.g. `\"I dislike american pizza, but american beer is nice\"`:\n",
     "\n",
-    "| Word     | Index |  Count |\n",
+    "| Word     | Index | Count |\n",
     "|----------|-------|-------|\n",
     "| like     | 0     | 1     |\n",
     "| dislike  | 1     | 1     |\n",
@@ -369,7 +391,7 @@
     "| beer     | 4     | 1     |\n",
     "| pizza    | 5     | 1     |\n",
     "\n",
-    "So this text can be encoded as the word vector\n",
+    "The according feature vector is the `Count` column, which is:\n",
     "\n",
     "`[0, 1, 2, 0, 1, 1]`"
    ]
@@ -383,7 +405,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 29,
+   "execution_count": 8,
    "metadata": {},
    "outputs": [
     {
@@ -418,7 +440,7 @@
     "\n",
     "In **supervised learning** the the data comes with additional attributes that we want to predict. Such a problem can be either \n",
     "\n",
-    "- **classification**: samples belong to two or more discrete classes and we want to learn from already labeled data how to predict the class of unlabeled data. \n",
+    "- **classification**: samples belong to two or more discrete classes and we want to learn from already labeled data how to predict the class of unlabeled data. This is the same as saying, that the output is categorical.\n",
     "    \n",
     "- **regression**: if the desired output consists of one or more continuous variables, then the task is called regression.\n",
     "    \n",
@@ -442,9 +464,265 @@
     "This course will only introduce concepts and methods from **supervised learning**."
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## How to apply machine learning in practice ?\n",
+    "\n",
+    "Application of machine learning in practice consists of several phases:\n",
+    "\n",
+    "1. Learn / train a model from example data\n",
+    "2. Analyze model for its quality / performance\n",
+    "2. Apply this model to new incoming data\n",
+    "\n",
+    "In practice steps 1. and 2. are iterated for different machine learning algorithms until performance is optimal or sufficient. "
+   ]
+  },
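+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The following cell is only a minimal, schematic sketch of these three phases on a small made-up toy data set (it is not part of the beer example below); `LogisticRegression` is used as an arbitrary example algorithm:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from sklearn.linear_model import LogisticRegression\n",
+    "\n",
+    "# made-up example data: two feature values per sample, one 0/1 label per sample\n",
+    "toy_examples = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]\n",
+    "toy_labels = [0, 0, 1, 1]\n",
+    "\n",
+    "# 1. learn / train a model from example data\n",
+    "model = LogisticRegression()\n",
+    "model.fit(toy_examples, toy_labels)\n",
+    "\n",
+    "# 2. analyze the model's quality (here: mean accuracy on the example data)\n",
+    "print(\"score on the toy data:\", model.score(toy_examples, toy_labels))\n",
+    "\n",
+    "# 3. apply the model to new incoming data\n",
+    "print(\"prediction for a new sample:\", model.predict([[2.5, 2.5]]))"
+   ]
+  },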
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Exercise section"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Our example beer data set reflects the very personal opinion of one of the tutors which beer he likes and which not. To learn a predictive model and to understand influential factors all beers went through some lab analysis to measure alcohol content, bitterness, darkness and fruitiness."
+   ]
+  },
   {
    "cell_type": "code",
-   "execution_count": 30,
+   "execution_count": 28,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "<div>\n",
+       "<style scoped>\n",
+       "    .dataframe tbody tr th:only-of-type {\n",
+       "        vertical-align: middle;\n",
+       "    }\n",
+       "\n",
+       "    .dataframe tbody tr th {\n",
+       "        vertical-align: top;\n",
+       "    }\n",
+       "\n",
+       "    .dataframe thead th {\n",
+       "        text-align: right;\n",
+       "    }\n",
+       "</style>\n",
+       "<table border=\"1\" class=\"dataframe\">\n",
+       "  <thead>\n",
+       "    <tr style=\"text-align: right;\">\n",
+       "      <th></th>\n",
+       "      <th>alcohol_content</th>\n",
+       "      <th>bitterness</th>\n",
+       "      <th>darkness</th>\n",
+       "      <th>fruitiness</th>\n",
+       "      <th>is_yummy</th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr>\n",
+       "      <th>0</th>\n",
+       "      <td>3.739295</td>\n",
+       "      <td>0.422503</td>\n",
+       "      <td>0.989463</td>\n",
+       "      <td>0.215791</td>\n",
+       "      <td>0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>1</th>\n",
+       "      <td>4.207849</td>\n",
+       "      <td>0.841668</td>\n",
+       "      <td>0.928626</td>\n",
+       "      <td>0.380420</td>\n",
+       "      <td>0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>2</th>\n",
+       "      <td>4.709494</td>\n",
+       "      <td>0.322037</td>\n",
+       "      <td>5.374682</td>\n",
+       "      <td>0.145231</td>\n",
+       "      <td>1</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>3</th>\n",
+       "      <td>4.684743</td>\n",
+       "      <td>0.434315</td>\n",
+       "      <td>4.072805</td>\n",
+       "      <td>0.191321</td>\n",
+       "      <td>1</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>4</th>\n",
+       "      <td>4.148710</td>\n",
+       "      <td>0.570586</td>\n",
+       "      <td>1.461568</td>\n",
+       "      <td>0.260218</td>\n",
+       "      <td>0</td>\n",
+       "    </tr>\n",
+       "  </tbody>\n",
+       "</table>\n",
+       "</div>"
+      ],
+      "text/plain": [
+       "   alcohol_content  bitterness  darkness  fruitiness  is_yummy\n",
+       "0         3.739295    0.422503  0.989463    0.215791         0\n",
+       "1         4.207849    0.841668  0.928626    0.380420         0\n",
+       "2         4.709494    0.322037  5.374682    0.145231         1\n",
+       "3         4.684743    0.434315  4.072805    0.191321         1\n",
+       "4         4.148710    0.570586  1.461568    0.260218         0"
+      ]
+     },
+     "execution_count": 28,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "import pandas as pd\n",
+    "from sklearn.linear_model import LogisticRegression\n",
+    "from sklearn.svm import SVC\n",
+    "\n",
+    "# read some data\n",
+    "\n",
+    "beer_data = pd.read_csv(\"beers.csv\")\n",
+    "beer_data.head()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 29,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "   alcohol_content  bitterness  darkness  fruitiness\n",
+      "0         3.739295    0.422503  0.989463    0.215791\n",
+      "1         4.207849    0.841668  0.928626    0.380420\n",
+      "2         4.709494    0.322037  5.374682    0.145231\n",
+      "3         4.684743    0.434315  4.072805    0.191321\n",
+      "4         4.148710    0.570586  1.461568    0.260218\n",
+      "\n",
+      "0    0\n",
+      "1    0\n",
+      "2    1\n",
+      "3    1\n",
+      "4    0\n",
+      "Name: is_yummy, dtype: int64\n"
+     ]
+    }
+   ],
+   "source": [
+    "# split matrix into features and labels\n",
+    "features = beer_data.iloc[:, :-1]\n",
+    "labels = beer_data.iloc[:, -1]\n",
+    "\n",
+    "print(features.head())\n",
+    "print()\n",
+    "print(labels.head())"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 32,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "LogisticRegression(C=2, class_weight=None, dual=False, fit_intercept=True,\n",
+      "          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,\n",
+      "          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,\n",
+      "          verbose=0, warm_start=False)\n"
+     ]
+    }
+   ],
+   "source": [
+    "classifier = LogisticRegression(C=1)\n",
+    "classifier.fit(features, labels)\n",
+    "print(classifier)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 34,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "225 examples\n",
+      "199 labeled correctly\n"
+     ]
+    }
+   ],
+   "source": [
+    "predicted_labels = classifier.predict(features)\n",
+    "\n",
+    "\n",
+    "print(len(labels), \"examples\")\n",
+    "print(sum(predicted_labels == labels), \"labeled correctly\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Comment\n",
+    "Are you surprised that not all labels where predicted correctly ?\n",
+    "\n",
+    "Reasons for this can be:\n",
+    "- missing information: maybe other features of beer which contribute to the rating where not measured or can not be measured.\n",
+    "- noisy information: features can be noisy"
+   ]
+  },
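+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a side note (a small sketch, not part of the original exercise): `sklearn.metrics` offers `accuracy_score` to compute the fraction of correctly predicted labels directly:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from sklearn.metrics import accuracy_score\n",
+    "\n",
+    "# fraction of correctly predicted labels for the classifier fitted above\n",
+    "print(accuracy_score(labels, predicted_labels))"
+   ]
+  },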
+  {
+   "cell_type": "code",
+   "execution_count": 27,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "(225,)\n",
+      "(225,)\n",
+      "205\n"
+     ]
+    }
+   ],
+   "source": [
+    "classifier = SVC()\n",
+    "classifier.fit(features, labels)\n",
+    "\n",
+    "predicted_labels = classifier.predict(features)\n",
+    "\n",
+    "print(predicted_labels.shape)\n",
+    "print(labels.shape)\n",
+    "print(sum(predicted_labels == labels))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
    "metadata": {},
    "outputs": [
     {
@@ -530,7 +808,7 @@
        "<IPython.core.display.HTML object>"
       ]
      },
-     "execution_count": 30,
+     "execution_count": 1,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -616,6 +894,13 @@
     "css_styling()\n",
     "#REMOVEEND"
    ]
   }
  ],
  "metadata": {