{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A \"model\" allows us to explain observations and to answer questions. For example:\n",
"\n",
" 1. Where will my car at given velocity stop when I break now ?\n",
" 2. Where on the night sky will I see the moon tonight ?\n",
" 2. Is the email I received spam ? \n",
" 4. What article X should I recommend to my customers Y ?\n",
"- The first two questions can be answered based on existing physical models (formulas). \n",
"\n",
"- For the questions 3 and 4 it is difficult to develop explicitly formulated models. \n",
"### What is needed to apply ML ?\n",
"\n",
"Problems 3 and 4 have the following in common:\n",
"\n",
"- No exact model known or implementable because we have a vague understanding of the problem domain.\n",
"- But enough data with sufficient and implicit information is available.\n",
"\n",
"E.g. for the spamming example:\n",
"\n",
"- We have no explicit formula for such a task\n",
"- We have a vague understanding of the problem domeani, because we know that some words are specific for spam emails, other words are specific for my personal and job emails.\n",
"- My mailbox is full with examples for spam vs non-spam.\n",
"\n",
"\n",
"**In such cases machine learning offers approaches to build models based on example data.**\n",
"\n",
"\n",
"\n",
"\n",
"## ML: what is \"learning\" ?\n",
"\n",
"To create a predictive model, we first must \"learn\" such a model on given data. \n",
"\n",
"All ML algorithms have in common that they rely on internal data structures and/or parameters. Learning then builds up such data structures or adjusts parameters based on the given data. After that such models can be used to explain observations or to answer questions.\n",
"\n",
"The important difference between explicit models and models learned from data:\n",
"\n",
"- Explicit models usually offer exact answers to questions\n",
"- Models we learn from data usually come with inherent uncertainty."
]
},
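{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this idea: we \"learn\" the two parameters (slope and intercept) of a straight line from noisy example data. The data and the use of `numpy.polyfit` are only meant as an illustration:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# noisy example data following a \"true\" relationship y = 2 * x + 1\n",
"rng = np.random.RandomState(42)\n",
"x = np.linspace(0, 10, 50)\n",
"y = 2 * x + 1 + rng.normal(scale=1.0, size=len(x))\n",
"\n",
"# \"learning\" here means: adjust the two parameters (slope, intercept) to the data\n",
"slope, intercept = np.polyfit(x, y, deg=1)\n",
"print(slope, intercept)  # close to 2 and 1, but not exact: inherent uncertainty"
]
},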
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Some parts of ML are older than you might think. This is a rough time line with a few selected achievements from this field:\n",
"\n",
" \n",
" 1812: Bayes Theorem\n",
" 1913: Markov Chains\n",
" 1951: First neural network\n",
" 1959: first use or term \"machine learning\" AI pioneer Arthur Samuel\n",
" 1969: Book \"Perceptrons\": Limitations of Neural Networks\n",
" 1986: Backpropagation to learn neural networks\n",
" 1995: Randomized Forests and Support Vector Machines\n",
" 1998: Public appearance of ML: naive Bayes Classifier for Spam detection\n",
" 2000+: Deep learning\n",
" \n",
"So the field is not as new as one might think, but due to \n",
"\n",
"- more available data\n",
"- more processing power \n",
"- development of better algorithms \n",
"\n",
"more applications of machine learning appeared during the last 15 years."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Machine learning with Python\n",
"\n",
"Currently (2018) `Python` is the dominant programming language for ML. Especially the advent of deep-learning pushed this forward. First releases of frameworks such as `TensorFlow` or `PyTorch` were released with`Python` support early.\n",
"\n",
"The prevalent packages in the Python eco-system used for ML include:\n",
"\n",
"- `pandas` for handling tabualar data\n",
"- `matplotlib` and `seaborn` for plotting\n",
"- `scikit-learn` for classical (non-deep-learning) ML\n",
"- `tensorflow`, `PyTorch` and `Keras` for deep-learning.\n",
"\n",
"`scikit-learn` is very comprehensive and the online-documentation itself provides a good introducion into ML."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A typical and very common situation is that our data is presented as a table, as in the following example:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>alcohol_content</th>\n",
" <th>bitterness</th>\n",
" <th>darkness</th>\n",
" <th>fruitiness</th>\n",
" <th>is_yummy</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>3.739295</td>\n",
" <td>0.422503</td>\n",
" <td>0.989463</td>\n",
" <td>0.215791</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>4.207849</td>\n",
" <td>0.841668</td>\n",
" <td>0.928626</td>\n",
" <td>0.380420</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>4.709494</td>\n",
" <td>0.322037</td>\n",
" <td>5.374682</td>\n",
" <td>0.145231</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>4.684743</td>\n",
" <td>0.434315</td>\n",
" <td>4.072805</td>\n",
" <td>0.191321</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>4.148710</td>\n",
" <td>0.570586</td>\n",
" <td>1.461568</td>\n",
" <td>0.260218</td>\n",
" <td>0</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" alcohol_content bitterness darkness fruitiness is_yummy\n",
"0 3.739295 0.422503 0.989463 0.215791 0\n",
"1 4.207849 0.841668 0.928626 0.380420 0\n",
"2 4.709494 0.322037 5.374682 0.145231 1\n",
"3 4.684743 0.434315 4.072805 0.191321 1\n",
"4 4.148710 0.570586 1.461568 0.260218 0"
]
},
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"\n",
"features = pd.read_csv(\"beers.csv\")\n",
"features.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"- every row of such a matrix is called a **sample** or **feature vector**. \n",
"\n",
"- the cells in a row are **feature values**.\n",
"- every column name is called a **feature name** or **attribute**."
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"The feature names are `alcohol_content`, `bitterness`, `darkness`, `fruitiness` and `is_yummy`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(Almost) all machine learning algorithms require that your data is numerical and/or categorial. In some applications it is not obvious how to transform data to a numerical presentation.\n",
"\n",
"\n",
"*Categorical data*: data which has only a limited set of allowed values. A `taste` feature could only allow values `sour`, `bitter`, `sweet`, `salty`."
]
},
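{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common way to turn such a categorical feature into numbers is so-called one-hot encoding: one 0/1 column per allowed value. Below is a small sketch using `pandas`; the example `taste` values are made up for illustration:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"# a made-up categorical taste feature (for illustration only)\n",
"samples = pd.DataFrame({\"taste\": [\"sour\", \"sweet\", \"bitter\", \"sweet\", \"salty\"]})\n",
"\n",
"# one-hot encoding: one 0/1 column per allowed value\n",
"print(pd.get_dummies(samples, columns=[\"taste\"]))"
]
},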
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A straight-forward application for machine-learning on the previos beer dataset is: **\"can we predict `is_yummy` from the other features\"** ?\n",
"In this case we would call the features `alcohol_content`, `bitterness`, `darkness`, `fruitiness` our **input features** and `is_yummy` our **target value**."
]
},
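{
"cell_type": "markdown",
"metadata": {},
"source": [
"With `pandas` this split can be written as follows (a small sketch, assuming the `beers.csv` file loaded above):"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"beer_data = pd.read_csv(\"beers.csv\")\n",
"\n",
"# input features: all columns except the target column\n",
"input_features = beer_data[[\"alcohol_content\", \"bitterness\", \"darkness\", \"fruitiness\"]]\n",
"\n",
"# target value we want to predict\n",
"target = beer_data[\"is_yummy\"]\n",
"\n",
"print(input_features.shape, target.shape)"
]
},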
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### How to represent images as feature vectors ?\n",
"\n",
"To simplify our explanations we consider gray images only here. Computers represent images as matrices. Every cell in the matrix represents one pixel, and the numerical value in the matrix cell its gray value.\n",
"\n",
"As we said, most machine learning algorithms require that every sample is represented as a vector containing numbers. \n",
"\n",
"So how can we represent images as vectors then ?\n",
"\n",
"`scikit-learn` includes some example data sets which we load now:"
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import load_digits\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline"
]
},
{
"cell_type": "code",
"dd = load_digits()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we plot the first nine digits from this data set:"
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABAsAAACBCAYAAACmXjMaAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvIxREBQAAEttJREFUeJzt3V9o3ed9x/HPd/YCWRN8FNYs4IQcO2kGvbEyicLomOXOHt0fJl3MIQ0bOrmxbzpkCCzeleU7+SKzdjGGRdfIsI6AulUqo7SzmJXRmxK5PiaL3YZEHJOYjTREUsICMYmfXVhZ8/Ps6Pf1zqNznq/eLyiN1S+Pnp/fOr9z+uRIspSSAAAAAAAAPvUrvd4AAAAAAADoLxwWAAAAAACACg4LAAAAAABABYcFAAAAAACggsMCAAAAAABQwWEBAAAAAACo4LAAAAAAAABUcFgAAAAAAAAqOCwAAAAAAAAVO3MsamYpx7qfGhgYcM3v3r279uz777/vWvvatWuu+U8++cQ175VSsm6sk7uh1xNPPFF7dudO35e1t+H6+rpr/i68m1L6YjcW6reO9913X+3Zxx9/3LX2hx9+6Jp//fXXXfN3oZiODz30kGvec0/96KOPXGtfuXLFNc899e7s2LGj9myz2XSt/eabbzp3k10xj0XPc50kXb9+vfZsp9Nx7qbvFNPRK+drnMuXL3u3k1sxHR988EHXvOe+6v3/Mvfee69r3vvc+Oqrr9aevXHjhm7cuFHEc+Mjjzzimm80GrVn3333Xdfa77zzjms+9+sb1XwsZjksyO3gwYOu+ampqdqzi4uLrrWPHz/uml9dXXXN46aZmZnas54HuiSdOHHCNb+wsOCavwtXc3+CXhkeHq49Oz8/71q73W675kdGRlzzd6GYjuPj4655zz11ZWXFtbbna0Tinnq37r///tqzL7zwgmvtsbEx73ZyK+ax6Hmuk3wHAK1Wy7eZ/lNMR6+cr3EGBwe928mtmI7PPPOMa97Txnuf3Ldvn2ve+y+2PIfCH3zwgWvtXnruuedc854us7OzrrWnp6dd82tra675u1Drsci3IQAAAAAAgIpahwVm9nUz+7mZvWFmvn+Vjr5Bx/LRMAY6xkDH8tEwBjrGQMcY6BjLpocFZrZD0t9K+gNJX5b0DTP7cu6NobvoWD4axkDHGOhYPhrGQMcY6BgDHeOp886Cr0h6I6W0klK6LuklSaN5t4UM6Fg+GsZAxxjoWD4axkDHGOgYAx2DqXNYsFvSW5/589sbH6swsyNmtmxmy93aHLpq04407Hs8FmOgYwzcU8vHYzEGOsZAxxh4bgyma78NIaU0I2lG6r9fSYN6aBgDHWOgY/loGAMdY6BjDHQsHw3LUuedBdckffaXVD688TGUhY7lo2EMdIyBjuWjYQx0jIGOMdAxmDqHBa9I+pKZ7TGzeyQ9Len7ebeFDOhYPhrGQMcY6Fg+GsZAxxjoGAMdg9n02xBSSh+b2Tcl/UjSDknfTim9ln1n6Co6lo+GMdAxBjqWj4Yx0DEGOsZAx3hq/cyClNIPJP0g816QGR3LR8MY6BgDHctHwxjoGAMdY6BjLF37AYdbaWpqyjW/d+/e2rMDAwOutd977z3X/FNPPeWan5ubc81Htba2Vnt2//79rrUPHDjgml9YWHDNRzY4OOiaP3/+fO3Z9fV119rNZtM1H5n3Hnn48GHX/NGjR2vPnjlzxrX20NCQa35xcdE1j5tarVbt2Xa7nW8jqPDexzzPd+Pj4661r1696prnHvxLo6O+3xTn6Xjy5EnvdrBFPK9Vjx075lrbO99oNFzznr2XxPs61cPzPCpJIyMjWedzqfMzCwAAAAAAwDbCYQEAAAAAAKjgsAAAAAAAAFRwWAAAAAAAACo4LAAAAAAAABUcFgAAAAAAgAoOCwAAAAAAQAWHBQAAAAAAoILDAgAAAAAAUMFhAQAAAAAAqOCwAAAAAAAAVOzs9QYkaWhoyDW/d+9e1/xjjz1We3ZlZcW19rlz51zz3mudm5tzzZdicHDQNT8yMpJnI5La7Xa2taMbGxtzzV+6dKn27Pz8vGvtEydOuOYjm5mZcc2fOnXKNb+8vFx71ntPXVxcdM3jpkaj4ZpvtVq1Z6enp11rN5tN17xXp9PJun4vra2tueYfffTR2rPr6+uutZeWllzz3q9B77WW5OTJk9nW9j434u55730ek5OTrnnvfTXn6+aSeF/je55fPM+jkv+e523ovWfXxTsLAAAAAABAxaaHBWb2iJmdN7PLZvaamU1sxcbQXXQsHw1joGMMdCwfDWOgYwx0jIGO8dT5NoSPJT2XUvqpmd0v6YKZnUspXc68N3QXHctHwxjoGAMdy0fDGOgYAx1joGMwm76zIKX0nymln2788weSrkjanXtj6C46lo+GMdAxBjqWj4Yx0DEGOsZAx3hcP7PAzJqSnpT0kxybwdagY/loGAMdY6Bj+WgYAx1joGMMdIyh9m9DMLP7JP2TpGMppfdv878fkXSki3tDBp/XkYZl4LEYAx1j4J5aPh6LMdAxBjrGwHNjHLUOC8zsV3Uz+HdSSv98u5mU0oykmY351LUdoms260jD/sdjMQY6xsA9tXw8FmOgYwx0jIHnxljq/DYEk/T3kq6klP46/5aQAx3LR8MY6BgDHctHwxjoGAMdY6BjPHV+ZsFXJf25pK+ZWXvjP3+YeV/oPjqWj4Yx0DEGOpaPhjHQMQY6xkDHYDb9NoSU0o8l2RbsBRnRsXw0jIGOMdCxfDSMgY4x0DEGOsbj+m0IAAAAAAAgvtq/DSGngYEB1/yFCxdc8ysrK655D+9eojp27JhrfnJy0jW/a9cu17zH0tJStrWjm56eds13Op1say8sLLjmI/Pe8/bu3ZttfnFx0bW29/lgdXXVNR9Vq9VyzTebzdqzs7OzrrW9j921tTXXvPf5oySee6Qk7du3r/as93m03W675r0dI2s0Gq75S5cu1Z71dsEvjYyMZJ338L5u9hobG3PNe+/zpfBe18WLF2vPep5HJf890vt8kAvvLAAAAAAAABUcFgAAAAAAgAoOCwAAAAAAQAWHBQAAAAAAoILDAgAAAAAAUMFhAQAAAAAAqOCwAAAAAAAAVHBYAAAAAAAAKjgsAAAAAAAAFRwWAAAAAACAip293oAkDQwMuOYXFxcz7cTPu/fV1dVMO+mt6elp1/zs7KxrPuffW6PRyLZ2abx/F8eOHXPNj42NueY9Wq1WtrWjW1lZcc0/8MADtWfPnTvnWts7f+jQIdd8Kffg0dFR1/zp06dd82fPnnXNe0xMTLjmn3322Uw7KY/3HjkyMlJ7dnBw0LW292vKy/u6oSTe59JOp1N71vu8Oz8/n20vpfFem/cx43k8ennvDUtLS3k2Upicr/H379/vmt+zZ49rvl8ei7yzAAAAAAAAVHBYAAAAAAAAKmofFpjZDjO7aGb/knNDyIeGMdAxBjrGQMfy0TAGOsZAx/LRMBbPOwsmJF3JtRFsCRrGQMcY6BgDH
ctHwxjoGAMdy0fDQGodFpjZw5L+SNK38m4HudAwBjrGQMcY6Fg+GsZAxxjoWD4axlP3nQXTkv5S0o07DZjZETNbNrPlruwM3UbDGOgYAx1j+NyONCwCj8UY6BgDHctHw2A2PSwwsz+W9E5K6cLnzaWUZlJKwyml4a7tDl1BwxjoGAMdY6jTkYb9jcdiDHSMgY7lo2FMdd5Z8FVJf2JmHUkvSfqamf1D1l2h22gYAx1joGMMdCwfDWOgYwx0LB8NA9r0sCCl9FcppYdTSk1JT0v6t5TSn2XfGbqGhjHQMQY6xkDH8tEwBjrGQMfy0TAmz29DAAAAAAAA28BOz3BKaUnSUpadYEvQMAY6xkDHGOhYPhrGQMcY6Fg+GsbhOizIZXV11TU/NDSUaSfSwMCAa967l7m5Odc88hscHHTNt9vtTDvpvcnJSdf8xMREno1IGhsbc82vra1l2glu5blnHzp0yLX2mTNnXPPPP/+8a/748eOu+V5ZX1/POj8+Pl571nuP9Jqfn8+6fmRLS0u93sL/ajabvd5C3+h0Oq75/fv3155tNBqutU+fPu2af/LJJ13zJb0m8nbxvg5JKWVbu58e673kfT46f/68a/7kyZO1Z733PO9znfdrxPv1XRffhgAAAAAAACo4LAAAAAAAABUcFgAAAAAAgAoOCwAAAAAAQAWHBQAAAAAAoILDAgAAAAAAUMFhAQAAAAAAqOCwAAAAAAAAVHBYAAAAAAAAKjgsAAAAAAAAFRwWAAAAAACAip293oAkraysuOaHhoZc84cPH84yezdOnTqVdX3g/2N2dtY1PzIy4prft29f7dn5+XnX2gsLC675F198Mev6JZmamnLNLy4u1p4dGBhwrX3w4EHX/NzcnGu+FEtLS675RqPhmh8cHMy2l7Nnz7rm19bWXPORjY6OuubX19drz05OTjp34+O9Z0fmfS49ffp07dlOp+Nau9lsuubHxsZc8+122zVfkunpade85/H48ssve7cD+b/+PU0kX3PvY+vixYuu+Var5ZrPdY/nnQUAAAAAAKCCwwIAAAAAAFBR67DAzBpm9l0z+5mZXTGz3869MXQfHctHwxjoGAMdy0fDGOgYAx1joGMsdX9mwd9I+mFK6U/N7B5Jv5ZxT8iHjuWjYQx0jIGO5aNhDHSMgY4x0DGQTQ8LzGyXpN+V1JKklNJ1SdfzbgvdRsfy0TAGOsZAx/LRMAY6xkDHGOgYT51vQ9gj6ReSXjSzi2b2LTP7wq1DZnbEzJbNbLnru0Q3bNqRhn2Px2IMdIyBe2r5eCzGQMcY6BgDz43B1Dks2CnptyT9XUrpSUn/Len4rUMppZmU0nBKabjLe0R3bNqRhn2Px2IMdIyBe2r5eCzGQMcY6BgDz43B1DkseFvS2ymln2z8+bu6+UWAstCxfDSMgY4x0LF8NIyBjjHQMQY6BrPpYUFK6b8kvWVmv7nxod+TdDnrrtB1dCwfDWOgYwx0LB8NY6BjDHSMgY7x1P1tCH8h6TsbP9FyRdKz+baEjOhYPhrGQMcY6Fg+GsZAxxjoGAMdA6l1WJBSakvi+0oKR8fy0TAGOsZAx/LRMAY6xkDHGOgYS913FmS1srLimj9+/P/8vJPPNTU1VXv2woULrrWHh3ks3I21tTXX/MLCQu3Z0dFR19ojIyOu+dnZWdd8Sdrttmt+cHAw2/zk5KRrbW/3Tqfjmvd8DZZmdXXVNX/mzJlMO5Hm5uZc80ePHs20k9g89+Bdu3a51o58j8ztwIEDrvmJiYlMO5HOnj3rml9aWsqzkQJ5HwPNZrP2bKvVcq3t7TI/P++aj8z7+nB8fLz2rPd1MG7y/r15v/49r4fW19dda3tfR05PT7vmc6nzAw4BAAAAAMA2wmEBAAAAAACo4LAAAAAAAABUcFgAAAAAAAAqOCwAAAAAAAAVHBYAAAAAAIAKDgsAAAAAAEAFhwUAAAAAAKCCwwIAAAAAAFDBYQEAAAAAAKjgsAAAAAAAAFRYSqn7i5r9QtLVWz7865Le7fon61+9uN5HU0pf7MZCd2goba+OvbpWOnYXHWPgnhoDHcvHPTWGqB23U0OJe2oEff1YzHJYcNtPZLacUhrekk/WB6Jeb9Trup3I1xr52m4V+VojX9utol5r1Ou6k6jXG/W6bifytUa+tltFvdao13UnUa836nXdTr9fK9+GAAAAAAAAKjgsAAAAAAAAFVt5WDCzhZ+rH0S93qjXdTuRrzXytd0q8rVGvrZbRb3WqNd1J1GvN+p13U7ka418bbeKeq1Rr+tOol5v1Ou6nb6+1i37mQUAAAAAAKAMfBsCAAAAAACo2JLDAjP7upn93MzeMLPjW/E5e8XMOmb2qpm1zWy51/vpJjqWbzs1lOgYQdSGEh2joGP5tlNDiY4RRG0o0bHfZP82BDPbIel1SYckvS3pFUnfSCldzvqJe8TMOpKGU0qhfjcoHcu33RpKdIwgYkOJjlHQsXzbraFExwgiNpTo2I+24p0FX5H0RkppJaV0XdJLkka34POiu+hYPhrGQMcY6BgDHctHwxjoGAMd+8xWHBbslvTWZ/789sbHokqS/tXMLpjZkV5vpovoWL7t1lCiYwQRG0p0jIKO5dtuDSU6RhCxoUTHvrOz1xsI6HdSStfM7EFJ58zsZymlf+/1puBGxxjoWD4axkDHGOgYAx3LR8MY+r7jVryz4JqkRz7z54c3PhZSSunaxn+/I+l7uvl2mgjoWL5t1VCiYwRBG0p0pGOBgnbcVg0lOkYQtKFEx77ruBWHBa9I+pKZ7TGzeyQ9Len7W/B5t5yZfcHM7v/0nyX9vqT/6O2uuoaO5ds2DSU6RhC4oURHOhYmcMdt01CiYwSBG0p07LuO2b8NIaX0sZl9U9KPJO2Q9O2U0mu5P2+P/Iak75mZdPPv9h9TSj/s7Za6g47ld9xmDSU6RhCyoURHOhYpZMdt1lCiYwQhG0p07MeO2X91IgAAAAAAKMtWfBsCAAAAAAAoCIcFAAAAAACggsMCAAAAAABQwWEBAAAAAACo4LAAAAAAAABUcFgAAAAAAAAqOCwAAAAAAAAVHBYAAAAAAICK/wHIF1w8ycQXMQAAAABJRU5ErkJggg==\n",
"text/plain": [
"<Figure size 1296x360 with 9 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"\n",
"for i, image in enumerate(dd.images[:N]):\n",
" plt.subplot(1, N, i + 1)\n",
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And this is the first image from the data set, it is a 8 x 8 matrix with values 0 to 15. The range 0 to 15 is fixed for this specific data set. Other formats allow e.g. values 0..255 or floating point values in the range 0 to 1."
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(8, 8)\n",
"[[ 0. 0. 5. 13. 9. 1. 0. 0.]\n",
" [ 0. 0. 13. 15. 10. 15. 5. 0.]\n",
" [ 0. 3. 15. 2. 0. 11. 8. 0.]\n",
" [ 0. 4. 12. 0. 0. 8. 8. 0.]\n",
" [ 0. 5. 8. 0. 0. 9. 8. 0.]\n",
" [ 0. 4. 11. 0. 1. 12. 7. 0.]\n",
" [ 0. 2. 14. 5. 10. 12. 0. 0.]\n",
" [ 0. 0. 6. 13. 10. 0. 0. 0.]]\n"
]
}
],
"source": [
"# print shape and pixel values of the first image:\n",
"print(dd.images[0].shape)\n",
"print(dd.images[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To transform such an image to a feature vector we just have to concatenate the rows to one single vector of size 64:"
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(64,)\n",
"[ 0. 0. 5. 13. 9. 1. 0. 0. 0. 0. 13. 15. 10. 15. 5. 0. 0. 3.\n",
" 15. 2. 0. 11. 8. 0. 0. 4. 12. 0. 0. 8. 8. 0. 0. 5. 8. 0.\n",
" 0. 9. 8. 0. 0. 4. 11. 0. 1. 12. 7. 0. 0. 2. 14. 5. 10. 12.\n",
" 0. 0. 0. 0. 6. 13. 10. 0. 0. 0.]\n"
]
}
],
"source": [
"vector = dd.images[0].flatten()\n",
"print(vector.shape)\n",
"print(vector)"
]
},
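{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the same way we can flatten all images of the data set at once and obtain a feature matrix with one row per image. As a small sketch (`scikit-learn` also ships this flattened form directly as `dd.data`):"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# one row per image, 64 feature values per row\n",
"feature_matrix = dd.images.reshape(len(dd.images), -1)\n",
"print(feature_matrix.shape)\n",
"\n",
"# the same flattened representation is also available as dd.data:\n",
"print((feature_matrix == dd.data).all())"
]
},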
{
"cell_type": "markdown",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we start a machine learning project for texts, we first have to choose and fix an enumerated dictionary or words for this project. The final representation of texts as feature vectors depends on this dictionary. \n",
"\n",
"Such a dictionary can be very large, but for the sake of simplicity we use a very small enumerated dictionary to explain the overall procedure:\n",
"\n",
"\n",
"| Word | Index |\n",
"|----------|-------|\n",
"| like | 0 |\n",
"| dislike | 1 |\n",
"| american | 2 |\n",
"| italian | 3 |\n",
"| beer | 4 |\n",
"| pizza | 5 |\n",
"\n",
"To \"vectorize\" a given text we count the words in the text which also exist in the vocabulary and put the counts at the given position `Index`.\n",
"\n",
"E.g. `\"I dislike american pizza, but american beer is nice\"`:\n",
"\n",
"| dislike | 1 | 1 |\n",
"| american | 2 | 2 |\n",
"| italian | 3 | 0 |\n",
"| beer | 4 | 1 |\n",
"| pizza | 5 | 1 |\n",
"\n",
"The according feature vector is the `Count` column, which is:\n",
"`[0, 1, 2, 0, 1, 1]`\n",
"\n",
"In real case scenarios the dictionary is much bigger, this results then in vectors with only few non-zero entries (so called sparse vectors)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And this is how we can compute such a word vector using Python:"
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0 1 2 0 1 1]\n"
]
}
],
"source": [
"from sklearn.feature_extraction.text import CountVectorizer\n",
"from itertools import count\n",
"\n",
"vocabulary = {\"like\": 0, \"dislike\": 1, \"american\": 2, \"italian\": 3, \"beer\": 4, \"pizza\": 5}\n",
"# create count vector for a pice of text:\n",
"vector = vectorizer.fit_transform([\"I dislike american pizza. But american beer is nice\"]).toarray().flatten()\n",
"print(vector)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most applications of ML belong to two categories: **supervised** and **unsupervised** learning.\n",
"In supervised learning the the data comes with an additional target value that we want to predict. Such a problem can be either \n",
"\n",
"- **classification**: we want to predict a categorical value.\n",
"- **regression**: we want to predict numbers in a given range.\n",
"\n",
"Examples for supervised learning:\n",
"\n",
"- Classification: Predict the class `is_yummy` based on the attributes `alcohol_content`,\t`bitterness`, \t`darkness` and `fruitiness`. (two class problem).\n",
"\n",
"- Classification: predict the digit-shown based on a 8 x 8 pixel image (this is a multi-class problem).\n",
"\n",
"- Regression: Predict the length of a salmon based on its age and weight."
]
},
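{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small sketch of the first classification example, we train a classifier on the beer data from above. `LogisticRegression` is just one possible choice of classifier here, used only for illustration:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"beer_data = pd.read_csv(\"beers.csv\")\n",
"\n",
"input_features = beer_data[[\"alcohol_content\", \"bitterness\", \"darkness\", \"fruitiness\"]]\n",
"labels = beer_data[\"is_yummy\"]\n",
"\n",
"classifier = LogisticRegression()\n",
"classifier.fit(input_features, labels)         # learn / train the model from the data\n",
"print(classifier.predict(input_features[:5]))  # predict the class of the first five beers"
]
},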
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Unsupervised learning \n",
"\n",
"In unsupervised learning, in which the training data consists of samples without any corresponding target values, one tries to find structure in data. Some common applications are\n",
"\n",
"- Clustering: find groups in data.\n",
"- Density estimation: find a probability distribution in your data.\n",
"- Dimension reduction (e.g. PCA): find latent structures in your data.\n",
"\n",
"Examples for unsupervised learning:\n",
"\n",
"- Can we split up our beer data set into sub groups of similar beers ?\n",
"- Can we reduce our data set because groups of features are somehow correlated ?\n",
"<table>\n",
" <tr>\n",
" <td><img src=\"./cluster-image.png/\" width=60%></td>\n",
" <td><img src=\"./nonlin-pca.png/\" width=60%></td>\n",
" </tr>\n",
" <tr>\n",
" <td><center>Clustering</center></td>\n",
" <td><center>Dimension reduction: detecting 2D structure in 3D data</center></td>\n",
" </tr>\n",
"</table>\n",
"\n",
"\n",
"\n",
"This course will only introduce concepts and methods from **supervised learning**."
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## How to apply machine learning in practice ?\n",
"\n",
"Application of machine learning in practice consists of several phases:\n",
"\n",
"1. Understand and clean your data.\n",
"1. Learn / train a model \n",
"2. Analyze model for its quality / performance\n",
"2. Apply this model to new incoming data\n",
"\n",