"# Chapter 1: General introduction to machine learning (ML)"
"# Chapter 1: General Introduction to machine learning (ML)"
]
},
{
...
...
@@ -69,7 +69,6 @@
"source": [
"\n",
"## Some history\n",
"**ADD REFERENCE IF POSSIBLE**\n",
"\n",
"Some parts of ML are older than you might think. This is a rough time line with a few selected achievements from this field:\n",
"\n",
...
...
@@ -1097,7 +1096,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Comment**: `predicted_labels == labels` evaluates as a vector of values `True` or `False`. Python handles `True` as `1` and `False` as `0` when used as numbers. So the `sum(...)` just counts the correct results.\n"
"**Comment**: `predicted_labels == labels` evaluates as a vector of values `True` or `False`. Python handles `True` as `1` and `False` as `0` when used as numbers. So the `sum(...)` just counts the correct results.\n",
""
]
},
{
...
...
@@ -1129,7 +1129,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we play with a different ML algorithm, the so called `Support Vector Classifier` (which belongs to a class of algorithms named `SVM`s (`Support Vector Machines`). As already said, details about classifiers will follow later.\n"
"Now we play with a different ML algorithm, the so called `Support Vector Classifier` (which belongs to a class of algorithms named `SVM`s (`Support Vector Machines`).\n",
"\n",
"**we will discuss available ML algorithms in a following script**\n",
""
]
},
{
...
...
@@ -1184,13 +1187,14 @@
"\n",
"### Instructions:\n",
"\n",
"- Play with parameter `C` for `LogisticRegresseion` and `SVC`.\n",
"- Play with parameter `C` for `LogisticRegression` and `SVC`.\n",
"\n",
"### Optional exercise:\n",
"\n",
"Experiment with the so called \"iris datasedt\" which is included in `scikit-learn`:https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html\n",
"\n",
"\n"
"\n",
""
]
},
{
...
...
%% Cell type:markdown id: tags:
# Chapter 1: General Introduction to machine learning (ML)
%% Cell type:markdown id: tags:
## ML = "learning models from data"
### About models
A "model" allows us to explain observations and to answer questions. For example:
1. Where will my car at given velocity stop if I apply break now ?
2. Where on the night sky will I see the moon tonight ?
2. Is the email I received a spam ?
4. What article X should I recommend to my customers Y ?
- The first two questions can be answered based on existing physical models (formulas).
- For the questions 3 and 4 it is difficult to develop explicitly formulated models.
### What is needed to apply ML ?
Problems 3 and 4 have the following in common:
- No exact model is known or implementable, because we have only a vague understanding of the problem domain.
- But enough data with sufficient, implicit information is available.
E.g. for the spam email example:
- We have no explicit formula for such a task.
- We have a vague understanding of the problem domain: we know that some words are specific to spam emails and others are specific to my personal and work-related emails.
- My mailbox is full of examples of both spam and non-spam emails.
**In such cases machine learning offers approaches to build models based on example data.**
## ML: what is "learning"?
To create a predictive model, we must first train such a model on given data.
**Alternative names for "to train" a model are "to fit" or "to learn" a model.**
All ML algorithms have in common that they rely on internal data structures and/or parameters. Learning then builds up such data structures or adjusts parameters based on the given data. After that such models can be used to explain observations or to answer questions.
The important difference between explicit models and models learned from data is:
- Explicit models usually offer exact answers to questions
- Models we learn from data usually come with inherent uncertainty.
%% Cell type:markdown id: tags:
## Some history
Some parts of ML are older than you might think. This is a rough time line with a few selected achievements from this field:
- 1812: Bayes' theorem
- 1913: Markov chains
- 1951: first neural network
- 1959: first use of the term "machine learning" by AI pioneer Arthur Samuel
- 1969: book "Perceptrons": limitations of neural networks
- 1986: backpropagation to train neural networks
- 1995: random forests and support vector machines
- 1998: public appearance of ML: naive Bayes classifier for spam detection
- 2000+: deep learning
So the field is not as new as one might think, but due to
- more available data,
- more processing power,
- the development of better algorithms,
more applications of machine learning have appeared during the last 15 years.
%% Cell type:markdown id: tags:
## Machine learning with Python
Currently (2018) `Python` is the dominant programming language for ML. Especially the advent of deep learning pushed this forward: frameworks such as `TensorFlow` or `PyTorch` got `Python` releases early on.
The prevalent packages in the Python ecosystem used for ML include:
- `pandas` for handling tabular data
- `matplotlib` and `seaborn` for plotting
- `scikit-learn` for classical (non-deep-learning) ML
- `TensorFlow`, `PyTorch` and `Keras` for deep learning
`scikit-learn` is very comprehensive, and its online documentation itself provides a good introduction to ML.
%% Cell type:markdown id: tags:
## ML terms: What are "features"?
A typical and very common situation is that our data is presented as a table, as in the following example:
- every row of such a table is called a **sample** or **feature vector**,
- the cells in a row are the **feature values**,
- every column name is called a **feature name** or **attribute**.
%% Cell type:markdown id: tags:
The table shown holds five samples.
The feature names are `alcohol_content`, `bitterness`, `darkness`, `fruitiness` and `is_yummy`.
%% Cell type:markdown id: tags:
(Almost) all machine learning algorithms require that your data is numerical and/or categorical. In some applications it is not obvious how to transform the data into a numerical representation.
**Definition**:
*Categorical data*: data which has only a limited set of allowed values. E.g. a `taste` feature could only allow the values `sour`, `bitter`, `sweet` and `salty`.
%% Cell type:markdown id: tags:
A straightforward application of machine learning to the previous beer dataset is: **"Can we predict `is_yummy` from the other features?"**
In this case we would call the features `alcohol_content`, `bitterness`, `darkness`, `fruitiness` our **input features** and `is_yummy` our **target value**.
%% Cell type:markdown id: tags:
We now show two examples of how one can create feature vectors from data which is not naturally given as vectors:
1. Feature vectors from images
2. Feature vectors from text.
### 1st Example: How to represent images as feature vectors?
In order to simplify our explanations we only consider grayscale images in this section.
Computers represent images as matrices. Every cell in the matrix represents one pixel, and the numerical value in a cell is the gray value of that pixel.
As mentioned above, most of the machine learning algorithms require that every sample is represented as a vector containing numbers.
So how can we represent images as vectors?
To demonstrate this we will now load a sample dataset that is included in `scikit-learn`:
%% Cell type:code id: tags:
``` python
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt

%matplotlib inline
```
%% Cell type:code id: tags:
``` python
dd = load_digits()
```
%% Cell type:markdown id: tags:
Next we plot the first nine digits from this data set:
%% Cell type:code id: tags:
``` python
N = 9
plt.figure(figsize=(2 * N, 5))
for i, image in enumerate(dd.images[:N]):
    plt.subplot(1, N, i + 1)
    plt.imshow(image, cmap="gray")
```
%% Output
%% Cell type:markdown id: tags:
Below is the first image from the data set; it is an 8 x 8 matrix with values 0 to 15 (black to white). The range 0 to 15 is fixed for this specific data set. Other formats allow e.g. values 0..255 or floating point values in the range 0 to 1.
%% Cell type:code id: tags:
``` python
print(dd.images[0].shape)
print(dd.images[0])
```
%% Output
(8, 8)
[[ 0. 0. 5. 13. 9. 1. 0. 0.]
[ 0. 0. 13. 15. 10. 15. 5. 0.]
[ 0. 3. 15. 2. 0. 11. 8. 0.]
[ 0. 4. 12. 0. 0. 8. 8. 0.]
[ 0. 5. 8. 0. 0. 9. 8. 0.]
[ 0. 4. 11. 0. 1. 12. 7. 0.]
[ 0. 2. 14. 5. 10. 12. 0. 0.]
[ 0. 0. 6. 13. 10. 0. 0. 0.]]
%% Cell type:markdown id: tags:
To transform such an image into a feature vector we just have to concatenate the rows into one single vector of size 64, as sketched below:
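A minimal sketch of this flattening, assuming the `dd` data set loaded above (`numpy`'s `flatten` concatenates the rows of a matrix in row-major order):
%% Cell type:code id: tags:
``` python
# concatenate the 8 rows of the first image into one vector of length 64:
feature_vector = dd.images[0].flatten()
print(feature_vector.shape)
```
%% Cell type:markdown id: tags: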
### 2nd Example: How to represent textual data as feature vectors?
%% Cell type:markdown id: tags:
If we start a machine learning project for texts, we first have to choose and fix an enumerated dictionary of words for this project. The final representation of texts as feature vectors depends on this dictionary.
Such a dictionary can be very large, but for the sake of simplicity we use a very small enumerated dictionary to explain the overall procedure:
| Word | Index |
|----------|-------|
| like | 0 |
| dislike | 1 |
| american | 2 |
| italian | 3 |
| beer | 4 |
| pizza | 5 |
To "vectorize" a given text we count the words in the text which also exist in the vocabulary and put the counts at the given position `Index`.
E.g. `"I dislike american pizza, but american beer is nice"`:
| Word | Index | Count |
|----------|-------|-------|
| like | 0 | 0 |
| dislike | 1 | 1 |
| american | 2 | 2 |
| italian | 3 | 0 |
| beer | 4 | 1 |
| pizza | 5 | 1 |
The corresponding feature vector is the `Count` column, which is:
`[0, 1, 2, 0, 1, 1]`
In real-world scenarios the dictionary is much bigger, which then results in vectors with only a few non-zero entries (so-called sparse vectors).
%% Cell type:markdown id: tags:
**Note**: Such vectorization is usually not done manually. Below is a short code example to demonstrate how text feature vectors can be created with `scikit-learn`. There are also improved but more complicated procedures which compute multiplicative weights for the vector entries to emphasize informative words (see also https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html).
%% Cell type:code id: tags:
``` python
from sklearn.feature_extraction.text import CountVectorizer

# build a vectorizer with the fixed vocabulary from the table above
# (the instantiation was missing here; this line is a reconstruction):
vectorizer = CountVectorizer(vocabulary=["like", "dislike", "american", "italian", "beer", "pizza"])

# this is how one can create a count vector for a given piece of text:
vector = vectorizer.fit_transform(["I dislike american pizza. But american beer is nice"]).toarray().flatten()
print(vector)
```
%% Output
[0 1 2 0 1 1]
%% Cell type:markdown id: tags:
## Taxonomy of machine learning
Most applications of ML belong to one of two categories: **supervised** and **unsupervised** learning.
### Supervised learning
In supervised learning the data comes with an additional target/label value that we want to predict. Such a problem can be either
- **classification**: we want to predict a categorical value.
- **regression**: we want to predict numbers in a given range.
Examples of supervised learning:
- Classification: predict the class `is_yummy` based on the attributes `alcohol_content`, `bitterness`, `darkness` and `fruitiness` (a two-class problem).
- Classification: predict the digit shown based on an 8 x 8 pixel image (a multi-class problem).
- Regression: Predict the length of a salmon based on its age and weight.
%% Cell type:markdown id: tags:
### Unsupervised learning
In unsupervised learning the training data consists of samples without any corresponding target/label values, and the aim is to find structure in the data. Some common applications are:
- Clustering: find groups in data.
- Density estimation, novelty detection: find a probability distribution in your data.
- Dimension reduction (e.g. PCA): find latent structures in your data.
Examples of unsupervised learning:
- Can we split up our beer data set into sub-groups of similar beers?
- Can we reduce our data set because groups of features are somehow correlated?
<table>
<tr>
<td><img src="./cluster-image.png" width="60%"></td>
<td><img src="./nonlin-pca.png" width="60%"></td>
</tr>
<tr>
<td><center>Clustering</center></td>
<td><center>Dimension reduction: detecting 2D structure in 3D data</center></td>
</tr>
</table>
This course will only introduce concepts and methods from **supervised learning**.
%% Cell type:markdown id: tags:
## How to apply machine learning in practice?
Application of machine learning in practice consists of several phases:
1. Understand and clean your data.
2. Learn / train a model.
3. Analyze the model for its quality / performance.
4. Apply the model to new incoming data.
In practice, steps 2 and 3 are iterated for different machine learning algorithms with different configurations until performance is optimal or sufficient.
%% Cell type:markdown id: tags:
# Exercise section 1
%% Cell type:markdown id: tags:
Our example beer data set reflects the very personal opinion of one of the tutors about which beers he likes and which he doesn't. To learn a predictive model and to understand the influential factors, all beers went through some lab analysis to measure alcohol content, bitterness, darkness and fruitiness.
%% Cell type:markdown id: tags:
### 1. Load the data and show the overall structure
Such checks are very useful before you start throwing ML at your data. Some vague understanding of how the features are distributed and correlated can later be very helpful to optimize the performance of ML procedures.
%% Cell type:code id: tags:
``` python
import seaborn as sns
sns.set(style="ticks")

for_plot = beer_data.copy()

def translate_label(value):
    # seaborn has issues if labels are numbers or strings which represent numbers,
    # so we map the 0/1 labels to strings (assumed reconstruction of the truncated cell):
    return "not yummy" if value == 0 else "yummy"

for_plot["is_yummy"] = for_plot["is_yummy"].apply(translate_label)
sns.pairplot(for_plot, hue="is_yummy")
```
%% Output
%% Cell type:markdown id: tags:
- Points and colors don't look randomly distributed.
- We can see that some pairs like `darkness` vs `bitterness` seem to carry information which could support building a classifier.
- We also see that `bitterness` and `fruitiness` show correlation.
Features which show no structure can also decrease the performance of ML, and often it makes sense to discard them.
%% Cell type:markdown id: tags:
### 3. Prepare data: split features and labels
%% Cell type:code id: tags:
``` python
# all columns up to the last one:
input_features = beer_data.iloc[:, :-1]
# only the last column:
labels = beer_data.iloc[:, -1]
print(input_features.head(5))
print()
print(labels.head(5))
```
%% Output
alcohol_content bitterness darkness fruitiness
0 3.739295 0.422503 0.989463 0.215791
1 4.207849 0.841668 0.928626 0.380420
2 4.709494 0.322037 5.374682 0.145231
3 4.684743 0.434315 4.072805 0.191321
4 4.148710 0.570586 1.461568 0.260218
0 0
1 0
2 1
3 1
4 0
Name: is_yummy, dtype: int64
%% Cell type:markdown id: tags:
### 4. Some experiments with classifiers
%% Cell type:markdown id: tags:
We now perform first experiments with the so-called `LogisticRegression` and `SVC` classifiers. Details about these will follow later during the course. The intention of this section is to make very first experiments, not to understand what these algorithms actually do.
Note:
The name `LogisticRegression` is misleading: logistic regression internally uses a kind of regression algorithm to estimate probabilities, with the final goal of classifying data. So even if the name contains "regression", it still is a classifier.
%% Cell type:code id: tags:
``` python
from sklearn.linear_model import LogisticRegression
```
%% Cell type:code id: tags:
``` python
classifier = LogisticRegression(C=1)
```
%% Cell type:markdown id: tags:
In `scikit-learn` all classifiers have a `fit` method to learn / train from data:
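A minimal sketch of such a call, assuming the `classifier`, `input_features` and `labels` from above:
%% Cell type:code id: tags:
``` python
# learn from the beer features and their known labels:
classifier.fit(input_features, labels)
```
%% Cell type:markdown id: tags: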
If you want to learn more about `LogisticRegression` you can use `help(LogisticRegression)` or `LogisticRegression?` to see the related documentation. The latter version only works in Jupyter notebooks.
%% Cell type:markdown id: tags:
Trained `scikit-learn` classifiers have a `predict` method for predicting classes for input features.
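A minimal sketch, continuing the example above:
%% Cell type:code id: tags:
``` python
# predict labels for the data we trained on:
predicted_labels = classifier.predict(input_features)

# count how many predictions agree with the known labels:
print(sum(predicted_labels == labels), "out of", len(labels), "labels predicted correctly")
```
%% Cell type:markdown id: tags: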
**Comment**: `predicted_labels == labels` evaluates to a vector of values `True` or `False`. Python handles `True` as `1` and `False` as `0` when used as numbers. So the `sum(...)` just counts the correct results.
%% Cell type:markdown id: tags:
## What happened?
Why were not all labels predicted correctly?
Neither `Python` nor `scikit-learn` is broken. What we observed above is very typical for machine-learning applications.
Reasons could be:
- We have incomplete information: other features of beer which also contribute to the rating (like "maltiness") were not measured or cannot be measured.
- The used classifiers might not be suitable for the given problem.
- Noise in the data, such as incorrectly assigned labels, also affects results.
**Finding good features is crucial for the performance of ML algorithms!**
Another important requirement is to make sure that you have clean data: input features might be corrupted by flawed entries, and feeding such data into an ML algorithm will usually reduce performance.
%% Cell type:markdown id: tags:
Now we play with a different ML algorithm, the so-called Support Vector Classifier (`SVC`), which belongs to a class of algorithms named Support Vector Machines (`SVM`s).
**We will discuss the available ML algorithms in a following script.**
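A minimal sketch of the experiment, analogous to the `LogisticRegression` example above:
%% Cell type:code id: tags:
``` python
from sklearn.svm import SVC

# train a support vector classifier on the same data:
classifier = SVC(C=1)
classifier.fit(input_features, labels)

predicted_labels = classifier.predict(input_features)
print(sum(predicted_labels == labels), "out of", len(labels), "labels predicted correctly")
```
%% Cell type:markdown id: tags: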
This is a better result! **But this does not indicate that `SVC` is always superior to `LogisticRegression`.**
Here `SVC` just seems to fit our current machine learning task better.
### Instructions:
- Play with the parameter `C` for `LogisticRegression` and `SVC` (see the sketch below).
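One possible way to structure such an experiment (a sketch; the value grid is only an example):
%% Cell type:code id: tags:
``` python
# try a few values of the regularization parameter C and compare accuracies:
for C in [0.01, 0.1, 1, 10, 100]:
    classifier = SVC(C=C)
    classifier.fit(input_features, labels)
    predicted_labels = classifier.predict(input_features)
    print(C, ":", sum(predicted_labels == labels), "out of", len(labels))
```
%% Cell type:markdown id: tags: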
### Optional exercise:
Experiment with the so-called "iris dataset" which is included in `scikit-learn`: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html
%% Cell type:code id: tags:
``` python
from sklearn.datasets import load_iris

data = load_iris()
# labels as text
print(data.target_names)
# (rows, columns) of the feature matrix:
print(data.data.shape)
```
%% Output
['setosa' 'versicolor' 'virginica']
(150, 4)
%% Cell type:code id: tags:
``` python
# transform the scikit-learn data structure into a data frame
# (assumed completion of the truncated cell; column names come from the data set itself):
import pandas as pd

df = pd.DataFrame(data.data, columns=data.feature_names)
df["class"] = data.target
print(df.head())
```