"- Discipline in the overlap of computer science and statistics\n",
"- Subset of Artificial Intelligence (AI)\n",
"- Learn models from data\n",
"- Term \"Machine Learning\" was first used in 1959 by AI pioneer Arthur Samuel\n",
" "
"\n",
"- **Learn models from data**\n"
]
},
{
...
...
@@ -36,11 +36,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## About models\n",
"## About models\n",
"\n",
"Model examples: \n",
"\n",
" 1. Will the sun shine tomorrow ?\n",
" 1. Where will my car stop when I break now ?\n",
" 2. Where on the night sky will I see the moon tonight ?\n",
" 2. Is the email I received spam ? \n",
" 4. What article X should I recommend to my customers Y ?\n",
...
...
@@ -65,6 +65,11 @@
"**In such cases machine learning offers approaches to build models based on example data.**\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
...
...
@@ -80,7 +85,7 @@
" 1969: Book \"Perceptrons\": Limitations of Neural Networks\n",
" 1986: Backpropagation to learn neural networks\n",
" 1995: Randomized Forests and Support Vector Machines\n",
" 1998: Naive Bayes Classifier for Spam detection\n",
" 1998: Application of naive Bayes Classifier for Spam detection\n",
" 2000+: Deep learning"
]
},
...
...
@@ -88,19 +93,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Features\n",
"\n",
"(Almost) all machine learning algorithms require that your data is numerical. In some applications it is not obvious how to transform data to a numerical presentation.\n",
"## What are features ?\n",
"\n",
"In most cases we can arange our data as a matrix:\n",
"- every row of such a matrix is called a **sample** or **feature vector**. \n",
"- every column name is called a **feature name** or **attribute**.\n",
"- the cells are **feature values**."
"In most cases we can arange data used for machine learning as a matrix:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 3,
"metadata": {},
"outputs": [
{
...
...
@@ -185,7 +185,7 @@
"4 4.148710 0.570586 1.461568 0.260218 0"
]
},
"execution_count": 20,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
...
...
@@ -197,6 +197,17 @@
"features.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"- every row of such a matrix is called a **sample** or **feature vector**. \n",
"- every column name is called a **feature name** or **attribute**.\n",
"- the cells are **feature values**."
]
},
{
"cell_type": "markdown",
"metadata": {},
...
...
@@ -210,7 +221,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Images -> Feature vectors\n",
"(Almost) all machine learning algorithms require that your data is numerical and/or categorial. In some applications it is not obvious how to transform data to a numerical presentation.\n",
"\n",
"Definition:\n",
"\n",
"*Categorical data*: data which has only a limited set of allowed values. A `taste` feature could only allow values `sour`, `bitter`, `sweet`, `salty`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### How to represent images as feature vectors\n",
"\n",
"Computers represent images as matrices. Every cell in the matrix represents one pixel, and the value in the matrix cell its color.\n",
"\n",
...
...
@@ -219,7 +241,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -230,7 +252,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -246,7 +268,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 6,
"metadata": {},
"outputs": [
{
...
...
@@ -279,7 +301,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 7,
"metadata": {},
"outputs": [
{
...
...
@@ -337,7 +359,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Textual data -> Feature vector"
"### How to present textual data as feature vectors ?"
]
},
{
...
...
@@ -360,7 +382,7 @@
"\n",
"E.g. `\"I dislike american pizza, but american beer is nice\"`:\n",
"\n",
"| Word | Index | Count |\n",
"| Word | Index | Count |\n",
"|----------|-------|-------|\n",
"| like | 0 | 1 |\n",
"| dislike | 1 | 1 |\n",
...
...
@@ -369,7 +391,7 @@
"| beer | 4 | 1 |\n",
"| pizza | 5 | 1 |\n",
"\n",
"So this text can be encoded as the word vector\n",
"The according feature vector is the `Count` column, which is:\n",
"\n",
"`[0, 1, 2, 0, 1, 1]`"
]
...
...
@@ -383,7 +405,7 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 8,
"metadata": {},
"outputs": [
{
...
...
@@ -418,7 +440,7 @@
"\n",
"In **supervised learning** the the data comes with additional attributes that we want to predict. Such a problem can be either \n",
"\n",
"- **classification**: samples belong to two or more discrete classes and we want to learn from already labeled data how to predict the class of unlabeled data. \n",
"- **classification**: samples belong to two or more discrete classes and we want to learn from already labeled data how to predict the class of unlabeled data. This is the same as saying, that the output is categorical.\n",
" \n",
"- **regression**: if the desired output consists of one or more continuous variables, then the task is called regression.\n",
" \n",
...
...
@@ -442,9 +464,265 @@
"This course will only introduce concepts and methods from **supervised learning**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## How to apply machine learning in practice ?\n",
"\n",
"Application of machine learning in practice consists of several phases:\n",
"\n",
"1. Learn / train a model from example data\n",
"2. Analyze model for its quality / performance\n",
"2. Apply this model to new incoming data\n",
"\n",
"In practice steps 1. and 2. are iterated for different machine learning algorithms until performance is optimal or sufficient. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise section"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Our example beer data set reflects the very personal opinion of one of the tutors which beer he likes and which not. To learn a predictive model and to understand influential factors all beers went through some lab analysis to measure alcohol content, bitterness, darkness and fruitiness."
- Discipline in the overlap of computer science and statistics
- Subset of Artificial Intelligence (AI)
- Learn models from data
- Term "Machine Learning" was first used in 1959 by AI pioneer Arthur Samuel
- **Learn models from data**
%% Cell type:markdown id: tags:
So the field is not as new as one might think, but due to more available data, processing power and development of better algorithms more applications of machine learning appeared during the last 15 years.
%% Cell type:markdown id: tags:
## About models
Model examples:
1. Where will my car stop when I brake now ?
2. Where on the night sky will I see the moon tonight ?
3. Is the email I received spam ?
4. What article X should I recommend to my customers Y ?
The first two questions can be answered based on existing mathematically explicit models (formulas).
For questions 3 and 4 it is difficult to develop explicitly formulated models.
These problems 3 and 4 have the following in common:
- No exact model known or implementable
- Vague understanding of the problem domain
- Enough data with sufficient (implicit) information available
E.g. for the spam example:
- We have no explicit formula for such a task.
- We know that certain words are typical for spam emails, while other words are typical for my personal and job emails.
- My mailbox is full of examples of spam vs non-spam.
**In such cases machine learning offers approaches to build models based on example data.**
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
## Some history
Rough time line with a few examples
1812: Bayes Theorem
1913: Markov Chains
1951: First neural network
1969: Book "Perceptrons": Limitations of Neural Networks
1986: Backpropagation to learn neural networks
1995: Randomized Forests and Support Vector Machines
1998: Application of naive Bayes Classifier for Spam detection
2000+: Deep learning
%% Cell type:markdown id: tags:
## What are features ?
In most cases we can arrange data used for machine learning as a matrix:
- every row of such a matrix is called a **sample** or **feature vector**.
- every column name is called a **feature name** or **attribute**.
- the cells are **feature values**.
%% Cell type:markdown id: tags:
This table holds five samples.
The feature names are `alcohol_content`, `bitterness`, `darkness`, `fruitiness` and `is_yummy`.
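As a minimal sketch (with made-up values, not the actual beer data set), such a feature matrix can be built as a `pandas` data frame:
%% Cell type:code id: tags:
``` python
import pandas as pd

# three made-up samples; every column is a feature, every cell a feature value
features = pd.DataFrame({
    "alcohol_content": [4.1, 5.2, 4.8],
    "bitterness": [0.57, 0.23, 0.41],
    "darkness": [1.46, 2.10, 0.95],
    "fruitiness": [0.26, 0.73, 0.12],
    "is_yummy": [0, 1, 0],
})
print(features.shape)  # 3 rows (samples), 5 columns
```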
%% Cell type:markdown id: tags:
(Almost) all machine learning algorithms require that your data is numerical and/or categorical. In some applications it is not obvious how to transform data to a numerical representation.
Definition:
*Categorical data*: data which has only a limited set of allowed values. A `taste` feature could only allow values `sour`, `bitter`, `sweet`, `salty`.
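Such categorical values can themselves be turned into numbers, e.g. by one-hot encoding. A small sketch with `pandas` (the `taste` feature and its allowed values are taken from the definition above):
%% Cell type:code id: tags:
``` python
import pandas as pd

# a categorical `taste` feature with values from its limited allowed set
samples = pd.DataFrame({"taste": ["sour", "sweet", "salty", "sweet"]})

# one-hot encoding: every occurring value becomes its own indicator column
encoded = pd.get_dummies(samples["taste"])
print(encoded)
```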
%% Cell type:markdown id: tags:
### How to represent images as feature vectors
Computers represent images as matrices. Every cell in the matrix represents one pixel, and the value in the cell encodes its color.
`scikit-learn` includes some example data sets which we load now:
%% Cell type:code id: tags:
``` python
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
%matplotlib inline
```
%% Cell type:code id: tags:
``` python
dd = load_digits()
```
%% Cell type:markdown id: tags:
Next we plot the first nine digits from this data set:
%% Cell type:code id: tags:
``` python
N = 9
plt.figure(figsize=(2 * N, 5))
for i, image in enumerate(dd.images[:N], 1):
    plt.subplot(1, N, i)
    plt.imshow(image, cmap="gray")
```
%% Output
%% Cell type:markdown id: tags:
And this is the first image from the data set: an 8 x 8 matrix with values 0 to 15:
%% Cell type:code id: tags:
``` python
print(dd.images[0].shape)
print(dd.images[0])
```
%% Output
(8, 8)
[[ 0. 0. 5. 13. 9. 1. 0. 0.]
[ 0. 0. 13. 15. 10. 15. 5. 0.]
[ 0. 3. 15. 2. 0. 11. 8. 0.]
[ 0. 4. 12. 0. 0. 8. 8. 0.]
[ 0. 5. 8. 0. 0. 9. 8. 0.]
[ 0. 4. 11. 0. 1. 12. 7. 0.]
[ 0. 2. 14. 5. 10. 12. 0. 0.]
[ 0. 0. 6. 13. 10. 0. 0. 0.]]
%% Cell type:markdown id: tags:
To transform such an image to a feature vector we just have to concatenate the rows to one single vector of size 64:
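The row-by-row concatenation can be done with `numpy`'s `flatten`. A small sketch on a made-up 2 x 3 matrix (applied to `dd.images[0]` from above, the same call yields a vector of length 64):
%% Cell type:code id: tags:
``` python
import numpy as np

# a small made-up 2 x 3 matrix standing in for an 8 x 8 digit image
image = np.array([[0.0, 5.0, 13.0],
                  [2.0, 14.0, 5.0]])

# `flatten` concatenates the rows into one single feature vector
feature_vector = image.flatten()
print(feature_vector)  # [ 0.  5. 13.  2. 14.  5.]
```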
%% Cell type:markdown id: tags:
### How to represent textual data as feature vectors ?
%% Cell type:markdown id: tags:
To transform some text into a feature vector, we first need an enumerated dictionary. Such a dictionary can be very large, but for the sake of simplicity we use a very small one to explain the overall procedure:
| Word | Index |
|----------|-------|
| like | 0 |
| dislike | 1 |
| american | 2 |
| italian | 3 |
| beer | 4 |
| pizza | 5 |
To "vectorize" a given text we count the words in the text which also exist in the vocabulary and put the counts at the given position `Index`.
E.g. `"I dislike american pizza, but american beer is nice"`:
| Word | Index | Count |
|----------|-------|-------|
| like | 0 | 0 |
| dislike | 1 | 1 |
| american | 2 | 2 |
| italian | 3 | 0 |
| beer | 4 | 1 |
| pizza | 5 | 1 |
The corresponding feature vector is the `Count` column, which is:
`[0, 1, 2, 0, 1, 1]`
%% Cell type:markdown id: tags:
And this is how we can compute such a word vector using Python:
%% Cell type:code id: tags:
``` python
from sklearn.feature_extraction.text import CountVectorizer

# fixed vocabulary with the word -> index mapping from the table above
vectorizer = CountVectorizer(vocabulary={"like": 0, "dislike": 1, "american": 2,
                                         "italian": 3, "beer": 4, "pizza": 5})
vector = vectorizer.fit_transform(["I dislike american pizza. But american beer is nice"]).toarray()[0]
print(vector)
```
%% Output
[0 1 2 0 1 1]
%% Cell type:markdown id: tags:
## Taxonomy of machine learning
We can separate learning problems in a few large categories: **supervised** and **unsupervised** learning.
In **supervised learning** the data comes with additional attributes that we want to predict. Such a problem can be either
- **classification**: samples belong to two or more discrete classes and we want to learn from already labeled data how to predict the class of unlabeled data. This is the same as saying that the output is categorical.
- **regression**: if the desired output consists of one or more continuous variables, then the task is called regression.
Examples for supervised learning:
- Classification: Predict the class `is_yummy` based on the attributes `alcohol_content`, `bitterness`, `darkness` and `fruitiness`. (two class problem).
- Classification: Predict the digit shown on an 8 x 8 pixel image (this is a multi-class problem).
- Regression: Predict the length of a salmon based on its age and weight.
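As a minimal sketch (with made-up data and scikit-learn's `KNeighborsClassifier` chosen only for illustration, not a method introduced here), supervised classification boils down to `fit` on labeled samples and `predict` on new ones:
%% Cell type:code id: tags:
``` python
from sklearn.neighbors import KNeighborsClassifier

# made-up labeled data: two features per sample, class labels 0 and 1
X = [[0.0, 0.1], [0.2, 0.3], [2.0, 2.1], [2.2, 2.3]]
y = [0, 0, 1, 1]

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(X, y)                                        # learn from labeled data
predictions = classifier.predict([[0.1, 0.2], [2.1, 2.2]])  # classify new samples
print(predictions)
```
%% Cell type:markdown id: tags: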
In **unsupervised learning** the training data consists of samples without any corresponding target values, and one tries to find structure in the data. Common applications are
- Clustering
- Density estimation
- Dimension reduction (PCA, ...)
This course will only introduce concepts and methods from **supervised learning**.
%% Cell type:markdown id: tags:
## How to apply machine learning in practice ?
Application of machine learning in practice consists of several phases:
1. Learn / train a model from example data
2. Analyze the model for its quality / performance
3. Apply the model to new incoming data
In practice, steps 1 and 2 are iterated for different machine learning algorithms until performance is optimal or sufficient.
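A minimal sketch of these phases (made-up data; scikit-learn's `train_test_split` and `KNeighborsClassifier` chosen only for illustration):
%% Cell type:code id: tags:
``` python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# made-up labeled data set
X = [[i, i % 3] for i in range(30)]
y = [0 if i < 15 else 1 for i in range(30)]

# hold back part of the data to judge the model on unseen samples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier()             # 1. learn / train a model from example data
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)     # 2. analyze its quality on held-out data
new_prediction = model.predict([[40, 1]])  # 3. apply the model to new incoming data
print(accuracy, new_prediction)
```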
%% Cell type:markdown id: tags:
## Exercise section
%% Cell type:markdown id: tags:
Our example beer data set reflects the very personal opinion of one of the tutors about which beers he likes and which he does not. To learn a predictive model and to understand influential factors, all beers went through lab analysis to measure alcohol content, bitterness, darkness and fruitiness.