%% Cell type:markdown id: tags:
 
# Chapter 1: General Introduction to machine learning (ML)
 
%% Cell type:markdown id: tags:
 
## ML = "learning models from data"
 
 
### About models
 
A "model" allows us to explain observations and to answer questions. For example:
 
1. Where will my car, traveling at a given velocity, stop if I apply the brakes now?
2. Where in the night sky will I see the moon tonight?
3. Is the email I received spam?
4. Which article "X" should I recommend to a customer "Y"?
 
- The first two questions can be answered based on existing physical models (formulas).
 
- For questions 3 and 4 it is difficult to develop explicitly formulated models.
 
### What is needed to apply ML ?
 
Problems 3 and 4 have the following in common:
 
- No exact model known or implementable because we have a vague understanding of the problem domain.
- But enough data with sufficient and implicit information is available.
 
 
 
E.g. for the spam email example:
 
- We have no explicit formula for such a task (and devising one would boil down to lots of trial and error with different statistics or scores and possibly weighting them).
- We have a vague understanding of the problem domain: we know that some words are specific to spam emails and others are specific to my personal and work-related emails.
- My mailbox is full of examples of both spam and non-spam emails.
 
**In such cases machine learning offers approaches to build models based on example data.**
 
<div class="alert alert-block alert-info">
<i class="fa fa-info-circle"></i>
The closely-related concept of <strong>data mining</strong> usually means using predictive machine learning models to explicitly discover previously unknown knowledge from a specific data set, such as association rules between customer and article types in Problem 4 above.
</div>
 
 
 
## ML: what is "learning" ?
 
To create a predictive model, we must first **train** such a model on given data.
 
<div class="alert alert-block alert-info">
<i class="fa fa-info-circle"></i>
Alternative names for "to train" a model are "to <strong>fit</strong>" or "to <strong>learn</strong>" a model.
</div>
 
 
All ML algorithms have in common that they rely on internal data structures and/or parameters. Learning then builds up such data structures or adjusts parameters based on the given data. After that such models can be used to explain observations or to answer questions.
 
The important difference between explicit models and models learned from data:
 
- Explicit models usually offer exact answers to questions
- Models we learn from data usually come with inherent uncertainty.
 
%% Cell type:markdown id: tags:
 
 
## Some history
 
Some parts of ML are older than you might think. This is a rough time line with a few selected achievements from this field:
 
- 1805: Least squares regression
- 1812: Bayes' rule
- 1913: Markov chains
- 1951: First neural network
- 1957-65: "k-means" clustering algorithm
- 1959: Term "machine learning" is coined by Arthur Samuel, an AI pioneer
- 1969: Book "Perceptrons": limitations of neural networks
- 1974-86: Neural networks learning breakthrough: backpropagation method
- 1984: Book "Classification And Regression Trees"
- 1995: Random forests and support vector machines methods
- 1998: Public appearance: first ML implementations of spam filtering methods; naive Bayes classifier method
- 2006-12: Neural networks learning breakthrough: deep learning
 
So the field is not as new as one might think, but due to
 
- more available data
- more processing power
- development of better algorithms
 
more applications of machine learning appeared during the last 15 years.
 
%% Cell type:markdown id: tags:
 
## Machine learning with Python
 
Currently (2018) `Python` is the dominant programming language for ML. The advent of deep learning in particular pushed this forward: frameworks such as `TensorFlow` and `PyTorch` offered `Python` interfaces from their first releases.
 
The prevalent packages in the Python eco-system used for ML include:
 
- `pandas` for handling tabular data
- `matplotlib` and `seaborn` for plotting
- `scikit-learn` for classical (non-deep-learning) ML
- `TensorFlow`, `PyTorch` and `Keras` for deep-learning.
 
`scikit-learn` is very comprehensive and its online documentation itself provides a good introduction into ML.
 
%% Cell type:markdown id: tags:
 
## ML lingo: What are "features" ?
 
A typical and very common situation is that our data is presented as a table, as in the following example:
 
%% Cell type:code id: tags:
 
``` python
import pandas as pd
 
features = pd.read_csv("beers.csv")
features.head()
```
 
%% Output
 
alcohol_content bitterness darkness fruitiness is_yummy
0 3.739295 0.422503 0.989463 0.215791 0
1 4.207849 0.841668 0.928626 0.380420 0
2 4.709494 0.322037 5.374682 0.145231 1
3 4.684743 0.434315 4.072805 0.191321 1
4 4.148710 0.570586 1.461568 0.260218 0
 
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-warning">
<i class="fa fa-warning"></i>&nbsp;<strong>Definitions</strong>
<ul>
<li>every row of such a matrix is called a <strong>sample</strong> or <strong>feature vector</strong>;</li>
<li>the cells in a row are <strong>feature values</strong>;</li>
<li>every column name is called a <strong>feature name</strong> or <strong>attribute</strong>.</li>
</ul>
 
Features are also commonly called <strong>variables</strong>.
</div>
 
%% Cell type:markdown id: tags:
 
The table shown holds the first five samples.
 
The feature names are `alcohol_content`, `bitterness`, `darkness`, `fruitiness` and `is_yummy`.
 
<div class="alert alert-block alert-warning">
<i class="fa fa-warning"></i>&nbsp;<strong>More definitions</strong>
<ul>
<li>The first four features have continuous numerical values within some ranges - these are called <strong>numerical features</strong>,</li>
<li>the <code>is_yummy</code> feature has only a finite set of values ("categories"): <code>0</code> ("no") and <code>1</code> ("yes") - this is called a <strong>categorical feature</strong>.</li>
</ul>
</div>
 
%% Cell type:markdown id: tags:
 
A straightforward application of machine learning on the previous beer dataset is: **"Can we predict `is_yummy` from the other features?"**
 
<div class="alert alert-block alert-warning">
<i class="fa fa-warning"></i>&nbsp;<strong>Even more definitions</strong>
 
In context of the question above we call:
<ul>
<li>the <code>alcohol_content</code>, <code>bitterness</code>, <code>darkness</code>, <code>fruitiness</code> features our <strong>input features</strong>, and</li>
<li>the <code>is_yummy</code> feature our <strong>target/output feature</strong> or a <strong>label</strong> of our data samples.
<ul>
<li>Values of categorical labels, such as <code>0</code> ("no") and <code>1</code> ("yes") here, are often called <strong>classes</strong>.</li>
</ul>
</li>
</ul>
</div>
 
%% Cell type:markdown id: tags:
 
Most machine learning algorithms require that every sample is represented as a vector of numbers. Let's now look at two examples of how one can create feature vectors from data which is not naturally given as vectors:
 
1. Feature vectors from images
2. Feature vectors from text.
 
### 1st Example: How to represent images as feature vectors?
 
In order to simplify our explanations we only consider grayscale images in this section.
Computers represent images as matrices. Every cell in the matrix represents one pixel, and the numerical value in the matrix cell its gray value.
 
So how can we represent images as vectors?
 
To demonstrate this we will now load a sample dataset that is included in `scikit-learn`:
 
%% Cell type:code id: tags:
 
``` python
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
 
%% Cell type:code id: tags:
 
``` python
dd = load_digits()
print(dir(dd))
```
 
%% Output
 
['DESCR', 'data', 'images', 'target', 'target_names']
 
%% Cell type:code id: tags:
 
``` python
print("DESCR:\n", dd.DESCR[:500], "\n[...]") # description of the dataset
```
 
%% Output
 
DESCR:
Optical Recognition of Handwritten Digits Data Set
===================================================
Notes
-----
Data Set Characteristics:
:Number of Instances: 5620
:Number of Attributes: 64
:Attribute Information: 8x8 image of integer pixels in the range 0..16.
:Missing Attribute Values: None
:Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)
:Date: July; 1998
This is a copy of the test set of the UCI ML hand-written digits datasets
http://archive.ics.uci.edu/ml/datas
[...]
 
%% Cell type:markdown id: tags:
 
Let's plot the first ten digits from this data set:
 
%% Cell type:code id: tags:
 
``` python
N = 10
 
plt.figure(figsize=(2 * N, 5))
 
for i, image in enumerate(dd.images[:N]):
    plt.subplot(1, N, i + 1).set_title(dd.target[i])
    plt.imshow(image, cmap="gray")
```
 
%% Output
 
 
%% Cell type:markdown id: tags:
 
The data is a set of 8 x 8 matrices with values 0 to 15 (black to white). The range 0 to 15 is fixed for this specific data set. Other formats allow e.g. values 0..255 or floating point values in the range 0 to 1.
 
%% Cell type:code id: tags:
 
``` python
print("images[0].shape:", dd.images[0].shape) # dimensions of a first sample array
print()
print("images[0]:\n", dd.images[0]) # first sample array
```
 
%% Output
 
images[0].shape: (8, 8)
images[0]:
[[ 0. 0. 5. 13. 9. 1. 0. 0.]
[ 0. 0. 13. 15. 10. 15. 5. 0.]
[ 0. 3. 15. 2. 0. 11. 8. 0.]
[ 0. 4. 12. 0. 0. 8. 8. 0.]
[ 0. 5. 8. 0. 0. 9. 8. 0.]
[ 0. 4. 11. 0. 1. 12. 7. 0.]
[ 0. 2. 14. 5. 10. 12. 0. 0.]
[ 0. 0. 6. 13. 10. 0. 0. 0.]]
 
%% Cell type:markdown id: tags:
 
To transform such an image to a feature vector we just have to flatten the matrix by concatenating the rows to one single vector of size 64:
 
%% Cell type:code id: tags:
 
``` python
image_vector = dd.images[0].flatten()
print("image_vector.shape:", image_vector.shape)
print("image_vector:", image_vector)
```
 
%% Output
 
image_vector.shape: (64,)
image_vector: [ 0. 0. 5. 13. 9. 1. 0. 0. 0. 0. 13. 15. 10. 15. 5. 0. 0. 3.
15. 2. 0. 11. 8. 0. 0. 4. 12. 0. 0. 8. 8. 0. 0. 5. 8. 0.
0. 9. 8. 0. 0. 4. 11. 0. 1. 12. 7. 0. 0. 2. 14. 5. 10. 12.
0. 0. 0. 0. 6. 13. 10. 0. 0. 0.]
 
%% Cell type:markdown id: tags:
 
### 2nd Example: How to represent textual data as feature vectors?
 
%% Cell type:markdown id: tags:
 
If we start a machine learning project for texts, we first have to choose a dictionary (a set of words) for this project. The words in the dictionary are enumerated. The final representation of a text as a feature vector depends on this dictionary.
 
Such a dictionary can be very large, but for the sake of simplicity we use a very small enumerated dictionary to explain the overall procedure:
 
 
| Word | Index |
|----------|-------|
| like | 0 |
| dislike | 1 |
| american | 2 |
| italian | 3 |
| beer | 4 |
| pizza | 5 |
 
To "vectorize" a given text we count the words in the text which also exist in the vocabulary and put the counts at the given `Index`.
 
E.g. `"I dislike american pizza, but american beer is nice"`:
 
| Word | Index | Count |
|----------|-------|-------|
| like | 0 | 0 |
| dislike | 1 | 1 |
| american | 2 | 2 |
| italian | 3 | 0 |
| beer | 4 | 1 |
| pizza | 5 | 1 |
 
The respective feature vector is the `Count` column, which is:
 
`[0, 1, 2, 0, 1, 1]`
 
In real-world scenarios the dictionary is much bigger, which often results in vectors with only a few non-zero entries (so-called **sparse vectors**).
 
%% Cell type:markdown id: tags:
 
Below is a short code example to demonstrate how text feature vectors can be created with `scikit-learn`.
<div class="alert alert-block alert-info">
<i class="fa fa-info-circle"></i>
Such vectorization is usually not done manually. In practice there are improved but more complicated procedures which compute multiplicative weights for the vector entries to emphasize informative words, e.g. the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html">"term frequency-inverse document frequency" (TF-IDF) vectorizer</a>.
</div>
 
%% Cell type:code id: tags:
 
``` python
from sklearn.feature_extraction.text import CountVectorizer

vocabulary = {
    "like": 0,
    "dislike": 1,
    "american": 2,
    "italian": 3,
    "beer": 4,
    "pizza": 5,
}

vectorizer = CountVectorizer(vocabulary=vocabulary)

# this is how one can create a count vector for a given piece of text:
vector = vectorizer.fit_transform([
    "I dislike american pizza. But american beer is nice"
]).toarray().flatten()
print(vector)
```
 
%% Output
 
[0 1 2 0 1 1]
 
%% Cell type:markdown id: tags:
 
## ML lingo: What are the different types of datasets?
<div class="alert alert-block alert-danger">
<strong>TODO:</strong> move to later section about cross validation.</div>
<div class="alert alert-block alert-warning">
<i class="fa fa-warning"></i>&nbsp;<strong>Definitions</strong>
Subset of data used for:
<ul>
<li>learning (training) a model is called the <strong>training set</strong>;</li>
<li>improving ML method performance by adjusting its parameters is called the <strong>validation set</strong>;</li>
<li>assessing the final performance is called the <strong>test set</strong>.</li>
</ul>
</div>
<table>
<tr>
<td><img src="./data_split.png" width=300px></td>
</tr>
<tr>
<td style="font-size:75%"><center>Img source: https://dziganto.github.io</center></td>
</tr>
</table>
You will learn more about how to wisely select subsets of your data and about related issues later in the course. For now just remember that:
1. the training and validation datasets must be disjoint during each iteration of the method improvement, and
2. the test dataset must be independent from the model (hence, from the other datasets), i.e. it is indeed used only for the final assessment of the method's performance (think: locked in the safe until you're done with model tweaking). A minimal code sketch of such a split follows below.
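%% Cell type:markdown id: tags:

As a minimal sketch (assuming the `beers.csv` data used throughout this script), such a train/test split could be done with `train_test_split` from `scikit-learn`; the validation set is usually created later via cross-validation:

%% Cell type:code id: tags:

``` python
import pandas as pd
from sklearn.model_selection import train_test_split

beer_data = pd.read_csv("beers.csv")

features = beer_data.iloc[:, :-1]
labels = beer_data.iloc[:, -1]

# hold back 20% of the samples as test set ("locked in the safe"),
# the remaining 80% is available for training and validation
features_train, features_test, labels_train, labels_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

print("training samples:", len(features_train))
print("test samples:", len(features_test))
```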
%% Cell type:markdown id: tags:
## Taxonomy of machine learning
 
Most applications of ML belong to two categories: **supervised** and **unsupervised** learning.
 
### Supervised learning
 
In supervised learning the data comes with an additional target/label value that we want to predict. Such a problem can be either
 
- **classification**: we want to predict a categorical value.
 
- **regression**: we want to predict a continuous numerical value.
 
 
 
Examples of supervised learning:
 
- Classification: predict the class `is_yummy` based on the attributes `alcohol_content`, `bitterness`, `darkness` and `fruitiness` (a standard two-class problem).
 
- Classification: predict the digit shown based on an 8 x 8 pixel image (a multi-class problem).

- Regression: predict the temperature based on how long the sun was shining in the last 10 minutes.
 
 
 
<table>
<tr>
<td><img src="./classification-svc-2d-poly.png" width=400px></td>
<td><img src="./regression-lin-1d.png" width=400px></td>
</tr>
<tr>
<td><center>Classification</center></td>
<td><center>Linear regression</center></td>
</tr>
</table>
 
%% Cell type:markdown id: tags:
 
### Unsupervised learning
 
In unsupervised learning the training data consists of samples without any corresponding target/label values and the aim is to find structure in data. Some common applications are:
 
- Clustering: find groups in data.
- Density estimation, novelty detection: find a probability distribution in your data.
- Dimension reduction (e.g. PCA): find latent structures in your data.
 
Examples of unsupervised learning:
 
- Can we split up our beer data set into sub-groups of similar beers?
- Can we reduce our data set because groups of features are somehow correlated?
 
<table>
<tr>
<td><img src="./cluster-image.png/" width=400px></td>
<td><img src="./nonlin-pca.png/" width=400px></td>
</tr>
<tr>
<td><center>Clustering</center></td>
<td><center>Dimension reduction: detecting 2D structure in 3D data</center></td>
</tr>
</table>
 
 
 
This course will only introduce concepts and methods from **supervised learning**.
 
%% Cell type:markdown id: tags:
 
## How to apply machine learning in practice?
 
Application of machine learning in practice consists of several phases:
 
1. Understand and clean your data.
2. Learn / train a model.
3. Analyze the model for its quality / performance.
4. Apply this model to new incoming data.

In practice steps 2 and 3 are iterated for different machine learning algorithms with different configurations until performance is optimal or sufficient.
 
%% Cell type:markdown id: tags:
 
# Hands-on section
 
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-danger">
<strong>TODO:</strong> hands-on or exercise? If latter, then transform to a set of small exercises and mark solutions cells w/ <code>#SOLUTION</code> first line.
</div>
 
 
%% Cell type:markdown id: tags:
 
Our example beer data set reflects the very personal opinion of one of the tutors about which beers he likes and which he does not. To learn a predictive model and to understand influential factors, all beers went through some lab analysis to measure alcohol content, bitterness, darkness and fruitiness.
 
%% Cell type:markdown id: tags:
 
### 1. Load the data and show the overall structure using `pandas`
 
%% Cell type:code id: tags:
 
``` python
import pandas as pd
 
# read some data
beer_data = pd.read_csv("beers.csv")
print(beer_data.shape)
```
 
%% Output
 
(225, 5)
 
%% Cell type:code id: tags:
 
``` python
# show first 5 rows
beer_data.head(5)
 
# there is also beer_data.tail(5) !
```
 
%% Output
 
alcohol_content bitterness darkness fruitiness is_yummy
0 3.739295 0.422503 0.989463 0.215791 0
1 4.207849 0.841668 0.928626 0.380420 0
2 4.709494 0.322037 5.374682 0.145231 1
3 4.684743 0.434315 4.072805 0.191321 1
4 4.148710 0.570586 1.461568 0.260218 0
 
%% Cell type:code id: tags:
 
``` python
# show basic statistics of the data
beer_data.describe()
```
 
%% Output
 
alcohol_content bitterness darkness fruitiness is_yummy
count 225.000000 225.000000 225.000000 225.000000 225.000000
mean 4.711873 0.463945 2.574963 0.223111 0.528889
std 0.437040 0.227366 1.725916 0.117272 0.500278
min 3.073993 0.000000 0.000000 0.000000 0.000000
25% 4.429183 0.281291 1.197640 0.135783 0.000000
50% 4.740846 0.488249 2.026548 0.242396 1.000000
75% 5.005170 0.631056 4.043995 0.311874 1.000000
max 5.955272 1.080170 7.221285 0.535315 1.000000
 
%% Cell type:markdown id: tags:
 
### 2. Visually inspect data using `seaborn`

Such checks are very useful before you start throwing ML at your data. Some vague understanding of how the features are distributed and correlated can later be very helpful to optimize the performance of ML procedures.
 
 
%% Cell type:code id: tags:
 
``` python
import seaborn as sns
sns.set(style="ticks")
 
for_plot = beer_data.copy()
 
def translate_label(value):
    # seaborn has issues if labels are numbers or strings which represent numbers,
    # for whatever reason "real" text labels work
    return "no" if value == 0 else "yes"

for_plot["is_yummy"] = for_plot["is_yummy"].apply(translate_label)
 
sns.pairplot(for_plot, hue="is_yummy", diag_kind="hist");
```
 
%% Output
 
 
%% Cell type:markdown id: tags:
 
What do we see?
 
- Points and colors don't look randomly distributed.
- We can see that some pairs like `darkness` vs `bitterness` seem to carry information which could support building a classifier.
- We also see that `bitterness` and `fruitiness` show correlation.
 
Features which show no structure can also decrease the performance of ML, and often it makes sense to discard them.
 
%% Cell type:markdown id: tags:
 
### 3. Prepare data: split features and labels
 
%% Cell type:code id: tags:
 
``` python
# all columns up to the last one:
input_features = beer_data.iloc[:, :-1]
 
# only the last column:
labels = beer_data.iloc[:, -1]
 
print('# INPUT FEATURES')
print(input_features.head(5))
print('...')
print(input_features.shape)
print()
print('# LABELS')
print(labels.head(5))
print('...')
print(labels.shape)
```
 
%% Output
 
# INPUT FEATURES
alcohol_content bitterness darkness fruitiness
0 3.739295 0.422503 0.989463 0.215791
1 4.207849 0.841668 0.928626 0.380420
2 4.709494 0.322037 5.374682 0.145231
3 4.684743 0.434315 4.072805 0.191321
4 4.148710 0.570586 1.461568 0.260218
...
(225, 4)
# LABELS
0 0
1 0
2 1
3 1
4 0
Name: is_yummy, dtype: int64
...
(225,)
 
%% Cell type:markdown id: tags:
 
### 4. Start machine learning using `scikit-learn`
 
%% Cell type:markdown id: tags:
 
Let's finally do some machine learning starting with the so called `LogisticRegression` classifier from `scikit-learn` package. The intention here is to experiment first. Details of this and further ML algorithms are not necessary at this point, but do not worry, they will come later during the course.
 
<div class="alert alert-block alert-info">
<i class="fa fa-info-circle"></i>
<code>LogisticRegression</code> is a classification method, even though its name contains "regression", the name of the other group of supervised learning methods. In fact, logistic regression uses (linear) regression internally; its result is then transformed (using the logistic function) into a probability of belonging to one of the two classes.
</div>
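%% Cell type:markdown id: tags:

As a small illustration of that last point (a sketch with made-up weights, not the parameters of the classifier we train below): a linear combination of the features is passed through the logistic (sigmoid) function, which squashes it into the range 0 to 1 so it can be read as a probability:

%% Cell type:code id: tags:

``` python
import numpy as np

def sigmoid(z):
    # logistic function: maps any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical weights and intercept of a linear model (illustrative values only)
weights = np.array([0.5, -1.2, 0.8, 0.1])
intercept = -2.0

# one hypothetical feature vector: alcohol_content, bitterness, darkness, fruitiness
x = np.array([4.5, 0.4, 2.0, 0.2])

probability_of_class_1 = sigmoid(weights @ x + intercept)
print("predicted probability for class 1:", probability_of_class_1)
```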
 
%% Cell type:code id: tags:
 
``` python
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
```
 
%% Output
 
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
 
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-warning">
<i class="fa fa-warning"></i>&nbsp;<strong>Built-in documentation</strong>
 
If you want to learn more about <code>LogisticRegression</code> you can use <code>help(LogisticRegression)</code> or <code>?LogisticRegression</code> to see the related documentation. The latter version works only in Jupyter notebooks (or in the IPython shell).
</div>
 
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-warning">
<i class="fa fa-warning"></i>&nbsp;<strong>`scikit-learn` API</strong>
 
In <code>scikit-learn</code> all classifiers have:
<ul>
<li>a <strong><code>fit()</code></strong> method to learn from data, and</li>
<li>a subsequent <strong><code>predict()</code></strong> method for predicting classes from input features.</li>
</ul>
</div>
 
%% Cell type:code id: tags:
 
``` python
# Sanity check: can't predict if not fitted (trained)
classifier.predict(input_features)
```
 
%% Output
 
---------------------------------------------------------------------------
NotFittedError Traceback (most recent call last)
<ipython-input-15-9e1ed3d39774> in <module>()
1 # Sanity check: can't predict if not fitted (trained)
----> 2 classifier.predict(input_features)
 
~/Projects/machinelearning-introduction-workshop/venv3.6/lib/python3.6/site-packages/sklearn/linear_model/base.py in predict(self, X)
322 Predicted class label per sample.
323 """
--> 324 scores = self.decision_function(X)
325 if len(scores.shape) == 1:
326 indices = (scores > 0).astype(np.int)
~/Projects/machinelearning-introduction-workshop/venv3.6/lib/python3.6/site-packages/sklearn/linear_model/base.py in decision_function(self, X)
296 if not hasattr(self, 'coef_') or self.coef_ is None:
297 raise NotFittedError("This %(name)s instance is not fitted "
--> 298 "yet" % {'name': type(self).__name__})
299
300 X = check_array(X, accept_sparse='csr')
NotFittedError: This LogisticRegression instance is not fitted yet
 
%% Cell type:code id: tags:
 
``` python
# Fit
classifier.fit(input_features, labels)
 
# Predict
predicted_labels = classifier.predict(input_features)
print(predicted_labels.shape)
```
 
%% Output
 
(225,)
 
%% Cell type:markdown id: tags:
 
Here we've just re-classified our training data. Let's check our results with a few examples:
 
%% Cell type:code id: tags:
 
``` python
for i in range(5):
    print(labels[i], predicted_labels[i])
```
 
%% Output
 
0 0
0 1
1 1
1 1
0 0
 
%% Cell type:markdown id: tags:
 
This looks suspicious!

Let's investigate this further:
 
%% Cell type:code id: tags:
 
``` python
print(len(labels), "examples")
print(sum(predicted_labels == labels), "labeled correctly")
```
 
%% Output
 
225 examples
187 labeled correctly
 
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-info">
<i class="fa fa-info-circle"></i>
<code>predicted_labels == labels</code> evaluates to a vector of <code>True</code> or <code>False</code> Boolean values. When used as numbers, Python handles <code>True</code> as <code>1</code> and <code>False</code> as <code>0</code>. So, <code>sum(...)</code> simply counts the correctly predicted labels.
</div>
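%% Cell type:markdown id: tags:

A tiny illustration of this Boolean counting trick (the values below are arbitrary):

%% Cell type:code id: tags:

``` python
import numpy as np

true_labels = np.array([0, 1, 1, 0])
predicted = np.array([0, 1, 0, 0])

matches = (predicted == true_labels)
print(matches)       # [ True  True False  True]
print(sum(matches))  # True counts as 1, False as 0 -> 3
```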
 
%% Cell type:markdown id: tags:
 
## What happened?
 
Why were not all labels predicted correctly?
 
Neither `Python` nor `scikit-learn` is broken. What we observed above is very typical for machine-learning applications.
 
Reasons could be:
 
- we have incomplete information: other features of the beers which also contribute to the rating (like "maltiness") were not measured or cannot be measured.

- the classifier we used might not be suitable for the given problem.

- noise in the data, such as incorrectly assigned labels, also affects results.
 
 
**Finding good features is crucial for the performance of ML algorithms!**
 
 
Another important requirement is to make sure that you have clean data: input features might be corrupted by flawed entries, and feeding such data into an ML algorithm will usually lead to reduced performance.
 
%% Cell type:markdown id: tags:
 
# Exercise section 1
 
%% Cell type:markdown id: tags:
 
### 1. Compare with alternative machine learning method from `scikit-learn`
 
%% Cell type:markdown id: tags:
 
Now, using previously loaded and prepared beer data, train a different `scikit-learn` classifier - the so called **Support Vector Classifier** `SVC`, and evaluate its "re-classification" performance again.
 
<div class="alert alert-block alert-info">
<i class="fa fa-info-circle"></i>
<code>SVC</code> belongs to a class of algorithms named "Support Vector Machines" (SVMs). Again, it will be discussed in more detail in the following scripts.
</div>
 
%% Cell type:code id: tags:
 
``` python
from sklearn.svm import SVC
classifier = SVC()
# ...
```
 
%% Cell type:code id: tags:
 
``` python
#SOLUTION
classifier = SVC()
classifier.fit(input_features, labels)
 
predicted_labels = classifier.predict(input_features)
 
assert(predicted_labels.shape == labels.shape)
print(len(labels), "examples")
print(sum(predicted_labels == labels), "labeled correctly")
```
 
%% Output
 
225 examples
205 labeled correctly
 
%% Cell type:markdown id: tags:
 
Better?
 
<div class="alert alert-block alert-info">
<i class="fa fa-info-circle"></i>
Better re-classification on our example data does not indicate that <code>SVC</code> is better than <code>LogisticRegression</code> in all cases. The performance of a classifier strongly depends on the data set.
</div>
 
 
 
%% Cell type:markdown id: tags:
 
### 2. Experiment with (hyper)parameters of ML methods
 
%% Cell type:markdown id: tags:
 
Both `LogisticRegression` and `SVC` classifiers have a parameter `C` which allows you to enforce a "simplification" (often called **regularization**) of the resulting model. Test the beer data "re-classification" with different values of this parameter.
 
 
**TO BE discussed**: is "regularization" too technical here? Decision surfaces and details of classifiers come later. The original purpose (Uwe) was to demonstrate that classifiers can be tuned to the data set.
 
%% Cell type:code id: tags:
 
``` python
# Recall: ?LogisticRegression
# ...
```
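%% Cell type:markdown id: tags:

One possible way to experiment (just a sketch, not the official solution): loop over a few values of `C` and compare the re-classification results of both classifiers:

%% Cell type:code id: tags:

``` python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# try a few values of the regularization parameter C
for C in [0.01, 0.1, 1, 2, 10, 100]:
    for classifier in [LogisticRegression(C=C), SVC(C=C)]:
        classifier.fit(input_features, labels)
        predicted_labels = classifier.predict(input_features)
        n_correct = sum(predicted_labels == labels)
        name = classifier.__class__.__name__
        print("C = {:6.2f}  {:20s} {} / {} labeled correctly".format(
            C, name, n_correct, len(labels)))
    print()
```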
 
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-danger">
<strong>TODO:</strong> prepare a solution.
 
**TODO**: Consider the case C=2 as this is used when describing overfitting. Or: if we find a better C here, don't forget to adapt the examples in the overfitting script.

Also explain that details about classifiers and parameters come in script 05.
**TODO**: Explain that C is not available for all classifiers; it is more a coincidence that LogisticRegression and SVC offer this setting.
 
 
</div>
 
%% Cell type:markdown id: tags:
 
# Exercise section 2 (optional)
 
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-danger">
<strong>TODO:</strong> finish solution - missing classification and "re-classification" assesment.
</div>
 
%% Cell type:markdown id: tags:
 
Load and inspect the canonical Fisher's "Iris" data set, which is included in `scikit-learn`: see [docs for `sklearn.datasets.load_iris`](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html). What's conceptually different?
 
Apply `LogisticRegression` or `SVC` classifiers. Is it easier or more difficult than classification of the beers data?
 
 
%% Cell type:code id: tags:
 
``` python
from sklearn.datasets import load_iris
 
data = load_iris()
 
# labels as text
print(data.target_names)
 
# (rows, columns) of the feature matrix:
print(data.data.shape)
```
 
%% Output
 
['setosa' 'versicolor' 'virginica']
(150, 4)
 
%% Cell type:code id: tags:
 
``` python
# transform the scikit-learn data structure into a data frame:
df = pd.DataFrame(data.data, columns=data.feature_names)
df["class"] = data.target
df.head()
```
 
%% Output
 
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) \
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
class
0 0
1 0
2 0
3 0
4 0
 
%% Cell type:code id: tags:
 
``` python
df.describe()
```
 
%% Output
 
sepal length (cm) sepal width (cm) petal length (cm) \
count 150.000000 150.000000 150.000000
mean 5.843333 3.054000 3.758667
std 0.828066 0.433594 1.764420
min 4.300000 2.000000 1.000000
25% 5.100000 2.800000 1.600000
50% 5.800000 3.000000 4.350000
75% 6.400000 3.300000 5.100000
max 7.900000 4.400000 6.900000
petal width (cm) class
count 150.000000 150.000000
mean 1.198667 1.000000
std 0.763161 0.819232
min 0.100000 0.000000
25% 0.300000 0.000000
50% 1.300000 1.000000
75% 1.800000 2.000000
max 2.500000 2.000000
 
%% Cell type:code id: tags:
 
``` python
import seaborn as sns
sns.set(style="ticks")
 
for_plot = df.copy()
 
def transform_label(class_):
    return data.target_names[class_]

# seaborn does not work here if we use numeric values in the class
# column, or strings which represent numbers. To fix this we
# create textual class labels
for_plot["class"] = for_plot["class"].apply(transform_label)
sns.pairplot(for_plot, hue="class", diag_kind="hist") ;
```
 
%% Output
 
 
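%% Cell type:markdown id: tags:

One way to continue (a sketch, not a definitive solution): train a classifier on the iris features and check its re-classification performance, just as for the beers data. Note that this is now a three-class problem:

%% Cell type:code id: tags:

``` python
from sklearn.linear_model import LogisticRegression

iris_features = df.iloc[:, :-1]
iris_labels = df.iloc[:, -1]

# solver and max_iter are chosen here only to avoid convergence warnings
classifier = LogisticRegression(solver="lbfgs", max_iter=1000)
classifier.fit(iris_features, iris_labels)

predicted = classifier.predict(iris_features)
print(len(iris_labels), "examples")
print(sum(predicted == iris_labels), "labeled correctly")
```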
%% Cell type:markdown id: tags:
 
<div class="alert alert-block alert-danger">
<strong>TODO:</strong> hide tech stuff below.
</div>
 
%% Cell type:code id: tags:
 
``` python
#REMOVEBEGIN
# THE LINES BELOW ARE JUST FOR STYLING THE CONTENT ABOVE !
 
from IPython import utils
from IPython.core.display import HTML
import os
def css_styling():
    """Load default custom.css file from ipython profile"""
    base = utils.path.get_ipython_dir()
    styles = """<style>
 
@import url('http://fonts.googleapis.com/css?family=Source+Code+Pro');
 
@import url('http://fonts.googleapis.com/css?family=Kameron');
@import url('http://fonts.googleapis.com/css?family=Crimson+Text');
 
@import url('http://fonts.googleapis.com/css?family=Lato');
@import url('http://fonts.googleapis.com/css?family=Source+Sans+Pro');
 
@import url('http://fonts.googleapis.com/css?family=Lora');
 
 
body {
font-family: 'Lora', Consolas, sans-serif;
 
-webkit-print-color-adjust: exact important !;
 
 
 
}
 
.alert-block {
width: 95%;
margin: auto;
}
 
.rendered_html code
{
color: black;
background: #eaf0ff;
background: #f5f5f5;
padding: 1pt;
font-family: 'Source Code Pro', Consolas, monocco, monospace;
}
 
p {
line-height: 140%;
}
 
strong code {
background: red;
}
 
.rendered_html strong code
{
background: #f5f5f5;
}
 
.CodeMirror pre {
font-family: 'Source Code Pro', monocco, Consolas, monocco, monospace;
}
 
.cm-s-ipython span.cm-keyword {
font-weight: normal;
}
 
strong {
background: #f5f5f5;
margin-top: 4pt;
margin-bottom: 4pt;
padding: 2pt;
border: 0.5px solid #a0a0a0;
font-weight: bold;
color: darkred;
}
 
 
div #notebook {
# font-size: 10pt;
line-height: 145%;
}
 
li {
line-height: 145%;
}
 
div.output_area pre {
background: #fff9d8 !important;
padding: 5pt;
 
-webkit-print-color-adjust: exact;
 
}
 
 
 
h1, h2, h3, h4 {
font-family: Kameron, arial;
}
 
div#maintoolbar {display: none !important;}
</style>"""
    return HTML(styles)
css_styling()
#REMOVEEND
```
 
%% Output
 
/Users/uweschmitt/Projects/machinelearning-introduction-workshop/venv3.6/lib/python3.6/site-packages/ipykernel_launcher.py:9: UserWarning: get_ipython_dir has moved to the IPython.paths module since IPython 4.0.
if __name__ == '__main__':
 
<IPython.core.display.HTML object>
%% Cell type:markdown id: tags:
# Chapter 4: Metrics for evaluating the performance of a classifier
%% Cell type:code id: tags:
``` python
import sklearn.metrics as metrics
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
%% Cell type:markdown id: tags:
Up to now we used _accuracy_, the percentage of correct classifications, to evaluate the quality of a classifier.

Regrettably _accuracy_ can produce very misleading results.

This and the next chapter discuss other metrics to assess the quality of a classifier, including possible pitfalls.
%% Cell type:markdown id: tags:
## 1. The confusion matrix
%% Cell type:markdown id: tags:
Before we define the **confusion matrix** we must introduce some additional terms.

After applying a classifier to a data set with known labels `0` and `1`:

**TP (true positives)**: labels which were predicted as `1` and actually are `1`.

**TN (true negatives)**: labels which were predicted as `0` and actually are `0`.

**FP (false positives)**: labels which were predicted as `1` and actually are `0`.

**FN (false negatives)**: labels which were predicted as `0` and actually are `1`.
%% Cell type:markdown id: tags:
To memorize this: the second word "positives"/"negatives" refers to the prediction computed by the classifier.
The first word "true"/"false" expresses if the classification was correct or not.

Using these terms we can now define the so-called **confusion matrix**:
%% Cell type:code id: tags:
``` python
pd.DataFrame(np.array([["TP", "FP"], ["FN", "TN"]]),
             index=["Predicted T", "Predicted F"],
             columns=["Actual T", "Actual F"])
```
%% Output
Actual T Actual F
Predicted T TP FP
Predicted F FN TN
%% Cell type:markdown id: tags:
So the total number of predictions can be expressed as `TP` + `FP` + `FN` + `TN`.
The number of correct predictions is `TP` + `TN`.
This allows us to define **accuracy** as (`TP` + `TN`) / (`TP` + `FP` + `FN` + `TN`).
Beyond that: `TP` + `FN` is the number of positive examples in our data set, `FP` + `TN` is the number of negative examples.
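%% Cell type:markdown id: tags:

As a quick numeric illustration of the accuracy formula (the counts below are arbitrary, chosen only for demonstration):

%% Cell type:code id: tags:

``` python
# hypothetical counts for a classifier evaluated on 100 samples
TP, FP, FN, TN = 40, 5, 10, 45

accuracy = (TP + TN) / (TP + FP + FN + TN)
print("accuracy:", accuracy)  # (40 + 45) / 100 = 0.85
```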
%% Cell type:markdown id: tags:
## Pitfalls
**Accuracy can be very misleading if class sizes are imbalanced.**
Let us demonstrate this with an extreme example:
- On average 10 out of 10000 people are infected with a disease `X`.
- A medical test `Z` diagnoses 50 % of infected people as `not infected`.
- The test is correct on all not-infected people.

Among $10000$ people:

- $10$ will be infected, $5$ of whom get a correct result.
- $9990$ will not be infected, all with a correct test result.

Thus the accuracy is $\frac{9995}{10000} = 99.95 \%$.
This is also called the **accuracy paradox** (<a href="https://en.wikipedia.org/wiki/Accuracy_paradox">see also here</a>).
To evaluate this test on such an unbalanced dataset we need different numbers:
1. Does our test miss infected people: how many infected people are actually discovered to be infected?
2. Does our test predict people as infected who are actually not: how many positive diagnoses are correct?
We come back to this example later.
**TODO**: in a later chapter or in a extra box provide links to strategies for imbalanced data sets.
%% Cell type:markdown id: tags:
## Exercise block 1
1.1 A classifier predicts labels `[0, 0, 1, 1, 1, 0, 1, 1]` whereas the true labels are `[0, 1, 0, 1, 1, 0, 1, 1]`. Write these values as a two-column table using pen & paper and assign `FP`, `TP`, ... to each pair. Determine the confusion matrix and the accuracy.

1.2 A random classifier just assigns a randomly chosen label `0` or `1` to a given feature vector. What is the average accuracy of such a classifier?

### Optional exercise

1.3 Assume the previously described test also produces wrong results on not-infected people, such that 5 out of 10000 will be diagnosed as infected. Compute the confusion matrix and the accuracy of this test.
%% Cell type:markdown id: tags:
## 2. Precision and Recall
In order to understand the concept of **precision** and **recall**, imagine the following scenario:

A few days before Thanksgiving you open an online recipe website and enter "turkey thanksgiving". You see some suitable recommendations but also unusable results related to Turkish recipes.

Such a search engine works like a filter applied on a collection of documents.

As a scientist you want to assess the reliability of this service:

1. What fraction of relevant recipes stored in the underlying database do I see?
2. How many of the shown results are relevant recipes and not recipes from Turkey?

In this context,

**recall** is the fraction of all the relevant documents found by the engine.

And

**precision** is the fraction of shown results that are correct.

### Trade-off between precision and recall

The more results the search engine delivers, the fewer relevant documents will be ignored. But at the same time the fraction of wrong results will increase.

The following two pictures show this trade-off.

A filter with a high precision also restrains some correct results:

<br>
<img src="./precision_high_recall_low.svg">

If we "open" such a filter, we increase recall, but we also risk that incorrect results show up:

<br>
<img src="./precision_low_recall_high.svg">
%% Cell type:markdown id: tags:
### How to compute precision and recall
To transfer this concept to classification, we can interpret a classifier as a filter. The classifier classifies every document in a collection as relevant or not relevant.
The number of shown documents is TP + FP, thus **precision** is computed as TP / (TP + FP).
The number of relevant documents is TP + FN, thus **recall** is computed as TP / (TP + FN).
The confusion matrix for the medical test `Z` is then:
<table style="border: 1px solid black">
<tr style="border: 1px black">
<td style="border: 1px solid black; background: white; padding: 1em">TP = 5</td>
<td style="border: 1px solid black; background: white; ">FP = 0</td>
</tr>
<tr style="border: 1px black">
<td style="border: 1px solid black; background: white; padding: 1em ">FN = 5</td>
<td style="border: 1px solid black; background: white; ">TN = 9990</td>
</tr>
</table>

Here precision is `1.0` and recall is `0.5`.
### F1-score
Sometimes we want a single number instead of two numbers to compare the performance of multiple classifiers.

A common approach to combine precision and recall is to compute their harmonic mean. This metric is called the **F1 score**:

`F1 = 2 * (precision * recall) / (precision + recall)`

For the medical test `Z` the `F1` score is `1 / 1.5 = 0.6666..`.
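%% Cell type:markdown id: tags:

As a small sanity check of these numbers in code (using the confusion-matrix counts of test `Z` from above):

%% Cell type:code id: tags:

``` python
# confusion matrix counts of the medical test Z
TP, FP, FN, TN = 5, 0, 5, 9990

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * (precision * recall) / (precision + recall)

print("precision:", precision)  # 1.0
print("recall:   ", recall)     # 0.5
print("f1:       ", f1)         # 0.666...
```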
%% Cell type:markdown id: tags:
## Exercise block 2
Use your results from exercise 1.1 to compute precision, recall and F1 score.
### Optional exercise:
Compute precision, recall and the F1 score for the test described in exercise 1.3.
%% Cell type:markdown id: tags:
## Other metrics
The discussion above was just a quick introduction to measuring the quality of a classifier. We skipped other metrics such as `ROC` and `AUC`.

A good introduction to `ROC` <a href="https://classeval.wordpress.com/introduction/introduction-to-the-roc-receiver-operating-characteristics-plot/">can be found here</a>.
%% Cell type:markdown id: tags:
## 3. Metrics in scikit-learn
%% Cell type:code id: tags:
``` python
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix, accuracy_score
```
%% Cell type:markdown id: tags:
`sklearn.metrics` contains many metrics like `precision_score` etc.; `classification_report` prints an overall report.
%% Cell type:code id: tags:
``` python
from sklearn.metrics import (precision_score, recall_score, f1_score,
confusion_matrix, accuracy_score, classification_report)
# these numbers are from exercise 1.1:
predicted = [0, 0, 1, 1, 1, 0, 1, 1]
labels = [0, 1, 0, 1, 1, 0, 1, 1]
print(confusion_matrix(labels, predicted))
print()
#
# The first argument of the metrics functions is the true labels,
# the second argument is the predictions:
#
print("{:20s} {:.3f}".format("precision", precision_score(labels, predicted)))
print("{:20s} {:.3f}".format("recall", recall_score(labels, predicted)))
print("{:20s} {:.3f}".format("f1", f1_score(labels, predicted)))
print("{:20s} {:.3f}".format("accuracy", accuracy_score(labels, predicted)))
print()
print(classification_report(labels, predicted))
```
%% Output
[[2 1]
[1 4]]
precision 0.800
recall 0.800
f1 0.800
accuracy 0.750
precision recall f1-score support
0 0.67 0.67 0.67 3
1 0.80 0.80 0.80 5
micro avg 0.75 0.75 0.75 8
macro avg 0.73 0.73 0.73 8
weighted avg 0.75 0.75 0.75 8
%% Cell type:markdown id: tags:
Comment: The `micro avg` and `macro avg` outputs account for class imbalances; in case you want to learn more about this, [read here](https://datascience.stackexchange.com/questions/15989/micro-average-vs-macro-average-performance-in-a-multiclass-classification-settin)
%% Cell type:markdown id: tags:
The function `cross_val_score` (introduced in the last script) allows using metrics other than `accuracy`.
We demonstrate the usage of different metrics on two data sets:
- the known beer data samples, in which the label distribution is almost 50:50;
- an unbalanced subset of the beer data samples.
%% Cell type:code id: tags:
``` python
import pandas as pd
beer_data = pd.read_csv("beers.csv")
print(beer_data.shape)
features = beer_data.iloc[:, :-1]
labels = beer_data.iloc[:, -1];
```
%% Output
(225, 5)
%% Cell type:code id: tags:
``` python
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer, confusion_matrix
from sklearn.linear_model import LogisticRegression

def assess(classifier, beer_data):
    features = beer_data.iloc[:, :-1]
    labels = beer_data.iloc[:, -1]
    n = len(labels)
    print("{:.1f} % of the beers are yummy".format(100 * sum(labels == 1) / n))
    print()
    for metric in ["accuracy", "f1", "precision", "recall"]:
        scores = cross_val_score(classifier, features, labels, scoring=metric, cv=5)
        print("    {:12s}: mean value: {:.2f}".format(metric, scores.mean()))
    print()

classifier = LogisticRegression(C=1, solver="lbfgs")

print("balanced data")
assess(classifier, beer_data)

# we sort by label, then removing samples is easier:
beer_data = beer_data.sort_values(by="is_yummy")

print("unbalanced data")
beer_data_unbalanced = beer_data.iloc[:-80, :]
assess(classifier, beer_data_unbalanced)
```
%% Output
balanced data
52.9 % of the beers are yummy
accuracy : mean value: 0.91
f1 : mean value: 0.92
precision : mean value: 0.89
recall : mean value: 0.96
unbalanced data
26.9 % of the beers are yummy
accuracy : mean value: 0.85
f1 : mean value: 0.63
precision : mean value: 0.82
recall : mean value: 0.56
%% Cell type:markdown id: tags:
You can see that for the balanced data set the values for `f1` and for `accuracy` are almost equal, but differ significantly for the unbalanced data set.
%% Cell type:markdown id: tags:
## Exercise section 3
1. Play with the previous examples; use different classifiers with different settings.

### Optional exercise

2. Modify the code from section 5 of the previous script ("Training the final classifier") to use different metrics.
%% Cell type:code id: tags:
``` python
#REMOVEBEGIN
# THE LINES BELOW ARE JUST FOR STYLING THE CONTENT ABOVE !
from IPython import utils
from IPython.core.display import HTML
import os
def css_styling():
    """Load default custom.css file from ipython profile"""
    base = utils.path.get_ipython_dir()
    styles = """<style>
@import url('http://fonts.googleapis.com/css?family=Source+Code+Pro');
@import url('http://fonts.googleapis.com/css?family=Kameron');
@import url('http://fonts.googleapis.com/css?family=Crimson+Text');
@import url('http://fonts.googleapis.com/css?family=Lato');
@import url('http://fonts.googleapis.com/css?family=Source+Sans+Pro');
@import url('http://fonts.googleapis.com/css?family=Lora');
body {
font-family: 'Lora', Consolas, sans-serif;
-webkit-print-color-adjust: exact important !;
}
.alert-block {
width: 95%;
margin: auto;
}
.rendered_html code
{
color: black;
background: #eaf0ff;
background: #f5f5f5;
padding: 1pt;
font-family: 'Source Code Pro', Consolas, monocco, monospace;
}
p {
line-height: 140%;
}
strong code {
background: red;
}
.rendered_html strong code
{
background: #f5f5f5;
}
.CodeMirror pre {
font-family: 'Source Code Pro', monocco, Consolas, monocco, monospace;
}
.cm-s-ipython span.cm-keyword {
font-weight: normal;
}
strong {
background: #f5f5f5;
margin-top: 4pt;
margin-bottom: 4pt;
padding: 2pt;
border: 0.5px solid #a0a0a0;
font-weight: bold;
color: darkred;
}
div #notebook {
# font-size: 10pt;
line-height: 145%;
}
li {
line-height: 145%;
}
div.output_area pre {
background: #fff9d8 !important;
padding: 5pt;
-webkit-print-color-adjust: exact;
}
h1, h2, h3, h4 {
font-family: Kameron, arial;
}
div#maintoolbar {display: none !important;}
</style>"""
    return HTML(styles)
css_styling()
#REMOVEEND
```
%% Output
/Users/uweschmitt/Projects/machinelearning-introduction-workshop/venv37/lib/python3.7/site-packages/ipykernel_launcher.py:9: UserWarning: get_ipython_dir has moved to the IPython.paths module since IPython 4.0.
if __name__ == '__main__':
<IPython.core.display.HTML object>
%% Cell type:code id: tags:
``` python
```
# Open a terminal and execute the following command to create the conda environment
# for the workshop
# 'conda env create -f mlw_packages.yml'
name: machine_learning_workshop
channels:
- conda-forge
dependencies:
- python==3.6
- pandas
- matplotlib
- scikit-learn
- seaborn
- jupyter
- keras