"Principal component analysis is a technique to reduce the dimensionality of a multi variate data set. One benefit of PCA is to remove redundancy in your data set, such as correlating columns or linear dependencies between columns.\n",
"\n",
"We discussed before that reducing redundancy and noise can help to avoid overfitting.\n",
"We've discussed before that reducing redundancy and noise can help to avoid overfitting.\n",
"<i class=\"fa fa-info-circle\"></i> One benefit of using a pipeline is that you will not mistakenly scale the full data set first, instead we follow the strategy we described above automatically.\n",
"<i class=\"fa fa-info-circle\"></i> One benefit of using a pipeline is that you will not mistakenly scale the full data set first, instead we follow the strategy we've described above automatically.\n",
"Classifiers and pipelines have parameters which must be adapted for improving performance (e.g. `gamma` or `C`). Finding good parameters is also called *hyperparameter optimization* to distinguish from the optimization done during learning of many classification algorithms.\n",
"\n",
"### Up to now we adapted such hyperparameters manually, but there are more systematic approaches !\n",
"\n",
"<img src=\"https://i.imgflip.com/3040hg.jpg\" title=\"made at imgflip.com\" width=50%/>"
"### Up to now we adapted such hyperparameters manually, but there are more systematic approaches !"
]
},
{
...
...
The specification of the grid is now a bit more complicated and follows the pattern `PROCESSOR__ARGUMENT`:

- first the name of the processor / classifier in lower case letters,
- then two underscores `__`,
- finally the name of the argument of the processor / classifier.

`StandardScaler` e.g. has parameters `with_mean` and `with_std`, which can be `True` or `False`.
# Chapter 6: Preprocessing, pipelines and hyperparameters optimization
%% Cell type:markdown id: tags:
## About transformations / preprocessing
%% Cell type:markdown id: tags:
We've seen before that adding polynomial features to the 2D `xor` and `circle` problem made both tasks treatable by a simple linear classifier.
Comment: we use *transformation* and *preprocessing* interchangeably.
Beyond adding polynomial features, there are other important preprocessors / transformers to mention:
### Scaler
A scaler applies a linear transformation on every feature. Those transformations are individual per column.
The two most important ones in `scikit-learn` are
- `MinMaxScaler`: after applying this scaler, the minimum in every column is 0, the maximum is 1.
- `StandardScaler`: scales columns to mean value 0 and standard deviation 1.
The reason to use a scaler is to compensate for different orders of magnitude of the features. Some classifiers like `SVC` and `KNeighborsClassifier` internally use Euclidean distances between features, which would give more weight to features with large values. So **don't forget to scale your features when using SVC or KNeighborsClassifier**!
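A small sketch (with a made-up matrix of two features on very different scales) shows the effect of both scalers:

``` python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# two features on very different scales (made-up values)
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

X_minmax = MinMaxScaler().fit_transform(X)
X_standard = StandardScaler().fit_transform(X)

print(X_minmax)    # every column now ranges from 0 to 1
print(X_standard)  # every column now has mean 0 and standard deviation 1
```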
### PCA
Principal component analysis is a technique to reduce the dimensionality of a multivariate data set. One benefit of PCA is that it removes redundancy in your data set, such as correlated columns or linear dependencies between columns.
We've discussed before that reducing redundancy and noise can help to avoid overfitting.
### Function transformers
It can help to apply functions like `log` or `exp` or `1/x` to features to improve classification performance.
Let's assume you want to forecast the outcome of car crash experiments and one variable is the time $t$ needed for the distance $l$ from start to crash. Transforming this to the actual speed $\frac{l}{t}$ could be a more informative feature than $t$.
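Such a transformation can be wrapped in scikit-learn's `FunctionTransformer`. A minimal sketch, with made-up values for $l$ and $t$:

``` python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

l = 10.0  # fixed distance from start to crash (made-up value)
t = np.array([[0.5], [1.0], [2.0]])  # measured times (made-up values)

# transform the time t into the speed l / t
to_speed = FunctionTransformer(lambda x: l / x)
print(to_speed.fit_transform(t))
```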
### Imputing missing values
Sometimes data contain missing values. Data imputation is a strategy to fill in missing values, e.g. with the columnwise mean or by applying another strategy.
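A short sketch of mean imputation with scikit-learn's `SimpleImputer` (the matrix is made up for illustration):

``` python
import numpy as np
from sklearn.impute import SimpleImputer

# a small matrix with two missing entries (made-up values)
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])

# replace every missing value by the mean of its column
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(X))
```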
%% Cell type:markdown id: tags:
## About scaling
%% Cell type:markdown id: tags:
As an example we demonstrate how a scaler can be implemented. Our scaling strategy will scale the given values to the range 0 to 1.
First we create a random data matrix and compute columnwise min and max values:
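A minimal sketch of this strategy (the random data matrix is just for demonstration):

``` python
import numpy as np

np.random.seed(42)
data = np.random.uniform(-5, 5, size=(5, 3))

# columnwise min and max values
mins = data.min(axis=0)
maxs = data.max(axis=0)

# scale every column to the range 0 to 1
scaled = (data - mins) / (maxs - mins)

print(scaled.min(axis=0))  # all 0
print(scaled.max(axis=0))  # all 1
```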
We've seen before that we can swap `scikit-learn` classifiers easily without changing much code.
This is possible because all classifiers have methods `.fit` and `.predict` which also share the same function signature (this means the number and meaning of the arguments is the same for every implementation of `.fit` and `.predict`, respectively).
This consistent design within `scikit-learn` also applies to preprocessors / transformers, which all have methods `.fit`, `.transform` and `.fit_transform`.
This consistent API allows setting up **processing pipelines**:
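For example (with made-up numbers), a `StandardScaler` fitted on training data can then transform new data with the parameters it learned:

``` python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[2.0], [4.0]])

scaler = StandardScaler()
scaler.fit(X_train)                    # learn mean and std from training data only
print(scaler.transform(X_test))       # apply the learned transformation to new data
print(scaler.fit_transform(X_train))  # fit and transform in one step
```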
%% Cell type:markdown id: tags:
## Pipelines
A so-called classification pipeline consists of 0 or more preprocessors plus a final classifier.
Let us start with the following pipeline:
1. Use PCA to reduce data to 3 dimensions
2. Apply scaling to mean 0 and std deviation 1
3. Train `SVC` classifier.
%% Cell type:code id: tags:
``` python
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

p = make_pipeline(PCA(3), StandardScaler(), SVC())
```
%% Cell type:markdown id: tags:
Such a pipeline now "behaves" like a single classifier, as it implements `.fit` and `.predict`:
%% Cell type:code id: tags:
``` python
print("p.fit    ", p.fit is not None)
print("p.predict", p.predict is not None)
```
%% Output
p.fit True
p.predict True
%% Cell type:markdown id: tags:
Because of this we can also use cross-validation in the same way as we did before:
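A sketch of cross-validating the full pipeline (a synthetic data set stands in for the notebook's `features` / `labels`, which are defined earlier):

``` python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for the notebook's data set
features, labels = make_classification(n_samples=200, n_features=6,
                                       random_state=0)

p = make_pipeline(PCA(3), StandardScaler(), SVC())

# within every fold, PCA and the scaler are fitted on the training part only
scores = cross_val_score(p, features, labels, cv=5)
print(scores.mean())
```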
<iclass="fa fa-info-circle"></i> One benefit of using a pipeline is that you will not mistakenly scale the full data set first, instead we follow the strategy we described above automatically.
<iclass="fa fa-info-circle"></i> One benefit of using a pipeline is that you will not mistakenly scale the full data set first, instead we follow the strategy we've described above automatically.
</div>
%% Cell type:markdown id: tags:
### How to set up a good pipeline?
Regrettably, there is no recipe for setting up a well performing classification pipeline beyond reasonable preprocessing, especially feature engineering. After that it is up to experimentation and the advice on how to choose classifiers we gave in the last script.
Let us try out different pipelines and evaluate them:
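For instance, one could compare a few candidate pipelines with cross-validation. A sketch with synthetic data and three made-up pipeline variants:

``` python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in data, for illustration only
features, labels = make_classification(n_samples=200, n_features=6,
                                       random_state=0)

pipelines = {
    "scaler + svc": make_pipeline(StandardScaler(), SVC()),
    "pca + scaler + svc": make_pipeline(PCA(3), StandardScaler(), SVC()),
    "scaler + logreg": make_pipeline(StandardScaler(), LogisticRegression()),
}

results = {}
for name, pipeline in pipelines.items():
    scores = cross_val_score(pipeline, features, labels, cv=5)
    results[name] = scores.mean()
    print(name, results[name])
```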
<iclass="fa fa-info-circle"></i> Up to now we applied preprocessing to the full feature table. `scikit-learn` also allows preprocessing of single columns or a subset of them. the concept in `scikit-learn` is called `ColumnTransformer`, more about this
[can be found here](https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html)
</div>
%% Cell type:markdown id: tags:
## Hyperparameter optimization
Classifiers and pipelines have parameters which must be tuned to improve performance (e.g. `gamma` or `C`). Finding good parameters is also called *hyperparameter optimization*, to distinguish it from the optimization done during training of many classification algorithms.
### Up to now we adapted such hyperparameters manually, but there are more systematic approaches!
<img src="https://i.imgflip.com/3040hg.jpg" title="made at imgflip.com" width="50%"/>
%% Cell type:markdown id: tags:
The simplest approach is to specify valid values for each parameter involved and then try out all possible combinations. This is called *grid search*:
%% Cell type:code id: tags:
``` python
from sklearn.model_selection import GridSearchCV

# optimize parameters of one single classifier
parameters = {'kernel': ('linear', 'rbf', 'poly'),
              'C': [1, 5, 10, 15]}

svc = SVC()

# run grid search, use CV to assess quality and determine the best
# parameter set:
# tries all 3 x 4 = 12 combinations:
search = GridSearchCV(svc, parameters, cv=5)
search.fit(features, labels)
print(search.best_score_, search.best_params_)
```
%% Output
0.9822222222222222 {'C': 5, 'kernel': 'poly'}
%% Cell type:markdown id: tags:
Such an optimization can also be applied to a full pipeline:
This grid has `4 x 2 x 2 x 5`, thus `80` points, so we must run cross-validation for 80 different classifiers.
To speed this up, we can specify `n_jobs=2` to use `2` processor cores to run the grid search in parallel (you might want to use more cores depending on your computer):
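A sketch of such an `80` point pipeline grid (the data set, the pipeline and the exact parameter values are made up for illustration; the `4 x 2 x 2 x 5` shape matches the count above):

``` python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in data, for illustration only
features, labels = make_classification(n_samples=150, n_features=6,
                                       random_state=0)

p = make_pipeline(StandardScaler(), PCA(), SVC())

# a hypothetical 4 x 2 x 2 x 5 = 80 point grid:
parameters = {"svc__C": [1, 5, 10, 15],                    # 4 values
              "standardscaler__with_mean": [True, False],  # 2 values
              "standardscaler__with_std": [True, False],   # 2 values
              "pca__n_components": [2, 3, 4, 5, 6]}        # 5 values

# n_jobs=2 runs the cross-validations on two processor cores in parallel
search = GridSearchCV(p, parameters, cv=5, n_jobs=2)
search.fit(features, labels)
print(search.best_score_, search.best_params_)
```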
/Users/uweschmitt/Projects/machinelearning-introduction-workshop/venv37/lib/python3.7/site-packages/sklearn/svm/base.py:931: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.