Commit bf7cd636 authored by Franziska Oschmann

Remove unused files

parent 107f48f1
1 merge request: !69 Update installation instructions
@@ -17,7 +17,7 @@ The setup is based on the Conda distribution for Python called Anaconda
### II. Install Conda environment
#### on MacOS (Intel, aka "non M1 Mac"):
#### on MacOS (Intel, aka "non M1 Mac") or Linux:
1. Open Terminal.app, change directory to the directory with workshop materials
(`$ cd path/to/dir`) and run:
@@ -35,41 +35,16 @@ The setup is based on the Conda distribution for Python called Anaconda
$ conda run -n machine_learning_workshop jupyter lab
#### on MacOS (ARM, aka "M1 Mac"):
#### on MacOS (ARM, aka "M1/M2 Mac"):
1. Open Terminal.app, change directory to the directory with workshop materials
(`$ cd path/to/dir`).
If you did not install miniforge before (check whether the folder `miniforge3` exists in your home directory),
run first:
$ ./install_miniforge.sh
2. Then run
$ conda env create -f environment_m1.yml
3. If the following command prints `OK`, your setup is fine:
$ conda run -n machine_learning_workshop_2022 python -c "import tensorflow, matplotlib, numpy, pandas; print('OK')"
4. Start jupyter lab using
$ conda run -n machine_learning_workshop_2022 jupyter lab
#### on Linux
1. Open Terminal, change directory to the directory with workshop materials
(`$ cd path/to/dir`) and run:
$ conda env create -f environment_linux.yml
2. If the following command prints `OK`, your setup is fine:
$ conda run -n machine_learning_workshop_2021 python -c "import tensorflow, matplotlib, numpy, pandas; print('OK')"
3. Start jupyter lab using
$ conda create --name machine_learning_workshop python=3.10
$ conda activate machine_learning_workshop
$ pip install -r requirements_local_arm.txt
$ conda run -n machine_learning_workshop_2021 jupyter lab
Then continue with steps 2 and 3 described above.
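For a quick sanity check of the freshly created environment, a snippet along the following lines (a suggestion, not part of the original instructions) can be run with `python` inside the activated `machine_learning_workshop` environment; it only assumes that the packages checked by the commands above (tensorflow, matplotlib, numpy, pandas) are importable:

``` python
# Sanity check: import the core workshop packages and print their versions
import matplotlib
import numpy
import pandas
import tensorflow

for mod in (tensorflow, matplotlib, numpy, pandas):
    print(f"{mod.__name__:12s} {mod.__version__}")
print("OK")
```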
#### on Windows
Please follow the instructions for Linux (above) or:
......
# Open a terminal and execute the following command to create the conda environment
# for the workshop
# 'conda env create -f environment_m1.yml'
name: machine_learning_workshop_2022
channels:
- apple
- https://repo.anaconda.com/pkgs/main
- conda-forge
dependencies:
- python==3.10.5
- pandas
- matplotlib
- scikit-learn==1.1.1
- seaborn
- jupyterlab==3.4.4
- libcblas
- pydot
- pillow
- pip
- tensorflow-deps==2.9.0
- pip:
- tensorflow-estimator==2.9.0
- tensorflow-macos==2.9.0
- tensorflow_datasets
- nltk
- gensim
- prettytable
- jupyterlab-code-formatter
- jupyterlab-git
- keras-tuner
- tensorflow_hub
- scikeras
- ipympl
name: machine_learning_workshop_2023
channels:
- apple
- defaults
dependencies:
- blas=1.0=openblas
- bzip2=1.0.8=h620ffc9_4
- c-ares=1.18.1=h1a28f6b_0
- ca-certificates=2023.01.10=hca03da5_0
- certifi=2022.12.7=py310hca03da5_0
- grpcio=1.42.0=py310h95c9599_0
- h5py=3.6.0=py310h181c318_0
- hdf5=1.12.1=h160e8cb_2
- krb5=1.19.4=h8380606_0
- libcurl=7.88.1=h0f1d93c_0
- libcxx=14.0.6=h848a8c0_0
- libedit=3.1.20221030=h80987f9_0
- libev=4.33=h1a28f6b_1
- libffi=3.4.2=hca03da5_6
- libgfortran=5.0.0=11_3_0_hca03da5_28
- libgfortran5=11.3.0=h009349e_28
- libnghttp2=1.46.0=h95c9599_0
- libopenblas=0.3.21=h269037a_0
- libssh2=1.10.0=hf27765b_0
- llvm-openmp=14.0.6=hc6e5704_0
- ncurses=6.4=h313beb8_0
- openssl=1.1.1t=h1a28f6b_0
- pip=23.0.1=py310hca03da5_0
- python=3.10.9=hc0d8a6c_2
- readline=8.2=h1a28f6b_0
- setuptools=65.6.3=py310hca03da5_0
- six=1.16.0=pyhd3eb1b0_1
- sqlite=3.40.1=h7a7dc30_0
- tensorflow-deps=2.9.0=0
- tk=8.6.12=hb8d0fd4_0
- tzdata=2022g=h04d1e81_0
- wheel=0.38.4=py310hca03da5_0
- xz=5.2.10=h80987f9_1
- zlib=1.2.13=h5a0b063_0
- pip:
- absl-py==1.4.0
- anyio==3.6.2
- appnope==0.1.3
- argon2-cffi==21.3.0
- argon2-cffi-bindings==21.2.0
- asttokens==2.2.1
- astunparse==1.6.3
- attrs==22.2.0
- babel==2.12.1
- backcall==0.2.0
- beautifulsoup4==4.11.2
- bleach==6.0.0
- cachetools==5.3.0
- cffi==1.15.1
- charset-normalizer==3.1.0
- colorama==0.4.6
- comm==0.1.2
- contourpy==1.0.7
- cycler==0.11.0
- debugpy==1.6.6
- decorator==5.1.1
- defusedxml==0.7.1
- dm-tree==0.1.8
- etils==1.0.0
- executing==1.2.0
- fastjsonschema==2.16.3
- fasttext==0.9.2
- flatbuffers==23.3.3
- fonttools==4.39.0
- gast==0.4.0
- gensim==4.3.1
- gitdb==4.0.10
- gitpython==3.1.31
- google-auth==2.16.2
- google-auth-oauthlib==0.4.6
- google-pasta==0.2.0
- googleapis-common-protos==1.58.0
- idna==3.4
- importlib-resources==5.12.0
- ipykernel==6.21.3
- ipython==8.11.0
- ipython-genutils==0.2.0
- jedi==0.18.2
- jinja2==3.1.2
- joblib==1.2.0
- json5==0.9.11
- jsonschema==4.17.3
- jupyter-client==8.0.3
- jupyter-core==5.2.0
- jupyter-server==1.23.6
- jupyter-server-mathjax==0.2.6
- jupyterlab==3.4.4
- jupyterlab-code-formatter==1.5.3
- jupyterlab-git==0.41.0
- jupyterlab-pygments==0.2.2
- jupyterlab-server==2.20.0
- keras==2.11.0
- keras-tuner==1.1.3
- kiwisolver==1.4.4
- kt-legacy==1.0.4
- libclang==15.0.6.1
- markdown==3.4.1
- markupsafe==2.1.2
- matplotlib==3.7.1
- matplotlib-inline==0.1.6
- mistune==2.0.5
- nbclassic==0.5.3
- nbclient==0.7.2
- nbconvert==7.2.9
- nbdime==3.1.1
- nbformat==5.7.3
- nest-asyncio==1.5.6
- nltk==3.8.1
- notebook==6.5.3
- notebook-shim==0.2.2
- numpy==1.24.2
- oauthlib==3.2.2
- opt-einsum==3.3.0
- pandas==1.5.3
- pandocfilters==1.5.0
- parso==0.8.3
- pexpect==4.8.0
- pickleshare==0.7.5
- pillow==9.4.0
- platformdirs==3.1.0
- prettytable==3.6.0
- prometheus-client==0.16.0
- promise==2.3
- prompt-toolkit==3.0.38
- protobuf==3.19.6
- psutil==5.9.4
- ptyprocess==0.7.0
- pure-eval==0.2.2
- pyasn1==0.4.8
- pyasn1-modules==0.2.8
- pybind11==2.10.3
- pycparser==2.21
- pydot==1.4.2
- pygments==2.14.0
- pyparsing==3.0.9
- pyrsistent==0.19.3
- python-dateutil==2.8.2
- pytz==2022.7.1
- pyzmq==25.0.0
- regex==2022.10.31
- requests==2.28.2
- requests-oauthlib==1.3.1
- rsa==4.9
- scikeras==0.10.0
- scikit-learn==1.1.1
- scipy==1.10.1
- seaborn==0.12.2
- send2trash==1.8.0
- smart-open==6.3.0
- smmap==5.0.0
- sniffio==1.3.0
- soupsieve==2.4
- stack-data==0.6.2
- tensorboard==2.11.2
- tensorboard-data-server==0.6.1
- tensorboard-plugin-wit==1.8.1
- tensorflow-datasets==4.8.3
- tensorflow-estimator==2.11.0
- tensorflow-hub==0.12.0
- tensorflow-macos==2.11.0
- tensorflow-metadata==1.12.0
- tensorflow-metal==0.7.1
- termcolor==2.2.0
- terminado==0.17.1
- threadpoolctl==3.1.0
- tinycss2==1.2.1
- toml==0.10.2
- tornado==6.2
- tqdm==4.65.0
- traitlets==5.9.0
- typing-extensions==4.5.0
- urllib3==1.26.14
- wcwidth==0.2.6
- webencodings==0.5.1
- websocket-client==1.5.1
- werkzeug==2.2.3
- wrapt==1.15.0
- zipp==3.15.0
prefix: /Users/franziskaoschmann/miniconda3/envs/ml_ws_test
#!/bin/bash
# Download and install Miniforge (conda for Apple Silicon) into $HOME/miniforge3
PATH_MINIFORGE="$HOME/miniforge3"
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
chmod +x Miniforge3-MacOSX-arm64.sh
# -b: non-interactive (batch) mode, -p: installation prefix
./Miniforge3-MacOSX-arm64.sh -b -p "$PATH_MINIFORGE"
appnope>=0.1.0
backcall>=0.1.0
bleach>=2.1.4
cycler>=0.10.0
decorator>=4.3.0
defusedxml>=0.5.0
entrypoints>=0.2.3
graphviz
html5lib>=1.0.1
ipykernel>=4.9.0
ipympl>=0.2.1
ipython>=6.5.0
ipython-genutils>=0.2.0
ipywidgets>=7.4.1
jedi>=0.12.1
Jinja2>=2.10
jsonschema>=2.6.0
jupyter>=1.0.0
jupyter-client>=5.2.3
jupyter-console>=5.2.0
jupyter-contrib-nbextensions>=0.5.1
jupyter-core>=4.4.0
kiwisolver>=1.0.1
MarkupSafe>=1.0
matplotlib>=3.1.1
mistune>=0.8.3
nbconvert>=5.4.0
nbformat>=4.4.0
nb-filter-cells>=0.0.2
notebook>=5.6.0
numpy>=1.15.1
pandas>=0.25.0
pandocfilters>=1.4.2
parso>=0.3.1
pexpect>=4.6.0
pickleshare>=0.7.4
prometheus-client>=0.3.1
prompt-toolkit>=1.0.15
ptyprocess>=0.6.0
pydot>=1.4.1
Pygments>=2.2.0
pyparsing>=2.2.0
python-dateutil>=2.7.3
pytz>=2018.5
pyzmq>=17.1.2
qtconsole>=4.4.1
scikit-learn>=0.21.3
scipy>=1.1.0
seaborn>=0.9.0
Send2Trash>=1.5.0
simplegeneric>=0.8.1
six>=1.11.0
terminado>=0.8.1
testpath>=0.3.1
tornado>=5.1
traitlets>=4.3.2
wcwidth>=0.1.7
webencodings>=0.5.1
widgetsnbextension>=3.4.1
pre-commit
nbclient
%% Cell type:code id: tags:
``` python
# IGNORE THIS CELL WHICH CUSTOMIZES LAYOUT AND STYLING OF THE NOTEBOOK !
from numpy.random import seed
seed(42)
import tensorflow as tf
tf.random.set_seed(42)
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
mpl.rcParams["lines.linewidth"] = 3
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
%config IPCompleter.greedy=True
import warnings
warnings.filterwarnings('ignore')
warnings.filterwarnings("ignore", category=FutureWarning)
from IPython.core.display import HTML
HTML(open("custom.html", "r").read())
```
%% Output
<IPython.core.display.HTML object>
%% Cell type:markdown id: tags:
# Chapter 8d: Introduction to Neural Networks
## Using pre-defined models in TensorFlow
%% Cell type:code id: tags:
``` python
from tensorflow.keras import applications
help(applications)
```
%% Output
Help on package keras.api._v2.keras.applications in keras.api._v2.keras:
NAME
keras.api._v2.keras.applications - AUTOGENERATED. DO NOT EDIT.
PACKAGE CONTENTS
convnext (package)
densenet (package)
efficientnet (package)
efficientnet_v2 (package)
imagenet_utils (package)
inception_resnet_v2 (package)
inception_v3 (package)
mobilenet (package)
mobilenet_v2 (package)
mobilenet_v3 (package)
nasnet (package)
regnet (package)
resnet (package)
resnet50 (package)
resnet_rs (package)
resnet_v2 (package)
vgg16 (package)
vgg19 (package)
xception (package)
FILE
/Users/franziskaoschmann/miniconda3/envs/mlws_arm/lib/python3.10/site-packages/keras/api/_v2/keras/applications/__init__.py
%% Cell type:markdown id: tags:
### ImageNet
[ImageNet](http://image-net.org/) is a very large (more than 14 million images!) and easily accessible image database. It contains more than 14 million images annotated with the object they show, and more than 1 million images with bounding-box information.
Summary and statistics: http://image-net.org/about-stats
%% Cell type:code id: tags:
``` python
from tensorflow.keras.applications import VGG16
```
%% Cell type:code id: tags:
``` python
?VGG16
```
%% Output
%% Cell type:code id: tags:
``` python
model = VGG16(weights="imagenet")
```
%% Output
2024-03-14 13:42:45.630907: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M2 Max
2024-03-14 13:42:45.630929: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 32.00 GB
2024-03-14 13:42:45.630936: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 10.67 GB
2024-03-14 13:42:45.631005: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:303] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-03-14 13:42:45.631038: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:269] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
2024-03-21 19:23:21.855320: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M2 Max
2024-03-21 19:23:21.855342: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 32.00 GB
2024-03-21 19:23:21.855352: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 10.67 GB
2024-03-21 19:23:21.855404: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:303] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-03-21 19:23:21.855433: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:269] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
%% Cell type:code id: tags:
``` python
model.summary()
```
%% Output
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
flatten (Flatten) (None, 25088) 0
fc1 (Dense) (None, 4096) 102764544
fc2 (Dense) (None, 4096) 16781312
predictions (Dense) (None, 1000) 4097000
=================================================================
Total params: 138357544 (527.79 MB)
Trainable params: 138357544 (527.79 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
%% Cell type:code id: tags:
``` python
from IPython.display import Image as Img

display(Img(filename="./images/mr_panda_free.jpg", width=600))
print("source: GEORGE LU/FLICKR (CC BY 2.0)")
```
%% Output
source: GEORGE LU/FLICKR (CC BY 2.0)
%% Cell type:code id: tags:
``` python
from tensorflow.keras.applications.vgg16 import decode_predictions, preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array, load_img
image = load_img("./images/mr_panda_free.jpg", target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# Prepare data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
image = preprocess_input(image)
# prediction of probability of belonging to the output classes
prediction = model.predict(image)
# converting the probabilities to class labels
label = decode_predictions(prediction)
# Top 5 classes
label = label[0]
for pred in label:
    # print the class label and its predicted probability
    print("It is: {} with probability {:.4f}%".format(pred[1], pred[2] * 100))
```
%% Output
2024-03-14 13:42:47.853212: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
2024-03-21 19:23:24.296143: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
1/1 [==============================] - 0s 412ms/step
1/1 [==============================] - 0s 297ms/step
It is: giant_panda with probability 100.0000%
It is: skunk with probability 0.0000%
It is: badger with probability 0.0000%
It is: toilet_tissue with probability 0.0000%
It is: Angora with probability 0.0000%
%% Cell type:markdown id: tags:
## Transferring knowledge
%% Cell type:markdown id: tags:
Recap: Convolutional Neural Networks can be seen as composed of two parts:
**a feature extractor (convolution and max-pooling layers) and a classifier part (dense layers)**
There are different ways to work with pre-trained/pre-existing models trained on very large datasets such as ImageNet (a sketch of the first option follows this list):
* Freezing the convolution part and throwing away the classifier part, then adding your own dense layers and training only them.
* Freezing only some layers in the convolution part and throwing away the classifier part, then adding your own dense layers and training the unfrozen layers together with the new dense layers.
* Only reusing the architecture and training the whole network from scratch.
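As an illustration of the first option, the following cell is a minimal sketch (not from the original notebook): it freezes an ImageNet-pretrained VGG16 convolutional base and adds a small, trainable dense classifier on top. The input shape and layer sizes are arbitrary choices for this example.
%% Cell type:code id: tags:
``` python
# Sketch: freeze a pre-trained convolutional base and train only newly added dense layers
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load only the convolutional part (include_top=False) with ImageNet weights
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(96, 96, 3))
conv_base.trainable = False  # freeze the feature extractor

transfer_model = models.Sequential(
    [
        conv_base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),  # new classifier part
        layers.Dense(1, activation="sigmoid"),  # binary output, e.g. benign vs. malign
    ]
)
transfer_model.compile(
    optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"]
)
transfer_model.summary()  # only the new dense layers are listed as trainable
```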
%% Cell type:markdown id: tags:
## Realistic example
### Histopathological Cancer Detection
https://www.kaggle.com/c/histopathologic-cancer-detection/overview
**Download data**: https://www.kaggle.com/competitions/histopathologic-cancer-detection/data
Identification of metastatic cancer in small image patches taken from larger digital pathology scans.
%% Cell type:code id: tags:
``` python
%matplotlib inline
# Plotting a few images from this dataset
import os
import matplotlib.pyplot as plt
import numpy as np
from numpy import random
from PIL import Image
random.seed(42)
import tensorflow as tf
tf.random.set_seed(42)
def plot_data(samples, top_dir):
    sub_directories = ["benign", "malign"]
    fig, ax = plt.subplots(
        len(sub_directories),
        samples,
        sharex=True,
        sharey=True,
        figsize=(3 * samples, 3 * len(sub_directories)),
    )
    labels = ["0", "1"]
    assert len(sub_directories) == 2
    for i in range(samples):
        for j, k in enumerate(sub_directories):
            tmp = os.path.join(top_dir, k)
            tmp_img = Image.open(os.path.join(tmp, random.choice(os.listdir(tmp))))
            ax[j, i].imshow(np.asarray(tmp_img))
            ax[j, i].set_title("{}: label={}".format(k, j))
            ax[j, i].grid(False)


# data_dir = "PATH_TO_histopathologic_cancer_detection_FOLDER"
data_dir = "/cluster/project/workshops/machine_learning/machinelearning-introduction-workshop/data/histopathologic_cancer_detection/"
plot_data(4, os.path.join(data_dir, "train"))
```
%% Output
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[9], line 38
36 #data_dir = "PATH_TO_histopathologic_cancer_detection_FOLDER"
37 data_dir = "/cluster/project/workshops/machine_learning/machinelearning-introduction-workshop/data/histopathologic_cancer_detection/"
---> 38 plot_data(4, os.path.join(data_dir, "train"))
Cell In[9], line 30, in plot_data(samples, top_dir)
28 for j, k in enumerate(sub_directories):
29 tmp = os.path.join(top_dir, k)
---> 30 tmp_img = Image.open(os.path.join(tmp, random.choice(os.listdir(tmp))))
31 ax[j, i].imshow(np.asarray(tmp_img))
32 ax[j, i].set_title("{}: label={}".format(k, j))
FileNotFoundError: [Errno 2] No such file or directory: '/cluster/project/workshops/machine_learning/machinelearning-introduction-workshop/data/histopathologic_cancer_detection/train/benign'
%% Cell type:code id: tags:
``` python
# Data preprocessing
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_data = ImageDataGenerator(rescale=1 / 255.0)
train_directory = os.path.join(data_dir, "train")
train_data_generator = train_data.flow_from_directory(
    train_directory, target_size=(96, 96), batch_size=256, class_mode="binary"
)
validation_data = ImageDataGenerator(rescale=1 / 255.0)
validation_directory = os.path.join(data_dir, "validation")
validation_data_generator = validation_data.flow_from_directory(
    validation_directory, target_size=(96, 96), batch_size=256, class_mode="binary"
)
```
%% Cell type:code id: tags:
``` python
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
```
%% Cell type:code id: tags:
``` python
from tensorflow.keras.applications import VGG16
```
%% Cell type:code id: tags:
``` python
# weights=None: only the VGG16 architecture is reused; all weights are trained from scratch
feature_extractor = VGG16(weights=None, include_top=False, input_shape=(96, 96, 3))
# feature_extractor = MobileNetV2(weights=None, include_top=False, input_shape=(96, 96, 3))
feature_extractor.summary()
```
%% Cell type:code id: tags:
``` python
model = models.Sequential()
model.add(feature_extractor)
model.add(layers.Flatten())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(512, activation="relu"))
model.add(layers.Dense(1, activation="sigmoid"))
```
%% Cell type:code id: tags:
``` python
model.summary()
```
%% Cell type:code id: tags:
``` python
model.compile(
    optimizer=optimizers.RMSprop(learning_rate=0.0001),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```
%% Cell type:code id: tags:
``` python
num_epochs = 10
reduce_lr = ReduceLROnPlateau(
    monitor="val_loss", factor=0.2, patience=2, min_lr=0.000001
)
mcp_save = ModelCheckpoint("./test/", save_freq="epoch")
```
%% Cell type:code id: tags:
``` python
# CPU times: user 1h 21min 11s, sys: 17min 41s, total: 1h 38min 53s
# Wall time: 1h 58min 20s (without dropout)
model_run = model.fit(
    train_data_generator,
    steps_per_epoch=len(train_data_generator),
    epochs=num_epochs,
    validation_data=validation_data_generator,
    validation_steps=len(validation_data_generator),
    callbacks=[reduce_lr, mcp_save],
)
```
%% Cell type:code id: tags:
``` python
import pickle
# with open("./data/histopathology_run_history", "wb") as filehandler:
# pickle.dump(model_run.history, filehandler)
```
%% Cell type:code id: tags:
``` python
with open("./data/histopathology_run_history", "rb") as history_file:
    history = pickle.load(history_file)
num_epochs = 10
plt.plot(
    np.arange(0, num_epochs),
    history["val_accuracy"],
    label="Validation accuracy",
)
plt.plot(np.arange(0, num_epochs), history["accuracy"], label="Train accuracy")
plt.xlabel("epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.ylim([0.6, 1])
plt.grid()
```
%% Cell type:code id: tags:
``` python
# Data Augmentation
train_data = ImageDataGenerator(
    rescale=1 / 255.0,
    rotation_range=90,
    width_shift_range=0.0,
    height_shift_range=0.0,
    shear_range=0.1,
    horizontal_flip=True,
    fill_mode="nearest",
)
# Visualizing what our data generator is doing
# Choosing an image randomly
from numpy import random

pic_malignant = np.asarray(
    Image.open(
        train_directory
        + "/malign/"
        + random.choice(os.listdir(train_directory + "/malign/"))
    )
)
fig, ax = plt.subplots(1, 8, sharex=True, sharey=True, figsize=(3 * 8, 3))
ax = ax.flatten()
ax[0].imshow(pic_malignant)
ax[0].grid(False)
pic_malignant = pic_malignant[np.newaxis, :]
for i, img in enumerate(train_data.flow(pic_malignant)):
    ax[i + 1].imshow(img[0])
    ax[i + 1].grid(False)
    if i == 6:
        break
```
%% Cell type:markdown id: tags:
## TensorFlow Hub
A great repository of trained machine learning models!
The models can be downloaded and used with just a few lines of code.
Find models here: https://tfhub.dev/
%% Cell type:code id: tags:
``` python
import tensorflow_hub as hub
```
%% Cell type:code id: tags:
``` python
layer = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/resnet_v2_50/classification/4", trainable=True
)
```
%% Cell type:code id: tags:
``` python
from tensorflow.keras.models import Sequential
model = Sequential([layer])
model.build([None, 224, 224, 3])
model.summary()
```
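%% Cell type:markdown id: tags:
As a quick test, the following cell is a minimal sketch (not from the original notebook) of running one image through the hub classifier built above. It assumes the module's documented input convention for `imagenet/resnet_v2_50/classification`: float RGB inputs scaled to [0, 1] at 224×224, with 1001 output logits (index 0 being a background class). The image path reuses the panda picture from earlier in the notebook.
%% Cell type:code id: tags:
``` python
import numpy as np
from tensorflow.keras.preprocessing.image import img_to_array, load_img

# Load and scale the image to the range assumed by the hub module ([0, 1])
img = load_img("./images/mr_panda_free.jpg", target_size=(224, 224))
batch = img_to_array(img)[np.newaxis, ...] / 255.0

logits = model.predict(batch)  # expected shape: (1, 1001)
print("predicted class index:", np.argmax(logits, axis=-1)[0])
```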
%% Cell type:code id: tags:
``` python
```
......