fastbook/clean/05_pet_breeds.ipynb
Jeremy Howard dd985841b6 clean
2020-09-03 15:58:27 -07:00


{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"!pip install -Uqq fastbook\n",
"import fastbook\n",
"fastbook.setup_book()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"from fastbook import *"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Image Classification"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## From Dogs and Cats to Pet Breeds"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.vision.all import *\n",
"path = untar_data(URLs.PETS)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"Path.BASE_PATH = path"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"path.ls()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"(path/\"images\").ls()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fname = (path/\"images\").ls()[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"re.findall(r'(.+)_\\d+\\.jpg$', fname.name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pets = DataBlock(blocks = (ImageBlock, CategoryBlock),\n",
" get_items=get_image_files, \n",
" splitter=RandomSplitter(seed=42),\n",
" get_y=using_attr(RegexLabeller(r'(.+)_\\d+\\.jpg$'), 'name'),\n",
" item_tfms=Resize(460),\n",
" batch_tfms=aug_transforms(size=224, min_scale=0.75))\n",
"dls = pets.dataloaders(path/\"images\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Presizing"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),\n",
" get_y=parent_label,\n",
" item_tfms=Resize(460))\n",
"dls1 = dblock1.dataloaders([(Path.cwd()/'images'/'grizzly.jpg')]*100, bs=8)\n",
"dls1.train.get_idxs = lambda: Inf.ones\n",
"x,y = dls1.valid.one_batch()\n",
"_,axs = subplots(1, 2)\n",
"\n",
"x1 = TensorImage(x.clone())\n",
"x1 = x1.affine_coord(sz=224)\n",
"x1 = x1.rotate(draw=30, p=1.)\n",
"x1 = x1.zoom(draw=1.2, p=1.)\n",
"x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)\n",
"\n",
"tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),\n",
" Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])\n",
"x = Pipeline(tfms)(x)\n",
"#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)\n",
"TensorImage(x[0]).show(ctx=axs[0])\n",
"TensorImage(x1[0]).show(ctx=axs[1]);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Checking and Debugging a DataBlock"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dls.show_batch(nrows=1, ncols=3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pets1 = DataBlock(blocks = (ImageBlock, CategoryBlock),\n",
" get_items=get_image_files, \n",
" splitter=RandomSplitter(seed=42),\n",
" get_y=using_attr(RegexLabeller(r'(.+)_\\d+\\.jpg$'), 'name'))\n",
"pets1.summary(path/\"images\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(dls, resnet34, metrics=error_rate)\n",
"learn.fine_tune(2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cross-Entropy Loss"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Viewing Activations and Labels"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x,y = dls.one_batch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"preds,_ = learn.get_preds(dl=[(x,y)])\n",
"preds[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(preds[0]),preds[0].sum()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Softmax"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plot_function(torch.sigmoid, min=-4,max=4)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"torch.random.manual_seed(42);"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"acts = torch.randn((6,2))*2\n",
"acts"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"acts.sigmoid()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"(acts[:,0]-acts[:,1]).sigmoid()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sm_acts = torch.softmax(acts, dim=1)\n",
"sm_acts"
]
},
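{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (an illustrative addition, not part of the original notebook), softmax can be computed by hand: exponentiate each activation, then divide by the row-wise sum of the exponentials. Every row should sum to 1, and the result should match `torch.softmax`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Manual softmax: exp of each activation, normalized so every row sums to 1\n",
"manual_sm = torch.exp(acts) / torch.exp(acts).sum(dim=1, keepdim=True)\n",
"torch.allclose(manual_sm, sm_acts), sm_acts.sum(dim=1)"
]
},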
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Log Likelihood"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"targ = tensor([0,1,0,1,1,0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sm_acts"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"idx = range(6)\n",
"sm_acts[idx, targ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import HTML\n",
"df = pd.DataFrame(sm_acts, columns=[\"3\",\"7\"])\n",
"df['targ'] = targ\n",
"df['idx'] = idx\n",
"df['loss'] = sm_acts[range(6), targ]\n",
"t = df.style.hide_index()\n",
"# Strip the <style> block to keep the HTML compatible with our script\n",
"html = t._repr_html_().split('</style>')[1]\n",
"html = re.sub(r'<table id=\"([^\"]+)\"\\s*>', r'<table >', html)\n",
"display(HTML(html))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"-sm_acts[idx, targ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"F.nll_loss(sm_acts, targ, reduction='none')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Taking the Log"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plot_function(torch.log, min=0,max=4)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loss_func = nn.CrossEntropyLoss()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loss_func(acts, targ)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"F.cross_entropy(acts, targ)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nn.CrossEntropyLoss(reduction='none')(acts, targ)"
]
},
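{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tying the pieces together (an illustrative addition, not in the original notebook): PyTorch's cross-entropy is `log_softmax` followed by `nll_loss`, so taking the log of the softmax activations and picking out the target column reproduces the per-item losses shown above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# log_softmax then negative log likelihood == cross-entropy\n",
"log_sm_acts = torch.log_softmax(acts, dim=1)\n",
"-log_sm_acts[idx, targ]"
]
},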
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Interpretation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"interp = ClassificationInterpretation.from_learner(learn)\n",
"interp.plot_confusion_matrix(figsize=(12,12), dpi=60)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"interp.most_confused(min_val=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Improving Our Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The Learning Rate Finder"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(dls, resnet34, metrics=error_rate)\n",
"learn.fine_tune(1, base_lr=0.1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(dls, resnet34, metrics=error_rate)\n",
"lr_min,lr_steep = learn.lr_find()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(dls, resnet34, metrics=error_rate)\n",
"learn.fine_tune(2, base_lr=3e-3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Unfreezing and Transfer Learning"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.fine_tune??"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(dls, resnet34, metrics=error_rate)\n",
"learn.fit_one_cycle(3, 3e-3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.unfreeze()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.lr_find()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.fit_one_cycle(6, lr_max=1e-5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Discriminative Learning Rates"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(dls, resnet34, metrics=error_rate)\n",
"learn.fit_one_cycle(3, 3e-3)\n",
"learn.unfreeze()\n",
"learn.fit_one_cycle(12, lr_max=slice(1e-6,1e-4))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.recorder.plot_loss()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Selecting the Number of Epochs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deeper Architectures"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.callback.fp16 import *\n",
"learn = cnn_learner(dls, resnet50, metrics=error_rate).to_fp16()\n",
"learn.fine_tune(6, freeze_epochs=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Questionnaire"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Why do we first resize to a large size on the CPU, and then to a smaller size on the GPU?\n",
"1. If you are not familiar with regular expressions, find a regular expression tutorial, and some problem sets, and complete them. Have a look on the book's website for suggestions.\n",
"1. What are the two ways in which data is most commonly provided, for most deep learning datasets?\n",
"1. Look up the documentation for `L` and try using a few of the new methods that it adds.\n",
"1. Look up the documentation for the Python `pathlib` module and try using a few methods of the `Path` class.\n",
"1. Give two examples of ways that image transformations can degrade the quality of the data.\n",
"1. What method does fastai provide to view the data in a `DataLoaders`?\n",
"1. What method does fastai provide to help you debug a `DataBlock`?\n",
"1. Should you hold off on training a model until you have thoroughly cleaned your data?\n",
"1. What are the two pieces that are combined into cross-entropy loss in PyTorch?\n",
"1. What are the two properties of activations that softmax ensures? Why is this important?\n",
"1. When might you want your activations to not have these two properties?\n",
"1. Calculate the `exp` and `softmax` columns of <<bear_softmax>> yourself (i.e., in a spreadsheet, with a calculator, or in a notebook).\n",
"1. Why can't we use `torch.where` to create a loss function for datasets where our label can have more than two categories?\n",
"1. What is the value of log(-2)? Why?\n",
"1. What are two good rules of thumb for picking a learning rate from the learning rate finder?\n",
"1. What two steps does the `fine_tune` method do?\n",
"1. In Jupyter Notebook, how do you get the source code for a method or function?\n",
"1. What are discriminative learning rates?\n",
"1. How is a Python `slice` object interpreted when passed as a learning rate to fastai?\n",
"1. Why is early stopping a poor choice when using 1cycle training?\n",
"1. What is the difference between `resnet50` and `resnet101`?\n",
"1. What does `to_fp16` do?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Further Research"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Find the paper by Leslie Smith that introduced the learning rate finder, and read it.\n",
"1. See if you can improve the accuracy of the classifier in this chapter. What's the best accuracy you can achieve? Look on the forums and the book's website to see what other students have achieved with this dataset, and how they did it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"jupytext": {
"split_at_heading": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}