{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"from fastai2.vision.all import *\n",
"from utils import *\n",
"\n",
"matplotlib.rc('image', cmap='Greys')"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"[[chapter_mnist_basics]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Under the hood: training a digit classifier"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pixels: the foundations of computer vision"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we’ve seen what it looks like to actually train a variety of models, let’s now dig under the hood and see exactly what is going on. We’ll start with computer vision, and will use that to introduce many of the key concepts of deep learning. In future chapters we’ll do deep dives into other applications as well, and we’ll see how to use these insights to both improve our model’s accuracy, speed up its training, and turn it into a real working web application.\n",
|
||
"\n",
|
||
"In order to understand what happens in a computer vision model, we first have to understand how computers handle images. We'll use one of the most famous datasets in computer vision, [MNIST](https://en.wikipedia.org/wiki/MNIST_database), for our experiments. MNIST contains hand-written digits, collected by the National Institute of Standards and Technology, and collated into a machine learning dataset by Yann Lecun and his colleagues. Lecun used MNIST in 1998 to demonstrate [Lenet 5](http://yann.lecun.com/exdb/lenet/), the first computer system to demonstrate practically useful recognition of hand-written digit sequences. This was one of the most important breakthroughs in the history of AI."
|
||
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sidebar: Tenacity and deep learning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The story of deep learning is one of tenacity and grit from a handful of dedicated researchers. After early hopes (and hype!) neural networks went out of favor in the 1990's and 2000's, and just a handful of researchers kept trying to make them work well. Three of them, Yann Lecun, Geoff Hinton, and Yoshua Bengio were awarded the highest honor in computer science, the Turing Award (generally considered the \"Nobel Prize of computer science\") after triumphing despite the deep skepticism and disinterest of the wider machine learning and statistics community.\n",
|
||
"\n",
|
||
"<img src=\"images/turing_300.jpg\" id=\"dl_fathers\" caption=\"Left to right, Yann Lecun, Geoffrey Hinton and Yoshua Bengio\" alt=\"Picture of Yann Lecun, Geoffrey Hinton and Yoshua Bengio\">\n",
|
||
"\n",
|
||
"Geoff Hinton has told of how even academic papers showing dramatically better results than anything previously published would be rejected from top journals and conferences, just because they used a neural network. Yann Lecun's work on convolutional neural networks, which we will study in the next section, showed that these models could read hand-written text--something that had never been achieved before. However his breakthrough was ignored by most researchers, even as it was used commercially to read 10% of the checks in the US!\n",
|
||
"\n",
|
||
"In addition to these three Turing Award winners, there are many other researchers who have battled to get us to where we are today. For instance, Jurgen Schmidhuber (who many believe should have shared in the Turing Award) pioneered many important ideas, including working on the *LSTM* architecture with his student Sepp Hochreiter (widely used for speech recognition and other text modeling tasks, and used in the IMDb example in <<chapter_intro>>). Perhaps most important of all, Werbos invented back-propagation for neural networks, the technique shown in this chapter and used universally for training neural networks. His development was almost entirely ignored for decades, but today it is the most important foundation of modern AI.\n",
|
||
"\n",
|
||
"There is a lesson here for all of us! On your deep learning journey you will face many obstacles, both technical, and (even more difficult) people around you who don't believe you'll be successful. There's one *guaranteed* way to fail, and that's to stop trying. We've seen that the only consistent trait amongst every fast.ai student that's gone on to be a world-class practitioner is that they are all very tenacious."
|
||
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## End sidebar"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this initial tutorial we are just going to try to create a model that can recognise \"3\"s and \"7\"s. So let's download a sample of MNIST which contains images of just these digits:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"path = untar_data(URLs.MNIST_SAMPLE)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"Path.BASE_PATH = path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see what's in this directory by using `ls()`, a method added by fastai. This method returns an object of a special fastai class called `L`, which has all the same functionality of Python's builtin `list`, plus a lot more. One of its handy features is that, when printed, it displays the count of items, before listing the items themselves (if there's more than 10 items, it just shows the first few)."
|
||
]
|
||
},
|
||
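{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's a minimal sketch of a couple of `L` conveniences we'll lean on in this chapter: the `(#n)` count shown when printing, indexing with a list of indices, and the `sorted` method we use on file listings below. The exact printed form may vary a little between fastai versions, so treat this as an illustration rather than a reference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of the `L` class (assumes the fastai import at the top of this notebook)\n",
"t = L(3, 7, 3, 7, 3)\n",
"print(t)             # prints with a count, e.g. (#5) [3,7,3,7,3]\n",
"print(t[[0, 2, 4]])  # index with a list of indices\n",
"print(t.sorted())    # the same method we use on file listings below"
]
},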
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(#9) [Path('cleaned.csv'),Path('item_list.txt'),Path('trained_model.pkl'),Path('models'),Path('valid'),Path('labels.csv'),Path('export.pkl'),Path('history.csv'),Path('train')]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"path.ls()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The MNIST dataset shows is a very common layout for machine learning datasets: separate folders for the *training set*, which is used to train a model, and the *validation set* (and/or *test set*), which is used to evaluate the model (we'll be talking a lot of these concepts very soon!) Let's see what's inside the training set:"
|
||
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(#2) [Path('train/7'),Path('train/3')]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"(path/'train').ls()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There's a folder of \"3\"s, and a folder of \"7\"s. In machine learning parlance, we say that \"3\" and \"7\" are the *labels* in this dataset. Let's take a look in one of these folders (using `sorted` to ensure we all get the same order of files):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(#6131) [Path('train/3/10.png'),Path('train/3/10000.png'),Path('train/3/10011.png'),Path('train/3/10031.png'),Path('train/3/10034.png'),Path('train/3/10042.png'),Path('train/3/10052.png'),Path('train/3/1007.png'),Path('train/3/10074.png'),Path('train/3/10091.png')...]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"threes = (path/'train'/'3').ls().sorted()\n",
"sevens = (path/'train'/'7').ls().sorted()\n",
"threes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we might expect, it's full of image files. Let’s take a look at one now. Here’s an image of a handwritten number ‘3’, taken from the famous MNIST dataset of handwritten numbers:"
|
||
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAAA9ElEQVR4nM3Or0sDcRjH8c/pgrfBVBjCgibThiKIyTWbWF1bORhGwxARxH/AbtW0JoIGwzXRYhJhtuFY2q1ocLgbe3sGReTuuWbwkx6+r+/zQ/pncX6q+YOldSe6nG3dn8U/rTQ70L8FCGJUewvxl7NTmezNb8xIkvKugr1HSeMP6SrWOVkoTEuSyh0Gm2n3hQyObMnXnxkempRrvgD+gokzwxFAr7U7YXHZ8x4A/Dl7rbu6D2yl3etcw/F3nZgfRVI7rXM7hMUUqzzBec427x26rkmlkzEEa4nnRqnSOH2F0UUx0ePzlbuqMXAHgN6GY9if5xP8dmtHFfwjuQAAAABJRU5ErkJggg==\n",
"text/plain": [
"<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F24CDF87F50>"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"im3_path = threes[1]\n",
"im3 = Image.open(im3_path)\n",
"im3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we are using the `Image` class from the *Python Imaging Library* (PIL), which is the most widely used Python package for opening, manipulating, and viewing images. Jupyter knows about PIL images, so it displays the image for us automatically.\n",
|
||
"\n",
|
||
"In a computer, everything is represented as a number. To view the numbers that make up this image, we have to convert it to a *NumPy array* or a *PyTorch tensor*. For instance, here's a few numbers from the top-left of the image, converted to a numpy array:"
|
||
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 0, 0, 0, 0, 0, 0],\n",
" [ 0, 0, 0, 0, 0, 29],\n",
" [ 0, 0, 0, 48, 166, 224],\n",
" [ 0, 93, 244, 249, 253, 187],\n",
" [ 0, 107, 253, 253, 230, 48],\n",
" [ 0, 3, 20, 20, 15, 0]], dtype=uint8)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"array(im3)[4:10,4:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"...and the same thing as a PyTorch tensor:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0, 0, 0, 0, 0, 0],\n",
" [ 0, 0, 0, 0, 0, 29],\n",
" [ 0, 0, 0, 48, 166, 224],\n",
" [ 0, 93, 244, 249, 253, 187],\n",
" [ 0, 107, 253, 253, 230, 48],\n",
" [ 0, 3, 20, 20, 15, 0]], dtype=torch.uint8)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tensor(im3)[4:10,4:10]"
]
},
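{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd like to confirm for yourself that the whole image really is just a grid of numbers, a quick check like the following sketch shows its shape, data type, and value range; the exact values you see will depend on the particular file you opened."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: inspect the full image as a tensor (assumes `im3` and `tensor` from the cells above)\n",
"im3_t = tensor(im3)\n",
"im3_t.shape, im3_t.dtype, im3_t.min(), im3_t.max()  # expect a 28x28 grid of uint8 values in 0..255"
]
},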
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can slice the array to pick just the part containing the top of the digit, and then use a Pandas DataFrame to color-code the values using a gradient, which shows us clearly how the image is created from the pixel values:"
]
},
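{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a sketch of one way to produce a color-coded table like the output rendered below, using Pandas' `background_gradient`; the slice bounds and styling options here are illustrative assumptions rather than a definitive recipe."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch (assumed slice bounds and styling): shade a slice of the pixel values with a grayscale gradient\n",
"import pandas as pd\n",
"im3_t = tensor(im3)\n",
"df = pd.DataFrame(im3_t[4:15, 4:22])\n",
"df.style.set_properties(**{'font-size': '6pt'}).background_gradient('Greys')"
]
},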
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
|
||
"<style type=\"text/css\" >\n",
|
||
" #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row0_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #efefef;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #7c7c7c;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #4a4a4a;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #606060;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #4d4d4d;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #7c7c7c;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #bbbbbb;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row1_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #e4e4e4;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #6b6b6b;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #171717;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #4b4b4b;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #010101;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #171717;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row2_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #272727;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #0a0a0a;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #050505;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #333333;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #e6e6e6;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #fafafa;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #fbfbfb;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #fdfdfd;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #fafafa;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #4b4b4b;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #171717;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row3_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #1b1b1b;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #e0e0e0;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #4e4e4e;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #767676;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row4_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #fcfcfc;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #f6f6f6;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #f6f6f6;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #f8f8f8;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #e8e8e8;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #222222;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #090909;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #d0d0d0;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row5_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #060606;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #090909;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #979797;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row6_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #f8f8f8;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #b6b6b6;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #252525;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #010101;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #060606;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #999999;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row7_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #f9f9f9;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #6b6b6b;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #101010;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #010101;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #020202;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #010101;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #545454;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #f1f1f1;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row8_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #f7f7f7;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #060606;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #030303;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #010101;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #020202;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #010101;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #181818;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #303030;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #a9a9a9;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #fefefe;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row9_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col0 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col1 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col2 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col3 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col4 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col5 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col6 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col7 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #e8e8e8;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col8 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #bababa;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col9 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #bababa;\n",
|
||
" color: #000000;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col10 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #393939;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col11 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col12 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col13 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col14 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col15 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col16 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #000000;\n",
|
||
" color: #f1f1f1;\n",
|
||
" } #T_507bccae_5414_11ea_8833_a372988ddbd9row10_col17 {\n",
|
||
" font-size: 6pt;\n",
|
||
" background-color: #ffffff;\n",
|
||
" color: #000000;\n",
|
||
" }</style><table id=\"T_507bccae_5414_11ea_8833_a372988ddbd9\" ><thead> <tr> <th class=\"blank level0\" ></th> <th class=\"col_heading level0 col0\" >0</th> <th class=\"col_heading level0 col1\" >1</th> <th class=\"col_heading level0 col2\" >2</th> <th class=\"col_heading level0 col3\" >3</th> <th class=\"col_heading level0 col4\" >4</th> <th class=\"col_heading level0 col5\" >5</th> <th class=\"col_heading level0 col6\" >6</th> <th class=\"col_heading level0 col7\" >7</th> <th class=\"col_heading level0 col8\" >8</th> <th class=\"col_heading level0 col9\" >9</th> <th class=\"col_heading level0 col10\" >10</th> <th class=\"col_heading level0 col11\" >11</th> <th class=\"col_heading level0 col12\" >12</th> <th class=\"col_heading level0 col13\" >13</th> <th class=\"col_heading level0 col14\" >14</th> <th class=\"col_heading level0 col15\" >15</th> <th class=\"col_heading level0 col16\" >16</th> <th class=\"col_heading level0 col17\" >17</th> </tr></thead><tbody>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row0\" class=\"row_heading level0 row0\" >0</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col0\" class=\"data row0 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col1\" class=\"data row0 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col2\" class=\"data row0 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col3\" class=\"data row0 col3\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col4\" class=\"data row0 col4\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col5\" class=\"data row0 col5\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col6\" class=\"data row0 col6\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col7\" class=\"data row0 col7\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col8\" class=\"data row0 col8\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col9\" class=\"data row0 col9\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col10\" class=\"data row0 col10\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col11\" class=\"data row0 col11\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col12\" class=\"data row0 col12\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col13\" class=\"data row0 col13\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col14\" class=\"data row0 col14\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col15\" class=\"data row0 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col16\" class=\"data row0 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row0_col17\" class=\"data row0 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row1\" class=\"row_heading level0 row1\" >1</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col0\" class=\"data row1 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col1\" class=\"data row1 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col2\" class=\"data row1 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col3\" class=\"data row1 col3\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col4\" class=\"data row1 col4\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col5\" class=\"data row1 col5\" >29</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col6\" class=\"data row1 col6\" >150</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col7\" class=\"data row1 col7\" >195</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col8\" class=\"data row1 col8\" >254</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col9\" class=\"data row1 col9\" >255</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col10\" class=\"data row1 col10\" >254</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col11\" class=\"data row1 col11\" >176</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col12\" class=\"data row1 col12\" >193</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col13\" class=\"data row1 col13\" >150</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col14\" class=\"data row1 col14\" >96</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col15\" class=\"data row1 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col16\" class=\"data row1 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row1_col17\" class=\"data row1 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row2\" class=\"row_heading level0 row2\" >2</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col0\" class=\"data row2 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col1\" class=\"data row2 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col2\" class=\"data row2 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col3\" class=\"data row2 col3\" >48</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col4\" class=\"data row2 col4\" >166</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col5\" class=\"data row2 col5\" >224</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col6\" class=\"data row2 col6\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col7\" class=\"data row2 col7\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col8\" class=\"data row2 col8\" >234</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col9\" class=\"data row2 col9\" >196</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col10\" class=\"data row2 col10\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col11\" class=\"data row2 col11\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col12\" class=\"data row2 col12\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col13\" class=\"data row2 col13\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col14\" class=\"data row2 col14\" >233</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col15\" class=\"data row2 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col16\" class=\"data row2 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row2_col17\" class=\"data row2 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row3\" class=\"row_heading level0 row3\" >3</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col0\" class=\"data row3 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col1\" class=\"data row3 col1\" >93</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col2\" class=\"data row3 col2\" >244</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col3\" class=\"data row3 col3\" >249</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col4\" class=\"data row3 col4\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col5\" class=\"data row3 col5\" >187</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col6\" class=\"data row3 col6\" >46</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col7\" class=\"data row3 col7\" >10</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col8\" class=\"data row3 col8\" >8</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col9\" class=\"data row3 col9\" >4</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col10\" class=\"data row3 col10\" >10</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col11\" class=\"data row3 col11\" >194</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col12\" class=\"data row3 col12\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col13\" class=\"data row3 col13\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col14\" class=\"data row3 col14\" >233</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col15\" class=\"data row3 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col16\" class=\"data row3 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row3_col17\" class=\"data row3 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row4\" class=\"row_heading level0 row4\" >4</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col0\" class=\"data row4 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col1\" class=\"data row4 col1\" >107</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col2\" class=\"data row4 col2\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col3\" class=\"data row4 col3\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col4\" class=\"data row4 col4\" >230</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col5\" class=\"data row4 col5\" >48</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col6\" class=\"data row4 col6\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col7\" class=\"data row4 col7\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col8\" class=\"data row4 col8\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col9\" class=\"data row4 col9\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col10\" class=\"data row4 col10\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col11\" class=\"data row4 col11\" >192</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col12\" class=\"data row4 col12\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col13\" class=\"data row4 col13\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col14\" class=\"data row4 col14\" >156</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col15\" class=\"data row4 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col16\" class=\"data row4 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row4_col17\" class=\"data row4 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row5\" class=\"row_heading level0 row5\" >5</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col0\" class=\"data row5 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col1\" class=\"data row5 col1\" >3</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col2\" class=\"data row5 col2\" >20</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col3\" class=\"data row5 col3\" >20</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col4\" class=\"data row5 col4\" >15</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col5\" class=\"data row5 col5\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col6\" class=\"data row5 col6\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col7\" class=\"data row5 col7\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col8\" class=\"data row5 col8\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col9\" class=\"data row5 col9\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col10\" class=\"data row5 col10\" >43</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col11\" class=\"data row5 col11\" >224</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col12\" class=\"data row5 col12\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col13\" class=\"data row5 col13\" >245</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col14\" class=\"data row5 col14\" >74</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col15\" class=\"data row5 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col16\" class=\"data row5 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row5_col17\" class=\"data row5 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row6\" class=\"row_heading level0 row6\" >6</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col0\" class=\"data row6 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col1\" class=\"data row6 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col2\" class=\"data row6 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col3\" class=\"data row6 col3\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col4\" class=\"data row6 col4\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col5\" class=\"data row6 col5\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col6\" class=\"data row6 col6\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col7\" class=\"data row6 col7\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col8\" class=\"data row6 col8\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col9\" class=\"data row6 col9\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col10\" class=\"data row6 col10\" >249</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col11\" class=\"data row6 col11\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col12\" class=\"data row6 col12\" >245</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col13\" class=\"data row6 col13\" >126</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col14\" class=\"data row6 col14\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col15\" class=\"data row6 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col16\" class=\"data row6 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row6_col17\" class=\"data row6 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row7\" class=\"row_heading level0 row7\" >7</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col0\" class=\"data row7 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col1\" class=\"data row7 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col2\" class=\"data row7 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col3\" class=\"data row7 col3\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col4\" class=\"data row7 col4\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col5\" class=\"data row7 col5\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col6\" class=\"data row7 col6\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col7\" class=\"data row7 col7\" >14</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col8\" class=\"data row7 col8\" >101</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col9\" class=\"data row7 col9\" >223</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col10\" class=\"data row7 col10\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col11\" class=\"data row7 col11\" >248</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col12\" class=\"data row7 col12\" >124</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col13\" class=\"data row7 col13\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col14\" class=\"data row7 col14\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col15\" class=\"data row7 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col16\" class=\"data row7 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row7_col17\" class=\"data row7 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row8\" class=\"row_heading level0 row8\" >8</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col0\" class=\"data row8 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col1\" class=\"data row8 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col2\" class=\"data row8 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col3\" class=\"data row8 col3\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col4\" class=\"data row8 col4\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col5\" class=\"data row8 col5\" >11</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col6\" class=\"data row8 col6\" >166</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col7\" class=\"data row8 col7\" >239</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col8\" class=\"data row8 col8\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col9\" class=\"data row8 col9\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col10\" class=\"data row8 col10\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col11\" class=\"data row8 col11\" >187</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col12\" class=\"data row8 col12\" >30</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col13\" class=\"data row8 col13\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col14\" class=\"data row8 col14\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col15\" class=\"data row8 col15\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col16\" class=\"data row8 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row8_col17\" class=\"data row8 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row9\" class=\"row_heading level0 row9\" >9</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col0\" class=\"data row9 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col1\" class=\"data row9 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col2\" class=\"data row9 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col3\" class=\"data row9 col3\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col4\" class=\"data row9 col4\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col5\" class=\"data row9 col5\" >16</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col6\" class=\"data row9 col6\" >248</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col7\" class=\"data row9 col7\" >250</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col8\" class=\"data row9 col8\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col9\" class=\"data row9 col9\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col10\" class=\"data row9 col10\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col11\" class=\"data row9 col11\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col12\" class=\"data row9 col12\" >232</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col13\" class=\"data row9 col13\" >213</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col14\" class=\"data row9 col14\" >111</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col15\" class=\"data row9 col15\" >2</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col16\" class=\"data row9 col16\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row9_col17\" class=\"data row9 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <th id=\"T_507bccae_5414_11ea_8833_a372988ddbd9level0_row10\" class=\"row_heading level0 row10\" >10</th>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col0\" class=\"data row10 col0\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col1\" class=\"data row10 col1\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col2\" class=\"data row10 col2\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col3\" class=\"data row10 col3\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col4\" class=\"data row10 col4\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col5\" class=\"data row10 col5\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col6\" class=\"data row10 col6\" >0</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col7\" class=\"data row10 col7\" >43</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col8\" class=\"data row10 col8\" >98</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col9\" class=\"data row10 col9\" >98</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col10\" class=\"data row10 col10\" >208</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col11\" class=\"data row10 col11\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col12\" class=\"data row10 col12\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col13\" class=\"data row10 col13\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col14\" class=\"data row10 col14\" >253</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col15\" class=\"data row10 col15\" >187</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col16\" class=\"data row10 col16\" >22</td>\n",
|
||
" <td id=\"T_507bccae_5414_11ea_8833_a372988ddbd9row10_col17\" class=\"data row10 col17\" >0</td>\n",
|
||
" </tr>\n",
|
||
" </tbody></table>"
|
||
],
|
||
"text/plain": [
|
||
"<pandas.io.formats.style.Styler at 0x7f24e23b2190>"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"#hide_output\n",
|
||
"im3_t = tensor(im3)\n",
|
||
"df = pd.DataFrame(im3_t[4:15,4:22])\n",
|
||
"df.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<img width=\"453\" id=\"output_pd_pixels\" src=\"images/att_00058.png\">"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"You can see that the background white pixels are stored as the number zero, black is the number 255, and shades of grey are between the two. This image contains 28 pixels across and 28 pixels down, for a total of 784 pixels. (This is much smaller than an image that you would get from a phone camera, which has millions of pixels, but is a convenient size for our initial learning and experiments. We will build up to bigger, full-colour images soon.)\n",
"\n",
"So, now that you've seen what an image looks like to a computer, let's recall our goal: create a model that can recognise “3”s and “7”s. How might you go about getting a computer to do that?\n",
"\n",
"> stop: Before you read on, take a moment to think about how a computer might be able to recognize these two different digits. What kind of features might it be able to look at? How might it be able to identify these features? How could it combine them together? Learning works best when you try to solve problems yourself, rather than just reading somebody else's answers; so step away from this book for a few minutes, grab a piece of paper and pen, and jot some ideas down…"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## First try: pixel similarity"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"So, here is a first idea: how about we find the average pixel value for every pixel of the threes, then do the same for the sevens. Then, to classify a digit, we see which of these two group averages it is most similar to. This certainly seems like it should be better than nothing, so it will make a good baseline."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> note: A _baseline_ is a simple model which you are confident should perform reasonably well. It should be very simple to implement, and very easy to test, so that you can then test each of your improved ideas, and make sure they are always better than your baseline. Without starting with a sensible baseline, it is very difficult to know whether your super fancy models are actually any good. One good approach to creating a baseline is doing what we have done here: think of a simple, easy to implement model. Another good approach is to search around to find other people that have solved similar problems to yours, and download and run their code on your dataset. Ideally, try both of these!"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Step one for our simple model is to get the average of pixel values for each of our two groups. In the process of doing this, we will learn a lot of neat Python numeric programming tricks!\n",
|
||
"\n",
|
||
"Let's create a tensor containing all of our threes stacked together. We already know how to create a tensor containing a single image. To create a tensor for every image in a directory, we can use a list comprehension. (Notice also that we use Jupyter to do some little checks of our work along the way; in this case, making sure that the number of returned items seems reasonable):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(6131, 6265)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"seven_tensors = [tensor(Image.open(o)) for o in sevens]\n",
|
||
"three_tensors = [tensor(Image.open(o)) for o in threes]\n",
|
||
"len(three_tensors),len(seven_tensors)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> note: List and dictionary comprehensions are a wonderful feature of Python. Many Python programmers use them every day, including all of the authors of this book—they are part of \"idiomatic Python\". But programmers coming from other languages may have never seen them before. There are a lot of great tutorials just a web search away, so we won't spend a long time discussing them now. Here is a quick explanation and example to get you started. A list comprehension looks like this: `new_list = [f(o) for o in a_list if o>0]`. This would return every element of `a_list` that is greater than zero, after passing it to the function `f`. There are three parts here: the collection you are iterating over (`a_list`), an optional filter (`if o>0`), and something to do to each element (`f(o)`). It's not only shorter to write but way faster than the alternative ways of creating the same list with a loop."
|
||
]
|
||
},
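{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, here is a minimal sketch (the names `a_list` and `f` are made up purely for illustration) showing a plain loop and the equivalent comprehension producing the same result:\n",
"\n",
"```\n",
"def f(o): return o*2            # any function will do\n",
"a_list = [3, -1, 2, 0, 5]\n",
"\n",
"new_list_loop = []\n",
"for o in a_list:\n",
"    if o>0: new_list_loop.append(f(o))\n",
"\n",
"new_list_comp = [f(o) for o in a_list if o>0]\n",
"new_list_loop, new_list_comp    # both are [6, 4, 10]\n",
"```"
]
},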
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"We'll also check that one of the images looks okay. Since we now have tensors (which Jupyter by default will print as values), rather than PIL images (which Jupyter by default will display as an image), we need to use fastai's `show_image` function to display it:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAADjElEQVR4nO2aPyh9YRjHP/f4k38L5X+ysohsUpTBhEVMJGUyGAwWg0kGkcFqlMFIyv+kSGIwKWUiUvKn5P/9DXrvcR+He+695957+vV8llPnvvd9n77n2/s8z3tOIBgMothYqQ7Ab6ggAhVEoIIIVBBBeoTf/+cUFHC6qQ4RqCACFUSggghUEIEKIlBBBCqIQAURRKpUPeHh4QGAyclJAI6PjwFYXl4GIBgMEgh8FY59fX0A3N7eAlBTUwNAU1MTAC0tLQmNVR0iCEQ4MYupl7m4uABgYmICgJWVFQDOz8/DxhUVFQFQX18fGvMbxcXFAFxeXsYSkhPay7jBkz1ke3sbgLa2NgBeX18BeH9/B6CzsxOAnZ0dAAoLCwFC+4ZlWXx8fISNXVpa8iK0qFGHCDxxyN3dHQBPT09h98vLywGYmpoCoKys7Nc5LMsKu0p6enrijtMN6hCBJ1nm8/MTgOfn57D75mlnZWVFnOPq6gqAxsZGwM5I2dnZAOzu7gJQW1vrJiQ3aJZxgyd7iHFCTk5OzHNUVlYCdmYyzjDVrYfO+BN1iCApvYzk5eUFgM3NTQCGhoZCzsjMzARgenoagIGBgaTGpg4RJMUhpnIdHh4GYH5+HrDrl++0t7cD0NXVlYzQfqAOESSk25WY+iQ/Px8g1LeYqxMlJSUAlJaWAjAyMgLYvY7pg+LAcYKkCCIxRdjJyUno3tjYGAD7+/t//tcIMjc3B0Bubm6sYWhh5oaUOMSJt7c3wHaPScn9/f2O4w8PDwGoq6uLdUl1iBtSUpg5kZGRAUBFRQUAvb29AKyurgKwsLAQNn5tbQ2IyyGOqEMEvnGIxKTV39JrdXV1QtZVhwh8k2Uke3t7ADQ3NwP2sYDh5uYGgIKCgliX0CzjBt/tIWdnZwAMDg4CP51h6pK8vLyErK8OEfhmDzF1RUdHB2AfIhnMEePp6Slg1y1xoHuIG1K6h1xfXwMwOzvL+Pg48PVpxHfMS+6trS3AE2f8iTpE4KlDzBPf2NgA7I9bHh8fATg4OADg6OgIsM807u/vQ3OkpaUB9qvLmZkZIHFZRaIOEXiaZbq7uwFYXFyMOpDW1lYARkdHAWhoaIh6jijRLOMGTx1iPnIxtUQkzEHy+vo6VVVVXwHFf3jsFnWIG3xTqaYAdYgbVBCBCiJQQQQqiCBSL5O0osAvqEMEKohABRGoIAIVRKCCCP4B/PMI7HrW9/wAAAAASUVORK5CYII=\n",
|
||
"text/plain": [
|
||
"<Figure size 72x72 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"show_image(three_tensors[1]);"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"We want to take the average of the pixels, which means we need to combine all of the items of this list into a single three-dimensional tensor. The most common way to describe such a tensor is to call it a *rank-3 tensor*. We often need to stack up individual tensors in a collection into a single tensor. Unsurprisingly, PyTorch comes with a function called `stack`. Some things in PyTorch, such as taking a mean, require us to cast our integer types to float types. Since we'll be needing this later, we'll cast our tensor to `float` now. Casting in PyTorch is as simple as typing the name of the type you wish to cast to, and treating it as a method. Generally when images are floats, the pixel values are expected to be between zero and one, so we also divide by 255 here."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"torch.Size([6131, 28, 28])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"stacked_sevens = torch.stack(seven_tensors).float()/255\n",
|
||
"stacked_threes = torch.stack(three_tensors).float()/255\n",
|
||
"stacked_threes.shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"Perhaps the most important attribute of a tensor is its shape. This tells you the length of each axis. In this case, we can see that we have 6131 images, each of size 28 x 28 pixels. There is nothing specifically about this tensor that says that the first axis is the number of images, the second is the height, and the third is the width — the semantics of a tensor are entirely up to us, and how we construct it. As far as PyTorch is concerned, it is just a bunch of numbers in memory.\n",
"\n",
"The length of a tensor's shape is its rank:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"3"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"len(stacked_threes.shape)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> Important: it's really important for you to commit to memory and practice these bits of tensor jargon: _rank_ is the number of axes or dimensions in a tensor; _shape_ is the size of each axis of a tensor."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"You can also get a tensor's rank directly with `ndim`."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"3"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"stacked_threes.ndim"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"Now we can calculate the mean across all of these image tensors by taking the mean along dimension zero (the axis that indexes the images). For every pixel position, this computes the average of that pixel over all of the threes:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAE1klEQVR4nO2byU8jPRTEf2En7AgQO4gDi9hO8P9fOIE4AGIR+xoU1kAgQAKZA6o4eUMU6O7R983IdbE66XYHv3K98rOJ5fN5PByq/usf8H+DHxADPyAGfkAM/IAY1FT4/l9OQbGvPvQMMfADYuAHxMAPiIEfEINKWSYSaL1kW/t9MWKx2JfX5T6PCp4hBpEwxEb+/f0dgFwuB0A2mwXg6emppH1+fgbg9fWVj48PAGprawGIx+MANDc3A9DU1ARAfX19yX3V1dVAeQb9FJ4hBqEYIkZYJmQyGQBub28BuLi4AGBnZweAvb09AM7PzwFIJpOFPsSEvr4+AMbHxwGYnZ0FYHR0FICuri7AMUiMqar6jHFQpniGGARiSDmtkDZcXV0BsL+/D8Da2hoAu7u7ABwcHACOOalUipeXl5J3tLW1AXB6elrS58LCAgBTU1MA9Pf3A44pYkhQeIYYhGKIMoMYoigXZw+AmprP13R2dgJuvo+NjQGf2qNnbm5uSvp4e3sD4O7uDnC6I41Rn8pK+m1eQyJCJFlG0VDkW1tbARgaGgKgo6MDcFEX5CFyuVwhIx0dHQFwcnICOF3SO8RGsbOc+w0KzxCDUAyRoivSmse6lvIrGymqgqKayWRIJBIAXF9fl/Ste6RD8il6V11dXcn93qlGjEAMURQUFUVP19ISO8/FFGWfx8dH4NNjbGxsALC1tQU4DWloaAAc2wYGBgCXXRobGwHHyrDwDDGIREMsY8QMtVrjKMtcXl4CsLm5CcDq6irr6+uAc7fqc35+HoDe3l7AOVNlsqjWMIW/KdTT/yBCaUglVyjNSKVSgIv+0tISACsrKwAsLy8XHKhYJScqBrS0tABOU6JmhuAZYhBKQyxTBF0rm2ilKp3Q6lcMSSQSBWbIX6gyJv2RPxHburu7S+6zlbOgiLTIbBd9tjygHytBnJ6eBmBkZKTQh50KelZlABWZ2tvbATeFoiol+iljEMnizk4Zu9iTiVIZUNcyZvl8vsAmlR+TySRAwdI/PDwAcHh4CMDw8DDgmGItfFCj5hliEKpAZDWjXDnAFnGkGcXPSyskmipEizlKy/peDBJTrFELWijyDDH4EUMsI2xrmaPoKDVqnlsUM0T3SDO03NcC0pYrldpticFrSEQIpCG2uKxWURLEEEVLGcBuFcRisd88ixiirKN32ixiWesXdxEjlIbYTWw7nxUt6YLdqC4uHIsRcqTb29uA8yF6l5yptEV9e6f6hxDKh2i+q/CjzSQ5UGUCRdEu3IR0Os3x8THgmCEfoj61ua1FXU9PD+BKi9apBoVniMGPGGLnp90qSKfTgCsEyV1KY3SfXcmmUqlCiUDPaAtzcHAQcI50cnIScAUkOVT5FJ9lIkYgDZGiKyrSBm0JCPf394DTA2UQfS6NyWazBdboGMTMzAwAc3NzACwuLgIwMTEBOA0pVw8JCs8Qg0AaomhK2VUA1haBXcvY+8/OzgDnQuPxeEEjdERCtZNymmFLh2Gzi+AZYhCrcIzgyy8rHcOUY1V2kQtVK99SfBRTkbetdMkew4xg+8H/e8h3EIghZW8uc4S70tFu+N3jVGojgGfIdxApQ/4yeIZ8B35ADPyAGFRyqtH+d85fAM8QAz8gBn5ADPyAGPgBMfADYvALMumtb+Vr5kIAAAAASUVORK5CYII=\n",
|
||
"text/plain": [
|
||
"<Figure size 72x72 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"mean3 = stacked_threes.mean(0)\n",
|
||
"show_image(mean3);"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"According to this dataset, this is the ideal number three! Let's do the same thing for the sevens, but let's put all the steps together at once to save some time:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAElUlEQVR4nO2bSUszWRSGn3JKUGOM84giCA7oQnHj33ejiKCI4MIpxhES5ylx6oW8dTvnMyaWJd1033dzqVRqyLlPnelWgvf3d7yc6v7pG/i3yRvEyBvEyBvEyBvEqKHK/v9yCAo++9ATYuQNYuQNYuQNYuQNYuQNYuQNYuQNYuQNYlQtU42kaj2Wz/YHwaeJY+TvRZUnxOhHhGimNb69vZWNr6+vX25r/Ep1dR9zVl9f/3HDDQ1l29qvUQRFJckTYvQtQiwRmvGXlxcAnp6eALi9vQXg+voagMvLSwAuLi7KxoeHh/A4nUOjrtHU1ARAR0cHAENDQwAMDw8D0NvbC0BraysAjY2NgCPou6R4QowiEaJZfH5+BhwR+XwegIODAwC2t7cB2NvbA+Dw8BCAo6MjAK6urgAoFovhuUSdRhEiMubn5wFYXFwEYGFhoWx/JZ9SqzwhRpEIUXTQrN7f3wNQKBQA2N/fB2B3dxdwpIicm5ubsvMlEgkSiQTgyBB1Oufj4yPgfMXo6GjZtXWc5KNMTKqJkGqZp2ZDnr25uRmA9vZ2AEZGRgDo7u4GnF/Q/nQ6HRKiGRdNq6urgPM3IsFe0+YlUeUJMaqJEJv9aRaUNYqIzs5OAMbGxsr29/f3A44Mbff19QEffkEzrBxlaWkJgLOzM8DlOJlMpmxsa2sDXP4RNbpInhCjb0WZaoTYGkVEaDuVSgGOJM1uQ0NDmNtYKdroXD09PYDzS+l0GnCE/LQa9oQYRap2K1WglhRFDn2/paUF+LPuCIIgPEY5irLb8/NzwNUyqmEGBgbKrql7+akiPTKSNYx+oH64DCKDCXttS6+vr2G43dzcBGB9fR1wj8zU1BTgHhWFbHsuSamCT91/qB81iKyTFSkiwTo6jXo8NIulUolsNgvA8vIyADs7O4CjTKFcox6VuFuKnhCjSIRU8iX2ubUNJdtYUnGYz+dZWVkBYG1tDXCJ2OzsLACTk5OAC7vJZLLs2nGR4gkximUZwhZalZrOkvarpM/lcmxsbAAuzKoQVENoZmYGcOFXfsq2Cn1iFrNiiTJ22/oSjXYZQknY1tYWuVwOcDM/NzcHOB8yODgIuKRO+YdtGVa6t1rlCTGK1YdUyg7tfvmO09NTALLZbLgkoTxjenoagImJCcBlprbMj4sMyRNiFOtityVBks8olUqAawIpGy0UCiEBKt7Gx8cB6OrqAqrnHT4P+SXFSkiljFRkaGnz5OQEcO3BIAjCdqKWF7TwpMrZRpXfei3CE2IUCyGVXouwC1la6jw+PgZcvZJKpcJ2oho/2lZe8tPlhVrlCTH6FR+ihrF8h7peWmy6u7sDXE6RyWTCaKJqVv0O+Y64apVq8oQY/corVYou8hHKTLUtMuQnEolE+OLL3z+D6C++RJUnxCiWarfaYriI0EKVljIVhZLJZLjgpIxVmWmlrvpvyRNiFFSZ3Zr+YmZ9iH3lStFGPqRYLJYdFwRBSJHIkA+xL9HF2CHzfzGrRbEQ8sdBFc5po9JX37cE/EIe4gmpRdUI+d/JE2LkDWLkDWLkDWLkDWLkDWL0F7hnDWZImx+vAAAAAElFTkSuQmCC\n",
|
||
"text/plain": [
|
||
"<Figure size 72x72 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"mean7 = stacked_sevens.mean(0)\n",
|
||
"show_image(mean7);"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's now pick a \"3\", and measure its *distance* from each of these \"ideal digits\".\n",
|
||
"\n",
|
"> stop: How would you calculate how similar a particular image is to each of our ideal digits? Remember to step away from this book and jot down some ideas, before you move on! Research shows that recall and understanding improve dramatically when you are *engaged* with the learning process by solving problems, experimenting, and trying new ideas yourself.\n",
|
||
"\n",
|
||
"Here's our sample \"3\":"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAADjElEQVR4nO2aPyh9YRjHP/f4k38L5X+ysohsUpTBhEVMJGUyGAwWg0kGkcFqlMFIyv+kSGIwKWUiUvKn5P/9DXrvcR+He+695957+vV8llPnvvd9n77n2/s8z3tOIBgMothYqQ7Ab6ggAhVEoIIIVBBBeoTf/+cUFHC6qQ4RqCACFUSggghUEIEKIlBBBCqIQAURRKpUPeHh4QGAyclJAI6PjwFYXl4GIBgMEgh8FY59fX0A3N7eAlBTUwNAU1MTAC0tLQmNVR0iCEQ4MYupl7m4uABgYmICgJWVFQDOz8/DxhUVFQFQX18fGvMbxcXFAFxeXsYSkhPay7jBkz1ke3sbgLa2NgBeX18BeH9/B6CzsxOAnZ0dAAoLCwFC+4ZlWXx8fISNXVpa8iK0qFGHCDxxyN3dHQBPT09h98vLywGYmpoCoKys7Nc5LMsKu0p6enrijtMN6hCBJ1nm8/MTgOfn57D75mlnZWVFnOPq6gqAxsZGwM5I2dnZAOzu7gJQW1vrJiQ3aJZxgyd7iHFCTk5OzHNUVlYCdmYyzjDVrYfO+BN1iCApvYzk5eUFgM3NTQCGhoZCzsjMzARgenoagIGBgaTGpg4RJMUhpnIdHh4GYH5+HrDrl++0t7cD0NXVlYzQfqAOESSk25WY+iQ/Px8g1LeYqxMlJSUAlJaWAjAyMgLYvY7pg+LAcYKkCCIxRdjJyUno3tjYGAD7+/t//tcIMjc3B0Bubm6sYWhh5oaUOMSJt7c3wHaPScn9/f2O4w8PDwGoq6uLdUl1iBtSUpg5kZGRAUBFRQUAvb29AKyurgKwsLAQNn5tbQ2IyyGOqEMEvnGIxKTV39JrdXV1QtZVhwh8k2Uke3t7ADQ3NwP2sYDh5uYGgIKCgliX0CzjBt/tIWdnZwAMDg4CP51h6pK8vLyErK8OEfhmDzF1RUdHB2AfIhnMEePp6Slg1y1xoHuIG1K6h1xfXwMwOzvL+Pg48PVpxHfMS+6trS3AE2f8iTpE4KlDzBPf2NgA7I9bHh8fATg4OADg6OgIsM807u/vQ3OkpaUB9qvLmZkZIHFZRaIOEXiaZbq7uwFYXFyMOpDW1lYARkdHAWhoaIh6jijRLOMGTx1iPnIxtUQkzEHy+vo6VVVVXwHFf3jsFnWIG3xTqaYAdYgbVBCBCiJQQQQqiCBSL5O0osAvqEMEKohABRGoIAIVRKCCCP4B/PMI7HrW9/wAAAAASUVORK5CYII=\n",
|
||
"text/plain": [
|
||
"<Figure size 72x72 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"a_3 = stacked_threes[1]\n",
|
||
"show_image(a_3);"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can't just add up the differences between the pixels of this image and the ideal digit. Why not?...\n",
|
||
"\n",
|
"Because some differences will be positive and others negative, and they would largely cancel out, giving a misleadingly small total. Instead, there are two main ways data scientists measure *distance* in this context:\n",
|
||
"\n",
|
||
"- Take the mean of the *absolute value* of differences (_absolute value_ is the function that replaces negative values with positive values). This is called the *mean absolute difference* or *L1 norm*\n",
|
||
"- Take the mean of the *square* of differences (which makes everything positive) and then take the *square root* (which *undoes* the squaring). This is called the *root mean squared error (RMSE)* or *L2 norm*.\n",
|
||
"\n",
|
||
"> important: in this book we generally assume that you have completed high school maths, and remember at least some of it... But everybody forgets some things! It all depends on what you happen to have had reason to practice in the meantime. Perhaps you have forgotten what a _square root_ is, or exactly how they work. No problem! Any time you come across a maths concept that is not explained fully in this book, don't just keep moving on, but instead stop and look it up. Make sure you understand the basic idea of what that maths concept is, how it works, and why we might be using it. One of the best places to refresh your understanding is Khan Academy. For instance, Khan Academy has a great [introduction to square roots](https://www.khanacademy.org/math/algebra/x2f8bb11595b61c86:rational-exponents-radicals/x2f8bb11595b61c86:radicals/v/understanding-square-roots)."
|
||
]
|
||
},
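{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get a feel for how these two measures differ, here is a tiny sketch with made-up difference values: both sets below have the same mean absolute difference, but the second has a larger RMSE, because squaring weights the single big difference more heavily:\n",
"\n",
"```\n",
"diffs_a = tensor([ 2., -2.])\n",
"diffs_b = tensor([ 4.,  0.])\n",
"\n",
"diffs_a.abs().mean(), (diffs_a**2).mean().sqrt()   # tensor(2.), tensor(2.)\n",
"diffs_b.abs().mean(), (diffs_b**2).mean().sqrt()   # tensor(2.), tensor(2.8284)\n",
"```"
]
},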
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's try both of these now:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor(0.1114), tensor(0.2021))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"dist_3_abs = (a_3 - mean3).abs().mean()\n",
|
||
"dist_3_sqr = ((a_3 - mean3)**2).mean().sqrt()\n",
|
||
"dist_3_abs,dist_3_sqr"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor(0.1586), tensor(0.3021))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"dist_7_abs = (a_3 - mean7).abs().mean()\n",
|
||
"dist_7_sqr = ((a_3 - mean7)**2).mean().sqrt()\n",
|
||
"dist_7_abs,dist_7_sqr"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"In both cases, the distance between our `3` and the \"ideal\" `3` is less than the distance to the ideal `7`. So our simple model will give the right prediction in this case."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"> s: Intuitively, the difference between the L1 norm and mean squared error (*MSE*) is that the latter penalizes bigger mistakes more heavily than the former (and is more lenient with small mistakes)."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"PyTorch already provides both of these as *loss functions*. You'll find these inside `torch.nn.functional`, which the PyTorch team recommends importing as `F` (and is available by default under that name in fastai). Here *MSE* stands for *mean squared error*, and *L1* refers to the standard mathematical jargon for *mean absolute value* (in math it's called the *L1 norm*)."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor(0.1586), tensor(0.3021))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"F.l1_loss(a_3.float(),mean7), F.mse_loss(a_3,mean7).sqrt()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> j: When I first came across this \"L1\" thingie, I looked it up to see what on Earth it meant, found on Google that it is a _vector norm_ using _absolute value_, so looked up _vector norm_ and started reading: _Given a vector space V over a field F of the real or complex numbers, a norm on V is a nonnegative-valued any function p: V → \\[0,+∞) with the following properties: For all a ∈ F and all u, v ∈ V, p(u + v) ≤ p(u) + p(v)..._ Then I stopped reading. \"Ugh, I'll never understand math!\" I thought, for the thousandth time. Since then I've learned that every time these complex mathy bits of jargon come up in practice, it turns out I can replace them with a tiny bit of code! Like the _L1 loss_ is just equal to `(a-b).abs().mean()`, where `a` and `b` are tensors. I guess mathy folks just think differently to me... I'll make sure, in this book, every time some mathy jargon comes up, I'll give you the little bit of code it's equal to as well, and explain in common sense terms what's going on."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### NumPy arrays and PyTorch tensors"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"In the above code we completed various mathematical operations on *PyTorch tensors*. If you've done some numeric programming in Python before, you may recognize these as being similar to *NumPy arrays*. [NumPy](https://numpy.org/) is the most widely used library for scientific and numeric programming in Python, and provides very similar functionality and a very similar API to that provided by PyTorch; however, it does not support using the GPU, or calculating gradients, which are both critical for deep learning. Therefore, in this book we will generally use PyTorch tensors instead of NumPy arrays, where possible. (Note that fastai adds some features to NumPy and PyTorch to make them a bit more similar to each other; if any code in this book doesn't work on your computer, it's possible that you forgot to include a line at the start of your notebook such as: `from fastai.vision.all import *`.)\n",
|
||
"\n",
|
||
"So, what's an array? And what's a tensor?\n",
|
||
"\n",
|
||
"And why should you care?"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"A NumPy array is a multidimensional table of data, with all items of the same type. Since that can be any type at all, they could even be arrays of arrays, with the innermost arrays potentially being different sizes — this is called a \"jagged array\". By \"multidimensional table\" we mean, for instance, a list (dimension of one), a table or matrix (dimension of two), a \"table of tables\" or a \"cube\" (dimension of three), and so forth. If the items are all of some simple type such as an integer or a float, then NumPy will store them as a compact C data structure in memory. This is where NumPy shines. NumPy has a wide variety of operators and methods which can run computations on these compact structures at the same speed as optimized C, because they are written in optimized C!\n",
"\n",
"**Arrays and tensors can finish computations many thousands of times faster than using pure Python!**\n",
"\n",
"A PyTorch tensor is nearly the same thing. It, too, is a multidimensional table of data, with all items of the same type. However, the items cannot be just any old type — they have to be a basic numeric type. Therefore, a PyTorch tensor cannot be a jagged array. It is always a regularly shaped multidimensional rectangular structure. The vast majority of methods and operators supported by NumPy on these structures are also supported by PyTorch. But PyTorch has the very big benefit that these structures can live on the GPU, in which case the computation will be optimised for the GPU. And furthermore, PyTorch can automatically calculate derivatives of these operations, including combinations of them. As you'll see, it would be impossible to do deep learning in practice without this capability.\n",
"\n",
"> s: If you don't know what C is, do not worry as you won't need it at all. In a nutshell, it's a low-level (low-level means more similar to the language that computers use internally) language that is very fast compared to Python. To take advantage of its speed while programming in Python, try to avoid as much as possible writing loops, and replace them with commands that work directly on arrays or tensors.\n",
"\n",
"Perhaps the most important new coding skill for a Python programmer to learn is how to effectively use the array/tensor APIs. We will be showing lots more tricks later in this book, but here's a summary of the key things you need to know for now."
|
||
]
|
||
},
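{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, as a rough, illustrative check of the speed claim above, here is the same sum of squares computed with a pure Python loop and with a single tensor operation (the exact timings, and the size of the speedup, will vary from machine to machine):\n",
"\n",
"```\n",
"import time, torch\n",
"\n",
"nums = list(range(1_000_000))\n",
"t = torch.arange(1_000_000)\n",
"\n",
"start = time.perf_counter()\n",
"total_py = sum(n*n for n in nums)       # pure Python loop\n",
"py_secs = time.perf_counter() - start\n",
"\n",
"start = time.perf_counter()\n",
"total_pt = (t*t).sum().item()           # one vectorized tensor operation\n",
"pt_secs = time.perf_counter() - start\n",
"\n",
"assert total_py == total_pt\n",
"print(f'python loop: {py_secs:.4f}s   tensor op: {pt_secs:.4f}s')\n",
"```"
]
},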
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"To create an array or tensor, pass a list (or list of lists, or list of lists of lists, etc.) to `array()` or `tensor()`:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"data = [[1,2,3],[4,5,6]]\n",
|
||
"arr = array (data)\n",
|
||
"tns = tensor(data)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"array([[1, 2, 3],\n",
|
||
" [4, 5, 6]])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"arr # numpy"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([[1, 2, 3],\n",
|
||
" [4, 5, 6]])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tns # pytorch"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"All the operations below are shown on tensors; the syntax and results for NumPy arrays are identical.\n",
|
||
"\n",
|
||
"You can select a row:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([4, 5, 6])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tns[1]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"...or a column, using `:` to indicate *all of the first axis* (we sometimes refer to the dimensions of tensors/arrays as *axes*):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([2, 5])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tns[:,1]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"We can combine these with Python slice syntax (`[start:end]`, where `end` is excluded):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([5, 6])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tns[1,1:3]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can use the standard operators:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([[2, 3, 4],\n",
|
||
" [5, 6, 7]])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tns+1"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Tensors have a type:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"'torch.LongTensor'"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tns.type()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Tensors will automatically change from int to float if needed"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([[1.5000, 3.0000, 4.5000],\n",
|
||
" [6.0000, 7.5000, 9.0000]])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tns*1.5"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Broadcasting and metrics"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"So, is our baseline model any good? To quantify this, we will use a metric. A metric is a number which is calculated from the predictions of our model, and the correct labels in our dataset, and tells us something about how good our model is. For instance, we could use either of the functions we saw in the previous section, mean squared error or mean absolute error, and take the average of them over the whole dataset. However, neither of these are numbers that are very understandable to most people; in practice, we normally use *accuracy* as the metric for classification models.\n",
|
||
"\n",
|
"As we've discussed, we need to use a *validation set* to calculate our metric. That means we need to remove some of the data from training entirely, so it is not seen by the model at all. As it turns out, the creators of the MNIST dataset have already done this for us. Do you remember how there was a whole separate directory called \"valid\"? That's what this directory is for!\n",
|
||
"\n",
|
||
"So to start with, let's create tensors for our threes and sevens from that directory."
|
||
]
|
||
},
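{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we do, as a reminder of what that accuracy metric will look like in code: it is just the proportion of predictions that match the labels. Here is a tiny sketch with made-up tensors:\n",
"\n",
"```\n",
"preds   = tensor([1, 0, 1, 1, 0])    # made-up predictions\n",
"targets = tensor([1, 0, 0, 1, 0])    # made-up labels\n",
"(preds == targets).float().mean()    # tensor(0.8000), i.e. 80% accuracy\n",
"```"
]
},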
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(torch.Size([1010, 28, 28]), torch.Size([1028, 28, 28]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"valid_3_tens = torch.stack([tensor(Image.open(o)) for o in (path/'valid'/'3').ls()])\n",
|
||
"valid_3_tens = valid_3_tens.float()/255\n",
|
||
"valid_7_tens = torch.stack([tensor(Image.open(o)) for o in (path/'valid'/'7').ls()])\n",
|
||
"valid_7_tens = valid_7_tens.float()/255\n",
|
||
"valid_3_tens.shape,valid_7_tens.shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"Now we need a function that decides if a digit is a 3 or a 7, by working out which of our \"ideal digits\" it's closer to. So first we need a function that calculates the distance from an image (or a whole stack of images) to an ideal image. It turns out we can do that very simply, in this case by calculating the mean absolute error:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(0.1114)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"def mnist_distance(a,b): return (a-b).abs().mean((-1,-2))\n",
|
||
"mnist_distance(a_3, mean3)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Something very interesting happens when we run this function on the whole set of threes in the validation set:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor([0.1050, 0.1526, 0.1186, ..., 0.1122, 0.1170, 0.1086]),\n",
|
||
" torch.Size([1010]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"valid_3_dist = mnist_distance(valid_3_tens, mean3)\n",
|
||
"valid_3_dist, valid_3_dist.shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"It's returned the distance for every single image, as a vector (i.e. rank 1 tensor) of length 1010 (the number of threes in our validation set). How did that happen? Have a look again at our function `mnist_distance`, and you'll see we have there `(a-b)`. The magic trick is that PyTorch, when it sees two tensors of different ranks, will `broadcast` the tensor with the smaller rank to have the same size as the one with the larger rank. Then, when PyTorch sees an operation on two tensors of the same rank, it completes the operation on each corresponding element of the two tensors, and returns the tensor result. For instance:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([2, 3, 4])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"tensor([1,2,3]) + tensor([1,1,1])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"So in this case, PyTorch treats `mean3`, a rank 2 tensor representing a single image, as if it was 1010 copies of the same image, and then subtracts each of those copies from each \"three\" in our validation set. What shape would you expect this tensor to have? Try to figure it out yourself before you look at the answer below:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"torch.Size([1010, 28, 28])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"(valid_3_tens-mean3).shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"We are calculating the difference between the \"ideal 3\" and each of the 1010 threes in the validation set, for each of the `28x28` pixel positions, resulting in the shape `1010,28,28`.\n",
"\n",
"There are a couple of really cool things to know about this operation we just did:\n",
|
||
"\n",
|
||
"- PyTorch doesn't *actually* copy `mean3` 1010 times. Instead, it just *pretends* as if it was a tensor of that shape, but doesn't actually allocate any additional memory\n",
|
||
"- It does the whole calculation in C (or, if you're using a GPU, in CUDA, the equivalent of C on the GPU), tens of thousands of times faster than pure Python (up to millions of times faster on a GPU!)\n",
|
||
"\n",
|
||
"This is true of all broadcasting and elementwise operations and functions done in PyTorch. **It's the most important technique for you to know to create efficient PyTorch code.**\n",
|
||
"\n",
|
||
"Next in `mnist_distance` we see `abs()`. You might be able to guess now what this does when applied to a tensor... It applies the method to each individual element in the tensor, and returns a tensor of the results (that is, it applies the method \"elementwise\"). So in this case, we'll get back 1010 absolute values.\n",
|
||
"\n",
|
||
"Finally, our function calls `mean((-1,-2))`. In Python, `-1` refers to the last element, and `-2` refers to the second last. So in this case, this tells PyTorch that we want to take the mean of the last two axes of the tensor. After taking the mean over the last two axes, we are left with just the first axis, which is why our final size was `(1010)`.\n",
|
||
"\n",
|
||
"We'll be learning lots more about broadcasting throughout this book, especially in <<chapter_foundations>>, and will be practising it regularly too.\n",
|
||
"\n",
|
||
"We can use this `mnist_distance` to figure out whether an image is a three or not by using the logic: if the distance between the digit in question and the ideal 3 is less than the distance to the ideal 7, then it's a 3. This function will automatically do broadcasting and be applied elementwise, just like all PyTorch functions and operators."
|
||
]
|
||
},
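{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we define that function, here is a tiny sketch (with arbitrary values) of how `mean((-1,-2))` behaves on a rank-3 tensor: it collapses the last two axes, leaving one number per item along the first axis:\n",
"\n",
"```\n",
"t = tensor([[[1.,2.],[3.,4.]],\n",
"            [[5.,6.],[7.,8.]]])    # shape (2, 2, 2): two tiny 2x2 images\n",
"t.mean((-1,-2))                    # tensor([2.5000, 6.5000]): one mean per image\n",
"```"
]
},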
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def is_3(x): return mnist_distance(x,mean3) < mnist_distance(x,mean7)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's test it on our example case (note also that when we convert the boolean response to a float, we get a `1.0` for true and `0.0` for false):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor(True), tensor(1.))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"is_3(a_3), is_3(a_3).float()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"And testing it on the full validation set of threes:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([True, True, True, ..., True, True, True])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"is_3(valid_3_tens)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"Now we can calculate the accuracy for each of the threes and sevens, by taking the average of that function over all the threes, and of its inverse over all the sevens:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor(0.9168), tensor(0.9854), tensor(0.9511))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"accuracy_3s = is_3(valid_3_tens).float() .mean()\n",
|
||
"accuracy_7s = (1 - is_3(valid_7_tens).float()).mean()\n",
|
||
"\n",
|
||
"accuracy_3s,accuracy_7s,(accuracy_3s+accuracy_7s)/2"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"This looks like a pretty good start! We're getting over 90% accuracy on both threes and sevens.\n",
|
||
"\n",
|
"But let's be honest: threes and sevens are very different-looking digits, and we're only classifying two out of the ten possible digits so far. So we're going to need to do better; perhaps it's time to try some deep learning."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Stochastic Gradient descent (SGD)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Do you remember the way that Arthur Samuel described machine learning, which we quoted in <<chapter_intro>>:\n",
|
||
"\n",
|
||
"> : _Suppose we arrange for some automatic means of testing the effectiveness of any current weight assignment in terms of actual performance and provide a mechanism for altering the weight assignment so as to maximize the performance. We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programed would \"learn\" from its experience._\n",
|
||
"\n",
|
||
"As we discussed, this is the key to allowing us to have something which can get better and better — to learn. But our pixel similarity approach does not really do this. We do not have any kind of weight assignment, or any way of improving based on testing the effectiveness of a weight assignment. In other words, we can't really improve our pixel similarity approach by modifying a set of parameters. In order to take advantage of the power of deep learning, we will first have to represent our task in the way that Arthur Samuel described it.\n",
|
||
"\n",
|
"Instead of trying to find the similarity between an image and an \"ideal image\", we could instead look at each individual pixel, and come up with a set of weights for each pixel, such that the highest weights are associated with those pixels most likely to be black for a particular category. For instance, pixels towards the bottom right are not very likely to be activated for a seven, so they should have a low weight for a seven, but are more likely to be activated for an eight, so they should have a high weight for an eight. This can be represented as a function for each possible category, for instance the probability of being the number eight:\n",
|
||
"\n",
|
||
"```\n",
|
"def pr_eight(x,w): return (x*w).sum()\n",
|
||
"```"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
"Here we are assuming that `x` is the image, represented as a vector: in other words, with all of the rows stacked up end to end into a single long line. And we are assuming that the weights are a vector `w`. If we have this function, then we just need some way to update the weights to make them a little bit better. With such an approach, we can repeat that step a number of times, making the weights better and better, until they are as good as we can make them.\n",
"\n",
"We want to find the specific values for the vector `w` that cause our function to be high for those images that are actually eights, and low for those images which are not. Searching for the best vector `w` is a way to search for the best function for recognising eights. (Because we are not yet using a deep neural network, we are limited by what our function can actually do — we are going to fix that constraint later in this chapter.)\n",
|
||
"\n",
|
||
"To be more specific, here are the steps that we are going to require, to turn this function into a machine learning classifier:\n",
|
||
"\n",
|
||
"1. *Initialize* the weights\n",
|
||
"1. For each image, use these weights to *predict* whether it appears to be a three or a seven\n",
|
||
"1. Based on these predictions, calculate how good the model is (its *loss*)\n",
|
||
"1. Calculate the *gradient*, which measures for each weight, how changing that weight would change the loss\n",
|
||
"1. *Step* all weights based on that calculation\n",
|
||
"1. Go back to the second step, and *repeat* the process\n",
|
||
"1. ...until you decide to *stop* the training process (for instance because the model is good enough, or you don't want to wait any longer)"
|
||
]
|
||
},
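{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make these steps concrete before we apply them to images, here is a minimal runnable sketch of the same loop on a toy problem, fitting a single weight with PyTorch's automatic gradients (the toy data, the learning rate of 0.1, and the 100 steps are all arbitrary choices for illustration):\n",
"\n",
"```\n",
"x = torch.linspace(0, 1, 20)\n",
"y = 3*x                                   # the right answer is w=3\n",
"w = torch.zeros(1, requires_grad=True)    # 1. initialize\n",
"\n",
"for i in range(100):                      # 6./7. repeat, then stop\n",
"    preds = x*w                           # 2. predict\n",
"    loss = ((preds - y)**2).mean()        # 3. loss\n",
"    loss.backward()                       # 4. gradient\n",
"    with torch.no_grad():\n",
"        w -= 0.1 * w.grad                 # 5. step\n",
"        w.grad.zero_()\n",
"\n",
"w                                         # ends up very close to 3.0\n",
"```"
]
},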
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"hide_input": true
|
||
},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/svg+xml": [
|
||
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n",
|
||
"<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n",
|
||
" \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n",
|
||
"<!-- Generated by graphviz version 2.40.1 (20161225.0304)\n",
|
||
" -->\n",
|
||
"<!-- Title: G Pages: 1 -->\n",
|
||
"<svg width=\"591pt\" height=\"78pt\"\n",
|
||
" viewBox=\"0.00 0.00 591.49 78.00\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n",
|
||
"<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 74)\">\n",
|
||
"<title>G</title>\n",
|
||
"<polygon fill=\"#ffffff\" stroke=\"transparent\" points=\"-4,4 -4,-74 587.4867,-74 587.4867,4 -4,4\"/>\n",
|
||
"<!-- init -->\n",
|
||
"<g id=\"node1\" class=\"node\">\n",
|
||
"<title>init</title>\n",
|
||
"<ellipse fill=\"none\" stroke=\"#000000\" cx=\"27\" cy=\"-18\" rx=\"27\" ry=\"18\"/>\n",
|
||
"<text text-anchor=\"middle\" x=\"27\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\" fill=\"#000000\">init</text>\n",
|
||
"</g>\n",
|
||
"<!-- predict -->\n",
|
||
"<g id=\"node2\" class=\"node\">\n",
|
||
"<title>predict</title>\n",
|
||
"<ellipse fill=\"none\" stroke=\"#000000\" cx=\"126.0969\" cy=\"-18\" rx=\"35.194\" ry=\"18\"/>\n",
|
||
"<text text-anchor=\"middle\" x=\"126.0969\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\" fill=\"#000000\">predict</text>\n",
|
||
"</g>\n",
|
||
"<!-- init->predict -->\n",
|
||
"<g id=\"edge1\" class=\"edge\">\n",
|
||
"<title>init->predict</title>\n",
|
||
"<path fill=\"none\" stroke=\"#000000\" d=\"M54.0787,-18C62.3227,-18 71.6196,-18 80.7269,-18\"/>\n",
|
||
"<polygon fill=\"#000000\" stroke=\"#000000\" points=\"80.8626,-21.5001 90.8626,-18 80.8625,-14.5001 80.8626,-21.5001\"/>\n",
|
||
"</g>\n",
|
||
"<!-- loss -->\n",
|
||
"<g id=\"node3\" class=\"node\">\n",
|
||
"<title>loss</title>\n",
|
||
"<ellipse fill=\"none\" stroke=\"#000000\" cx=\"225.1938\" cy=\"-52\" rx=\"27\" ry=\"18\"/>\n",
|
||
"<text text-anchor=\"middle\" x=\"225.1938\" y=\"-48.3\" font-family=\"Times,serif\" font-size=\"14.00\" fill=\"#000000\">loss</text>\n",
|
||
"</g>\n",
|
||
"<!-- predict->loss -->\n",
|
||
"<g id=\"edge2\" class=\"edge\">\n",
|
||
"<title>predict->loss</title>\n",
|
||
"<path fill=\"none\" stroke=\"#000000\" d=\"M155.2932,-28.0172C166.6224,-31.9043 179.6698,-36.3808 191.4018,-40.406\"/>\n",
|
||
"<polygon fill=\"#000000\" stroke=\"#000000\" points=\"190.2859,-43.7234 200.8806,-43.6582 192.5577,-37.1023 190.2859,-43.7234\"/>\n",
|
||
"</g>\n",
|
||
"<!-- gradient -->\n",
|
||
"<g id=\"node4\" class=\"node\">\n",
|
||
"<title>gradient</title>\n",
|
||
"<ellipse fill=\"none\" stroke=\"#000000\" cx=\"361.8403\" cy=\"-52\" rx=\"39.7935\" ry=\"18\"/>\n",
|
||
"<text text-anchor=\"middle\" x=\"361.8403\" y=\"-48.3\" font-family=\"Times,serif\" font-size=\"14.00\" fill=\"#000000\">gradient</text>\n",
|
||
"</g>\n",
|
||
"<!-- loss->gradient -->\n",
|
||
"<g id=\"edge3\" class=\"edge\">\n",
|
||
"<title>loss->gradient</title>\n",
|
||
"<path fill=\"none\" stroke=\"#000000\" d=\"M252.5178,-52C269.4967,-52 291.836,-52 311.8929,-52\"/>\n",
|
||
"<polygon fill=\"#000000\" stroke=\"#000000\" points=\"312.1329,-55.5001 322.1329,-52 312.1328,-48.5001 312.1329,-55.5001\"/>\n",
|
||
"</g>\n",
|
||
"<!-- step -->\n",
|
||
"<g id=\"node5\" class=\"node\">\n",
|
||
"<title>step</title>\n",
|
||
"<ellipse fill=\"none\" stroke=\"#000000\" cx=\"465.4867\" cy=\"-18\" rx=\"27\" ry=\"18\"/>\n",
|
||
"<text text-anchor=\"middle\" x=\"465.4867\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\" fill=\"#000000\">step</text>\n",
|
||
"</g>\n",
|
||
"<!-- gradient->step -->\n",
|
||
"<g id=\"edge4\" class=\"edge\">\n",
|
||
"<title>gradient->step</title>\n",
|
||
"<path fill=\"none\" stroke=\"#000000\" d=\"M394.0665,-41.4286C405.9515,-37.5298 419.4492,-33.1021 431.4862,-29.1535\"/>\n",
|
||
"<polygon fill=\"#000000\" stroke=\"#000000\" points=\"432.7754,-32.4142 441.1862,-25.9715 430.5935,-25.7629 432.7754,-32.4142\"/>\n",
|
||
"</g>\n",
|
||
"<!-- step->predict -->\n",
|
||
"<g id=\"edge6\" class=\"edge\">\n",
|
||
"<title>step->predict</title>\n",
|
||
"<path fill=\"none\" stroke=\"#000000\" d=\"M438.4132,-18C380.3272,-18 243.2155,-18 171.5401,-18\"/>\n",
|
||
"<polygon fill=\"#000000\" stroke=\"#000000\" points=\"171.4571,-14.5001 161.4571,-18 171.4571,-21.5001 171.4571,-14.5001\"/>\n",
|
||
"<text text-anchor=\"middle\" x=\"287.1938\" y=\"-21.8\" font-family=\"Times,serif\" font-size=\"14.00\" fill=\"#000000\">repeat</text>\n",
|
||
"</g>\n",
|
||
"<!-- stop -->\n",
|
||
"<g id=\"node6\" class=\"node\">\n",
|
||
"<title>stop</title>\n",
|
||
"<ellipse fill=\"none\" stroke=\"#000000\" cx=\"556.4867\" cy=\"-18\" rx=\"27\" ry=\"18\"/>\n",
|
||
"<text text-anchor=\"middle\" x=\"556.4867\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\" fill=\"#000000\">stop</text>\n",
|
||
"</g>\n",
|
||
"<!-- step->stop -->\n",
|
||
"<g id=\"edge5\" class=\"edge\">\n",
|
||
"<title>step->stop</title>\n",
|
||
"<path fill=\"none\" stroke=\"#000000\" d=\"M492.7897,-18C501.068,-18 510.3085,-18 519.1272,-18\"/>\n",
|
||
"<polygon fill=\"#000000\" stroke=\"#000000\" points=\"519.203,-21.5001 529.203,-18 519.203,-14.5001 519.203,-21.5001\"/>\n",
|
||
"</g>\n",
|
||
"</g>\n",
|
||
"</svg>\n"
|
||
],
|
||
"text/plain": [
|
||
"<graphviz.files.Source at 0x7f24cd580910>"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"#id gradient_descent\n",
|
||
"#caption The gradient descent process\n",
|
||
"#alt Graph showing the steps for Gradient Descent\n",
|
||
"gv('''\n",
|
||
"init->predict->loss->gradient->step->stop\n",
|
||
"step->predict[label=repeat]\n",
|
||
"''')"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"These seven steps are the key to the training of all deep learning models, and we'll be using the seven terms in the above diagram throughout this book. That deep learning turns out to rely entirely on these steps is extremely surprising and counter-intuitive. It's amazing that this process can solve such complex problems. But, as you'll see, it really does!\n",
|
||
"\n",
|
||
"There are many different ways to do each of these seven steps, and we will be learning about them throughout the rest of this book. These are the details which make a big difference for deep learning practitioners. But it turns out that the general approach to each one generally follows some basic principles:\n",
|
||
"\n",
|
||
"- **Initialize**: we initialise the weights to random values. This may sound surprising. There are certainly other choices we could make, such as initialising them to the percentage of times that that pixel is activated for that category. But since we already know that we have a routine to improve these weights, it turns out that just starting with random weights works perfectly well\n",
|
||
"- **Loss**: This is the thing Arthur Samuel refered to: \"*testing the effectiveness of any current weight assignment in terms of actual performance*\". We need some function that will return a number that is small if the performance of the model is good, and vice versa (the standard approach is to treat a small loss as good, and a large loss as bad, although this is just a convention)\n",
|
||
"- **Step**: A simple way to figure out whether a weight should be increased a bit, or decreased a bit, would be just to try it. Increase the weight by a small amount, and see if the loss goes up or down. Once you find the correct direction, you could then change that amount by a bit more, and a bit less, until you find an amount which works well. However, this is slow! As we will see, the magic of calculus allows us to directly figure out which direction, and roughly how much, to change each weight, without having to try all these small changes, by calculating *gradients*. This is just a performance optimisation, we would get exactly the same results by using the slower manual process as well\n",
|
||
"- **Stop**: We have already discussed how to choose how many epochs to train a model for. This is where that decision is applied. For our digit classifier, we would keep training until the accuracy of the model started getting worse, or we ran out of time."
|
||
]
|
||
},
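{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the shape of this loop concrete, here is a tiny, self-contained sketch of the seven steps in code. Everything in it is made up purely for illustration: the *model* has just one weight, the pretend loss function simply prefers a weight of 3, and the slope is estimated by the slow 'just try a small change' approach described above. We will replace that slow approach with real gradients over the next few sections."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A toy sketch of the seven-step loop, with made-up numbers\n",
"def pretend_loss(w): return (w-3)**2    # pretend that a weight of 3 is best\n",
"\n",
"w = 0.0                                 # 1. initialize\n",
"for i in range(20):                     # 6. repeat...\n",
"    current = pretend_loss(w)           # 2 & 3. measure how good the current weight is\n",
"    nudged  = pretend_loss(w + 0.01)    # 4. estimate the slope by trying a small change\n",
"    slope   = (nudged-current)/0.01\n",
"    w -= 0.1*slope                      # 5. step the weight in the downhill direction\n",
"w                                       # 7. stop after a fixed number of steps; w ends up near 3"
]
},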
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's look at a picture of what this would look like. First we will define a very simple function, the quadratic — let's pretend that this is our loss function:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def f(x): return x**2"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Here is a graph of that function:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYIAAAEGCAYAAABo25JHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deXxU1f3/8dcn+waBkIQlCQQI+w5hX0RQixsoiooLUlDEpWq3b7W2tqV7bbUubS0iiqiIIi4oLlSkIHtYw04gZGFLSEjIvp7fHzP4S2MSkpCbO5n5PB+PeTCTezL3nQvkM/eec88RYwxKKaU8l5fdAZRSStlLC4FSSnk4LQRKKeXhtBAopZSH00KglFIezsfuAA0VHh5uYmNj7Y6hlFItyo4dO84ZYyJq2tbiCkFsbCwJCQl2x1BKqRZFRFJq26aXhpRSysNpIVBKKQ+nhUAppTycFgKllPJwWgiUUsrDWV4IRMRbRHaJyCc1bPMXkeUikiQiW0Uk1uo8Siml/ldznBE8BhysZdtc4LwxJg54DvhzM+RRSilVhaWFQESigeuBRbU0mQYscT5fAUwWEbEiS1JGPgtWHaC0vNKKt1dKKUs9/5+jbD2eZcl7W31G8Hfg/4DafvtGAWkAxphyIBdoV72RiMwTkQQRScjMzGxUkLTsQhZvTGbtobON+n6llLJLalYhz/3nCFuTsy15f8sKgYjcAGQYY3bU1ayGr31npRxjzEJjTLwxJj4iosY7pC9pQs8IOrQO4J3taY36fqWUssu7CWl4Cdw6LNqS97fyjGAsMFVETgDvAJNE5M1qbdKBGAAR8QFCAUtKnreXMCM+mvVHMjmVU2TFLpRSqsmVV1SyYkc6E3pG0KlNoCX7sKwQGGOeNMZEG2NigTuAtcaYu6s1+xi41/n8Vmcby9bOnDEshkoDK3akW7ULpZRqUuuPZnLmQjF3DI+xbB/Nfh+BiCwQkanOl68C7UQkCfgR8ISV++7cLoixce14NyGNykpdq1kp5fqWb0+jXbAfk3q3t2wfzVIIjDHrjDE3OJ8/bYz52Pm82BgzwxgTZ4wZYYw5bnWW2+JjSD9fxKZj1vS+K6VUU8nIK+argxncMiwaPx/rfl173J3F3+vXgdBAX97Znmp3FKWUqtPKnScprzTcFm/dZSHwwEIQ4OvNzUOi+HL/Wc4XlNodRymlamSM4d3tacR3aUtcZIil+/K4QgBw+/AYSisqWbnrpN1RlFKqRtuSszl+roDbLOwkvsgjC0Gfjq0ZHNOGZdtSsXCQklJKNdqybam0CvDhxoGdLN+XRxYCgJkjYkjKyGdHynm7oyil1P/IKSxl9b4z3DQ4ikA/b8v357GF4IaBnQjx9+HtbdpprJRyLSt3nqS0vJKZIzo3y/48thAE+/swbXAnPt17mtzCMrvjKKUU4Ogkfmd7KoNi2tC3U+tm2afHFgKAmSM6U1JeyYe7tdNYKeUadqae58jZfGY2QyfxRR5dCPpHhTIgKlQ7jZVSLmPZtjSC/by5cZD1ncQXeXQhAMdZwaEzeexKy7E7ilLKw+UWlfHJ3lNMGxJFsL9Ps+3X4wvB1MGdCPbzZtlW7TRWStnrw10nKS6rZObw5ukkvsjjC0GIvw/ThkSxau8p7TRWStnGGMNbW1MYFB3KgOjQZt23xxcCgDtHdKa4rJKVu3R6aqWUPRJSHJ3Ed45s3rMB0EIAODqNB8e04a2t2mmslLLHW1tSaOXv06ydxBdpIXC6c2RnkjLy2WbRmqBKKVWb7IJSVieeYfrQKIL8mq+T+CItBE43DuxEqwAf3tJOY6VUM1uxI43SikruHNnFlv1buXh9gIhsE5E9IrJfRH5TQ5vZIpIpIrudj/usynMpgX7e3DI0ms/3nSErv8SuGEopD1NZaVi2zTHddK8OrWzJYOUZQQkwyRgzCBgMTBGRUTW0W26MGex8LLIwzyXdNbIzpRWVvJugncZKqeax6VgWyecKbOkkvsjKxeuNMSbf+dLX+XDpntge7VsxsmsYb29LoULXNFZKNYOlW07QNsiX6wZ0tC2DpX0EIuItIruBDGCNMWZrDc1uEZG9IrJCRGqcXENE5olIgogkZGZmWhmZe0Z3IS27iPVHrN2PUkqdzi1izYGz3DY8hgBf66ebro2lhcAYU2GMGQxEAyNEpH+1JquAWGPMQOA/wJJa3mehMSbeGBMfERFhZWSu6duBiFb+LN2SYul+lFJq2dZUDHC3TZ3EFzXLqCFjTA6wDphS7etZxpiLPbOvAMOaI09d/Hy8mDk8hq8PZ5CWXWh3HKWUmyotr2TZ9jSu7BVJTFiQrVmsHDUUISJtnM8DgauAQ9XaVL0oNhU4aFWehpg5sjNeIjqUVCllmS8PnCEzr4R7Rtl7NgDWnhF0BL4Wkb3Adhx9BJ+IyAIRmeps86hzaOke4FFgtoV56q1jaCBX9Ynk3YQ0issq7I6jlHJDSzenEBMWyISe1l7urg/LbmEzxuwFhtTw9aerPH8SeNKqDJfjnlGxfLH/LJ/tO83NQ6LtjqOUciNHzuaxNTmbJ67tjbeX2B1H7yyuzZju7egWEcySTdpprJRqWm9sPoGfjxe3xTffKmR10UJQCy8vYdaoLuxOy2GPLlqjlGoiF4rLWLnzJFMHdSIs2M/uOIAWgjrdMiyaYD9vlmw+YXcUpZSbWJGQTmFpBbPHxNod5VtaCOrQKsCXW4ZF88me0zr/kFLqslVWGpZuSWFo5zb0j2rexWfqooXgEmaN7kJpRSXvbE+zO4pSqoVbfzST5HMF3OtCZwOgheCS4iJbMS4unDe3pFBeUWl3HKVUC/bG5hTCQ/y5tr998wrVRAtBPcwa3YXTucWsOXDW7ihKqRYqJauArw9ncOfIzvj5uNavXtdK46Im92lPdNtAXtt0wu4oSqkWasmmFLxFuMvG6aZro4WgHry9hFmju7AtOZv9p3LtjqOUamHyS8p5LyGN6wZ0pH3rALvjfIcWgnq6Pb4zgb7evL7xhN1RlFItzPs70skrKef7Y2PtjlIjLQT1FBrkyy3DovhozykdSqqUqrfKSsPrm04wOKYNQzq3tTtOjbQQNMDsMbGUllfyts5KqpSqp/8ecQwZddWzAdBC0CBxka0Y3yOcpVtSKC3XoaRKqUtbvDGZyFauN2S0Ki0EDTRnbFcy8kr4bN9pu6MopVxcUkYeG46e455RXVxuyGhVrpvMRV3RM4Ju4cEs/iYZY3SBe6VU7V7b6JhldKYLDhmtSgtBA3l5Cd8fG8ue9Fx2pJy3O45SykWdLyjl/Z3p3Dw4ivAQf7vj1MnKpSoDRGSbiOxxrkL2mxra+IvIchFJEpGtIhJrVZ6mdMuwaEIDfVm0IdnuKEopF/X2tlSKyyqZO76r3VEuycozghJgkjFmEDAYmCIio6q1mQucN8bEAc8Bf7YwT5MJ8vPhzpGd+fLAGVKzdIF7pdT/Ki2vZMmmE4zvEU7P9q3sjnNJlhUC45DvfOnrfFS/qD4NWOJ8vgKYLCL2r9tWD
/eOjsVLhNc26VmBUup/fbL3FBl5Jdw3vpvdUerF0j4CEfEWkd1ABo7F67dWaxIFpAEYY8qBXKBdDe8zT0QSRCQhMzPTysj11iE0gBsGduTd7WlcKC6zO45SykUYY3j1m2R6RIYwoUe43XHqxdJCYIypMMYMBqKBESLSv1qTmj79f2cojjFmoTEm3hgTHxERYUXURpk7rhsFpRUs36ZrFSilHLYcz2b/qQvMGdeVFnKBo3lGDRljcoB1wJRqm9KBGAAR8QFCgezmyNQUBkSHMqJrGK9vOkGZrlWglAIWbThOWLAfNw+JsjtKvVk5aihCRNo4nwcCVwGHqjX7GLjX+fxWYK1pYYPz543vxsmcIlYn6g1mSnm6pIw8vjqUwazRXQjw9bY7Tr1ZeUbQEfhaRPYC23H0EXwiIgtEZKqzzatAOxFJAn4EPGFhHktM6h1J94hgFq4/rjeYKeXhFm1Ixt/Hi3tGdbE7SoP4WPXGxpi9wJAavv50lefFwAyrMjQHLy/h/vHdeGJlIpuPZTEmrmV0DimlmlZGXjErd57ktuHRtHPxG8iq0zuLm8BNQ6IID/Fj4YbjdkdRStnkjU0plFVWMndcyxgyWpUWgiYQ4OvNvaNjWXc4k8Nn8uyOo5RqZoWl5SzdksI1fdvTNTzY7jgNpoWgidw9qgsBvl68omcFSnmc9xLSyS0qY96Elnc2AFoImkzbYD9uj4/ho90nOZ1bZHccpVQzKa+o5JUNxxnWpS3DuoTZHadRtBA0ofvGd6PSwOJvdNoJpTzFp4mnST9fxPwrutsdpdG0EDShmLAgbhjYkbe3ppJbqNNOKOXujDG8/N/jxEWGMLl3pN1xGk0LQRN7YEJ3CkoreHNrit1RlFIW+++RTA6evsC8Cd3w8moZ00nURAtBE+vbqTVX9IzgtY3JFJdV2B1HKWWhl/97jA6tA7hpcMuZTqImWggsMP+K7pzLL2XFjnS7oyilLLI7LYctx7OZO66rS69HXB8tO72LGtUtjEExbVi4/jjlOhmdUm7p5XXHaBXg4/LrEdeHFgILiAgPTexOanYhn+pkdEq5naSMPD7ff4bZY2IJ8bdspp5mo4XAIlf3aU+PyBD+te6YTkanlJv517rjBPp68/2xrr8ecX1oIbCIl5fw0JXdOXQmj7WHMuyOo5RqImnZhXy4+yQzR3QmLNjP7jhNQguBhW4c2InotoG89HWSnhUo5SZe2XAcL4H7J7jH2QBoIbCUj7cXD1zRnV2pjtEFSqmWLSOvmHe2pzF9SDQdQwPtjtNkrFyhLEZEvhaRgyKyX0Qeq6HNRBHJFZHdzsfTNb1XSzZjWDThIf784+sku6MopS7T4m9OUF5RyfyJLXc6iZpYeUZQDvzYGNMHGAU8LCJ9a2i3wRgz2PlYYGEeWwT4ejNvQle+STrHrtTzdsdRSjXS+YJSlm4+wXUDOrbIqabrYlkhMMacNsbsdD7PAw4CLfv2u0a6a2QX2gb58uJaPStQqqV6bWMyBaUVPDIpzu4oTa5Z+ghEJBbHspVba9g8WkT2iMhnItKvlu+fJyIJIpKQmZlpYVJrBPv7MHdcV9YeymDfyVy74yilGuhCcRmvbTrBlH4d6N2htd1xmpzlhUBEQoD3gceNMReqbd4JdDHGDAJeBD6s6T2MMQuNMfHGmPiIiAhrA1tk1phYWgf48OLao3ZHUUo10JKNJ8grLnfLswGwuBCIiC+OIvCWMWZl9e3GmAvGmHzn89WAr4i45ervrQN8mT22K1/sP8vB09XroVLKVeWXlPPqxmQm946kf1So3XEsYeWoIQFeBQ4aY56tpU0HZztEZIQzT5ZVmew2Z2wswX7evKQjiJRqMd7ckkJOYRk/mNzD7iiWsXKSjLHAPUCiiOx2fu3nQGcAY8zLwK3AgyJSDhQBdxg3vvOqTZAfs8bE8vJ/j3H0bB492reyO5JSqg6FpeW8sv4443uEMzimjd1xLGNZITDGfAPUuVKDMeYl4CWrMrii+8d3Y8mmEzz/1VFeunOo3XGUUnVYujmFrIJSHr/Kfc8GQO8sbnZhwX7cOyaWTxNPc+Rsnt1xlFK1KCwt59/Os4GWuih9fWkhsMH947sR5OvNC1/pCCKlXNUbm1PILijl8at62h3FcloIbKBnBUq5toKSchauP86EnhEM69LW7jiW00Jgk4tnBc/rWYFSLuf/nw24d9/ARVoIbNI22I/ZY2NZnXiaQ2f0vgKlXEV+STkL1x9jQs8IhnZ2/7MB0EJgq/vHdyPEz4fn1hyxO4pSyum1b5I5X1jGj692/76Bi7QQ2KhNkB9zxzvuNk5M1zmIlLJbbmEZCzcc56o+7RnkxvcNVKeFwGZzxnWlTZAvf1tz2O4oSnm8VzYcJ6+4nB950NkAaCGwXesAXx6Y0J11hzPZkaKrmClll6z8EhZvTOb6gR3p28n9ZhitixYCF3DvmC6Eh/jxty+1r0Apu/x7/XGKyyr4oYeMFKpKC4ELCPLz4aGJcWw6lsXGpHN2x1HK45zJLWbJphPcNDiKuEjPmwOszkIgIq1F5DuLc4rIQOsieaY7R3amU2gAf/n8EG48755SLumFtUepNIYfeljfwEW1FgIRuQ04BLzvXHx+eJXNr1sdzNME+Hrz+NU92ZOeyxf7z9gdRymPkXyugOXb07hzRGdiwoLsjmOLus4Ifg4MM8YMBr4PLBWR6c5tdc4qqhpn+pAoukcE89cvj1BeUWl3HKU8wrNrjuDn7cUjkzyvb+CiugqBtzHmNIAxZhtwJfCUiDwK6LULC/h4e/GTa3qRlJHPyl0n7Y6jlNvbdzKXVXtOMWdcLBGt/O2OY5u6CkFe1f4BZ1GYCEwDalxkXl2+Kf07MDA6lL+vOUJxWYXdcZRya3/98jChgb7Mm/CdrlCPUlcheJBql4CMMXnAFGDOpd5YRGJE5GsROejsY3ishjYiIi+ISJKI7BURj1+pRUT42ZTenMot5s0tKXbHUcptbT6WxbrDmTw4sTuhgb52x7FVrYXAGLMHiAUQkclVvl5mjHmrHu9dDvzYGNMHGAU8LCJ9q7W5FujhfMwD/tWg9G5qbFw443uE8+LaJHILy+yOo5Tbqaw0/PGzg3QMDWD2mFi749juUvcRXCEiY3FcEmoQY8xpY8xO5/M84CAQVa3ZNOAN47AFaCMiHRu6L3f0xLW9uVBcxj//qwvdK9XUPk08zd70XH58TS8CfL3tjmO7uoaP/grwB/4D+InI043diYjEAkOArdU2RQFpVV6n891igYjME5EEEUnIzMxsbIwWpV+nUG4eHMVrG09wKqfI7jhKuY3S8kqe+eIwvTu04uYh3/l145HqujT0G+Aw8GvgsDFmQWN2ICIhwPvA48aY6hPv1zQM9TsjkowxC40x8caY+IiIiMbEaJF+dI3j5pZndZpqpZrM21tTSM0u5GfX9sbbS0fCw6UvDbUyxvwZaNQ91yLii6MIvGWMWVlDk3QgpsrraOBUY/bljqLbBjF7TCzv70zn4GldvEapy3WhuIwX1iYxuls7Jvb0nA+Vl3KpQrCv2p/1JiIC
vAocNMY8W0uzj4FZztFDo4Dci/cuKIeHJ8YRGujL7z89qFNPKHWZ/vn1MbILSvn5dX1w/IpSYGFnMTAWuAeYJCK7nY/rRGS+iMx3tlkNHAeSgFeAhxqxH7cWGuTLo5N68E3SOdYd9oz+EaWskJZdyOKNyUwfEsWA6FC747gUn9o2VOssfkFEnm5IP4Ex5hsuMRWFcXzEfbi+7+mp7h7VhaVbUvj96oOM7xGOj7dOGqtUQ/3li8N4Cfzke73sjuJyLO8sVpfPz8eLJ67tTVJGPsu2p136G5RS/2NX6nlW7TnFvPHd6NQm0O44LudSHy1bA6uAkKpfFJGJVgVSNbumb3tGdA3j72uOcKFYbzJTqr6MMfzu04NEtPLngSs8eyqJ2tRZCIwxzwHvAoHODt1AEXkR+GOzpFPfEhF+eX1fsgpK+cdavclMqfr6ZO9pdqSc58dX9yTYv9ar4R6tPhebR+IY4rkJ2I5jeOdYK0Opmg2IDmXGsGgWb0wm+VyB3XGUcnlFpRX86bND9OvUmhnxMZf+Bg9Vn0JQBhQBgUAAkGyM0cnybfLTKb3w8/bi958etDuKUi5v4frjnMwp4lc39tObx+pQn0KwHUchGA6MA2aKyApLU6laRbYK4JFJPfjPwbNsOKrDSZWqzamcIv713ySuH9iREV3D7I7j0upTCOYaY552zjp6xhgzDfjI6mCqdnPGxdKlXRALVh3QlcyUqsWfPjuEMfDktb3tjuLyLlkIjDEJNXxtqTVxVH34+3jz1HV9OJqRz1Jds0Cp79h+IpuP95zigSu6E93WM9chbgi9M6mFurpve8b3COfZNUc4l19idxylXEZ5RSW//HAfnUIDmH9FN7vjtAhaCFooEeHXU/tRXFbBnz87ZHccpVzGW1tTOXQmj1/e0JcgPx0uWh9aCFqw7hEhzBnXlfd2pLMz9bzdcZSy3bn8Ev725WHGxYUzpX8Hu+O0GFoIWrhHJ/WgfWt/fvXRfioqdXZS5dme+fwwhaUV/HpqX51dtAG0ELRwwf4+PHV9XxJP5rJsW6rdcZSyza7U8yxPSGPuuK7ERTZqCRWPpYXADdw4sCNjurfjL58f0o5j5ZHKKyp56oN9dGgdwA8m97A7ToujhcANiAgLpvWnqKyCP+gdx8oDLdmcwoHTF/jVjX0J0fmEGkwLgZuIiwzhgQndWbnrJJuOnbM7jlLN5kxuMc9+eZiJvSK0g7iRLCsEIrJYRDJEpMZlLkVkoojkVlm97GmrsniKRybF0TksiF9+uI/Scr3jWHmG335ygPJKw4Kp/bWDuJGsPCN4HZhyiTYbjDGDnQ9d+OYyBfh685tp/TiWWcDC9cfsjqOU5dYdzuDTxNP8YFIcndvpHcSNZVkhMMasB7Kten9Vsyt7RXL9gI68sDaJ45n5dsdRyjKFpeX84sN9xEWGcP8EvYP4ctjdRzBaRPaIyGci0q+2RiIyT0QSRCQhM1Nn3LyUX93YF38fL37+QSKOZaGVcj/PfnmE9PNF/HH6APx9vO2O06LZWQh2Al2MMYOAF4EPa2tojFlojIk3xsRHREQ0W8CWKrJ1AD+/rg9bjmfzXkK63XGUanKJ6bks3pjMnSM7MzxWp5i+XLYVAmPMBWNMvvP5asBXRMLtyuNubo+PYURsGL9ffZDMPL23QLmP8opKnli5l/AQf342RaeYbgq2FQIR6SDOLn4RGeHMkmVXHnfj5SX8YfoAikor+PWq/XbHUarJLPommf2nLvCbqf0IDfS1O45bsHL46DJgM9BLRNJFZK6IzBeR+c4mtwL7RGQP8AJwh9EL2k0qLjKERyfH8ene03y+74zdcZS6bMcy83l2zRGu6dte7xloQpbdgmeMmXmJ7S8BL1m1f+XwwBXdWZ14hl9+tI9R3cJoE+RndySlGqWi0vB/K/YS6OvN727Sewaakt2jhpTFfL29eGbGQM4XlLLgkwN2x1Gq0d7YfIIdKed5+oa+RLYOsDuOW9FC4AH6dQrlwYndWbnzJF8fyrA7jlINlppVyF8+d0wjMX1olN1x3I4WAg/xyKQ4ekSG8OTKRHILy+yOo1S9VVYafrpiD95ewh9uHqCXhCyghcBD+Pt487fbBpGZX6KjiFSL8tqmE2xNzubpG/vSqU2g3XHckhYCDzIwug0PXxnHB7tO8vm+03bHUeqSkjLy+cvnh5jcO5IZw6LtjuO2tBB4mEeujKNfp9Y89cE+XcRGubTyikp+/N4eAv28+eN0vSRkJS0EHsbPx4tnbxtMXnE5T+lcRMqF/WvdMfak5fC7m/rrKCGLaSHwQL06tOLH1/Tki/1ndS4i5ZL2pOXw/FdHuXFQJ24Y2MnuOG5PC4GHum98N0Z1C+PXq/aTklVgdxylvlVYWs7jy3cT2cqf303rb3ccj6CFwEN5ewnP3jYYHy/h8eW7Ka/QFc2Ua/jtJwc5kVXA324bTGiQziXUHLQQeLBObQL5/c0D2JWaw4trk+yOoxRrDpxl2bZU5k3oxuju7eyO4zG0EHi4Gwd1YvqQKF5ce5RtybqgnLLPmdxi/m/FHvp2bM2Pru5pdxyPooVAseCm/nQOC+Kxd3aRU1hqdxzlgSoqDY+9s4uS8kpevHOIrjjWzLQQKEL8fXhx5lDO5Zfw0xV7dUipanYvrU1ia3I2C6b1p3tEiN1xPI4WAgXAgOhQfjalN2sOnGXplhS74ygPsi05m+e/OsLNQ6K4RSeUs4WVC9MsFpEMEdlXy3YRkRdEJElE9orIUKuyqPqZO64rk3pH8rtPDpKYnmt3HOUBsvJLeHTZLjqHBfFbXWPANlaeEbwOTKlj+7VAD+djHvAvC7OoehAR/jpjEO1C/Hjo7R06S6myVEWl4fHlu8kuLOUfdw0lxN+ydbLUJVhWCIwx64G6hqFMA94wDluANiLS0ao8qn7Cgv146c6hnM4p5icr9mh/gbLMi2uPsuHoOX4ztR/9OoXaHcej2dlHEAWkVXmd7vyastmwLm158ro+rDlwllc2HLc7jnJD3xw9x/NfHWX6kCjuGB5jdxyPZ2chqOliYI0fP0VknogkiEhCZmamxbEUwJyxsVzbvwN//vwwm49l2R1HuZGTOUU8+s4u4iJC+N3N2i/gCuwsBOlA1Y8C0cCpmhoaYxYaY+KNMfERERHNEs7TiQh/uXUgse2CeOTtnZzKKbI7knIDxWUVzF+6g7LySl6+ZxhBftov4ArsLAQfA7Oco4dGAbnGGF0txYW0CvDl3/fEU1JeyYNv7qC4rMLuSKoFM8bwiw/3kXgyl2dvH6z3C7gQK4ePLgM2A71EJF1E5orIfBGZ72yyGjgOJAGvAA9ZlUU1XlxkCH+7bRB70nN5+qN92nmsGu3NLSms2JHOo5N7cHXf9nbHUVVYdl5mjJl5ie0GeNiq/aum871+HfjBpDheXJtE346tmT22q92RVAuz+VgWv1l1gEm9I3l8cg+746hq9M5iVS8/vKonV/dtz4JPDrD+iHbYq/pLzSrkwbd2EBsezN/vGIyXl3YOuxotBKpevLyE524fTM/2rXj47Z0cy8y3O5JqAfK
Ky5i7ZDsAi2bF0zpA1xdwRVoIVL2F+Pvwyqx4/Ly9uH9Jgs5UqurkmFF0N8nnCvjnXUOJDQ+2O5KqhRYC1SAxYUG8fM8w0s8X8cDSHZSU60gi9V3GGH6zaj9rD2Xw66n9GNM93O5Iqg5aCFSDDY8N45kZA9manM0T7yfqSCL1Ha9+k8wbm1OYN6Ebd4/qYnccdQl6N4dqlGmDo0jLLuSvXx4hJixIV5RS3/p83xl+v/og1/bvwBNTetsdR9WDFgLVaA9fGUdqdiEvfHWUqDYB3D68s92RlM12pGTz2Du7GBzThudu1xFCLYUWAtVoIsLvbx5ARl4JT65MpG2QH9f062B3LGWTI2fzmPN6Ap3aBLJoVjwBvrrcZEuhfQTqsvh6e/HPu4YyILoNP1i2i23Jdc08rtzVyZwiZr26DX8fL96YM4J2If52R1INoIVAXbYgPx9emz2cqLaBzF2ynavow+kAAA/7SURBVIOnL9gdSTWjrPwSZr26lYKScpbMGUFMWJDdkVQDaSFQTSIs2I835owgxN+He17dqjeceYjcojJmLd5G+vkiFt0bT5+Ore2OpBpBC4FqMtFtg3jzvpEA3L1oK2nZhTYnUlYqKClnzuvbOXI2j5fvGcbIbu3sjqQaSQuBalLdI0JYOnckhaUV3LVoK2dyi+2OpCxQXFbBvKUJ7E7L4cWZQ7iyV6TdkdRl0EKgmlyfjq1ZMmcE2QWlzHxlixYDN1NcVsH9bySw6VgWz9w6kCn9danxlk4LgbLE4Jg2LJkzgsy8Ei0GbuRiEfgm6Rx/uWUg04dG2x1JNQEtBMoyw7q0/bYY3LFwM6dzdbnLlqyotIL7ljiKwDO3DmJGvC467y4sLQQiMkVEDotIkog8UcP22SKSKSK7nY/7rMyjmt+wLm15Y+4IsvJLmfHyZlKyCuyOpBohr7iMexdvY+MxRxG4dZieCbgTK5eq9Ab+AVwL9AVmikjfGpouN8YMdj4WWZVH2Wdo57a8ff8oCkrKmfHyZo6ezbM7kmqA8wWl3LVoKztTz/PCHUO0CLghK88IRgBJxpjjxphS4B1gmoX7Uy5sQHQoyx8YDcBt/97M3vQcmxOp+jh7oZjbF27m0Jk8/n3PMG4c1MnuSMoCVhaCKCCtyut059equ0VE9orIChGp8aKjiMwTkQQRScjM1GUSW6qe7Vvx3vzRBPv7cMfCLaw7nGF3JFWHpIw8pv9zE+nni3h99nAm99EF592VlYWgpmkHq09cvwqINcYMBP4DLKnpjYwxC40x8caY+IiIiCaOqZpTl3bBrHxwDLHtgrlvSQIrdqTbHUnVIOFENrf8azMl5ZUsnzeaMXG6sIw7s7IQpANVP+FHA6eqNjDGZBljSpwvXwGGWZhHuYjI1gEsf2AUo7q14yfv7eGFr47q4jYu5LPE09y1aCthwX6sfHAMA6JD7Y6kLGZlIdgO9BCRriLiB9wBfFy1gYhUvRNlKnDQwjzKhbQK8GXx7OFMHxLFs2uO8Pjy3RSX6bKXdjLG8NLaozz41k76dWrN+w+OoXM7nUDOE1i2HoExplxEHgG+ALyBxcaY/SKyAEgwxnwMPCoiU4FyIBuYbVUe5Xr8fLz4222D6B4ZwjNfHCY1u5CF98QT0UqnMG5uxWUVPLkykQ92neSmwZ340y0DdT0BDyIt7ZQ8Pj7eJCQk2B1DNbHPEk/zw3d30zbIj3/eNZQhndvaHcljnMop4sE3d7AnPZefXNOTh6+MQ0RXFnM3IrLDGBNf0za9s1i5hGsHdGTF/DF4ewm3/3sLy7al2h3JI2w6do4bX/yGY5kFvHz3MB6Z1EOLgAfSQqBcRv+oUFY9Mo6R3cJ4cmUiP31vD4Wl5XbHckuVlYZ/rTvG3Yu20ibIlw8fHsuU/rrMqKfSNYuVS2kb7Mfr3x/Bc2uO8I91SexKy+GlO4fQu4MueNJUMvNK+NG7u9lw9BzXD+jIn28dSIi//irwZHpGoFyOt5fwk+/1YumckeQUljHtpY0s3ZKiQ0ybwPojmVz7/Aa2JWfzh5sH8NKdQ7QIKC0EynWN6xHOZ4+NZ2S3dvzyw33c+9p2nc66kQpLy/nFh4nMWryNtkG+fPTIWO4c2Vn7AxSghUC5uIhW/rw+ezi/ndaP7cnZXPPcf/lgV7qeHTTA9hPZXPv8Bt7amsp947qy6gfj9FKb+h9aCJTL8/IS7hkdy+rHxtOjfSt+uHwP9762ndQsXRO5LrmFZTy5MpEZL2+motKw7P5R/OKGvnp/gPoOvY9AtSgVlYalm0/wzBeHqTCGxyb3ZO64rvj56Geai4wxrNp7mgWrDpBdUMLccV354dU9CfLTvgBPVtd9BFoIVIt0KqeIX328nzUHztI1PJinruvD5D6RHn/NOzE9lwWf7Gf7ifMMiArlj9MH0D9K5wpSWgiUG/v6cAa//eQAxzMLGN8jnJ9N6e2Rv/hO5hTx9zVHWLEznbAgP37yvV7cFh+Dt5dnF0b1/2khUG6trKKSNzan8MJXR8ktKuP6AR350TU96R4RYnc0y53LL+GfXx/jzS0pAMwa3YVHr+pB6wBfm5MpV6OFQHmEC8VlLFp/nEXfJFNcVsH1Azsx/4pu9OvkfmcIp3KKeGXDcd7ZlkZJeQW3Dovmsat6EtUm0O5oykVpIVAe5Vx+Ca9sOM5bW1LJLylnYq8I5oztyri4cLxa+KWSfSdzeX3TCT7afZJKA9MGd+KhiXHERbr/2Y+6PFoIlEfKLSxj6ZYTvL7pBOfyS+kaHszdo7owfUgUbYP97I5Xb0WlFXyx/wxvbD7BztQcAn29uS0+mvsndCO6ra4XoOpHC4HyaCXlFXyWeIYlm0+wKzUHX2/hyl6RTB8axcRekS45rr6i0rAtOZsPdqWzOvEM+SXlxLYL4p7Rsdw6LJrQQO0DUA1TVyHQgcXK7fn7eHPTkChuGhLFgVMXWLkznQ93n+LLA2cJ8vNmYq8IrunbgQk9Iwiz8UyhsLScLcez+GLfWf5z8CxZBaUE+3lz7YCOTB8Sxahu7Vr8pS3lmiw9IxCRKcDzOFYoW2SM+VO17f7AGzjWKs4CbjfGnKjrPfWMQDWF8opKNh3L4ov9Z/jywFky8xxLZ/ft2JpxPcKJ79KWQTFtaN86wLIMuYVl7D2Zw86UHDYeO8eu1POUVRhC/H24snck1/Rtz+Q+kXojmGoStlwaEhFv4AhwNY6F7LcDM40xB6q0eQgYaIyZLyJ3ADcbY26v6321EKimVllp2JOew8akc2xMymJHynlKKyoB6NA6gN4dWxEXEUL3yBBi2gbRIdSf9q0DaFWPIZrFZRWcvVDMmdxiTuYUcSwzn2MZBRw+m0fyuQIARKB/p1DGxLVjbPdwRnYLw9/H9S5XqZbNrkIwGvi1MeZ7ztdPAhhj/lilzRfONptFxAc4A0SYOkJpIVBWKy6r4MDpC+xOzWFPeg5HzuZzPDOfkvLK/2nn5+1FSIAPwf7e+Pt4c/GiTVlFJfklFeSXlFFc9r/f4+MldGkXRFxkCAOj2zA4pg0DokN13L+ynF19BFFAWpXX6cDI2t
o4F7vPBdoB56o2EpF5wDyAzp07W5VXKQACfL0Z2rktQ6usm1xZaTiZU8TJnKJvP+GfLywjv6SMgpIKSsorvm3r4+VFsL8PrQJ8aB3gQ/vWAXQIDaBjaCBd2gXh663zIinXYmUhqKlXq/on/fq0wRizEFgIjjOCy4+mVMN4eQkxYUHEhOlwTeV+rPxokg7EVHkdDZyqrY3z0lAokG1hJqWUUtVYWQi2Az1EpKuI+AF3AB9Xa/MxcK/z+a3A2rr6B5RSSjU9yy4NOa/5PwJ8gWP46GJjzH4RWQAkGGM+Bl4FlopIEo4zgTusyqOUUqpmlg5QNsasBlZX+9rTVZ4XAzOszKCUUqpuOnxBKaU8nBYCpZTycFoIlFLKw2khUEopD9fipqEWkUwgpZHfHk61u5ZdhKvmAtfNprkaRnM1jDvm6mKMiahpQ4srBJdDRBJqm2vDTq6aC1w3m+ZqGM3VMJ6WSy8NKaWUh9NCoJRSHs7TCsFCuwPUwlVzgetm01wNo7kaxqNyeVQfgVJKqe/ytDMCpZRS1WghUEopD+fWhUBEnhGRQyKyV0Q+EJE2tbSbIiKHRSRJRJ5ohlwzRGS/iFSKSK1DwUTkhIgkishuEbF8fc4G5GrW4+XcZ5iIrBGRo84/29bSrsJ5vHaLSPVpz5sqS50/v4j4i8hy5/atIhJrRY5G5JotIplVjs99zZRrsYhkiMi+WraLiLzgzL1XRIa6SK6JIpJb5Xg9XVM7C3LFiMjXInLQ+f/xsRraNO0xM8a47QO4BvBxPv8z8Oca2ngDx4BugB+wB+hrca4+QC9gHRBfR7sTQHgzHq9L5rLjeDn3+xfgCefzJ2r6u3Ruy7c4xyV/fuAh4GXn8zuA5c1wfOqTazbwUnP9e6qy3wnAUGBfLduvAz7DsWLhKGCri+SaCHxiw/HqCAx1Pm8FHKnh77JJj5lbnxEYY740xpQ7X27BsUpadSOAJGPMcWNMKfAOMM3iXAeNMYet3Edj1DNXsx8vp2nAEufzJcBNzbDPmtTn56+adQUwWURqWpa1uXPZwhiznrpXHpwGvGEctgBtRKSjC+SyhTHmtDFmp/N5HnAQx/ruVTXpMXPrQlDNHBwVtLooIK3K63S+e9DtYoAvRWSHiMyzO4yTXcervTHmNDj+owCRtbQLEJEEEdkiIlYUi/r8/N+2cX4QyQXaWZClobkAbnFeSlghIjE1bLeDK/8fHC0ie0TkMxHp19w7d15WHAJsrbapSY+ZpQvTNAcR+Q/QoYZNTxljPnK2eQooB96q6S1q+Nplj6mtT656GGuMOSUikcAaETnk/BRjZy5LjhfUna0Bb9PZecy6AWtFJNEYc6wp8jnV5+e37BjVoT77XAUsM8aUiMh8HGctkyzOVR92HK/62Iljfp58EbkO+BDo0Vw7F5EQ4H3gcWPMheqba/iWRh+zFl8IjDFX1bVdRO4FbgAmG+fFtWrSgaqfjKKBU1bnqud7nHL+mSEiH+A4/b+sQtAEuSw5XlB3NhE5KyIdjTGnnafAGbW8x8VjdlxE1uH4NNWUhaA+P//FNuki4gOEYv0liEvmMsZkVXn5Co5+M1dg2b+py1H1l68xZrWI/FNEwo0xlk9GJyK+OIrAW8aYlTU0adJj5taXhkRkCvAzYKoxprCWZtuBHiLSVUT8cHTuWTLapCFEJFhEWl18jqPju8bRDc3MruP1MXCv8/m9wHfOXkSkrYj4O5+HA2OBA02coz4/f9WstwJra/kQ0qy5ql1Dnorj2rMr+BiY5RwJMwrIvXgZ0E4i0uFi346IjMDx+zKr7u9qkv0KjvXcDxpjnq2lWdMes+buEW/OB5CE4zrabufj4kiOTsDqKu2uw9EzfwzHJRKrc92Mo6KXAGeBL6rnwjH6Y4/zsd9VctlxvJz7bAd8BRx1/hnm/Ho8sMj5fAyQ6DxmicBci7J85+cHFuD4wAEQALzn/Pe3DejWTMfoUrn+6Py3tAf4GujdTLmWAaeBMue/r7nAfGC+c7sA/3DmTqSOkXTNnOuRKsdrCzCmmXKNw3GZZ2+V313XWXnMdIoJpZTycG59aUgppdSlaSFQSikPp4VAKaU8nBYCpZTycFoIlFLKw2khUEopD6eFQCmlPJwWAqUuk4gMd07kFuC8I3y/iPS3O5dS9aU3lCnVBETkdzjuKA4E0o0xf7Q5klL1poVAqSbgnN9nO1CMYyqCCpsjKVVvemlIqaYRBoTgWFEqwOYsSjWInhEo1QTEsT7yO0BXoKMx5hGbIylVby1+PQKl7CYis4ByY8zbIuINbBKRScaYtXZnU6o+9IxAKaU8nPYRKKWUh9NCoJRSHk4LgVJKeTgtBEop5eG0ECillIfTQqCUUh5OC4FSSnm4/weo1qgmn6rz9AAAAABJRU5ErkJggg==\n",
|
||
"text/plain": [
|
||
"<Figure size 432x288 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"plot_function(f, 'x', 'x**2')"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The sequence of steps we described above starts by picking some random value for a parameter, and calculating the value of the loss:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYIAAAEGCAYAAABo25JHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deXxU1f3/8dcnO0kgEJKwJIEAYd8hsoMIanEDRVFRQQqKuFTt9q3W1ra0tt/Wb7UubS0iioiIIi4oLihSkD3sOwSysiUkJGRfz++PGfzFmIQk5OZOZj7Px2MezMw9mfvmivnMvefcc8QYg1JKKc/lZXcApZRS9tJCoJRSHk4LgVJKeTgtBEop5eG0ECillIfzsTtAfYWFhZmYmBi7YyilVLOyY8eOc8aY8Oq2NbtCEBMTQ3x8vN0xlFKqWRGR5Jq26aUhpZTycFoIlFLKw2khUEopD6eFQCmlPJwWAqWU8nCWFwIR8RaRXSLySTXb/EVkuYgkiMhWEYmxOo9SSqnva4ozgseAQzVsmwOcN8bEAs8Df22CPEoppSqxtBCISBRwA7CwhiZTgMXO5yuAiSIiVmRJSM9j/qqDlJRVWPHxSillqRe+OsbWE5mWfLbVZwT/AP4HqOm3bySQCmCMKQNygLZVG4nIXBGJF5H4jIyMBgVJzSpg0cZE1h4+26CfV0opu6RkFvD8V0fZmphlyedbVghE5EYg3Rizo7Zm1bz3g5VyjDELjDFxxpi48PBq75C+pHE9wmnfKoB3tqc26OeVUsou78an4iVw29AoSz7fyjOC0cBkEUkC3gEmiMhbVdqkAdEAIuIDhACWlDxvL2FaXBTrj2ZwKrvQil0opVSjKyuvYMWONMb1CKdj6xaW7MOyQmCMedIYE2WMiQHuBNYaY+6p0uxj4F7n89ucbSxbO3Pa0GgqDKzYkWbVLpRSqlGtP5bBmQtF3HlFtGX7aPL7CERkvohMdr58DWgrIgnAz4AnrNx3p7aBjI5ty7vxqVRU6FrNSinXt3x7Km2D/JjQq51l+2iSQmCMWWeMudH5/GljzMfO50XGmGnGmFhjzDBjzAmrs9weF03a+UI2Hbem910ppRpLem4RXx9K59ahUfj5WPfr2uPuLP5R3/aEtPDlne0pdkdRSqlardx5krIKw+1x1l0WAg8sBAG+3twyOJIvD5zlfH6J3XGUUqpaxhje3Z5KXOc2xEYEW7ovjysEAHdcEU1JeQUrd520O4pSSlVrW2IWJ87lc7uFncQXeWQh6N2hFYOiW7NsWwoWDlJSSqkGW7YthZYBPtw0oKPl+/LIQgAwfVg0Cel57Eg+b3cUpZT6nuyCElbvP8PNgyJp4edt+f48thDcOKAjwf4+vL1NO42VUq5l5c6TlJRVMH1YpybZn8cWgiB/H6YM6sine0+TU1BqdxyllAIcncTvbE9hYHRr+nRs1ST79NhCADB9WCeKyyr4cLd2GiulXMPOlPMcPZvH9CboJL7IowtBv8gQ+keGaKexUsplLNuWSpCfNzcNtL6T+CKPLgTgOCs4fCaXXanZdkdRSnm4nMJSPtl7iimDIwny92my/Xp8IZg8qCNBft4s26qdxkope3246yRFpRVMv6JpOokv8vhCEOzvw5TBkazae0o7jZVStjHGsHRrMgOjQugfFdKk+/b4QgBw17BOFJVWsHKXTk+tlLJHfLKjk/iu4U17NgBaCABHp/Gg6NYs3aqdxkopeyzdkkxLf58m7SS+SAuB013DO5GQnsc2i9YEVUqpmmTll7B63xmmDokk0K/pOokv0kLgdNOAjrQM8GGpdhorpZrYih2plJRXcNfwzrbs38rF6wNEZJuI7BGRAyLyh2razBKRDBHZ7XzcZ1WeS2nh582tQ6L4fP8ZMvOK7YqhlPIwFRWGZdsc0033bN/SlgxWnhEUAxOMMQOBQcAkERlRTbvlxphBzsdCC/Nc0t3DO1FSXsG78dpprJRqGpuOZ5J4Lt+WTuKLrFy83hhj8pwvfZ0Pl+6J7d6uJcO7hPL2tmTKdU1jpVQTWLIliTaBvlzfv4NtGSztIxARbxHZDaQDa4wxW6tpdquI7BWRFSJS7eQaIjJXROJFJD4jI8PKyMwY2ZnUrELWH7V2P0opdTqnkDUHz3L7FdEE+Fo/3XRNLC0ExphyY8wgIAoYJiL9qjRZBcQYYwYAXwGLa/icBcaYOGNMXHh4uJWRubZPe8Jb+rNkS7Kl+1FKqWVbUzDAPTZ1El/UJKOGjDHZwDpgUpX3M40xF3tmXwWGNkWe2vj5eDH9imi+OZJOalaB3XGUUm6qpKyCZdtTuapnBNGhgbZmsXLUULiItHY+bwFcDRyu0qbyRbHJwCGr8tTH9OGd8BLRoaRKKct8efAMGbnFzBhh79kAWHtG0AH4RkT2Attx9BF8IiLzRWSys82jzqGle4BHgVkW5qmzDiEtuLp3BO/Gp1JUWm53HKWUG1qyOZno0BaM62Ht5e66sOwWNmPMXmBwNe8/Xen5k8CTVmW4HDNGxPDFgbN8tv80twyOsjuOUsqNHD2by9bELJ64rhfeXmJ3HL2zuCajurWla3gQizdpp7FSqnG9uTkJPx8vbo9rulXIaqOFoAZeXsLMEZ3ZnZrNHl20RinVSC4UlbJy50kmD+xIaJCf3XEALQS1unVoFEF+3izenGR3FKWUm1gRn0ZBSTmzRsXYHeU7Wghq0TLAl1uHRvHJntM6/5BS6rJVVBiWbElmSKfW9Its2sVnaqOF4BJmjuxMSXkF72xPtTuKUqqZW38sg8Rz+dzrQmcDoIXgkmIjWjImNoy3tiRTVl5hdxylVDP25uZkwoL9ua6fffMKVUcLQR3MHNmZ0zlFrDl41u4oSqlmKjkzn2+OpHPX8E74+bjWr17XSuOiJvZuR1SbFry+KcnuKEqpZmrxpmS8Rbjbxumma6KFoA68vYSZIzuzLTGLA6dy7I6jlGpm8orLeC8+lev7d6BdqwC74/yAFoI6uiOuEy18vXljY5LdUZRSzcz7O9LILS7jx6Nj7I5SLS0EdRQS6MutQyP5aM8pHUqqlKqzigrDG5uSGBTdmsGd2tgdp1paCOph1qgYSsoqeFtnJVVK1dF/jzqGjLrq2QBoIaiX2IiWjO0expItyZSU6VBSpdSlLdqYSERL1xsyWpkWgnqaPboL6bnFfLb/tN1RlFIuLiE9lw3HzjFjRGeXGzJamesmc1FX9gina1gQi75NxBhd4F4pVbPXNzpmGZ3ugkNGK9NCUE9eXsKPR8ewJy2HHcnn7Y6jlHJR5/NLeH9nGrcMiiQs2N/uOLWycqnKABHZJiJ7nKuQ/aGaNv4islxEEkRkq4jEWJWnMd06NIqQFr4s3JBodxSllIt6e1sKRaUVzBnbxe4ol2TlGUExMMEYMxAYBEwSkRFV2swBzhtjYoHngb9amKfRBPr5cNfwTnx58AwpmbrAvVLq+0rKKli8KYmx3cPo0a6l3XEuybJCYBzynC99nY+qF9WnAIudz1cAE0XE/nXb6
uDekTF4ifD6Jj0rUEp93yd7T5GeW8x9Y7vaHaVOLO0jEBFvEdkNpONYvH5rlSaRQCqAMaYMyAHaVvM5c0UkXkTiMzIyrIxcZ+1DArhxQAfe3Z7KhaJSu+MopVyEMYbXvk2ke0Qw47qH2R2nTiwtBMaYcmPMICAKGCYi/ao0qe7b/w+G4hhjFhhj4owxceHh4VZEbZA5Y7qSX1LO8m26VoFSymHLiSwOnLrA7DFdaCYXOJpm1JAxJhtYB0yqsikNiAYQER8gBMhqikyNoX9UCMO6hPLGpiRKda0CpRSwcMMJQoP8uGVwpN1R6szKUUPhItLa+bwFcDVwuEqzj4F7nc9vA9aaZjY4f+7YrpzMLmT1Pr3BTClPl5Cey9eH05k5sjMBvt52x6kzK88IOgDfiMheYDuOPoJPRGS+iEx2tnkNaCsiCcDPgCcszGOJCb0i6BYexIL1J/QGM6U83MINifj7eDFjRGe7o9SLj1UfbIzZCwyu5v2nKz0vAqZZlaEpeHkJ94/tyhMr97H5eCajYptH55BSqnGl5xaxcudJbr8iirYufgNZVXpncSO4eXAkYcF+LNhwwu4oSimbvLkpmdKKCuaMaR5DRivTQtAIAny9uXdkDOuOZHDkTK7dcZRSTaygpIwlW5K5tk87uoQF2R2n3rQQNJJ7RnQmwNeLV/WsQCmP8158GjmFpcwd1/zOBkALQaNpE+THHXHRfLT7JKdzCu2Oo5RqImXlFby64QRDO7dhaOdQu+M0iBaCRnTf2K5UGFj0rU47oZSn+HTfadLOFzLvym52R2kwLQSNKDo0kBsHdODtrSnkFOi0E0q5O2MMr/z3BLERwUzsFWF3nAbTQtDIHhjXjfySct7ammx3FKWUxf57NINDpy8wd1xXvLyax3QS1dFC0Mj6dGzFlT3CeX1jIkWl5XbHUUpZ6JX/Hqd9qwBuHtR8ppOojhYCC8y7shvn8kpYsSPN7ihKKYvsTs1my4ks5ozp4tLrEddF807vokZ0DWVgdGsWrD9BmU5Gp5RbemXdcVoG+Lj8esR1oYXAAiLCQ+O7kZJVwKc6GZ1SbichPZfPD5xh1qgYgv0tm6mnyWghsMg1vdvRPSKYf687rpPRKeVm/r3uBC18vfnxaNdfj7gutBBYxMtLeOiqbhw+k8vaw+l2x1FKNZLUrAI+3H2S6cM6ERrkZ3ecRqGFwEI3DehIVJsWvPxNgp4VKOUmXt1wAi+B+8e5x9kAaCGwlI+3Fw9c2Y1dKY7RBUqp5i09t4h3tqcydXAUHUJa2B2n0Vi5Qlm0iHwjIodE5ICIPFZNm/EikiMiu52Pp6v7rOZs2tAowoL9+ec3CXZHUUpdpkXfJlFWXsG88c13OonqWHlGUAb83BjTGxgBPCwifappt8EYM8j5mG9hHlsE+Hozd1wXvk04x66U83bHUUo10Pn8EpZsTuL6/h2a5VTTtbGsEBhjThtjdjqf5wKHgOZ9+10D3T28M20CfXlprZ4VKNVcvb4xkfySch6ZEGt3lEbXJH0EIhKDY9nKrdVsHikie0TkMxHpW8PPzxWReBGJz8jIsDCpNYL8fZgzpgtrD6ez/2SO3XGUUvV0oaiU1zclMalve3q1b2V3nEZneSEQkWDgfeBxY8yFKpt3Ap2NMQOBl4APq/sMY8wCY0ycMSYuPDzc2sAWmTkqhlYBPry09pjdUZRS9bR4YxK5RWVueTYAFhcCEfHFUQSWGmNWVt1ujLlgjMlzPl8N+IqIW67+3irAl1mju/DFgbMcOl21HiqlXFVecRmvbUxkYq8I+kWG2B3HElaOGhLgNeCQMea5Gtq0d7ZDRIY582Ralclus0fHEOTnzcs6gkipZuOtLclkF5Tyk4nd7Y5iGSsnyRgNzAD2ichu53u/BjoBGGNeAW4DHhSRMqAQuNO48Z1XrQP9mDkqhlf+e5xjZ3Pp3q6l3ZGUUrUoKCnj1fUnGNs9jEHRre2OYxnLCoEx5lug1pUajDEvAy9blcEV3T+2K4s3JfHC18d4+a4hdsdRStViyeZkMvNLePxq9z0bAL2zuMmFBvlx76gYPt13mqNnc+2Oo5SqQUFJGf9xng0010Xp60oLgQ3uH9uVQF9vXvxaRxAp5are3JxMVn4Jj1/dw+4oltNCYAM9K1DKteUXl7Fg/QnG9QhnaOc2dsexnBYCm1w8K3hBzwqUcjn//2zAvfsGLtJCYJM2QX7MCsln9Z6THI7oAjExsHSp3bGU8nh5xWUsWH+ccT3CGdLJ/c8GQAuBfZYu5f5nHiS4pJDnR98Fyckwd64WA6Vs9vq3iZwvKOXn17h/38BFWgjs8tRTtD6fwZztH/JFz1Hsa9cNCgrgqafsTqaUx8opKGXBhhNc3bsdA934voGqtBDYJSUFgNnbP6J14QX+PnbG995XSjW9VzecILeojJ950NkAaCGwT6dOALQqKeCBre+zrlscOyJ7ffe+UqppZeYVs2hjIjcM6ECfju43w2httBDY5ZlnIDAQgHt3fkJY/nn+fuW9jveVUk3uP+tPUFRazk89ZKRQZVoI7HL33bBgAXTuTGBZCQ8d+YpN0f3ZOPxHdidTyuOcySli8aYkbh4USWyE580BVmshEJFWIvKDxTlFZIB1kTzI3XdDUhJUVHDXp6/RMSSAv31+GDeed08pl/Ti2mNUGMNPPaxv4KIaC4GI3A4cBt53Lj5/RaXNb1gdzNME+Hrz+DU92JOWwxcHztgdRymPkXgun+XbU7lrWCeiQwPtjmOL2s4Ifg0MNcYMAn4MLBGRqc5ttc4qqhpm6uBIuoUH8X9fHqWsvMLuOEp5hOfWHMXP24tHJnhe38BFtRUCb2PMaQBjzDbgKuApEXkU0GsXFvDx9uIX1/YkIT2PlbtO2h1HKbe3/2QOq/acYvaYGMJb+tsdxza1FYLcyv0DzqIwHpgCVLvIvLp8k/q1Z0BUCP9Yc5Si0nK74yjl1v7vyyOEtPBl7rgfdIV6lNoKwYNUuQRkjMkFJgGzL/XBIhItIt+IyCFnH8Nj1bQREXlRRBJEZK+IePxKLSLCryb14lROEW9tSbY7jlJua/PxTNYdyeDB8d0IaeFrdxxb1VgIjDF7gBgAEZlY6f1SY0xdJsQpA35ujOkNjAAeFpE+VdpcB3R3PuYC/65Xejc1OjaMsd3DeGltAjkFpXbHUcrtVFQY/vLZITqEBDBrVIzdcWx3qfsIrhSR0TguCdWLMea0MWan83kucAiIrNJsCvCmcdgCtBaRDvXdlzt64rpeXCgq5V//1YXulWpsn+47zd60HH5+bU8CfL3tjmO72oaP/g7wB74C/ETk6YbuRERigMHA1iqbIoHUSq/T+GGxQETmiki8iMRnZGQ0NEaz0rdjCLcMiuT1jUmcyi60O45SbqOkrIJnvzhCr/YtuWXwD37deKTaLg39ATgC/B44YoyZ35AdiEgw8D7wuDHmQtXN1e26miwLjDFxxpi48PDwhsRoln52rePmlufWHLU5iVLu4+2tyaRkFfCr63rh7aUj4eHSl4ZaGmP+CjTonmsR8cVRBJYaY1ZW0yQN
iK70Ogo41ZB9uaOoNoHMGhXD+zvTOHS6ag1VStXXhaJSXlybwMiubRnfw3O+VF7KpQrB/ip/1pmICPAacMgY81wNzT4GZjpHD40Aci7eu6AcHh4fS0gLX5759JBOPaHUZfrXN8fJyi/h19f3xvErSoGFncXAaGAGMEFEdjsf14vIPBGZ52yzGjgBJACvAg81YD9uLSTQl0cndOfbhHOsO+IZ/SNKWSE1q4BFGxOZOjiS/lEhdsdxKT41bajSWfyiiDxdn34CY8y3XGIqCuP4ivtwXT/TU90zojNLtiTzzOpDjO0eho+3ThqrVH397YsjeAn84kc97Y7icizvLFaXz8/Hiyeu60VCeh7Ltqde+geUUt+zK+U8q/acYu7YrnRs3cLuOC7nUl8tWwGrgODKb4rIeKsCqepd26cdw7qE8o81R7lQpDeZKVVXxhj+9Okhwlv688CVnj2VRE1qLQTGmOeBd4EWzg7dFiLyEvCXJkmnviMi/PaGPmTml/DPtXqTmVJ19cne0+xIPs/Pr+lBkH+NV8M9Wl0uNg/HMcRzE7Adx/DO0VaGUtXrHxXCtKFRLNqYSOK5fLvjKOXyCkvK+d/PDtO3YyumxUVf+gc8VF0KQSlQCLQAAoBEY4xOlm+TX07qiZ+3F898esjuKEq5vAXrT3Ayu5Df3dRXbx6rRV0KwXYcheAKYAwwXURWWJpK1SiiZQCPTOjOV4fOsuGYDidVqiansgv5938TuGFAB4Z1CbU7jkurSyGYY4x52jnr6BljzBTgI6uDqZrNHhND57aBzF91UFcyU6oG//vZYYyBJ6/rZXcUl3fJQmCMia/mvSXWxFF14e/jzVPX9+ZYeh5LdM0CpX5ge1IWH+85xQNXdiOqjWeuQ1wfemdSM3VNn3aM7R7Gc2uOci6v2O44SrmMsvIKfvvhfjqGBDDvyq52x2kWtBA0UyLC7yf3pai0nL9+dtjuOEq5jKVbUzh8Jpff3tiHQD8dLloXWgiasW7hwcwe04X3dqSxM+W83XGUst25vGL+/uURxsSGMalfe7vjNBtaCJq5Ryd0p10rf3730QHKK3R2UuXZnv38CAUl5fx+ch+dXbQetBA0c0H+Pjx1Qx/2ncxh2bYUu+MoZZtdKedZHp/KnDFdiI1o0BIqHksLgRu4aUAHRnVry98+P6wdx8ojlZVX8NQH+2nfKoCfTOxud5xmRwuBGxAR5k/pR2FpOX/WO46VB1q8OZmDpy/wu5v6EKzzCdWbFgI3ERsRzAPjurFy10k2HT9ndxylmsyZnCKe+/II43uGawdxA1lWCERkkYiki0i1y1yKyHgRyam0etnTVmXxFI9MiKVTaCC//XA/JWV6x7HyDH/85CBlFYb5k/tpB3EDWXlG8AYw6RJtNhhjBjkfuvDNZQrw9eYPU/pyPCOfBeuP2x1HKcutO5LOp/tO85MJsXRqq3cQN5RlhcAYsx7IsurzVfWu6hnBDf078OLaBE5k5NkdRynLFJSU8ZsP9xMbEcz94/QO4sthdx/BSBHZIyKfiUjfmhqJyFwRiReR+IwMnXHzUn53Ux/8fbz49Qf7cCwLrZT7ee7Lo6SdL+QvU/vj7+Ntd5xmzc5CsBPobIwZCLwEfFhTQ2PMAmNMnDEmLjw8vMkCNlcRrQL49fW92XIii/fi0+yOo1Sj25eWw6KNidw1vBNXxOgU05fLtkJgjLlgjMlzPl8N+IpImF153M0dcdEMiwnlmdWHyMjVewuU+ygrr+CJlXsJC/bnV5N0iunGYFshEJH24uziF5FhziyZduVxN15ewp+n9qewpJzfrzpgdxylGs3CbxM5cOoCf5jcl5AWvnbHcQtWDh9dBmwGeopImojMEZF5IjLP2eQ2YL+I7AFeBO40ekG7UcVGBPPoxFg+3Xuaz/efsTuOUpfteEYez605yrV92uk9A43IslvwjDHTL7H9ZeBlq/avHB64shur953htx/tZ0TXUFoH+tkdSakGKa8w/M+KvbTw9eZPN+s9A43J7lFDymK+3l48O20A5/NLmP/JQbvjKNVgb25OYkfyeZ6+sQ8RrQLsjuNWtBB4gL4dQ3hwfDdW7jzJN4fT7Y6jVL2lZBbwt88d00hMHRJpdxy3o4XAQzwyIZbuEcE8uXIfOQWldsdRqs4qKgy/XLEHby/hz7f010tCFtBC4CH8fbz5++0Dycgr1lFEqll5fVMSWxOzePqmPnRs3cLuOG5JC4EHGRDVmoeviuWDXSf5fP9pu+ModUkJ6Xn87fPDTOwVwbShUXbHcVtaCDzMI1fF0rdjK576YL8uYqNcWll5BT9/bw8t/Lz5y1S9JGQlLQQexs/Hi+duH0RuURlP6VxEyoX9e91x9qRm86eb++koIYtpIfBAPdu35OfX9uCLA2d1LiLlkvakZvPC18e4aWBHbhzQ0e44bk8LgYe6b2xXRnQN5ferDpCcmW93HKW+U1BSxuPLdxPR0p8/TelndxyPoIXAQ3l7Cc/dPggfL+Hx5bspK9cVzZRr+OMnh0jKzOfvtw8iJFDnEmoKWgg8WMfWLXjmlv7sSsnmpbUJdsdRijUHz7JsWwpzx3VlZLe2dsfxGFoIPNxNAzsydXAkL609xrZEXVBO2edMThH/s2IPfTq04mfX9LA7jkfRQqCYf3M/OoUG8tg7u8guKLE7jvJA5RWGx97ZRXFZBS/dNVhXHGtiWggUwf4+vDR9COfyivnlir06pFQ1uZfXJrA1MYv5U/rRLTzY7jgeRwuBAqB/VAi/mtSLNQfPsmRLst1xlAfZlpjFC18f5ZbBkdyqE8rZwsqFaRaJSLqI7K9hu4jIiyKSICJ7RWSIVVlU3cwZ04UJvSL40yeH2JeWY3cc5QEy84p5dNkuOoUG8kddY8A2Vp4RvAFMqmX7dUB352Mu8G8Ls6g6EBH+b9pA2gb78dDbO3SWUmWp8grD48t3k1VQwj/vHkKwv2XrZKlLsKwQGGPWA7UNQ5kCvGkctgCtRaSDVXlU3YQG+fHyXUM4nV3EL1bs0f4CZZmX1h5jw7Fz/GFyX/p2DLE7jkezs48gEkit9DrN+Z6y2dDObXjy+t6sOXiWVzecsDuOckPfHjvHC18fY+rgSO68ItruOB7PzkJQ3cXAar9+ishcEYkXkfiMjAyLYymA2aNjuK5fe/76+RE2H8+0O45yIyezC3n0nV3Ehgfzp1u0X8AV2FkI0oDKXwWigFPVNTTGLDDGxBlj4sLDw5sknKcTEf522wBi2gbyyNs7OZVdaHck5QaKSsuZt2QHpWUVvDJjKIF+2i/gCuwsBB8DM52jh0YAOcYYXS3FhbQM8OU/M+IoLqvgwbd2UFRabnck1YwZY/jNh/vZdzKH5+4YpPcLuBArh48uAzYDPUUkTUTmiMg8EZnnbLIaOAEkAK8CD1mVRTVcbEQwf799IHvScnj6o/3aeawa7K0tyazYkcajE7tzTZ92dsdRlVh2XmaMmX6J7QZ42Kr9q8bzo77t+cmEWF5am0CfDq2YNbqL3ZFUM7P5eCZ/WHWQCb0ieHxid7vjqCr0zmJVJz+9ugfX9Gn
H/E8Osv6odtirukvJLODBpTuICQviH3cOwstLO4ddjRYCVSdeXsLzdwyiR7uWPPz2To5n5NkdSTUDuUWlzFm8HYCFM+NoFaDrC7giLQSqzoL9fXh1Zhx+3l7cvzheZypVtXLMKLqbxHP5/OvuIcSEBdkdSdVAC4Gql+jQQF6ZMZS084U8sGQHxWU6kkj9kDGGP6w6wNrD6fx+cl9GdQuzO5KqhRYCVW9XxITy7LQBbE3M4on39+lIIvUDr32byJubk5k7riv3jOhsdxx1CXo3h2qQKYMiSc0q4P++PEp0aKCuKKW+8/n+Mzyz+hDX9WvPE5N62R1H1YEWAtVgD18VS0pWAS9+fYzI1gHccUUnuyMpm+1IzuKxd3YxKLo1z9+hI4SaCy0EqsFEhGdu6U96bjFPrtxHm0A/ru3b3u5YyiZHz+Yy+414OrZuwcKZcQT46nKTzYX2EajL4uvtxZRPsrIAABA3SURBVL/uHkL/qNb8ZNkutiXWNvO4clcnswuZ+do2/H28eHP2MNoG+9sdSdWDFgJ12QL9fHh91hVEtmnBnMXbOXT6gt2RVBPKzCtm5mtbyS8uY/HsYUSHBtodSdWTFgLVKEKD/Hhz9jCC/X2Y8dpWveHMQ+QUljJz0TbSzhey8N44endoZXck1QBaCFSjiWoTyFv3DQfgnoVbSc0qsDmRslJ+cRmz39jO0bO5vDJjKMO7trU7kmogLQSqUXULD2bJnOEUlJRz98KtnMkpsjuSskBRaTlzl8SzOzWbl6YP5qqeEXZHUpdBC4FqdL07tGLx7GFk5Zcw/dUtWgzcTFFpOfe/Gc+m45k8e9sAJvXTpcabOy0EyhKDoluzePYwMnKLtRi4kYtF4NuEc/zt1gFMHRJldyTVCLQQKMsM7dzmu2Jw54LNnM7R5S6bs8KScu5b7CgCz942kGlxuui8u7C0EIjIJBE5IiIJIvJENdtniUiGiOx2Pu6zMo9qekM7t+HNOcPIzCth2iubSc7MtzuSaoDcolLuXbSNjccdReC2oXom4E6sXKrSG/gncB3QB5guIn2qabrcGDPI+VhoVR5lnyGd2vD2/SPILy5j2iubOXY21+5Iqh7O55dw98Kt7Ew5z4t3DtYi4IasPCMYBiQYY04YY0qAd4ApFu5PubD+USEsf2AkALf/ZzN707JtTqTq4uyFIu5YsJnDZ3L5z4yh3DSwo92RlAWsLASRQGql12nO96q6VUT2isgKEan2oqOIzBWReBGJz8jQZRKbqx7tWvLevJEE+ftw54ItrDuSbnckVYuE9Fym/msTaecLeWPWFUzsrQvOuysrC0F10w5Wnbh+FRBjjBkAfAUsru6DjDELjDFxxpi48PDwRo6pmlLntkGsfHAUMW2DuG9xPCt2pNkdSVUjPimLW/+9meKyCpbPHcmoWF1Yxp1ZWQjSgMrf8KOAU5UbGGMyjTHFzpevAkMtzKNcRESrAJY/MIIRXdvyi/f28OLXx3RxGxfy2b7T3L1wK6FBfqx8cBT9o0LsjqQsZmUh2A50F5EuIuIH3Al8XLmBiFS+E2UycMjCPMqFtAzwZdGsK5g6OJLn1hzl8eW7KSrVZS/tZIzh5bXHeHDpTvp2bMX7D46iU1udQM4TWLYegTGmTEQeAb4AvIFFxpgDIjIfiDfGfAw8KiKTgTIgC5hlVR7levx8vPj77QPpFhHMs18cISWrgAUz4ghvqVMYN7Wi0nKeXLmPD3ad5OZBHfnfWwfoegIeRJrbKXlcXJyJj4+3O4ZqZJ/tO81P391Nm0A//nX3EAZ3amN3JI9xKruQB9/awZ60HH5xbQ8evioWEV1ZzN2IyA5jTFx12/TOYuUSruvfgRXzRuHtJdzxny0s25ZidySPsOn4OW566VuOZ+Tzyj1DeWRCdy0CHkgLgXIZ/SJDWPXIGIZ3DeXJlfv45Xt7KCgpszuWW6qoMPx73XHuWbiV1oG+fPjwaCb102VGPZWuWaxcSpsgP9748TCeX3OUf65LYFdqNi/fNZhe7XXBk8aSkVvMz97dzYZj57ihfwf+etsAgv31V4En0zMC5XK8vYRf/KgnS2YPJ7uglCkvb2TJlmQdYtoI1h/N4LoXNrAtMYs/39Kfl+8arEVAaSFQrmtM9zA+e2wsw7u25bcf7ufe17frdNYNVFBSxm8+3MfMRdtoE+jLR4+M5q7hnbQ/QAFaCJSLC2/pzxuzruCPU/qyPTGLa5//Lx/sStOzg3rYnpTFdS9sYOnWFO4b04VVPxmjl9rU92ghUC7Py0uYMTKG1Y+NpXu7lvx0+R7ufX07KZm6JnJtcgpKeXLlPqa9spnyCsOy+0fwmxv76P0B6gf0PgLVrJRXGJZsTuLZL45QbgyPTezBnDFd8PPR7zQXGWNYtfc081cdJCu/mDljuvDTa3oQ6Kd9AZ6stvsItBCoZulUdiG/+/gAaw6epUtYEE9d35uJvSM8/pr3vrQc5n9ygO1J5+kfGcJfpvanX6TOFaS0ECg39s2RdP74yUFOZOQztnsYv5rUyyN/8Z3MLuQfa46yYmcaoYF+/OJHPbk9LhpvL88ujOr/00Kg3FppeQVvbk7mxa+PkVNYyg39O/Cza3vQLTzY7miWO5dXzL++Oc5bW5IBmDmyM49e3Z1WAb42J1OuRguB8ggXikpZuP4EC79NpKi0nBsGdGTelV3p29H9zhBOZRfy6oYTvLMtleKycm4bGsVjV/cgsnULu6MpF6WFQHmUc3nFvLrhBEu3pJBXXMb4nuHMHt2FMbFheDXzSyX7T+bwxqYkPtp9kgoDUwZ15KHxscRGuP/Zj7o8WgiUR8opKGXJliTe2JTEubwSuoQFcc+IzkwdHEmbID+749VZYUk5Xxw4w5ubk9iZkk0LX29uj4vi/nFdiWqj6wWoutFCoDxacVk5n+07w+LNSexKycbXW7iqZwRTh0QyvmeES46rL68wbEvM4oNdaazed4a84jJi2gYyY2QMtw2NIqSF9gGo+qmtEOjAYuX2/H28uXlwJDcPjuTgqQus3JnGh7tP8eXBswT6eTO+ZzjX9mnPuB7hhNp4plBQUsaWE5l8sf8sXx06S2Z+CUF+3lzXvwNTB0cyomvbZn9pS7kmS88IRGQS8AKOFcoWGmP+t8p2f+BNHGsVZwJ3GGOSavtMPSNQjaGsvIJNxzP54sAZvjx4loxcx9LZfTq0Ykz3MOI6t2FgdGvatQqwLENOQSl7T2azMzmbjcfPsSvlPKXlhmB/H67qFcG1fdoxsXeE3gimGoUtl4ZExBs4ClyDYyH77cB0Y8zBSm0eAgYYY+aJyJ3ALcaYO2r7XC0EqrFVVBj2pGWzMeEcGxMy2ZF8npLyCgDatwqgV4eWxIYH0y0imOg2gbQP8addqwBa1mGIZlFpOWcvFHEmp4iT2YUcz8jjeHo+R87mknguHwAR6NcxhFGxbRndLYzhXUPx93G9y1WqebOrEIwEfm+M+ZHz9ZMAxpi/VGrzhbPNZhHxAc4A4aaWUFoIlNWKSss5ePoCu1Oy2ZOWzdGzeZzIyKO4rOJ77fy8vQgO8CHI3x
t/H28uXrQpLa8gr7icvOJSikq//zM+XkLntoHERgQzIKo1g6Jb0z8qRMf9K8vZ1UcQCaRWep0GDK+pjXOx+xygLXCuciMRmQvMBejUqZNVeZUCIMDXmyGd2jCk0rrJFRWGk9mFnMwu/O4b/vmCUvKKS8kvLqe4rPy7tj5eXgT5+9AywIdWAT60axVA+5AAOoS0oHPbQHy9dV4k5VqsLATV9WpV/aZflzYYYxYAC8BxRnD50ZSqHy8vITo0kOhQHa6p3I+VX03SgOhKr6OAUzW1cV4aCgGyLMyklFKqCisLwXagu4h0ERE/4E7g4yptPgbudT6/DVhbW/+AUkqpxmfZpSHnNf9HgC9wDB9dZIw5ICLzgXhjzMfAa8ASEUnAcSZwp1V5lFJKVc/SAcrGmNXA6irvPV3peREwzcoMSimlaqfDF5RSysNpIVBKKQ+nhUAppTycFgKllPJwzW4aahHJAJIb+ONhVLlr2UW4ai5w3Wyaq340V/24Y67Oxpjw6jY0u0JwOUQkvqa5NuzkqrnAdbNprvrRXPXjabn00pBSSnk4LQRKKeXhPK0QLLA7QA1cNRe4bjbNVT+aq348KpdH9REopZT6IU87I1BKKVWFFgKllPJwbl0IRORZETksIntF5AMRaV1Du0kickREEkTkiSbINU1EDohIhYjUOBRMRJJEZJ+I7BYRy9fnrEeuJj1ezn2GisgaETnm/LNNDe3Kncdrt4hUnfa8sbLU+vcXEX8RWe7cvlVEYqzI0YBcs0Qko9Lxua+Jci0SkXQR2V/DdhGRF52594rIEBfJNV5Eciodr6era2dBrmgR+UZEDjn/f3ysmjaNe8yMMW77AK4FfJzP/wr8tZo23sBxoCvgB+wB+licqzfQE1gHxNXSLgkIa8Ljdclcdhwv537/BjzhfP5Edf8tndvyLM5xyb8/8BDwivP5ncDyJjg+dck1C3i5qf49VdrvOGAIsL+G7dcDn+FYsXAEsNVFco0HPrHheHUAhjiftwSOVvPfslGPmVufERhjvjTGlDlfbsGxSlpVw4AEY8wJY0wJ8A4wxeJch4wxR6zcR0PUMVeTHy+nKcBi5/PFwM1NsM/q1OXvXznrCmCiiFS3LGtT57KFMWY9ta88OAV40zhsAVqLSAcXyGULY8xpY8xO5/Nc4BCO9d0ra9Rj5taFoIrZOCpoVZFAaqXXafzwoNvFAF+KyA4RmWt3GCe7jlc7Y8xpcPyPAkTU0C5AROJFZIuIWFEs6vL3/66N84tIDtDWgiz1zQVwq/NSwgoRia5mux1c+f/BkSKyR0Q+E5G+Tb1z52XFwcDWKpsa9ZhZujBNUxCRr4D21Wx6yhjzkbPNU0AZsLS6j6jmvcseU1uXXHUw2hhzSkQigDUictj5LcbOXJYcL6g9Wz0+ppPzmHUF1orIPmPM8cbI51SXv79lx6gWddnnKmCZMaZYRObhOGuZYHGuurDjeNXFThzz8+SJyPXAh0D3ptq5iAQD7wOPG2MuVN1czY80+Jg1+0JgjLm6tu0ici9wIzDROC+uVZEGVP5mFAWcsjpXHT/jlPPPdBH5AMfp/2UVgkbIZcnxgtqzichZEelgjDntPAVOr+EzLh6zEyKyDse3qcYsBHX5+19skyYiPkAI1l+CuGQuY0xmpZev4ug3cwWW/Zu6HJV/+RpjVovIv0QkzBhj+WR0IuKLowgsNcasrKZJox4zt740JCKTgF8Bk40xBTU02w50F5EuIuKHo3PPktEm9SEiQSLS8uJzHB3f1Y5uaGJ2Ha+PgXudz+8FfnD2IiJtRMTf+TwMGA0cbOQcdfn7V856G7C2hi8hTZqryjXkyTiuPbuCj4GZzpEwI4Cci5cB7SQi7S/27YjIMBy/LzNr/6lG2a/gWM/9kDHmuRqaNe4xa+oe8aZ8AAk4rqPtdj4ujuToCKyu1O56HD3zx3FcIrE61y04KnoxcBb4omouHKM/9jgfB1wllx3Hy7nPtsDXwDHnn6HO9+OAhc7no4B9zmO2D5hjUZYf/P2B+Ti+cAAEAO85//1tA7o20TG6VK6/OP8t7QG+AXo1Ua5lwGmg1Pnvaw4wD5jn3C7AP52591HLSLomzvVIpeO1BRjVRLnG4LjMs7fS767rrTxmOsWEUkp5OLe+NKSUUurStBAopZSH00KglFIeTguBUkp5OC0ESinl4bQQKKWUh9NCoJRSHk4LgVKXSUSucE7kFuC8I/yAiPSzO5dSdaU3lCnVCETkTzjuKG4BpBlj/mJzJKXqTAuBUo3AOb/PdqAIx1QE5TZHUqrO9NKQUo0jFAjGsaJUgM1ZlKoXPSNQqhGIY33kd4AuQAdjzCM2R1Kqzpr9egRK2U1EZgJlxpi3RcQb2CQiE4wxa+3OplRd6BmBUkp5OO0jUEopD6eFQCmlPJwWAqWU8nBaCJRSysNpIVBKKQ+nhUAppTycFgKllPJw/w8z4cUm3m1L+AAAAABJRU5ErkJggg==\n",
|
||
"text/plain": [
|
||
"<Figure size 432x288 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"plot_function(f, 'x', 'x**2')\n",
|
||
"plt.scatter(-1.5, f(-1.5), color='red');"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Now we look to see what would happen if we increased or decreased our parameter by a little bit — the *adjustment*. This is simply the slope at a particular point:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<img alt=\"A graph showing the squared function with the slope at one point\" width=\"400\" caption=\"The slope of a function\" src=\"images/grad_illustration.svg\" id=\"slope\"/>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can change our weight by a little in the direction of the slop, calculate our loss and adjustment again, and repeat this a few times. Eventually, we will get to the lowest point on our curve:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<img alt=\"An illustration of gradient descent\" width=\"400\" caption=\"Gradient descent\" src=\"images/chapter2_perfect.svg\" id=\"descent\"/>"
|
||
]
|
||
},
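{
"cell_type": "markdown",
"metadata": {},
"source": [
"The picture above is easy to reproduce in code. The short loop below starts at -1.5 (the same point we marked in red on the curve earlier) and repeatedly steps a small amount in the downhill direction. For now we simply use the fact that the slope of `x**2` is `2*x`; the next section shows how to get slopes like this automatically, so don't worry about where that formula comes from yet."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = -1.5                # the parameter value we started at\n",
"for i in range(10):\n",
"    slope = 2*x         # the slope of f(x) = x**2 at the current x\n",
"    x -= 0.1*slope      # step a little in the downhill direction\n",
"    print(f'x={x:.3f}  loss={x**2:.3f}')"
]
},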
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"This basic idea goes all the way back to Isaac Newton, who pointed out that we can optimise arbitrary functions in this way. Regardless of how complicated our functions become, this basic approach of gradient descent will not significantly change. The only minor changes we will see later in this book are some handy ways we can make it faster, by finding better steps."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## The gradient"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The one magic step is the bit where we calculate the *gradients*. As we mentioned, we can use calculus as a performance optimization; it allows us to more quickly calculate whether our loss will go up or down when we adjust our parameters up or down. In other words, the gradients will tell us how much we have to change each weight to make our model better.\n",
|
||
"\n",
|
||
"Perhaps you remember back to your high school calculus class: the *derivative* of a function tells you how much a change in the parameters of a function will change its result. Don't worry, lots of us forget our calculus once high school is behind us! But you will have to have some intuitive understanding of what a derivative is before you continue, so if this is all very fuzzy in your head, head over to Khan Academy and complete the lessons on basic derivatives. You won't have to know how to calculate them yourselves, you just have to know what a derivative is.\n",
|
||
"\n",
|
||
"The key point about a derivative is this: for any function, such as the quadratic function we saw before, we can calculate its derivative. The derivative is another function. It calculates the change, rather than the value. For instance, the derivative of the quadratic function at the value three tells us how rapidly the function changes at the value three. More specifically, you may remember from high school that gradient is defined as \"rise/run\", that is, the change in the value of the function, divided by the change in the value of the parameter. When we know how our function will change, then we know what we need to do to make it smaller. This is the key to machine learning: having a way to change the parameters of a function to make it smaller. Calculus provides us with a computational shortcut, the derivative, which lets us directly calculate the gradient of our functions."
|
||
]
|
||
},
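{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we hand this job over to PyTorch, we can check the \"rise/run\" idea numerically on our quadratic `f` from earlier. This is just a sanity check with made-up numbers: nudge the input by a tiny amount, divide the change in the output by the change in the input, and compare with the answer the calculus rule gives (the derivative of `x**2` is `2*x`, so at `x=3` it should be about 6)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = 3.\n",
"run  = 0.0001            # a tiny change in the input\n",
"rise = f(x+run) - f(x)   # the resulting change in the output\n",
"rise/run                 # roughly 6, matching the exact derivative 2*x"
]
},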
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"One important thing to be aware of: our function has lots of weights that we need to adjust, so when we calculate the derivative we won't get back one number, but lots of them — a gradient for every weight. But there is nothing mathematically tricky here; you can calculate the derivative with respect to one weight, and treat all the other ones as constant. Then repeat that for each weight. This is how all of the gradients are calculated, for every weight.\n",
|
||
"\n",
|
||
"We mentioned just now that you won't have to calculate any gradients yourselves. How can that be? Amazingly enough, PyTorch is able to automatically compute the derivative of nearly any function! What's more, it does it very fast. Most of the time, it will be at least as fast as any derivative function that you can create by hand. Let's see an example. First, pick a value (which must be a tensor) we want gradients at:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"xt = tensor(3.).requires_grad_()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Notice the special method `requires_grad_`? That's the magical incantation we use to tell PyTorch that we want to calculate gradients for that value.\n",
|
||
"\n",
|
||
"Now we calculate our function with that value (notice how PyTorch prints not just the value calculated, but also a note that it has a gradient function it'll be using to calculate our gradient when needed):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(9., grad_fn=<PowBackward0>)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"yt = f(xt)\n",
|
||
"yt"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Finally, we tell PyTorch to calculate the gradients for us:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"yt.backward()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> note: The \"backward\" here refers to \"back propagation\", which is the name given to the process of calculating the derivative of each layer (we'll see how this is done exactly in chapter <chapter_foundations>, when we calculate the gradients of a deep neural net from scratch). This is called the \"backward pass\" of the network, as opposed to the \"forward pass\", which is where the activations are calculated. Life would probably be easier if `backward` was just called `calculate_grad`, but deep learning folks really do like to add jargon everywhere they can!"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can now view the gradients by checking the `grad` attribute of our tensor:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(6.)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"xt.grad"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"If you remember your high school calculus rules, the derivative of `x**2` is `2*x`, and we have `x=3`, so the gradient should be `2*3=6`, which is what PyTorch calculated for us!\n",
|
||
"\n",
|
||
"Now we'll repeat the above steps, but with a vector argument for our function:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([ 3., 4., 10.], requires_grad=True)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"xt = tensor([3.,4.,10.]).requires_grad_()\n",
|
||
"xt"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"...and adding `sum()` to our function so it can take a vector (i.e. a *rank-1 tensor*), and return a scalar (i.e. a *rank-0 tensor*):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(125., grad_fn=<SumBackward0>)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"def f(x): return (x**2).sum()\n",
|
||
"\n",
|
||
"yt = f(xt)\n",
|
||
"yt"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Our gradients are `2*xt`, as we'd expect!"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([ 6., 8., 20.])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"yt.backward()\n",
|
||
"xt.grad"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## The loss function"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"As we've seen, if we are going to calculate gradients (which we need), then we need some *loss function* that represents how good our model is. The obvious approach would be to use the accuracy for this purpose. In this case, we would calculate our prediction for each image, and then calculate the overall accuracy (remember, at first we simply use random weights), and then calculate the gradients of each weight with respect to that accuracy calculation.\n",
|
||
"\n",
|
||
"Unfortunately, we have a significant technical problem here. The gradient of a function is its *slope*, or its steepness, which can be defined as *rise over run* -- that is, how much the value of function goes up or down, divided by how much you changed the input. We can write this in maths: `(y_new-y_old) / (x_new-x_old)`. Specifically, it is defined when x_new is very similar to x_old, meaning that their difference is very small. But accuracy only changes at all when a prediction changes from a 3 to a 7, or vice versa. So the problem is that a small change in weights from from x_old to x_new isn't likely to cause any prediction to change, so `(y_new - y_old)` will be zero. (In other words, the gradient is zero almost everywhere.) As a result, a very small change in the value of a weight will often not actually change the accuracy at all. This means it is not useful to use accuracy as a loss function. When we use accuracy as a loss function, most of the time our gradients will actually be zero, and the model will not be able to learn from that number. That is not much use at all!\n",
|
||
"\n",
|
||
"> s: In mathematical terms, accuracy is a function that is constant almost everywhere (except at the threshold, 0.5) so its derivative is nil almost everywhere (and infinity at the threshold). This then gives gradients that are zero or infinite, so useless to do an update of gradient descent.\n",
|
||
"\n",
|
||
"Instead, we want a loss function which, when our weights result in slightly better predictions, gives us a slightly better loss. So what does a \"slightly better prediction\" look like, exactly? Well, in this case, it means that, if the correct answer is a 3, then the score is a little higher, or if the correct answer is a 7, then the score is a little lower. Here is a simple implementation of just such a function, assuming that `inputs` are numbers between zero and one:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def mnist_loss(inputs, targets):\n",
|
||
" return torch.where(targets==1, 1-inputs, inputs).mean()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Here, we're assuming that `targets` contains `1` for any digit which is meant to be a three, and `0` otherwise. Let's look at an example:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"tgt = tensor([1,0,1])\n",
|
||
"inp = tensor([0.9, 0.4, 0.2])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"`torch.where(a,b,c)` is the same as running the list comprehension `[b[i] if a[i] else c[i] for i in range(len(a))]`, except it works on tensors, at C/CUDA speed. (It's important to learn about PyTorch functions like this, because looping over tensors in Python performs at Python speed, not C/CUDA speed!) Try running `help(torch.where)` now to read the docs for this function, or, better still, look it up on the PyTorch documentation site."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([0.1000, 0.4000, 0.8000])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"torch.where(tgt==1, 1-inp, inp)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"You can see that this function will return a lower number if the predictions are more accurate, and more confident for accurate predictions (higher absolute values) and less confident for inaccurate predictions. In PyTorch, we always assume that a lower value of a loss function is better."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(0.4333)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"mnist_loss(inp,tgt)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"For instance, if we change our prediction for the one \"false\" target from `0.2` to `0.8` the loss will go down, indicating that this is a better prediction."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(0.2333)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"mnist_loss(tensor([0.9, 0.4, 0.8]),tgt)"
|
||
]
|
||
},
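{
"cell_type": "markdown",
"metadata": {},
"source": [
"One practical catch is worth seeing before we move on: nothing in `mnist_loss` forces the predictions to stay between 0 and 1, and if they wander outside that range the numbers it returns stop making sense. The predictions below are made up purely to show the problem; the next section covers the standard way to fix it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# these made-up predictions fall outside the range 0..1, and the\n",
"# resulting loss comes out negative, which makes no sense\n",
"mnist_loss(tensor([2.0, -1.0, 0.5]), tgt)"
]
},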
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Sigmoid"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"One problem with `mnist_loss` as currently defined is that it assumes that inputs are always between zero and one. We need to ensure, then, that this is actually the case! As it happens, there is a function that does exactly that--it always outputs a number between one and one. This function is called *sigmoid* and is defined by:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def sigmoid(x): return 1/(1+torch.exp(-x))"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Pytorch actually already defines this for us, so we don’t really need our own version. This is an important function in deep learning, since we often want to ensure values between zero and one. This is what it looks like:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXQAAAEICAYAAABPgw/pAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deXzU5b328c+XAAES9oQtC4sEWWWLaF3qhj2gFVxqBa2W6uNWt6o9rR49+tRW29r2VK27lbqDaytHsG51R2WRnbCEPWxZgIQEEpLM9/kjsU+MQQaY5Dczud6vF69mZm7mdxVmLm/u32bujoiIxL4WQQcQEZHIUKGLiMQJFbqISJxQoYuIxAkVuohInFChi4jECRW6xB0zu8jM3o627ZrZB2b2f5oykzQvKnSJWWZ2gpnNNrNiM9thZp+a2dHu/ry7f6+p8wS1XZGvtAw6gMihMLMOwBvA1cBLQGvgRKAiyFwiQdIMXWLVAAB3n+bu1e6+193fdvfFZjbFzD75aqCZfc/MVtbO5B82sw+/WvqoHfupmf3ZzHaZ2VozO672+U1mlm9mP67zXh3N7BkzKzCzDWZ2u5m1qPNedbd7upmtqN3ug4A12Z+ONEsqdIlVq4BqM3vazMabWeeGBplZCvAKcCvQFVgJHFdv2DHA4trXXwCmA0cD/YEfAQ+aWXLt2L8AHYF+wEnAJcBP9rPdV4HbgRRgDXD8of6fFQmHCl1ikruXACcADjwBFJjZDDPrXm/oGcAyd3/N3auAB4Bt9casc/e/uXs18CKQAdzl7hXu/jawD+hvZgnABcCt7r7b3dcDfwIubiDiGcByd3/F3SuB+xrYrkhEqdAlZrl7jrtPcfd0YCjQi5rirKsXsKnO73Egr96Y7XV+3ls7rv5zydTMtFsDG+q8tgFIayBeQ9vd1MA4kYhRoUtccPcVwFPUFHtdW4H0rx6YmdV9fJAKgUqgd53nMoHNDYzdSs1Mv+52MxoYJxIxKnSJSWY20MxuNrP02scZwGTg83pDZwLDzOxsM2sJXAP0OJRt1i7JvATcbWbtzaw3cBPwXAPDZwJDzOzc2u1ef6jbFQmXCl1i1W5qdmZ+YWZl1BT5UuDmuoPcvRA4H7gXKAIGA/M49MMbrwPKgLXAJ9TsRJ1af1Cd7f6udrtZwKeHuE2RsJhucCHNSe0hhnnARe7+ftB5RCJJM3SJe2b2H2bWycwSgf+i5njw+kszIjFPhS7NwXeoOQ68EDgLONvd9wYbSSTytOQiIhInNEMXEYkTgV2cKyUlxfv06RPU5kVEYtL8+fML3T21odcCK/Q+ffowb968oDYvIhKTzGzD/l7TkouISJw4YKGb2dTaS4gu3c/rZmYPmFmumS02s1GRjykiIgcSzgz9KWDct7w+npqz4LKAK4BHDj+WiIgcrAMWurt/BOz4liETgWe8xudAJzPrGamAIiISnkisoafx9cuC5tHw5URFRKQRRaLQG7qtVoNnK5nZFWY2z8zmFRQURGDTIiLylUgUeh5fv85zOrCloYHu/ri7Z7t7dmpqg4dRiojIIYrEcegzgGvNbDo1lzMtdvetEXhfEZGYFgo5hWUVbC+uYHtJOfm7a/73tEHdOCq9U8S3d8BCN7NpwMlAipnlAXcCrQDc/VFgFjX3T8wF9tDADXNFROJReWU1eTv3sGnnXvJ27mXzzr1s2bWXrcV72bKrnO0l5VSFvrkCndo+MZhCd/fJB3jdqbkLjIhI3KmoqmZ94R7WFpSyrqiM9YVlrC/cw4YdZWwv+fp9UlontKBHxzb06tSGY/p2oUfHNvTo2IbuHWp+dWufSEpyIq1bNs45nYGd+i8iEk32VYXIzS9l1fbdrNy+m9Xbd5ObX8rGHXuoO8lObZ9In67tODErlcwu7cjs0o6MLm1J79yO1OREWrRo6DiRpqFCF5Fmp6yiiuVbS1i6uZilm0tYtqWYNQWlVFbXNHfLFkbflCSG9OrIhBFpHJGaxBGpyfRJSSI5MXprM3qTiYhEQCjkrCkoZf6GnSzYuItFebtYtX33v2fdKcmJDOnVgZOP7Magnu0Z1LMDfbomNdqySGNSoYtIXKmsDrFkczFz1u3gi7VFfLlxF8V7KwHo1K4Vw9M78R9DenBUekeGpXWkW4c2ASeOHBW6iMQ0d2fl9t18srqQT3MLmbNuB2X7qgHol5rE+KE9GN27M6N7d6ZvShJmwa1xNzYVuojEnNKKKj5eVcAHKwv4cFUB20rKAeiXksQ5o9L4Tr8UxvTtQmr7xICTNi0VuojEhO0l5by9bBvv5OTz+Zoi9lWHaN+mJSdmpXDygG6ckJVCr05tg44ZKBW6iEStbcXlzFyylTeXbGX+xp24Q9+UJKYc34fTBnZjdO/OtEyIvZ2XjUWFLiJRpXhvJW8u2crrC7fw+boi3GFgj/bcOHYA44f2IKt7+6AjRi0VuogErjrkfJpbyCvz83hr2TYqqkL0TUnihtOyOGt4L45ITQ46YkxQoYtIYLaXlPPS3E1Mn7uJzbv20rFtKy44OoPzRqVzVHrHuD4ipTGo0EWkSbk7c9fv5KnZ63hr2XaqQ87x/bty6xkDOX1wdxJbJgQdMWap0EWkSeyrCvG/i7Yw9dN1LNtSQoc2LbnshL5MHpNJ35SkoOPFBRW6iDSq0ooqps/ZyJOfrGNrcTlZ3ZK555xhnD2yF+1aq4IiSX+aItIoivdW8vTs9Tz5yTqK91ZybL8u3HPuME4ekKq18UaiQheRiCopr+TJj9cx9dN17C6vYuygblxzSn9GZnYOOlrcU6GLSETs3VfNM5+t55EP17BrTyX/MaQ7152axdC0jkFHazZU6CJyWKpDzmtf5vGnt1exraSckwak8vPvHcmwdBV5U1Ohi8ghm51byK9n5pCztYThGZ24f9IIjunXNehYzZYKXUQOWt7OPdw9M4c3l24jrVNbHpg8krOO6qmdnQFToYtI2Cqqqnn8w7U89EEuADefPoDLv9uPNq10MlA0UKGLSFjmrNvBra8tZk1BGeOH9uD27w8mrZlfrjbaqNBF5FuVlFfy21k5TJuzifTObfnbT47mlCO7BR1LGqBCF5H9en9lPre+uoT83eVc+d1+3DA2S2d3RjH9zYjIN+wur+Su/13Oy/PzyOqWzGMXH8/wjE5Bx5IDUKGLyNfMXb+DG19cyJZde/npyUdww9gsXQExRqjQRQSAyuoQ97+7moc/yCW9cztevuo4RvfW6fqxRIUuImzetZfrpy1g/oadnD86nTsnDCE5UfUQa/Q3JtLMvbt8Oz9/ZRGVVSHunzSCiSPSgo4kh0iFLtJMVVWH+OPbq3j0wzUM7tmBhy4apRtNxDgVukgzVFRawfXTF/BpbhGTx2Ry51mDdbZnHFChizQzS/KKufLZeRSW7ePeHxzFD7Mzgo4kEdIinEFmNs7MVppZrpnd0sDrmWb2vpktMLPFZnZG5KOKyOF6Y/EWzn9sNmbGq1cdpzKPMwecoZtZAvAQcDqQB8w1sxnuvrzOsNuBl
9z9ETMbDMwC+jRCXhE5BKGQc997q3ngvdVk9+7MoxePJiU5MehYEmHhLLmMAXLdfS2AmU0HJgJ1C92BDrU/dwS2RDKkiBy68spqbn55ETMXb+X80en85pyhOlEoToVT6GnApjqP84Bj6o35v8DbZnYdkASMjUg6ETksO8r2ccUz85i3YSe3jB/Ild/tp2uWx7Fw1tAb+tv3eo8nA0+5ezpwBvCsmX3jvc3sCjObZ2bzCgoKDj6tiIRtQ1EZ5z0ym8Wbi3nowlFcddIRKvM4F06h5wF195yk880llcuAlwDc/TOgDZBS/43c/XF3z3b37NTU1ENLLCIHtHRzMec9Mptde/Yx7fJjOPOonkFHkiYQTqHPBbLMrK+ZtQYmATPqjdkInAZgZoOoKXRNwUUCMDu3kEmPf05iywReufo4RvfuEnQkaSIHLHR3rwKuBd4Ccqg5mmWZmd1lZhNqh90MXG5mi4BpwBR3r78sIyKN7J9LtzLlb3Pp1akNr159HEekJgcdSZpQWCcWufssag5FrPvcHXV+Xg4cH9loInIwXp2fx3++sogRGZ2YOuVoOrVrHXQkaWI6U1QkDjz7+Qb++x9LOb5/V564JFt3FWqm9LcuEuOe+Ggtd8/K4bSB3XjoolG6JkszpkIXiWGPfriG3725gjOH9eS+SSNolRDW1TwkTqnQRWLUwx/kcu8/V3LW8F78+YfDaakyb/b0CRCJQV+V+QSVudShT4FIjPnrx2u5958rmTiiF/+jMpc69EkQiSHPfrae38zM4YxhPfjT+Spz+Tp9GkRixEvzNvHfry9j7KBu3HfBSJW5fIM+ESIxYNaSrdzy6mJOzErhwQtH0bqlvrryTfpUiES5j1cXcMP0BYzK7MxjF4/WceayXyp0kSg2f8NOrnhmPv27tefJKUfrDFD5Vip0kSi1evtuLn1qLt07JPLMpWPo2LZV0JEkyqnQRaLQ1uK9/HjqHFq3bMGzlx1Danvd/1MOTIUuEmWK91QyZepcSsqreOonR5PRpV3QkSRGqNBFokh5ZTWXPzuPdYVlPH7JaIb06hh0JIkh2sMiEiVCIefnLy9izrod/GXySI474ht3cRT5Vpqhi0SJe99ayRuLt3LL+IGcNbxX0HEkBqnQRaLA819s4NEP13DRMZlc+d1+QceRGKVCFwnYh6sKuOP1ZZw6sBu/mjAEMws6ksQoFbpIgFZv3821z3/JgO7teWCyrs8ih0efHpGAFJVWcOnTc2nTOoEnf5xNcqKOUZDDo0IXCUBFVTVXPjuf/JIKnrgkm16d2gYdSeKApgQiTczdue3vS5m3YScPXjiSERmdgo4kcUIzdJEm9uQn63hlfh43nJbF94/S4YkSOSp0kSb0wcp87pmVw/ihPbjhtKyg40icUaGLNJG1BaVcN20BR/bowJ9+OJwWLXR4okSWCl2kCewur+TyZ+bRKqEFT1wyWtc1l0ahT5VIIwuFnBtfXMT6oj08d9kxpHfW1ROlcWiGLtLIHvjXat7N2c7tZw7iO0d0DTqOxDEVukgjemf5du57dzXnjUpnynF9go4jcU6FLtJI1haUctOLCxmW1pG7zxmqa7RIo1OhizSCsooqrnpuPi0TjEd+NIo2rRKCjiTNQFiFbmbjzGylmeWa2S37GfNDM1tuZsvM7IXIxhSJHe7OL19dTG5+KQ9MHqmdoNJkDniUi5klAA8BpwN5wFwzm+Huy+uMyQJuBY53951m1q2xAotEu6mfrueNxVv5xbgjOTErNeg40oyEM0MfA+S6+1p33wdMBybWG3M58JC77wRw9/zIxhSJDXPX7+C3s3L43uDuXH3SEUHHkWYmnEJPAzbVeZxX+1xdA4ABZvapmX1uZuMiFVAkVhTsruCa578kvXNb/vjD4doJKk0unBOLGvpUegPvkwWcDKQDH5vZUHff9bU3MrsCuAIgMzPzoMOKRKuq6hDXTfuSkvJKnr50DB3atAo6kjRD4czQ84CMOo/TgS0NjHnd3SvdfR2wkpqC/xp3f9zds909OzVVa4sSP/749io+X7uDu88exqCeHYKOI81UOIU+F8gys75m1hqYBMyoN+YfwCkAZpZCzRLM2kgGFYlW7+Vs59EP1zB5TAbnjU4POo40YwcsdHevAq4F3gJygJfcfZmZ3WVmE2qHvQUUmdly4H3gP929qLFCi0SLTTv2cNNLixjSqwN3njUk6DjSzIV1cS53nwXMqvfcHXV+duCm2l8izUJFVTXXvPAlIXcevkgnD0nwdLVFkUN0z8wcFucV8+iPRtO7a1LQcUR06r/IoZi5eCtPf7aBy07oy7ihPYKOIwKo0EUO2vrCMn756mJGZHTil+MGBh1H5N9U6CIHobyyZt08oYXx4IUjad1SXyGJHlpDFzkId8/MYdmWEv56SbYuuiVRR9MLkTC9sXgLz36+gctP7MvYwd2DjiPyDSp0kTCsLyzjlleXMDKzE7/QurlEKRW6yAFUVFVz7bSadfO/TB5JqwR9bSQ6aQ1d5ADumZnD0s0lPKF1c4lymmqIfIt/Lv3/x5ufrnVziXIqdJH92LRjD//5ymKGp3fU8eYSE1ToIg3YVxXi2mkLAHjwwlE63lxigtbQRRrwh7dWsGjTLh65aBQZXbRuLrFB0w6Ret7L2c4TH6/j4mN7M35Yz6DjiIRNhS5Sx9bivdz88iIG9+zAbWcOCjqOyEFRoYvUqqoOcf20BVRWhXjwwpG6vrnEHK2hi9S6793VzF2/k/snjaBfanLQcUQOmmboIsDHqwt46INcfpidzsQRaUHHETkkKnRp9vJLyrnxxYX0T03mVxOGBh1H5JBpyUWateqQ87MXF1JaUcULlx9L29ZaN5fYpUKXZu3Bf+Uye00R9/7gKAZ0bx90HJHDoiUXabY+W1PE/e+t4uwRvTh/dHrQcUQOmwpdmqWC3RVcP30Bfbom8ZtzhmFmQUcSOWxacpFmJxRybnppISV7K3nm0jEkJ+prIPFBM3Rpdh7+IJePVxdy51lDGNSzQ9BxRCJGhS7Nyudri/ifd1Zx1vBeTB6TEXQckYhSoUuzUVhawfXTFtC7axL3nDNU6+YSd1To0ixUh5wbX1xI8d5KHrpwFO3btAo6kkjEaW+QNAsPv1+zbv7bc4cxuJfWzSU+aYYucW/2mkL+/G7N8eaTjta6ucQvFbrEtfyScq6ftpB+qcncrePNJc5pyUXiVlV1zX1ByyqqmHb5MSTpeHOJc2HN0M1snJmtNLNcM7vlW8b9wMzczLIjF1Hk0PzpnVXMWbeDe84dSpau0yLNwAEL3cwSgIeA8cBgYLKZDW5gXHvgeuCLSIcUOVjvLt/OIx+sYfKYTM4Zqeu0SPMQzgx9DJDr7mvdfR8wHZjYwLhfA/cC5RHMJ3LQNhbt4caXFjI0rQN3nvWNuYdI3Aqn0NOATXUe59U+929mNhLIcPc3IphN5KCVV1Zz1XPzaWHGIxeN1n1BpVkJZy9RQ4cF+L9fNGsB/BmYcsA3MrsCuAIgMzMzvIQiYXJ3/vsfS1m+tYS/TTmajC7tgo4k0qTCmaHnAXUP
3k0HttR53B4YCnxgZuuBY4EZDe0YdffH3T3b3bNTU1MPPbVIA6bP3cTL8/O47tT+nDKwW9BxRJpcOIU+F8gys75m1hqYBMz46kV3L3b3FHfv4+59gM+BCe4+r1ESizRg4aZd3Pn6Mr47IJWfjR0QdByRQByw0N29CrgWeAvIAV5y92VmdpeZTWjsgCIHUlhawdXPzadbh0Tuv2AECS108pA0T2GdaeHus4BZ9Z67Yz9jTz78WCLhqaoOcd0LC9hRto9Xrz6Ozkmtg44kEhidOicx7ff/XMFna4v44/nDGZrWMeg4IoHStVwkZv1jwWae+HgdU47rww90k2cRFbrEpqWbi/nlq4s5pm8XbjtzUNBxRKKCCl1iTlFpBVc+O5+uSa156KJRtErQx1gEtIYuMWZfVYirn/uSwtIKXrnqOFKSE4OOJBI1VOgSM9ydO2csZc76Hdw/aQTD0rUTVKQu/VtVYsbTs9czbc4mrjnlCCaOSDvwbxBpZlToEhM+WV3Ir2fmMHZQd24+/cig44hEJRW6RL3c/FKufn4+/VOTuW/SCFroTFCRBqnQJartKNvHZU/PJbFlC56ckk2ybiMnsl/6dkjUqqiq5qpn57O1uJzpVxxLemddDlfk22iGLlHJ3bn11SXMWb+DP54/nFGZnYOOJBL1VOgSlf787mpeW7CZm08fwIThvYKOIxITVOgSdV6at4kH3lvNBdkZXHtq/6DjiMQMFbpElU9WF/Jfry3hxKwUfnPOUMx0RItIuFToEjWWbi7mymfn0b9bMg/rGi0iB03fGIkKG4rKmPK3OXRq15qnLx1D+zatgo4kEnN02KIErrC0gh9PnUNVyJl+6Ri6d2gTdCSRmKQZugRqd3klP/nbXLaVlPPkj4+mf7fkoCOJxCwVugSmvLKa//P0PHK2lvDIRaMZ3VvHmoscDi25SCAqq0Nc8/yXzFm/g/suGMEpA7sFHUkk5mmGLk2uOuT8/OVFvLcin7smDtWlcEUiRIUuTSoUcm59bTGvL9zCL8YdycXH9g46kkjcUKFLk6m549AyXpqXx/WnZfHTk3UWqEgkqdClSbg7d8/M4dnPN3Dld/tx49isoCOJxB0VujS6r8r8r5+sY8pxfbhl/ECd0i/SCHSUizQqd+fXb+Qw9dOaMr/zrMEqc5FGokKXRuPu/Op/l/PU7PX85Pg+3PF9lblIY1KhS6OoDjm3/X0J0+du4rIT+nL7mYNU5iKNTIUuEVdZHeLnLy/i9YVbuO7U/tx0+gCVuUgTUKFLRJVXVnPdtAW8s3w7vxw3kKtPPiLoSCLNhgpdIqZ4byWXPzOPuet3cNfEIVzynT5BRxJpVsI6bNHMxpnZSjPLNbNbGnj9JjNbbmaLzew9M9Ppf81Mfkk5Fzz2GQs27uT+SSNV5iIBOGChm1kC8BAwHhgMTDazwfWGLQCy3f0o4BXg3kgHleiVm1/KeY/OZuOOPUydcrRu6iwSkHBm6GOAXHdf6+77gOnAxLoD3P19d99T+/BzID2yMSVafbG2iPMemc3efdVMu/xYTsxKDTqSSLMVTqGnAZvqPM6rfW5/LgPePJxQEhteX7iZi5+cQ9fk1vz9p8czPKNT0JFEmrVwdoo2dLyZNzjQ7EdANnDSfl6/ArgCIDMzM8yIEm1CIee+d1fxwL9yOaZvFx67eDSd2rUOOpZIsxdOoecBGXUepwNb6g8ys7HAbcBJ7l7R0Bu5++PA4wDZ2dkN/kdBotuefVXc9OIi/rlsG+ePTuc35wwlsWVC0LFEhPAKfS6QZWZ9gc3AJODCugPMbCTwGDDO3fMjnlKiwqYde7jy2fms2FbC7WcO4rIT+uqEIZEocsBCd/cqM7sWeAtIAKa6+zIzuwuY5+4zgD8AycDLtV/wje4+oRFzSxP7aFUB109fQHXIeXLK0ZxypG4ZJxJtwjqxyN1nAbPqPXdHnZ/HRjiXRIlQyHnkwzX88e2VHNm9PY/+aDR9UpKCjiUiDdCZorJfRaUV3PzyIj5YWcDEEb347bnDaNdaHxmRaKVvpzRozrodXDftS3buqeTXE4fwo2N7a71cJMqp0OVrqqpDPPh+Lg+8t5reXZOYOuVohvTqGHQsEQmDCl3+bX1hGT97cSELN+3inJFp3DVxCO3btAo6loiESYUuuDsvzNnI3TNzaNnC+MvkkZyl67GIxBwVejOXt3MPt7y6hE9yCzm+f1f+8IPh9OrUNuhYInIIVOjNVHXIef6LDfz+zRUA3H3OUC4ck6kdnyIxTIXeDOVsLeHW15awcNMuTuifwm/PHUZGl3ZBxxKRw6RCb0ZKK6p44L3VTP1kHR3atuLPFwzn7BFpmpWLxAkVejPg7ry+cAv3zMohf3cFF2RncMv4gXRO0hUSReKJCj3Ozd+wk7tnLufLjbsYnt6Rxy/JZoSuWy4Sl1TocWpj0R5+/9YKZi7eSmr7RH5/3jDOH51BixZaXhGJVyr0OLO9pJy//Gs10+dsolVCC244LYsrvtuPpET9VYvEO33L40T+7nKe+Ggtz3y2geqQM2lMBtedmkX3Dm2CjiYiTUSFHuO2FZfz2EdreOGLjVRWhzh7RBo/GzuAzK46DFGkuVGhx6jV23fz+Edr+cfCzYQczh2Zxk9P6U9fXatcpNlSoccQd+fT3CL+9uk63luRT5tWLZg8JpPLT+ynE4NERIUeC0orqvj7gs08M3s9q/NL6ZrUmhtOy+KS7/Sma3Ji0PFEJEqo0KOUu7NsSwnPf7GRGQs3U7avmiG9OvDH84fz/aN60qZVQtARRSTKqNCjTGFpBf9YsJlX5uexYttu2rRqwfeP6sWFx2QyMqOTTtMXkf1SoUeB0ooq3lm+jdcXbuHj1YVUh5zh6R25a+IQJo5Io2Nb3WRCRA5MhR6Q3eWV/GtFPm8u2cYHq/IprwyR1qktl5/Yj3NHpTGge/ugI4pIjFGhN6FtxeW8t2I77y7fzqdrithXFaJb+0R+mJ3BhOG9GJXZWafmi8ghU6E3osrqEF9u2MkHqwr4cGUBy7eWAJDZpR0XH9ub8UN7qMRFJGJU6BEUCjk520r4bE0Rs9cU8cXaIsr2VdOyhTG6d2d+Me5ITh/Unf7dkrVzU0QiToV+GMorq1m6uZh5G3Yyb/0O5qzbQUl5FQD9UpI4d1Q6x/fvynH9U+jQRjs2RaRxqdDDFAo564vKWJxXzMJNu1i4aRfLt5SwrzoEQN+UJM4Y1pNj+nXhmL5ddaNlEWlyKvQGlFdWs3p7KTlbS1i+tYTlW0pYtqWYsn3VALRtlcCw9I785Pg+jO7dmVG9O5OiMzZFJGDNutDLKqpYW1BGbsFu1uSXsWr7blbnl7KhqIyQ14xp1zqBgT3a84PR6QxJ68iwtI5kdUumZUKLYMOLiNQT14Xu7uzcU0nezj1sKNrDxh172Fi0h3VFZawvLCN/d8W/xya0MHp3bcfAHu05a3gvBvZoz6CeHejdpZ2OQhGRmBCzhe7u7K6oIr+knG3FFWwrKWdb8V427ypna/FetuzaS97OveypXSb5SkpyIn26tuOkAan0SUniiNQ
k+ndLJrNLEq1batYtIrEr5gr9xbkbeej9NeTvLqe8MvSN17smtaZXp7b06ZrECf1TSe/clrTObcns0o7MLu10KzYRiVthtZuZjQPuBxKAv7r77+q9ngg8A4wGioAL3H19ZKPW6JqUyPCMTnRvn0i3Dol0a9+GHh3b0LNjG7p3aKOrEIpIs3XAQjezBOAh4HQgD5hrZjPcfXmdYZcBO929v5lNAn4PXNAYgccO7s7Ywd0b461FRGJaOIvGY4Bcd1/r7vuA6cDEemMmAk/X/vwKcJrpVEgRkSYVTqGnAZvqPM6rfa7BMe5eBRQDXSMRUEREwhNOoTc00/ZDGIOZXWFm88xsXkFBQTj5REQkTOEUeh6QUedxOrBlf2PMrCXQEdhR/43c/XF3z3b37NTU1ENLLCIiDQqn0OcCWWbW18xaA5OAGfXGzAB+XPvzD4B/ufs3ZugiItJ4DniUi7tXmSHN4icAAAQRSURBVNm1wFvUHLY41d2XmdldwDx3nwE8CTxrZrnUzMwnNWZoERH5prCOQ3f3WcCses/dUefncuD8yEYTEZGDoXPdRUTihAW11G1mBcCGQ/ztKUBhBONEinIdHOU6eNGaTbkOzuHk6u3uDR5VElihHw4zm+fu2UHnqE+5Do5yHbxozaZcB6excmnJRUQkTqjQRUTiRKwW+uNBB9gP5To4ynXwojWbch2cRskVk2voIiLyTbE6QxcRkXpU6CIicSLmC93Mfm5mbmYpQWcBMLNfm9liM1toZm+bWa+gMwGY2R/MbEVttr+bWaegMwGY2flmtszMQmYW+OFlZjbOzFaaWa6Z3RJ0HgAzm2pm+Wa2NOgsdZlZhpm9b2Y5tX+HNwSdCcDM2pjZHDNbVJvrV0FnqsvMEsxsgZm9Een3julCN7MMau6ktDHoLHX8wd2PcvcRwBvAHQf6DU3kHWCoux8FrAJuDTjPV5YC5wIfBR2kzt25xgODgclmNjjYVAA8BYwLOkQDqoCb3X0QcCxwTZT8eVUAp7r7cGAEMM7Mjg04U103ADmN8cYxXejAn4Ff0MC114Pi7iV1HiYRJdnc/e3am48AfE7NZZAD5+457r4y6By1wrk7V5Nz949o4HLUQXP3re7+Ze3Pu6kpqfo3v2lyXqO09mGr2l9R8T00s3TgTOCvjfH+MVvoZjYB2Ozui4LOUp+Z3W1mm4CLiJ4Zel2XAm8GHSIKhXN3LmmAmfUBRgJfBJukRu2yxkIgH3jH3aMiF3AfNZPQUGO8eVhXWwyKmb0L9GjgpduA/wK+17SJanxbLnd/3d1vA24zs1uBa4E7oyFX7ZjbqPmn8vNNkSncXFEirDtvydeZWTLwKvCzev9CDYy7VwMjavcV/d3Mhrp7oPsgzOz7QL67zzezkxtjG1Fd6O4+tqHnzWwY0BdYVHsv6nTgSzMb4+7bgsrVgBeAmTRRoR8ol5n9GPg+cFpT3oDkIP68ghbO3bmkDjNrRU2ZP+/urwWdpz5332VmH1CzDyLoncrHAxPM7AygDdDBzJ5z9x9FagMxueTi7kvcvZu793H3PtR8EUc1RZkfiJll1Xk4AVgRVJa6zGwc8EtggrvvCTpPlArn7lxSy2pmU08COe7+P0Hn+YqZpX51FJeZtQXGEgXfQ3e/1d3TaztrEjV3dotYmUOMFnqU+52ZLTWzxdQsCUXFoVzAg0B74J3aQyofDToQgJmdY2Z5wHeAmWb2VlBZancaf3V3rhzgJXdfFlSer5jZNOAz4EgzyzOzy4LOVOt44GLg1NrP1MLa2WfQegLv134H51Kzhh7xQwSjkU79FxGJE5qhi4jECRW6iEicUKGLiMQJFbqISJxQoYuIxAkVuohInFChi4jEif8H2NvEDAneJ2QAAAAASUVORK5CYII=\n",
|
||
"text/plain": [
|
||
"<Figure size 432x288 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"plot_function(torch.sigmoid, title='Sigmoid', min=-4, max=4)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's update `mnist_loss` to first apply `sigmoid` to the inputs:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def mnist_loss(inputs, targets):\n",
|
||
" inputs = inputs.sigmoid()\n",
|
||
" return torch.where(targets==1, 1-inputs, inputs).mean()"
|
||
]
|
||
},
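  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `sigmoid` squashes any input into the range between zero and one, this updated `mnist_loss` can now be given raw activations of any size. For example (using made-up activations, just as a quick check):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# These made-up activations are not between 0 and 1, which is now fine\n",
    "acts = tensor([2.0, -1.0, 0.5])\n",
    "mnist_loss(acts, tgt)"
   ]
  },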
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Sidebar: loss versus metric"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We now have two terms which are somewhat similar: loss and metric. They are similar because they are both measures of how well your model is performing. The key difference, though, is that the loss must be a function which has a meaningful derivative. It can't have big flat sections, and large jumps, but instead must be reasonably smooth. Therefore, sometimes it does not really reflect exactly what we are trying to achieve, but is something that is a compromise between our real goal, and a function that can be optimised using its gradient. The loss function is calculated for each item in our dataset, and then at the end of an epoch these are all averaged, and the overall mean loss is reported for the epoch.\n",
|
||
"\n",
|
||
"Metrics, on the other hand, are the numbers that we really care about. These are the things which are printed at the end of each epoch, and tell us how our model is really doing. It is important that we learn to focus on these metrics, rather than the loss, when judging the performance of a model."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### End sidebar"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Stepping with a learning rate"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The gradient only tells us the slope of our function, it doesn't actually tell us how far to adjust the parameters. It gives us some idea of how far to adjust them; if the slope is very large, then that may suggest that we have more adjustments to do, whereas if the slope is very small, that may suggest that we are close to the optimal value.\n",
|
||
"\n",
|
||
"Deciding how to change our parameters based on the value of the gradients is an important part of the deep learning process. Nearly all approaches start with the basic idea of multiplying the gradient by some small number, called the *learning rate* (LR). The learning rate is often a number between 0.001 and 0.1, although it could be anything. Often, people select a learning rate just by trying a few, and finding which results in the best model after training (we'll show you a better approach later in this book, called the *learning rate finder*). Once you've picked a learning rate, you can adjust your parameters using this simple function:\n",
|
||
"\n",
|
||
"```\n",
|
||
"w -= gradient(w) * lr\n",
|
||
"```\n",
|
||
"\n",
|
||
"This is known as *stepping* your parameters, using a *optimiser step*.\n",
|
||
"\n",
|
||
"If you pick a learning rate that's too low, it can mean having to do for a lot of steps:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<img alt=\"An illustration of gradient descent with a LR too low\" width=\"400\" caption=\"Gradient descent with low LR\" src=\"images/chapter2_small.svg\" id=\"descent_small\"/>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Although picking a learning rate that's too high is even worse--it can actually result in the loss getting *worse*!"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<img alt=\"An illustration of gradient descent with a LR too high\" width=\"400\" caption=\"Gradient descent with high LR\" src=\"images/chapter2_div.svg\" id=\"descent_div\"/>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"If the learning rate is too high, it may also \"bounce\" around, rather than actually diverging; this has the result of taking many steps to train successfully:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<img alt=\"An illustation of gradient descent with a bouncy LR\" width=\"400\" caption=\"Gradient descent with bouncy LR\" src=\"images/chapter2_bouncy.svg\" id=\"descent_bouncy\"/>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Summarizing gradient descent"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"To summarize, at the beginning, the weights of our model can be random (training *from scratch*) or come from of a pretrained model (*transfer learning*). In the first case, the output we will get from our inputs won't have anything to do with what we want, and even in the second case, it's very likely the pretrained model won't be very good at the speficic task we are targetting. So the model will need to *learn* better weights.\n",
|
||
"\n",
|
||
"To do this, we will compare the outputs the model gives us with our targets (we have labelled data, so we know what result the model should give) using a *loss function*, which returns a number that needs to be as low as possible. Our weights need to be improved. To do this, we take a few data items (such as images) that we feed to our model. After going through our model, we compare to the corresponding targets using our loss function. The score we get tells us how wrong our predictions were, and we will change the weights a little bit to make it slightly better.\n",
|
||
"\n",
|
||
"To find how to change the weights to make the loss a bit better, we use calculus to calculate the *gradient* (actually, we let PyTorch do it for us!) Let's imagine you are lost in the mountains with your car parked at the lowest point. To find your way, you might wander in a random direction but that probably won't help much. Since you know you your vehicle is at the lowest point, you would be better to go downhill. By always taking a step in the direction of the steepest slope, you should eventually arrive at your destination. We use the gradient to tell us how big a step to take; specifically, we multiply the gradient by a number we choose called the *learning rate* to decide on the step size."
|
||
]
|
||
},
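  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is that whole process in miniature, with a single made-up parameter and a toy quadratic loss (this is only a sketch; the real training loop later in this chapter follows exactly the same pattern):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A toy parameter and a toy loss, just to watch a few gradient-descent steps in isolation\n",
    "param = tensor(3.0).requires_grad_()    # 1. initialize a (made-up) parameter\n",
    "for i in range(3):\n",
    "    loss = (param - 1)**2               # 2. a toy loss: lowest when param == 1\n",
    "    loss.backward()                     # 3. calculate the gradient\n",
    "    param.data -= 0.1 * param.grad      # 4. step, using a learning rate of 0.1\n",
    "    param.grad.zero_()                  # gradients accumulate, so reset them\n",
    "    print(round(param.item(), 3))"
   ]
  },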
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Stochastic gradient descent and mini-batches"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"In order to take an optimiser step we need to calculate the loss over one or more data items. We could calculate it for the whole dataset, and take the average, or we could calculate it for a single data item. But neither of these sounds ideal — calculating it for the whole dataset would take a very long time, but calculating it for a single item would result in a very imprecise and unstable gradient. So instead we take a compromise between the two: we calculate the average loss for a few data items at a time. This is called a *mini-batch*. The number of data items in the mini batch is called the *batch size*. A larger batch size means that you will get a more accurate and stable estimate of your datasets gradient on the loss function, but it will take longer, and you will get less mini-batches per epoch. Choosing a good batch size is one of the decisions you need to make as a deep learning practitioner to train your model quickly and accurately. We will talk about how to make this choice throughout this book.\n",
|
||
"\n",
|
||
"Another good reason for using mini-batches rather than calculating the gradient on individual data items is that, in practice, we nearly always do our training on an accelerator such as a GPU. These accelerators only perform well if they have lots of work to do at a time. So it is helpful if we can give them lots of data items to work on at a time. Using mini-batches is one of the best ways to do this. (Although if you give them too much data to work on at once, they run out of memory--making GPUs happy is tricky!)\n",
|
||
"\n",
|
||
"As we've seen, in the discussion of data augmentation, we get better generalisation if we can very things during training. A simple and effective thing we can vary during training is what data items we put in each mini batch. Rather than simply enumerating our data set in order for every epoch, instead what we normally do in practice is to randomly shuffle it on every epoch, before we create mini batches. PyTorch and fastai provide a class that will do the shuffling and mini batch collation for you, called `DataLoader`.\n",
|
||
"\n",
|
||
"A `DataLoader` can take any Python collection, and turn it into an iterator over many batches, like so:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"[tensor([9, 3, 6, 8, 0]),\n",
|
||
" tensor([13, 1, 14, 4, 12]),\n",
|
||
" tensor([ 7, 11, 2, 5, 10])]"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"coll = range(15)\n",
|
||
"dl = DataLoader(coll, batch_size=5, shuffle=True)\n",
|
||
"list(dl)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"For training a model, we don't just want any Python collection, but a collection containing independent and dependent variables. A collection that contains tuples of independent and dependent variables is known in PyTorch as a Dataset. Here's an example of an extremely simple Dataset:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(#26) [(0, 'a'),(1, 'b'),(2, 'c'),(3, 'd'),(4, 'e'),(5, 'f'),(6, 'g'),(7, 'h'),(8, 'i'),(9, 'j')...]"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"ds = L(enumerate(string.ascii_lowercase))\n",
|
||
"ds"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"When we pass a Dataset to a DataLoader we will get back many batches which are themselves tuples of independent and dependent variable many batches:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"[(tensor([ 7, 19, 17, 13, 25, 15]), ('h', 't', 'r', 'n', 'z', 'p')),\n",
|
||
" (tensor([11, 9, 23, 21, 3, 16]), ('l', 'j', 'x', 'v', 'd', 'q')),\n",
|
||
" (tensor([12, 2, 18, 22, 14, 24]), ('m', 'c', 's', 'w', 'o', 'y')),\n",
|
||
" (tensor([ 1, 0, 20, 4, 6, 10]), ('b', 'a', 'u', 'e', 'g', 'k')),\n",
|
||
" (tensor([8, 5]), ('i', 'f'))]"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"dl = DataLoader(ds, batch_size=6, shuffle=True)\n",
|
||
"list(dl)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Putting it all together"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"In code, our process will be implemented something like this for each epoch:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"for x,y in dl:\n",
|
||
" pred = model(x)\n",
|
||
" loss = loss_func(pred, y)\n",
|
||
" loss.backward()\n",
|
||
" parameters -= parameters.grad * lr\n",
|
||
"```\n",
|
||
"\n",
|
||
"We already have our `x`s--that's the images themselves. We'll concatenate them all into a single tensor, and also change them from a list of matrices (a rank 3 tensor) to a list of vectors (a rank 2 tensor). We can do this using `view`, which is a PyTorch method that changes the shape of a tensor without changing its contents. `-1` is a special parameter to `view`. It means: make this axis as big as necessary to fit all the data."
|
||
]
|
||
},
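  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, here is `view` with `-1` on a small made-up tensor, turning three tiny 2x2 \"images\" (a rank-3 tensor) into three vectors of length 4 (a rank-2 tensor):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = torch.ones(3,2,2)   # a rank-3 tensor: three 2x2 \"images\"\n",
    "t.view(-1, 2*2).shape   # -1 says: make this axis whatever size fits the data"
   ]
  },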
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"train_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We need a label for each. We'll use `1` for threes and `0` for sevens:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(torch.Size([12396, 784]), torch.Size([12396, 1]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"train_y = tensor([1]*len(threes) + [0]*len(sevens)).unsqueeze(1)\n",
|
||
"train_x.shape,train_y.shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"A Dataset in PyTorch is required to return a tuple of `(x,y)` when indexed. Python provides a `zip` function which, when combined with `list`, provides a simple way to get this functionality:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(torch.Size([784]), tensor([1]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"dset = list(zip(train_x,train_y))\n",
|
||
"x,y = dset[0]\n",
|
||
"x.shape,y"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"This is enough to allow us to create a `DataLoader`:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(torch.Size([256, 784]), torch.Size([256, 1]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"dl = DataLoader(dset, batch_size=256)\n",
|
||
"xb,yb = first(dl)\n",
|
||
"xb.shape,yb.shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We'll do the same for the validation set:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"valid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28)\n",
|
||
"valid_y = tensor([1]*len(valid_3_tens) + [0]*len(valid_7_tens)).unsqueeze(1)\n",
|
||
"valid_dset = list(zip(valid_x,valid_y))\n",
|
||
"valid_dl = DataLoader(valid_dset, batch_size=256)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Now we need an (initially random) weight for every pixel (this is the *initialize* step in our 7-step process):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def init_params(size, std=1.0): return (torch.randn(size)*std).requires_grad_()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"weights = init_params((28*28,1))"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The function `weights*pixels` won't be flexible enough--it is always equal to zero when the pixels are equal to zero (i.e. it's *intercept* is zero). You might remember from high school math that the formula for a line is `y=w*x+b`; we still need the `b`. We'll initialize it to a random number too:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"bias = init_params(1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"In neural networks, the `w` in the equation `y=w*x+b` is called the *weights*, and the `b` is called the *bias*. Together, the weights and bias make up the *parameters*."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> jargon: Parameters: the *weights* and *biases* of a model. The weights are the `w` in the equation `w*x+b`, and the biases are the `b` in that equation."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can now calculate a prediction for one image:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([4.5118], grad_fn=<AddBackward0>)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"(train_x[0]*weights.T).sum() + bias"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We need a way to do this for all the images in a mini-batch. Let's create a mini-batch of size 4 for testing:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"torch.Size([4, 784])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"batch = train_x[:4]\n",
|
||
"batch.shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Whilst we could use a python for loop to calculate the prediction for each image, that would be very slow. Because Python loops don't run on the GPU, and because Python is a slow language for loops in general, we need to represent as much of the computation in a model as possible using higher-level functions.\n",
|
||
"\n",
|
||
"In this case, there's an extremely convenient mathematical operation that calculates `w*x` for every row of a matrix--it's called *matrix multiplication*. Here's what matrix multiplication looks like (diagram from Wikipedia):"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<img alt=\"Matrix multiplication\" width=\"400\" caption=\"Matrix multiplication\" src=\"images/matmul2.svg\" id=\"matmul\"/>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"This image shows two matrices, `A` and `B` being multiplied together. Each item of the result, which we'll call `AB`, contains each item of its corresponding row of `A` multiplied by each item of its corresponding column of `B`, added together. For instance, row 1 column 2 (the orange dot with a red border) is calculated as $a_{1,1} * b_{1,2} + a_{1,2} * b_{2,2}$. If you need a refresher on matrix multiplication, we suggest you take a look at the great *Introduction to Matrix Multiplcation* on *Khan Academy*, since this is the most important mathematical operation in deep learning.\n",
|
||
"\n",
|
||
"In Python, matrix multiplication is represented with the `@` operator. Let's try it:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([[ 4.5118],\n",
|
||
" [ 3.6536],\n",
|
||
" [11.2975],\n",
|
||
" [14.1164]], grad_fn=<AddBackward0>)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"def linear1(xb): return xb@weights + bias\n",
|
||
"preds = linear1(batch)\n",
|
||
"preds"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The first element is the same as we calculated before, as we'd expect. This equation, `batch@weights + bias`, is one of the two fundamental equations of any neural network (the other one is the *activation function*, which we'll see in a moment).\n",
|
||
"\n",
|
||
"The `mnist_loss` function we wrote earlier already works on a mini-batch, thanks to the magic of broadcasting! Here's the loss for our mini-batch:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(0.0090, grad_fn=<MeanBackward0>)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"loss = mnist_loss(preds, train_y[:4])\n",
|
||
"loss"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Now we can calculate the gradients:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(torch.Size([784, 1]), tensor(-0.0013), tensor([-0.0088]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"loss.backward()\n",
|
||
"weights.grad.shape,weights.grad.mean(),bias.grad"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's put that all in a function:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def calc_grad(xb, yb, model):\n",
|
||
" preds = model(xb)\n",
|
||
" loss = mnist_loss(preds, yb)\n",
|
||
" loss.backward()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"...and test it:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor(-0.0025), tensor([-0.0177]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"calc_grad(batch, train_y[:4], linear1)\n",
|
||
"weights.grad.mean(),bias.grad"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"But look what happens if we call it twice:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(tensor(-0.0038), tensor([-0.0265]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"calc_grad(batch, train_y[:4], linear1)\n",
|
||
"weights.grad.mean(),bias.grad"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The gradients have changed! The reason for this is that `loss.backward` actually *adds* the gradients of `loss` to any gradients that are currently stored. So we have to set the current gradients to zero first."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"weights.grad.zero_()\n",
|
||
"bias.grad.zero_();"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> note: Methods in PyTorch that end in an underscore modify their object *in-place*. For instance, `bias.zero_()` sets all elements of the tensor `bias` to zero."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Our only remaining step will be to update the weights and bias based on the gradient and learning rate. When we do so, we have to tell PyTorch not to take the gradient of this step too, otherwise things will get very confusing! If we assign to the `data` attribute of a tensor then PyTorch will not take the gradient of that step. Here's our basic training loop for an epoch:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def train_epoch(model, lr, params):\n",
|
||
" for xb,yb in dl:\n",
|
||
" calc_grad(xb, yb, model)\n",
|
||
" for p in params:\n",
|
||
" p.data -= p.grad*lr\n",
|
||
" p.grad.zero_()"
|
||
]
|
||
},
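  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reason we update `p.data`, rather than `p` itself, is that operations on the `data` attribute are not tracked by autograd, so the weight update does not itself become part of the next gradient calculation. Here is a quick illustration with a throwaway tensor:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "p = tensor(2.0).requires_grad_()\n",
    "p.data -= 0.5          # this in-place change is invisible to autograd\n",
    "p, p.requires_grad     # the value has changed, but p still tracks gradients"
   ]
  },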
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We also want to know how we're doing, by looking at the accuracy of the validation set. To decide if an output represents a 3 or a 7, we can just check whether it's greater than zero. So our accuracy for each item can be calculated (using broadcasting, so no loops!) with:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor([[True],\n",
|
||
" [True],\n",
|
||
" [True],\n",
|
||
" [True]])"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"(preds>0.0).float() == train_y[:4]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"That gives us this function to calculate our validation accuracy:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def batch_accuracy(xb, yb):\n",
|
||
" preds = xb.sigmoid()\n",
|
||
" correct = (preds>0.5) == yb\n",
|
||
" return correct.float().mean()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can check it works:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"tensor(1.)"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"batch_accuracy(linear1(batch), train_y[:4])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"...and then putting the batches together:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def validate_epoch(model):\n",
|
||
" accs = [batch_accuracy(model(xb), yb) for xb,yb in valid_dl]\n",
|
||
" return round(torch.stack(accs).mean().item(), 4)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"0.4403"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"validate_epoch(linear1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"That's our starting point. Let's train for one epoch, and see if the accuracy improves:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"0.4992"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"lr = 1.\n",
|
||
"params = weights,bias\n",
|
||
"train_epoch(linear1, lr, params)\n",
|
||
"validate_epoch(linear1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"0.6772 0.8081 0.914 0.9453 0.9565 0.9619 0.9624 0.9633 0.9658 0.9677 0.9702 0.9716 0.9721 0.9736 0.9741 0.9745 0.9765 0.977 0.977 0.9765 "
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"for i in range(20):\n",
|
||
" train_epoch(linear1, lr, params)\n",
|
||
" print(validate_epoch(linear1), end=' ')"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Looking good! We're already about at the same accuracy as our \"pixel similarity\" approach, and we've created a general purpose foundation we can build on."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Creating an optimizer"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Because this is such a useful general foundation, PyTorch provides some useful classes to make it easier to implement. The first we'll use is to replace our `linear()` function with PyTorch's `nn.Linear` *module*. A \"module\" is an object of a class that inherits from the PyTorch `nn.Module` class. Objects of this class behave identically to a standard Python function, in that you can call it using parentheses, and it will return the activations of a model.\n",
|
||
"\n",
|
||
"`nn.Linear` does the same thing as our `init_params` and `linear` together. It contains both the *weights* and *bias* in a single class. Here's how we replicate our model from the previous section:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"linear_model = nn.Linear(28*28,1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Every PyTorch module knows what parameters it has that can be trained; they are available through the `parameters` method:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"(torch.Size([1, 784]), torch.Size([1]))"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"w,b = linear_model.parameters()\n",
|
||
"w.shape,b.shape"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can use this information to create an optimizer:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class BasicOptim:\n",
|
||
" def __init__(self,params,lr): self.params,self.lr = list(params),lr\n",
|
||
"\n",
|
||
" def step(self, *args, **kwargs):\n",
|
||
" for p in self.params: p.data -= p.grad.data * self.lr\n",
|
||
"\n",
|
||
" def zero_grad(self, *args, **kwargs):\n",
|
||
" for p in self.params: p.grad = None"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We can create our optimizer by passing in the model's parameters:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"opt = BasicOptim(linear_model.parameters(), lr)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Our training loop can now be simplified to:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def train_epoch(model):\n",
|
||
" for xb,yb in dl:\n",
|
||
" calc_grad(xb, yb, model)\n",
|
||
" opt.step()\n",
|
||
" opt.zero_grad()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Our validation function doesn't need to change at all:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"0.6714"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"validate_epoch(linear_model)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Let's put our little training loop in a function, to make things simpler:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def train_model(model, epochs):\n",
|
||
" for i in range(epochs):\n",
|
||
" train_epoch(model)\n",
|
||
" print(validate_epoch(model), end=' ')"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The results are the same as the previous section."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"0.4932 0.7935 0.8477 0.9165 0.9346 0.9482 0.956 0.9634 0.9658 0.9673 0.9702 0.9717 0.9731 0.9751 0.9756 0.9765 0.9775 0.978 0.9785 0.9785 "
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"train_model(linear_model, 20)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"fastai provides the `SGD` class which, by default, does the same thing as our `BasicOptim`:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"0.4932 0.771 0.8594 0.918 0.9355 0.9492 0.9575 0.9634 0.9658 0.9682 0.9692 0.9717 0.9731 0.9751 0.9756 0.977 0.977 0.9785 0.9785 0.9785 "
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"linear_model = nn.Linear(28*28,1)\n",
|
||
"opt = SGD(linear_model.parameters(), lr)\n",
|
||
"train_model(linear_model, 20)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"fastai also provides `Learner.fit`, which we can use instead of `train_model`. To create a `Learner` we first need to create `DataLoaders`, by passing in our training and validation `DataLoader`s:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"dls = DataLoaders(dl, valid_dl)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"To create a `Learner` without using an application (such as `cnn_learner`) we need to pass in all the information that we've created in this chapter: the `DataLoaders`, the model, the optimization function (which will be passed the parameters), the loss function, and optionally any metrics to print:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"learn = Learner(dls, nn.Linear(28*28,1), opt_func=SGD,\n",
|
||
" loss_func=mnist_loss, metrics=batch_accuracy)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Now we can call `fit`:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/html": [
|
||
"<table border=\"1\" class=\"dataframe\">\n",
|
||
" <thead>\n",
|
||
" <tr style=\"text-align: left;\">\n",
|
||
" <th>epoch</th>\n",
|
||
" <th>train_loss</th>\n",
|
||
" <th>valid_loss</th>\n",
|
||
" <th>batch_accuracy</th>\n",
|
||
" <th>time</th>\n",
|
||
" </tr>\n",
|
||
" </thead>\n",
|
||
" <tbody>\n",
|
||
" <tr>\n",
|
||
" <td>0</td>\n",
|
||
" <td>0.636918</td>\n",
|
||
" <td>0.503445</td>\n",
|
||
" <td>0.495584</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>1</td>\n",
|
||
" <td>0.500283</td>\n",
|
||
" <td>0.192597</td>\n",
|
||
" <td>0.839549</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>2</td>\n",
|
||
" <td>0.184349</td>\n",
|
||
" <td>0.182295</td>\n",
|
||
" <td>0.833660</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>3</td>\n",
|
||
" <td>0.081278</td>\n",
|
||
" <td>0.107260</td>\n",
|
||
" <td>0.912169</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>4</td>\n",
|
||
" <td>0.043316</td>\n",
|
||
" <td>0.078320</td>\n",
|
||
" <td>0.932777</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>5</td>\n",
|
||
" <td>0.028503</td>\n",
|
||
" <td>0.062712</td>\n",
|
||
" <td>0.946025</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>6</td>\n",
|
||
" <td>0.022414</td>\n",
|
||
" <td>0.052999</td>\n",
|
||
" <td>0.955348</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>7</td>\n",
|
||
" <td>0.019704</td>\n",
|
||
" <td>0.046531</td>\n",
|
||
" <td>0.962218</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>8</td>\n",
|
||
" <td>0.018323</td>\n",
|
||
" <td>0.041979</td>\n",
|
||
" <td>0.965653</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>9</td>\n",
|
||
" <td>0.017486</td>\n",
|
||
" <td>0.038622</td>\n",
|
||
" <td>0.966634</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" </tbody>\n",
|
||
"</table>"
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.HTML object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"learn.fit(10, lr=lr)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"As you can see, there's nothing magic about the PyTorch and fastai classes. They are just convenient pre-packaged pieces that make your life a bit easier! (They also provide a lot of extra functionality we'll be using in future chapters.)\n",
|
||
"\n",
|
||
"With these classes, we can now replace our linear model with a neural network."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Adding a non-linearity"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"So far we have a general procedure for optimising the parameters of a function, and we have tried it out on a very boring function: a simple linear classifier. A linear classifier is very constrained in terms of what it can do. Let's instead use a neural network. Here is the entire definition of a basic neural network:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def simple_net(xb): \n",
|
||
" res = xb@w1 + b1\n",
|
||
" res = res.max(tensor(0.0))\n",
|
||
" res = res@w2 + b2\n",
|
||
" return res"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"That's it! All we have in `simple_net` is two linear classifiers with a max function between them.\n",
|
||
"\n",
|
||
"Here, `w1` and `w2` are weight tensors, and `b1` and `b2` are bias tensors; that is, parameters that are initially randomly initialised, just like we did in the previous section."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"w1 = init_params((28*28,30))\n",
|
||
"b1 = init_params(30)\n",
|
||
"w2 = init_params((30,1))\n",
|
||
"b2 = init_params(1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"The key point about this is that `w1` has 30 output activations (which means that `w2` must have 30 input activations, so they match). That means that the first layer can construct 30 different features, each representing some different mix of pixels. You can change that `30` to anything you like, to make the model more or less complex.\n",
|
||
"\n",
|
||
"That little function `res.max(tensor(0.0))` is called a *rectified linear unit*, also known as *ReLU*. I think we can all agree that *rectified linear unit* sounds pretty fancy and complicated... But actually, there's nothing more to it than `res.max(tensor(0.0))`, in other words: replace every negative number with a zero. This tiny function is also available in PyTorch as `F.relu`:"
|
||
]
|
||
},
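  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can check on a small tensor that the two ways of writing it really are the same thing; the next cell then plots what `F.relu` looks like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = tensor([-2.0, -0.5, 0.0, 1.5])\n",
    "t.max(tensor(0.0)), F.relu(t)   # both replace every negative number with zero"
   ]
  },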
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXoAAAD4CAYAAADiry33AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deXxU5dn/8c8lqwLKFpEdVERQBEIElVq3qkhVtLYVRKtVHypCrbW1RVu1j3azi7UqLrTlZ1tWd6nFBetWa1GSEPYtgkgMkrDvS5Lr98ccnk7DhBzITE5m8n2/XvPKnHPfZ841h3DNyX3OXLe5OyIikrmOiDoAERFJLSV6EZEMp0QvIpLhlOhFRDKcEr2ISIZrGHUAibRt29a7desWdRgiImkjLy9vvbtnJWqrk4m+W7du5ObmRh2GiEjaMLPVVbVp6EZEJMMp0YuIZDglehGRDKdELyKS4ZToRUQyXLWJ3sw6m9nbZrbEzBaZ2XcS9DEze8TMCs1svpllx7Vdb2Yrgsf1yX4DIiJycGFurywDvufu+WbWAsgzs1nuvjiuzyVAj+AxCHgCGGRmrYH7gBzAg21nuPumpL4LERGpUrVn9O6+1t3zg+fbgCVAx0rdhgF/8ZjZQEszaw9cDMxy941Bcp8FDEnqOxARyQAfrdrIH/+5klSUjj+kMXoz6wb0Bz6s1NQRWBO3XBSsq2p9otceZWa5ZpZbWlp6KGGJiKS1km27GTMln8kffsqufeVJf/3Qid7MmgPPA7e7+9bKzQk28YOsP3Cl+wR3z3H3nKyshN/iFRHJOGXlFXx7yly27d7HE9dmc1Tj5BcsCJXozawRsSQ/2d1fSNClCOgct9wJKD7IehERAX7zxnI+XLWRn1/Zh5OPOzol+whz140BfwKWuPtDVXSbAXwjuPvmDGCLu68FXgcuMrNWZtYKuChYJyJS781avI4n3/2YawZ14SvZnVK2nzB/IwwGrgMWmFlBsO5uoAuAuz8JzASGAoXATuCbQdtGM3sAmBNsd7+7b0xe+CIi6Wn1hh3c8UwBfToew72X9k7pvqpN9O7+PonH2uP7ODCmiraJwMTDik5EJAPt3lfO6En5HGHG4yOzadqoQUr3VyfLFIuIZLL7Xl7E4rVbmXhDDp1bH5Xy/akEgohILXomdw3Tc9cw9rwTOf/kdrWyTyV6EZFasqh4C/e8tJCzTmjDdy88qdb2q0QvIlILtuzax+hJ+bQ6qjGPjOhPgyMOeukzqTRGLyKSYu7O95+dR/HmXUz/1hm0bd6kVvevM3oRkRR76r2VzFq8jruG9mJA19a1vn8lehGRFJq9cgO/em0pX+7TnhsHd4skBiV6EZEUKdm6m7FT5tKtTTN+eVUfYoUGap/G6EVEUqCsvIKxU+eyY08Zk28eRIumjSKLRYleRCQFfv36Mj5atZHfXd2Xnse1iDQWDd2IiCTZ64s+56n3VjJyUBeu7J+6YmVhKdGLiCTRJ+t38P1n5nFap2O497LUFisLS4leRCRJdu8rZ/TkfBo0iBUra9IwtcXKwtIYvYhIktzz0kKWfr6ViTecTqdWqS9WFpbO6EVEkmD6nE95Nq+Ib593Iuf1PDbqcP6LEr2ISA0t/GwL97y8iLN7tOU7X6q9YmVhVTt0Y2YTgUuBEnc/NUH7ncDIuNfrBWQFs0t9AmwDyoEyd89JVuAiInXBlp37GD05jzbNGvPw1f1qtVhZWGHO6J8GhlTV6O6/dvd+7t4PuAt4t9J0gecF7UryIpJRKiqc7z1bwNrNu3nsmmza1HKxsrCqTfTu/h4Qdp7XEcDUGkUkIpImnnzvY95cUsKPv9yLAV1bRR1OlZI2Rm9mRxE7838+brUDb5hZnpmNqmb7UWaWa2a5paWlyQpLRCQlPvh4Pb95fRmXntae68/qFnU4B5XMi7GXAf+qNGwz2N2zgUuAMWb2xao2dvcJ7p7j7jlZWVlJDEtEJLnWbd3NbVPn0r1tMx686rTIipWFlcxEP5xKwzbuXhz8LAFeBAYmcX8iIrVuX3kFY6fks2NPOU9eO4BmTer+15GSkujN7BjgHODluHXNzKzF/ufARcDCZOxPRCQqv3ptKXM+2cQvr+pDj3bRFisLK8ztlVOBc4G2ZlYE3Ac0AnD3J4NuVwJvuPuOuE3bAS8Gf9I0BKa4+2vJC11EpHa9tnAtf/jnKq47oyvD+nWMOpzQqk307j4iRJ+nid2GGb9uJdD3cAMTEalLVq3fwZ3Pzqdv55b8+NJeUYdzSPTNWBGRauzaW87oSXk0rGPFysKq+1cRREQi5O78+KWFLFu3jf93w+l0bHlk1CEdMp3Ri4gcxLQ5a3g+v4jbzu/BuXWsWFlYSvQiIlVYULSF+4JiZbdd0CPqcA6bEr2ISAKbd+5l9OQ82jZvzO+H96+TxcrC0hi9iEglFRXOHc/MY93W3TzzrTNp3axx1CHViM7oRUQqeeLdj3lraQn3XNqb/l3qbrGysJToRUTi/KtwPb99YxmX9+3AdWd0jTqcpFCiFxEJfL4lVqzs+Kzm/OIrfep8sbKwNEYvIsJ/ipXt2lfOtJHZaVGsLKzMeSciIjXwy1eXkrt6E4+M6J82xcrC0tCNiNR7Mxes5U/vr+KGs7pxed8OUYeTdEr0IlKvrSzdzg+em0//Li25e2h6FSsLS4leROqtnXvLGD0pn8YNj2D8Ndk0bpiZKVFj9CJSL7k7P35xIctLtvGXGwfSIQ2LlYWVmR9fIiLVmPLRp7ww9zNuv+Akzu6R2fNUV5vozWyimZWYWcJpAM3sXDPbYmYFwePeuLYhZrbMzArNbFwyAxcROVzzizbzvzMWc85JWXz7/BOjDiflwpzRPw0MqabPP929X/C4H8DMGgDjgUuA3sAIM+tdk2BFRGpq0469jJ6UT1aLJjx8dT+OSONiZWFVm+jd/T1g42G89kCg0N1XuvteYBow7DBeR0QkKSoqnO8+U0DJtt2MH5lNqzQvVhZWssbozzSzeWb2qpmdEqzrCKyJ61MUrEvIzEaZWa6Z5ZaWliYpLBGR/xj/diHvLCvl3kt7069zy6jDqTXJSPT5QFd37ws8CrwUrE/095BX9SLuPsHdc9w9Jysrsy+MiEjte3/Feh56czlX9OvAtRlSrCysGid6d9/q7tuD5zOBRmbWltgZfOe4rp2A4pruT0TkUBVv3sVt0+ZyYlZzfp5BxcrCqnGiN7PjLDhqZjYweM0NwBygh5l1N7PGwHBgRk33JyJyKPaWVTBmSj579pXz5HUDOKpx/fv6ULXv2MymAucCbc2sCLgPaATg7k8CXwVGm1kZsAsY7u4OlJnZWOB1oAEw0d0XpeRdiIhU4eczlzD3082MvyabE7KaRx1OJKpN9O4+opr2x4DHqmibCcw8vNBERGrmlfnFPP3BJ3xzcDe+fFr7qMOJjL4ZKyIZqbBkOz98bj7ZXVpy1yWZWawsLCV6Eck4O/aUMXpSHk0aNWD8yMwtVhZW/bsqISIZzd25+8UFFJZu5683DqL9M
ZlbrCys+v0xJyIZZ9Ls1bxcUMwdXzqJL/RoG3U4dYISvYhkjII1m7n/lcWc2zOLMedlfrGysJToRSQjbNqxlzGT8zm2RdN6U6wsLI3Ri0jaq6hwbp9eQOm2PTw3+kxaHlU/ipWFpTN6EUl7j75VyLvLS7n3st6c1qn+FCsLS4leRNLae8tLefgfy/lK/46MHNQl6nDqJCV6EUlbxZt38Z1pcznp2Bb87Mr6V6wsLCV6EUlLe8squHVyPvvKnSeuzebIxg2iDqnO0sVYEUlLP5+5hII1m3l8ZDbH19NiZWHpjF5E0s6MebFiZTd9oTtD+9TfYmVhKdGLSFpZsW4b456fT07XVoy75OSow0kLSvQikjZ27Clj9OR8jmzUgMeuyaZRA6WwMKo9SmY20cxKzGxhFe0jzWx+8PjAzPrGtX1iZgvMrMDMcpMZuIjUL+7OuBcWsLJ0O4+O6M9xxzSNOqS0Eebj8GlgyEHaVwHnuPtpwAPAhErt57l7P3fPObwQRUTgL/9ezd/mFfO9i3py1okqVnYowsww9Z6ZdTtI+wdxi7OJTQIuIpI0+Z9u4qd/X8wFJx/L6HNOiDqctJPsAa6bgFfjlh14w8zyzGzUwTY0s1FmlmtmuaWlpUkOS0TS1cYdexk7OZ92Rzfloa+rWNnhSNp99GZ2HrFE/4W41YPdvdjMjgVmmdlSd38v0fbuPoFg2CcnJ8eTFZeIpK/yCuc70+ayfsdeXhh9Fscc1SjqkNJSUs7ozew04I/AMHffsH+9uxcHP0uAF4GBydifiNQPj/xjBf9csZ7/vfwUTu14TNThpK0aJ3oz6wK8AFzn7svj1jczsxb7nwMXAQnv3BERqeydZSU88tYKrsruxPDTO0cdTlqrdujGzKYC5wJtzawIuA9oBODuTwL3Am2Ax4OCQmXBHTbtgBeDdQ2BKe7+Wgreg4hkmKJNO7l9egE927Xgp1ecqmJlNRTmrpsR1bTfDNycYP1KoO+BW4iIVG1PWTljpsylvNx54toBKlaWBCpqJiJ1yk9fWcK8NZt58tpsurdtFnU4GUHfHxaROuPlgs/46+zV/M/Z3RlyqoqVJYsSvYjUCcvXbWPc8ws4vVsrfjBExcqSSYleRCK3fU8Zoyfl0axJQxUrSwEdTRGJlLvzw+fns2r9Dh4d0Z92R6tYWbIp0YtIpJ7+4BP+Pn8td158Mmee0CbqcDKSEr2IRCZv9SZ+9vclfKlXO2455/iow8lYSvQiEokN2/cwdko+HVoeyW+/3ldfikoh3UcvIrUuVqysgA37i5UdqWJlqaQzehGpdb9/cznvF67ngWEqVlYblOhFpFa9vbSER94q5GsDOnH16V2iDqdeUKIXkVqzZmOsWFmv9kfzwBWnRh1OvaFELyK1IlasLJ+KCueJkdk0baRiZbVFF2NFpFbc/7fFzC/awlPXDaCbipXVKp3Ri0jKvTi3iMkffsq3vng8F59yXNTh1DtK9CKSUss+38bdLyxkYPfW3Hlxz6jDqZdCJXozm2hmJWaWcCpAi3nEzArNbL6ZZce1XW9mK4LH9ckKXETqvm279zF6Uh7NmzbksRH9aahiZZEIe9SfBoYcpP0SoEfwGAU8AWBmrYlNPTiI2MTg95lZq8MNVkTSx/5iZas37uSxEf05VsXKIhMq0bv7e8DGg3QZBvzFY2YDLc2sPXAxMMvdN7r7JmAWB//AEJEMMfFfnzBzwef84OKeDDpexcqilKy/ozoCa+KWi4J1Va0/gJmNMrNcM8stLS1NUlgiEoW81Rv5xcwlXNS7HaO+qGJlUUtWok9UjcgPsv7Ale4T3D3H3XOysrKSFJaI1Lb12/dw6+R8OrY6kl9/TcXK6oJkJfoioHPccieg+CDrRSQDxYqVzWXzzn08MXKAipXVEclK9DOAbwR335wBbHH3tcDrwEVm1iq4CHtRsE5EMtDvZi3nX4UbeOCKU+nd4eiow5FAqG/GmtlU4FygrZkVEbuTphGAuz8JzASGAoXATuCbQdtGM3sAmBO81P3ufrCLuiKSpt5auo7H3i5k+Omd+XpO5+o3kFoTKtG7+4hq2h0YU0XbRGDioYcmIulizcad3D6tgFM6HM1PLj8l6nCkEn17QURqZPe+ckZPzgPgiZEDVKysDlJRMxGpkftfWczCz7byh2/k0KXNUVGHIwnojF5EDtsL+UVM+fBTbjnnBC7s3S7qcKQKSvQicliWfr6Vu19cwKDurfn+RSdFHY4chBK9iByyrbv3MXpSPkc3bcSj16hYWV2nMXoROSTuzg+enc+nG3cy9X/O4NgWKlZW1+ljWEQOyZ/eX8Vriz5n3JCTGdi9ddThSAhK9CIS2pxPNvKLV5cy5JTjuPns7lGHIyEp0YtIKKXb9jBmcj6dWx3Jr752moqVpRGN0YtItcrKK7ht6ly27t7Hn28cyNFNVawsnSjRi0i1Hpq1nH+v3MBvvtaXXu1VrCzdaOhGRA5q1uJ1PP7Ox4wY2JmvDugUdThyGJToRaRKn27YyR3PFHBqx6O57zIVK0tXSvQiktD+YmWGipWlO43Ri0hCP5mxiEXFW/nT9Tl0bq1iZelMZ/QicoBnc9cwbc4abj33BC7opWJl6S5UojezIWa2zMwKzWxcgvbfmVlB8FhuZpvj2srj2mYkM3gRSb7FxVv58UsLOfP4NtxxoYqVZYJqh27MrAEwHriQ2GTfc8xshrsv3t/H3b8b1//bQP+4l9jl7v2SF7KIpMrW3fu4dXIeLY9qxCMjVKwsU4T5VxwIFLr7SnffC0wDhh2k/whgajKCE5Ha4+7c+ew8ijbtYvw12WS1aBJ1SJIkYRJ9R2BN3HJRsO4AZtYV6A68Fbe6qZnlmtlsM7uiqp2Y2aigX25paWmIsEQkmf7wz5W8vmgd4y45mZxuKlaWScIk+kQFLbyKvsOB59y9PG5dF3fPAa4BHjazExJt6O4T3D3H3XOysrJChCUiyfLhyg08+NoyhvY5jpu+oGJlmSZMoi8COsctdwKKq+g7nErDNu5eHPxcCbzDf4/fi0jESrbtZuzUuXRtfRQPXqViZZkoTKKfA/Qws+5m1phYMj/g7hkz6wm0Av4dt66VmTUJnrcFBgOLK28rItEoK6/g21Pmsm33Ph6/NpsWKlaWkaq968bdy8xsLPA60ACY6O6LzOx+INfd9yf9EcA0d48f1ukFPGVmFcQ+VH4Zf7eOiETrN28s58NVG3no6305+TgVK8tUob4Z6+4zgZmV1t1bafknCbb7AOhTg/hEJEXeWPQ5T777MdcM6sJXslWsLJPpJlmRemj1hh1879l59Ol4DPde2jvqcCTFlOhF6pnd+8oZPSmfI8x4fGS2ipXVAypqJlLP3PvyQhav3crEG1SsrL7QGb1IPfLMnDU8k1vE2PNO5PyTVaysvlCiF6knFhVv4Z6XFzL4xDZ8V8XK6hUlepF6YMuufdw6OZ9WRzXm98P70+AIfSmqPtEYvUiGc3e+/+w8Ptu0i+nfOoO2zVWsrL7RGb1IhnvqvZXMWryOu4f2YkBXFSurj5ToRTLY7JUb+NVr
S/nyae355uBuUYcjEVGiF8lQJVt3M3bKXLq1baZiZfWcxuhFMlBZeQVjp85lx54yJt88iOZN9F+9PtO/vkgG+vXry/ho1UYevrofPY9rEXU4EjEN3YhkmNcWfs5T763k2jO6cEX/hJPBST2jRC+SQT5Zv4M7n51H307HcI+KlUlAiV4kQ+zaW84tk/Jo0MAYPzKbJg1VrExiQiV6MxtiZsvMrNDMxiVov8HMSs2sIHjcHNd2vZmtCB7XJzN4EYlxd+55eSHL1m3jd1f3o1MrFSuT/6j2YqyZNQDGAxcSmz92jpnNSDBT1HR3H1tp29bAfUAOsQnF84JtNyUlehEBYPqcNTyXV8Rt55/IeT2PjTocqWPCnNEPBArdfaW77wWmAcNCvv7FwCx33xgk91nAkMMLVUQSWfjZFu6dsYize7TlO19SsTI5UJhE3xFYE7dcFKyr7Cozm29mz5lZ50PcFjMbZWa5ZpZbWloaIiwR2bJzH7dMyqNNs8Y8fHU/FSuThMIk+kS/OV5p+W9AN3c/DXgT+PMhbBtb6T7B3XPcPScrKytEWCL1W0WFc8czBazbupvxI7Npo2JlUoUwib4I6By33Akoju/g7hvcfU+w+AdgQNhtReTwPPHux/xjaQk/GtqL7C6tog5H6rAwiX4O0MPMuptZY2A4MCO+g5m1j1u8HFgSPH8duMjMWplZK+CiYJ2I1MAHH6/nt28s47K+Hbj+rG5RhyN1XLV33bh7mZmNJZagGwAT3X2Rmd0P5Lr7DOA2M7scKAM2AjcE2240sweIfVgA3O/uG1PwPkTqjc+37Oa2qXPp3rYZv/hKHxUrk2qZe8Ih80jl5OR4bm5u1GGI1Dn7yisYMWE2i9du5eUxg+nRTnVsJMbM8tw9J1GbipqJpJEHX11K7upN/H54PyV5CU0lEETSxKsL1vLH91fxjTO7MqyfipVJeEr0ImlgZel27nxuPn07t+RHX+4VdTiSZpToReq4XXvLGT0pn0YNjMdVrEwOg8boReowd+dHLy1geck2nv7mQDq2PDLqkCQN6YxepA6b+tEaXsj/jNvO78E5J+kb43J4lOhF6qj5RZv5SVCs7LYLekQdjqQxJXqROmjzzr2MnpRP2+aN+f3w/ipWJjWiMXqROqaiwvnu9AJKtu3m2VvOonWzxlGHJGlOZ/Qidczj7xTy9rJS7rm0N/06t4w6HMkASvQidci/Ctfz0KzlXN63A9ed0TXqcCRDKNGL1BH7i5Udn9VcxcokqTRGL1IH7CuvYMyUfHbtK2f6tdk0a6L/mpI8+m0SqQN+MXMpeas38eiI/px4rIqVSXJp6EYkYq/ML2biv1Zxw1nduKxvh6jDkQykRC8SocKS7fzwufn079KSu4eqWJmkRqhEb2ZDzGyZmRWa2bgE7XeY2WIzm29m/zCzrnFt5WZWEDxmVN5WpL7aubeMWyfn0aRRA8Zfk03jhjrvktSodozezBoA44ELiU32PcfMZrj74rhuc4Ecd99pZqOBXwFXB2273L1fkuMWSWvuzt0vLGBFyXb+cuNAOqhYmaRQmFOIgUChu690973ANGBYfAd3f9vddwaLs4FOyQ1TJLNM+vBTXioo5vYLTuLsHipWJqkVJtF3BNbELRcF66pyE/Bq3HJTM8s1s9lmdkVVG5nZqKBfbmlpaYiwRNLTvDWbeeBvizm3ZxbfPv/EqMOReiDM7ZWJvrWRcEZxM7sWyAHOiVvdxd2Lzex44C0zW+DuHx/wgu4TgAkQmxw8RFwiaWfTjr3cOjmfrBZN+N3X+3GEipVJLQhzRl8EdI5b7gQUV+5kZl8CfgRc7u579q939+Lg50rgHaB/DeIVSVsVFc53nymgdNseHh+ZTSsVK5NaEibRzwF6mFl3M2sMDAf+6+4ZM+sPPEUsyZfErW9lZk2C522BwUD8RVyReuOxtwt5Z1kp917Wm74qVia1qNqhG3cvM7OxwOtAA2Ciuy8ys/uBXHefAfwaaA48G9Tn+NTdLwd6AU+ZWQWxD5VfVrpbR6Re+OeKUn735nKu7N+RkYO6RB2O1DPmXveGw3Nycjw3NzfqMESSonjzLi599H3aNm/MS2MGc1RjVR6R5DOzPHfPSdSmb2iIpNDeslixsr1lFTxx7QAleYmEfutEUujnM5cw99PNjL8mmxOymkcdjtRTOqMXSZEZ84p5+oNPuHFwd758Wvuow5F6TIleJAUKS7Yx7vn5DOjairuGnhx1OFLPKdGLJNmOPWWMnpTPkUGxskYN9N9MoqUxepEkcnfuemEBH5du5683DeK4Y5pGHZKIzuhFkumvs1czY14xd1x4EoNPbBt1OCKAEr1I0sz9dBMPvLKY808+llvPVbEyqTuU6EWSYOOOvYyZnE+7o5vy0Nf7qliZ1CkaoxepofIK5/bpBazfvpfnR59Fy6NUrEzqFiV6kRp69K0VvLe8lJ9f2Yc+nY6JOhyRA2joRqQG3l1eyu//sYKvZHdkxMDO1W8gEgElepHDVLx5F7dPm0vPdi342RV9CCq3itQ5SvQih2FvWQW3Ts5nX7nz+MhsjmzcIOqQRKqkMXqRw/Czvy+mYM1mnrw2m+NVrEzqOJ3Rixyilws+48//Xs3NX+jOkFNVrEzqvlCJ3syGmNkyMys0s3EJ2puY2fSg/UMz6xbXdlewfpmZXZy80EVq32sL13LXCws4vVsrfniJipVJeqh26MbMGgDjgQuJTRQ+x8xmVJoS8CZgk7ufaGbDgQeBq82sN7E5Zk8BOgBvmtlJ7l6e7Dcikkol23Zz38uLeHXh55zS4WgeU7EySSNhxugHAoXuvhLAzKYBw/jvSb6HAT8Jnj8HPGaxWxCGAdPcfQ+wyswKg9f7d3LC/2+XPfo+u/fpM0SSb+2W3ewtr+AHQ3ryP2cfryQvaSVMou8IrIlbLgIGVdUnmEx8C9AmWD+70rYdE+3EzEYBowC6dDm8yZNPyGrG3vKKw9pW5GD6dW7Jt845gROP1YVXST9hEn2im4MrzyheVZ8w28ZWuk8AJkBscvAQcR3g4eH9D2czEZGMFubvzyIg/it/nYDiqvqYWUPgGGBjyG1FRCSFwiT6OUAPM+tuZo2JXVydUanPDOD64PlXgbfc3YP1w4O7croDPYCPkhO6iIiEUe3QTTDmPhZ4HWgATHT3RWZ2P5Dr7jOAPwF/DS62biT2YUDQ7xliF27LgDG640ZEpHZZ7MS7bsnJyfHc3NyowxARSRtmlufuOYnadI+YiEiGU6IXEclwSvQiIhlOiV5EJMPVyYuxZlYKrD7MzdsC65MYTrIorkOjuA6N4jo0mRhXV3fPStRQJxN9TZhZblVXnqOkuA6N4jo0iuvQ1Le4NHQjIpLhlOhFRDJcJib6CVEHUAXFdWgU16FRXIemXsWVcWP0IiLy3zLxjF5EROIo0YuIZLi0T/Rm9mszW2pm883sRTNrWUW/g05wnoK4vmZmi8yswsyqvF3KzD4xswVmVmBmKa/kdghx1fbxam1ms8xsRfCzVRX9yoNjVWBmlctlJzOeg77/oPT29KD9QzPrlqpYDjG
uG8ysNO4Y3VwLMU00sxIzW1hFu5nZI0HM880sO9UxhYzrXDPbEnes7q2luDqb2dtmtiT4v/idBH2Se8zcPa0fwEVAw+D5g8CDCfo0AD4GjgcaA/OA3imOqxfQE3gHyDlIv0+AtrV4vKqNK6Lj9StgXPB8XKJ/x6Btey0co2rfP3Ar8GTwfDgwvY7EdQPwWG39PgX7/CKQDSyson0o8CqxGefOAD6sI3GdC7xSm8cq2G97IDt43gJYnuDfManHLO3P6N39DXcvCxZnE5vFqrL/m+Dc3fcC+yc4T2VcS9x9WSr3cThCxlXrxyt4/T8Hz/8MXJHi/R1MmPcfH+9zwAVmlmjqzNqOq9a5+3vE5qGoyjDgLx4zG2hpZu3rQFyRcPe17p4fPN8GLOHAubSTeszSPtFXciOxT8HKEk1wnnCS8gg48IaZ5QUTpNcFURyvdu6+FmL/EYBjq+jX1MxyzQYKpccAAAKxSURBVGy2maXqwyDM+/+/PsGJxhagTYriOZS4AK4K/tx/zsw6J2ivbXX5/9+ZZjbPzF41s1Nqe+fBkF9/4MNKTUk9ZmEmB4+cmb0JHJeg6Ufu/nLQ50fEZrGanOglEqyr8X2lYeIKYbC7F5vZscAsM1sanIlEGVetH69DeJkuwfE6HnjLzBa4+8c1ja2SMO8/JceoGmH2+TdgqrvvMbNbiP3VcX6K46pOFMcqjHxi9WG2m9lQ4CVi053WCjNrDjwP3O7uWys3J9jksI9ZWiR6d//SwdrN7HrgUuACDwa4KknJJOXVxRXyNYqDnyVm9iKxP89rlOiTEFetHy8zW2dm7d19bfAnakkVr7H/eK00s3eInQ0lO9GHef/7+xSZWUPgGFI/TFBtXO6+IW7xD8SuW0UtJb9PNRWfXN19ppk9bmZt3T3lxc7MrBGxJD/Z3V9I0CWpxyzth27MbAjwQ+Byd99ZRbcwE5zXOjNrZmYt9j8ndmE54R0CtSyK4xU/wfz1wAF/eZhZKzNrEjxvCwwmNh9xsoV5//HxfhV4q4qTjFqNq9I47uXExn+jNgP4RnAnyRnAlv3DdFEys+P2X1cxs4HE8uGGg2+VlP0asXm2l7j7Q1V0S+4xq+0rzsl+AIXExrIKgsf+OyE6ADPj+g0ldnX7Y2JDGKmO60pin8p7gHXA65XjInb3xLzgsaiuxBXR8WoD/ANYEfxsHazPAf4YPD8LWBAcrwXATSmM54D3D9xP7IQCoCnwbPD79xFwfKqPUci4fhH8Ls0D3gZOroWYpgJrgX3B79ZNwC3ALUG7AeODmBdwkLvQajmusXHHajZwVi3F9QViwzDz4/LW0FQeM5VAEBHJcGk/dCMiIgenRC8ikuGU6EVEMpwSvYhIhlOiFxHJcEr0IiIZToleRCTD/X9FlVcOvV+zDQAAAABJRU5ErkJggg==\n",
|
||
"text/plain": [
|
||
"<Figure size 432x288 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"plot_function(F.relu)"
|
||
]
|
||
},
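{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see that `F.relu` really is nothing more than an elementwise maximum with zero, we can compare the two directly (a quick sketch; the sample values are arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = tensor([-2., -0.5, 0., 1.5])\n",
"# F.relu and an elementwise max with 0 give exactly the same result\n",
"F.relu(x), x.max(tensor(0.)), torch.equal(F.relu(x), x.max(tensor(0.)))"
]
},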
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
"> j: There is an enormous amount of jargon in deep learning, such as _rectified linear unit_. The vast majority of this jargon is no more complicated than can be implemented in a short line of Python code, as we saw in this example. The reality is that for academics to get their papers published they need to make them sound as impressive and sophisticated as possible. One of the ways that they do that is to introduce jargon. Unfortunately, this has the result that the field ends up becoming far more intimidating and difficult to get into than it should be. You do have to learn the jargon, because otherwise papers and tutorials are not going to mean much to you. But that doesn't mean you have to find the jargon intimidating. Just remember, when you come across a word or phrase that you haven't seen before, it will almost certainly turn out to refer to a very simple concept."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
"The basic idea is that by using more linear layers, we can have our model do more computation, and therefore model more complex functions. But there's no point just putting one linear layer directly after another one, because when we multiply things together and then add them up multiple times, that can be replaced by multiplying different things together and adding them up just once! That is to say, a series of any number of linear layers in a row can be replaced with a single linear layer with a different set of parameters (there is a quick numerical check of this after the next code cell).\n",
"\n",
"But if we put a non-linear function between them, such as max, then this is no longer true. Now, each linear layer is actually somewhat decoupled from the other ones, and can do its own useful work. The max function is particularly interesting, because it operates as a simple \"if\" statement. For any arbitrarily wiggly function, we can approximate it as a bunch of lines joined together; to make the approximation closer to the wiggly function, we just have to use shorter lines.\n",
"\n",
"Amazingly enough, it can be mathematically proven that this little function can solve any computable problem to an arbitrarily high level of accuracy, if you can find the right parameters for `w1` and `w2`, and if you make these matrices big enough. This is known as the *universal approximation theorem*. The three lines of code that we have here are known as *layers*. The first and third are known as *linear layers*, and the second line of code is known variously as a *nonlinearity* or an *activation function*.\n",
"\n",
"Just like in the previous section, we can replace this code with something a bit simpler by taking advantage of PyTorch:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"simple_net = nn.Sequential(\n",
|
||
" nn.Linear(28*28,30),\n",
|
||
" nn.ReLU(),\n",
|
||
" nn.Linear(30,1)\n",
|
||
")"
|
||
]
|
||
},
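{
"cell_type": "markdown",
"metadata": {},
"source": [
"The claim above--that a stack of linear layers with nothing between them is no more powerful than a single linear layer--is easy to check numerically. The following is just a sketch (the layer sizes and the random input are arbitrary): we fold two `Linear` layers into one by hand, and confirm that the composition matches until a ReLU is placed between them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Two Linear layers with nothing between them collapse into a single linear map\n",
"lin1,lin2 = nn.Linear(4,3),nn.Linear(3,2)\n",
"x = torch.randn(5,4)\n",
"\n",
"# Fold the two layers into one by hand: W = W2 @ W1, b = W2 @ b1 + b2\n",
"combined = nn.Linear(4,2)\n",
"with torch.no_grad():\n",
"    combined.weight.copy_(lin2.weight @ lin1.weight)\n",
"    combined.bias.copy_(lin2.weight @ lin1.bias + lin2.bias)\n",
"\n",
"# Identical outputs without a nonlinearity; almost certainly different with ReLU added\n",
"print(torch.allclose(lin2(lin1(x)), combined(x), atol=1e-6))\n",
"print(torch.allclose(lin2(F.relu(lin1(x))), combined(x), atol=1e-6))"
]
},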
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
"`nn.Sequential` creates a module that will call each of the listed layers or functions in turn.\n",
"\n",
"`F.relu` is a function, not a PyTorch module. `nn.ReLU` is a PyTorch module that does exactly the same thing. Most functions that can appear in a model also have identical forms that are modules. Generally, it's just a case of replacing `F` with `nn` and changing the capitalization. When using `nn.Sequential`, PyTorch requires us to use the module version. Since modules are classes, we have to instantiate them, which is why you see `nn.ReLU()` above. Because `nn.Sequential` is a module, we can get its parameters--which will return a list of all the parameters of all the modules it contains (we'll check this below).\n",
"\n",
"Let's try it out! For deeper models, we may need to use a lower learning rate and a few more epochs."
|
||
]
|
||
},
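{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before training, here is a quick sanity check of those two claims (a sketch; the random input is just for illustration): `nn.ReLU()` computes exactly the same thing as `F.relu`, and `simple_net` hands back the parameters of both of its `Linear` layers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = torch.randn(2,28*28)\n",
"# The module form and the function form of ReLU give identical results\n",
"print(torch.equal(nn.ReLU()(x), F.relu(x)))\n",
"\n",
"# nn.Sequential gathers the parameters of every module it contains:\n",
"# weight and bias of Linear(784,30), then weight and bias of Linear(30,1)\n",
"for p in simple_net.parameters(): print(p.shape)"
]
},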
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"learn = Learner(dls, simple_net, opt_func=SGD,\n",
|
||
" loss_func=mnist_loss, metrics=batch_accuracy)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/html": [
|
||
"<table border=\"1\" class=\"dataframe\">\n",
|
||
" <thead>\n",
|
||
" <tr style=\"text-align: left;\">\n",
|
||
" <th>epoch</th>\n",
|
||
" <th>train_loss</th>\n",
|
||
" <th>valid_loss</th>\n",
|
||
" <th>batch_accuracy</th>\n",
|
||
" <th>time</th>\n",
|
||
" </tr>\n",
|
||
" </thead>\n",
|
||
" <tbody>\n",
|
||
" <tr>\n",
|
||
" <td>0</td>\n",
|
||
" <td>0.294820</td>\n",
|
||
" <td>0.416238</td>\n",
|
||
" <td>0.504907</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>1</td>\n",
|
||
" <td>0.141692</td>\n",
|
||
" <td>0.216893</td>\n",
|
||
" <td>0.816487</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>2</td>\n",
|
||
" <td>0.079073</td>\n",
|
||
" <td>0.110840</td>\n",
|
||
" <td>0.921001</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>3</td>\n",
|
||
" <td>0.052444</td>\n",
|
||
" <td>0.075782</td>\n",
|
||
" <td>0.941119</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>4</td>\n",
|
||
" <td>0.040078</td>\n",
|
||
" <td>0.059658</td>\n",
|
||
" <td>0.957802</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>5</td>\n",
|
||
" <td>0.033729</td>\n",
|
||
" <td>0.050542</td>\n",
|
||
" <td>0.962709</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>6</td>\n",
|
||
" <td>0.030057</td>\n",
|
||
" <td>0.044751</td>\n",
|
||
" <td>0.965653</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>7</td>\n",
|
||
" <td>0.027653</td>\n",
|
||
" <td>0.040775</td>\n",
|
||
" <td>0.967615</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>8</td>\n",
|
||
" <td>0.025914</td>\n",
|
||
" <td>0.037867</td>\n",
|
||
" <td>0.969087</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>9</td>\n",
|
||
" <td>0.024563</td>\n",
|
||
" <td>0.035642</td>\n",
|
||
" <td>0.970069</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>10</td>\n",
|
||
" <td>0.023465</td>\n",
|
||
" <td>0.033873</td>\n",
|
||
" <td>0.972031</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>11</td>\n",
|
||
" <td>0.022547</td>\n",
|
||
" <td>0.032421</td>\n",
|
||
" <td>0.972031</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>12</td>\n",
|
||
" <td>0.021761</td>\n",
|
||
" <td>0.031202</td>\n",
|
||
" <td>0.973013</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>13</td>\n",
|
||
" <td>0.021081</td>\n",
|
||
" <td>0.030153</td>\n",
|
||
" <td>0.974485</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>14</td>\n",
|
||
" <td>0.020482</td>\n",
|
||
" <td>0.029238</td>\n",
|
||
" <td>0.974485</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>15</td>\n",
|
||
" <td>0.019949</td>\n",
|
||
" <td>0.028429</td>\n",
|
||
" <td>0.975957</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>16</td>\n",
|
||
" <td>0.019472</td>\n",
|
||
" <td>0.027706</td>\n",
|
||
" <td>0.976938</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>17</td>\n",
|
||
" <td>0.019039</td>\n",
|
||
" <td>0.027055</td>\n",
|
||
" <td>0.977429</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>18</td>\n",
|
||
" <td>0.018645</td>\n",
|
||
" <td>0.026466</td>\n",
|
||
" <td>0.977920</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>19</td>\n",
|
||
" <td>0.018283</td>\n",
|
||
" <td>0.025931</td>\n",
|
||
" <td>0.977920</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>20</td>\n",
|
||
" <td>0.017950</td>\n",
|
||
" <td>0.025441</td>\n",
|
||
" <td>0.978901</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>21</td>\n",
|
||
" <td>0.017641</td>\n",
|
||
" <td>0.024991</td>\n",
|
||
" <td>0.979882</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>22</td>\n",
|
||
" <td>0.017353</td>\n",
|
||
" <td>0.024576</td>\n",
|
||
" <td>0.979882</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>23</td>\n",
|
||
" <td>0.017084</td>\n",
|
||
" <td>0.024192</td>\n",
|
||
" <td>0.980373</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>24</td>\n",
|
||
" <td>0.016832</td>\n",
|
||
" <td>0.023837</td>\n",
|
||
" <td>0.980864</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>25</td>\n",
|
||
" <td>0.016595</td>\n",
|
||
" <td>0.023506</td>\n",
|
||
" <td>0.981354</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>26</td>\n",
|
||
" <td>0.016371</td>\n",
|
||
" <td>0.023198</td>\n",
|
||
" <td>0.981354</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>27</td>\n",
|
||
" <td>0.016159</td>\n",
|
||
" <td>0.022910</td>\n",
|
||
" <td>0.981845</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>28</td>\n",
|
||
" <td>0.015959</td>\n",
|
||
" <td>0.022641</td>\n",
|
||
" <td>0.981845</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>29</td>\n",
|
||
" <td>0.015768</td>\n",
|
||
" <td>0.022389</td>\n",
|
||
" <td>0.981845</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>30</td>\n",
|
||
" <td>0.015587</td>\n",
|
||
" <td>0.022154</td>\n",
|
||
" <td>0.981845</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>31</td>\n",
|
||
" <td>0.015414</td>\n",
|
||
" <td>0.021932</td>\n",
|
||
" <td>0.981845</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>32</td>\n",
|
||
" <td>0.015249</td>\n",
|
||
" <td>0.021725</td>\n",
|
||
" <td>0.981845</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>33</td>\n",
|
||
" <td>0.015092</td>\n",
|
||
" <td>0.021529</td>\n",
|
||
" <td>0.982336</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>34</td>\n",
|
||
" <td>0.014941</td>\n",
|
||
" <td>0.021345</td>\n",
|
||
" <td>0.982336</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>35</td>\n",
|
||
" <td>0.014796</td>\n",
|
||
" <td>0.021171</td>\n",
|
||
" <td>0.982826</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>36</td>\n",
|
||
" <td>0.014658</td>\n",
|
||
" <td>0.021007</td>\n",
|
||
" <td>0.982826</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>37</td>\n",
|
||
" <td>0.014524</td>\n",
|
||
" <td>0.020852</td>\n",
|
||
" <td>0.982826</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>38</td>\n",
|
||
" <td>0.014396</td>\n",
|
||
" <td>0.020704</td>\n",
|
||
" <td>0.983317</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" <tr>\n",
|
||
" <td>39</td>\n",
|
||
" <td>0.014272</td>\n",
|
||
" <td>0.020564</td>\n",
|
||
" <td>0.983317</td>\n",
|
||
" <td>00:00</td>\n",
|
||
" </tr>\n",
|
||
" </tbody>\n",
|
||
"</table>"
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.HTML object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"#hide_output\n",
|
||
"learn.fit(40, 0.1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"We're not showing the 40 lines of output here to save room; the training process is recorded in `learn.recorder`, with the table of output stored in the `values` attribute, so we can plot the accuracy over training as:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXQAAAD5CAYAAAA3Os7hAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAYtElEQVR4nO3dfXAc933f8fcXhzuAIB74BJGKQJGUSkuiHcWyWMlTtbbj1K4ku1Jcp63UaGp5JLPNREmbNG3kqUd1NUk6zbR124nqjCrLsh3bsqKkNeOwo6a2PMk4iUQoeogkkA5FWSFEQoTEhzs8HO7p2z92DzweD8ARPGKxu5/XzM3t7i3uvtgRP/rhe/vbNXdHRETiryvqAkREpDMU6CIiCaFAFxFJCAW6iEhCKNBFRBJCgS4ikhDdS+1gZo8CHweOu/t7WrxuwH8DbgVmgLvd/S+Wet9Nmzb59u3bz7tgEZE0e+6559529+FWry0Z6MBjwG8BX13g9VuAneHjRuCL4fOitm/fzujoaBsfLyIidWb2xkKvLdlycfc/Bk4sssvtwFc98OfAOjO79PzLFBGRC9GJHvplwJGG9fFw2znMbI+ZjZrZ6OTkZAc+WkRE6joR6NZiW8vrCbj7w+6+2913Dw+3bAGJiMgydSLQx4GtDesjwNEOvK+IiJyHTgT6XuCfWuD9wGl3P9aB9xURkfPQzmmL3wQ+BGwys3Hg3wFZAHf/bWAfwSmLhwhOW/z0xSpWREQWtmSgu/udS7zuwM93rCIREVmWds5DFxGJlLtTqTnlao1SpUapWmOxWzm4E+xb379Sm//ZuWqNSvXC7gNRrdWYq9QoVz18/2qwHH7GUveZ+KlrNvMTW9ddUA2tKNBFlqlWc6ZLFeYqtQX3cYeaB//o5xpCpVStUQ7DpTFs6stz8/s4pWo13O7B9ob9K7WFPxugWvP592n1WbVVfIObas3nj9VSAb7aWKtz/xpcMtirQBdpVKmeGSHNVavzy/MjsYblxbKg5s5sqcpUscLUXIXpuQpTpQpTxXB5rho+V856ni5VV+T3zHQZuUwX2YyR687Q011f7qK7q2vR8OiyYL9sxhjKZcllush1W/jcRaZrieSJUL32XHdXUG9YczbTRba7i8wSqVk/RvWfq/9srruL7BLHre3aznrvM9vsQt78AijQZUWVqzVOzZQ5OVPixHSJk9MlTsyUODVTphAG6PRchcLcmeWpuQqzpSql6tkhXbuII7ZcdxcDPd2sDR/9PRk29efYtrGP/vltwaMn29VyMkZdpqtr0XDJNT3Xt/eEy6s5dGV1UaBLS6VK7cxoNBytnp4tc2I6CN8TM2EYT5c4OVPi5EyZSnXhP/+r7pyeKZMvVhbcp7vL6O89E5Rre7pZ15djZH0fa3KZRcMv2FYPzcyZAG1jxGQGfbnMWZ+bzehCpBI/CvSEcHdmy1WO5+eYyBd5a/4Rrp8ucnKmtGjroVytMT1XZWquQmmRvjAEf86u78uxYW2O9X053rW5f9EQNGBdXy78mSzr1+bY0JdjXfge6/qy9HRH96eqSBIo0FeRWs0pFCucaGpHND4Xis293CqFYpnpUpVqix7EmmyGLUO9bB7s4eotg4v2DbOZLtb2ZIJ2Qq6b/t4zrYW1Pd2sW5OdD9/+nm6Fr8gqo0BfIbWa8+apWY6cnOGtfJGJ03MNo+hgJH28UKS8wOlUuUwX69dmGezNzofsJQO98/3doFWRZXighy2DvWwZ6uGSwV4GFLwiqaFA77DZUpXDb0/x2uQ0rx2f4rXJYPn1t6cols9uYwz0dHPJYA9bhnq5cccGNg/1MtzfMz8KrrczNqzN0ZfLKJhFZFEK9As0NVdh/+sn+MGht/nT195hbCI/f76sGWxd38eVw2u56cqNXHlJP9s29LF5qJctg8HoWkSkU5Qo56lYrvL8X5/iT18LAvzFI6eo1JxcdxfXX76eX/zwTq7aMsCVw/1s29hHbzYTdckikhIK9PPwnZeO8qtPvsR0qUqXwY+PrGPPB67gpr+xieu3rVd4i0ikFOhteuRPDvNrfzjG9dvW888/eCU3XrGBwd5s1GWJiMxToC+hWnN+7Q9f5cs/+BG3vGcLX/jH79VIXERWJQX6IorlKr/0rRf4Py9P8OmbtvO5j+3SNGwRWbUU6As4NVPi3q+MMvrGST73sWu49+9cEXVJIiKLUqC3cOTEDHd/+VmOnJjlt/7JdXz82h+LuiQRkSUp0Ju8/OZpPv3YfubKVb52zw3ceMXGqEsSEWmLAr3BC0dOcdcjzzC0Jss37r2RnZsHoi5JRKRtCvTQwYkCd3/5WdavzfK7/+xvsWWoN+qSRETOiy76DLzxzjR3fekZerq7+Po971eYi0gspT7QJ04X+dlHnqFSrfE799zI5Rv7oi5JRGRZUt1yOTFd4q4vPcOpmTLf+Ix65iISb6kdoeeLZT716LMcOTHDI5/azbUjnb8Dt4jISkploM+Wqtz72Chjx/J88a738X6dmigiCZC6lkupUuPnvv4c+984wX+/4zo+fPXmqEsSEemI1I3QH/j2y3z/4CS/8Ykf5+//hGaAikhypCrQazXnD148yj+8foQ7b7g86nJERDoqVYF+5OQM06Uq129bH3UpIiIdl6pAHztWAODqSwcjrkREpPNSFegHJvKYwbs290ddiohIx6Ur0I8V2LFxLX251J3cIyIpkKpAH5vIc/Wlmg0qIsnUVqCb2c1mdtDMDpnZ/S1e32Zm3zWzl8zs+2Y20vlSL8z0XIU33pnh6i3qn4tIMi0Z6GaWAR4CbgF2AXea2a6m3f4T8FV3vxZ4EPgPnS70Qh18K/xCdItG6CKSTO2M0G8ADrn7YXcvAY8Dtzftswv4brj8dIvXI3cgPMPlGp3hIiIJ1U6gXwYcaVgfD7c1ehH4ZLj8CWDAzM65QIqZ7TGzUTMbnZycXE69yzZ2LE9/Tzcj69es6OeKiKyUdgLdWmzzpvVfAT5oZs8DHwTeBCrn/JD7w+6+2913Dw8Pn3exF+LARJ6rtwxg1urXERGJv3bO3xsHtjasjwBHG3dw96PAPwAws37gk+5+ulNFXih358CxAj99XfMfFiIiydHOCH0/sNPMdphZDrgD2Nu4g5ltMrP6e30WeLSzZV6YN0/NUpir6JRFEUm0JQPd3SvAfcBTwBjwhLu/YmYPmtlt4W4fAg6a2Q+BzcCvX6R6l2V+yr9OWRSRBGtryqS77wP2NW17oGH5SeDJzpbWOQeO5QG4SqcsikiCpWKm6IGJAts29tHfoyn/IpJcqQj0sWN5TSgSkcRLfKDPlqq8/s60+ucikniJD/QfvlXAHa7RGS4iknCJD/QDE8EXopryLyJJl/hAHztWoC+XYev6vqhLERG5qFIQ6Hmu2jJAV5em/ItIsiU60N2dAxMFtVtEJBUSHegT+SKnZ8tco1MWRSQFEh3oY+EM0as1QheRFEh4oAfXcNGUfxFJg0QH+oGJAiPr1zDYm426FBGRiy7ZgX4srxmiIpIai
Q30YrnK4benNUNURFIjsYF+6PgU1ZprhC4iqZHYQK+f4aIRuoikRYIDvUBvtottG9dGXYqIyIpIbKAfmMhz1eYBMpryLyIpkchAd3fGjuU15V9EUiWRgT5ZmOPkTFl3KRKRVElkoL+qKf8ikkKJDPQDE8GU/2t0yqKIpEgyA/1Ynh8b6mWoT1P+RSQ9khnoEwW1W0QkdRIX6HOVKoeOT+kLURFJncQF+mvHp6nUXCN0EUmdxAX6gYngDJddmvIvIimTuEAfO5Yn193Fdk35F5GUSVygHzkxy+Ub+ujOJO5XExFZVOJSL18sM7RGpyuKSPokMtAHe7ujLkNEZMUlLtALxQoDuoeoiKRQW4FuZjeb2UEzO2Rm97d4/XIze9rMnjezl8zs1s6X2p78bJnBNRqhi0j6LBnoZpYBHgJuAXYBd5rZrqbdPgc84e7XAXcA/6PThbbD3TVCF5HUameEfgNwyN0Pu3sJeBy4vWkfB+ozeYaAo50rsX2z5SqVmjOoQBeRFGon0C8DjjSsj4fbGn0euMvMxoF9wC+0eiMz22Nmo2Y2Ojk5uYxyF1coVgDUchGRVGon0Fvdw82b1u8EHnP3EeBW4Gtmds57u/vD7r7b3XcPDw+ff7VLyM+WAdRyEZFUaifQx4GtDesjnNtSuQd4AsDd/wzoBTZ1osDzka+P0HXaooikUDuBvh/YaWY7zCxH8KXn3qZ9/hr4KQAzu4Yg0DvfU1lCvqgRuoik15KB7u4V4D7gKWCM4GyWV8zsQTO7LdztXwGfMbMXgW8Cd7t7c1vmoqv30IfUQxeRFGor+dx9H8GXnY3bHmhYfhW4qbOlnT/10EUkzRI1U3T+LBcFuoikUKICPV8s091l9GYT9WuJiLQlUclXKJYZXJPFrNWZliIiyZaoQM/PVhjQKYsiklKJCvRCsaz+uYikVqICPV/UCF1E0itRga4RuoikWaICXT10EUmzRAV6/SwXEZE0SkygV6o1pktVjdBFJLUSE+hTc5olKiLplphAz88Gga4RuoikVXICPbx0rnroIpJWiQt0jdBFJK0SE+i60qKIpF1iAr1+LXQFuoikVWICfX6ErrsViUhKJSbQ6z30/h4FuoikU2ICvVCssDaXoTuTmF9JROS8JCb98rOa9i8i6ZaYQC/o0rkiknKJCfS8Lp0rIimXqEDXCF1E0iwxgV4oVtRDF5FUS0yg52c1QheRdEtEoLt7MEJXD11EUiwRgT5brlKpOQMKdBFJsUQEuqb9i4gkJNDrF+bSCF1E0iwZgT5/6VyN0EUkvRIS6Bqhi4gkItDrPfQh9dBFJMXaCnQzu9nMDprZITO7v8XrXzCzF8LHD83sVOdLXZh66CIisOSQ1swywEPAR4BxYL+Z7XX3V+v7uPsvNez/C8B1F6HWBen2cyIi7Y3QbwAOufthdy8BjwO3L7L/ncA3O1Fcu/LFMt1dRm82ER0kEZFlaScBLwOONKyPh9vOYWbbgB3A9xZ4fY+ZjZrZ6OTk5PnWuqBCMbgWupl17D1FROKmnUBvlZK+wL53AE+6e7XVi+7+sLvvdvfdw8PD7da4pPysroUuItJOoI8DWxvWR4CjC+x7ByvcboFwhK7+uYikXDuBvh/YaWY7zCxHENp7m3cys6uA9cCfdbbEpeV1tyIRkaUD3d0rwH3AU8AY8IS7v2JmD5rZbQ273gk87u4LtWMuGo3QRUTaOG0RwN33Afuatj3QtP75zpV1ftRDFxFJzEzRsu5WJCKpF/tAr1RrTJeqarmISOrFPtCn5oJZomq5iEjaxT7Q87P1m1tohC4i6Rb/QJ+/dK5G6CKSbokJdPXQRSTtYh/o9SstaoQuImkX+0CvXwt9SD10EUm52Ae6RugiIoHYB3q9h97fo0AXkXSLfaAXihXW5jJ0Z2L/q4iIXJDYp2B+VtP+RUQgAYFe0KVzRUSABAR6XpfOFREBEhDoGqGLiARiH+h5XTpXRARIQqDPljVCFxEh5oHu7hSKFfXQRUSIeaDPlqtUas6AAl1EJN6BXp/2P7hGLRcRkVgHev3CXBqhi4jEPdDrI3R9KSoiEvdA1whdRKQu1oFe76EPqYcuIhLvQFcPXUTkjFgH+vxZLgp0EZF4B3q+WCabMXqzsf41REQ6ItZJWCiWGejNYmZRlyIiErlYB3p+tqJTFkVEQrEO9PoIXUREYh7o+WJF0/5FREKxDvRCscxAj0boIiLQZqCb2c1mdtDMDpnZ/Qvs84/M7FUze8XMvtHZMlvLz2qELiJSt2QamlkGeAj4CDAO7Dezve7+asM+O4HPAje5+0kzu+RiFdxIPXQRkTPaGaHfABxy98PuXgIeB25v2uczwEPufhLA3Y93tsxzVao1pktVTSoSEQm1E+iXAUca1sfDbY3eBbzLzH5gZn9uZjd3qsCFTM0Fs0R1+zkRkUA7adhq1o63eJ+dwIeAEeBPzOw97n7qrDcy2wPsAbj88svPu9hG+dn6zS00QhcRgfZG6OPA1ob1EeBoi32+7e5ld38dOEgQ8Gdx94fdfbe77x4eHl5uzUDjpXM1QhcRgfYCfT+w08x2mFkOuAPY27TP/wZ+EsDMNhG0YA53stBm9UBXD11EJLBkoLt7BbgPeAoYA55w91fM7EEzuy3c7SngHTN7FXga+Nfu/s7FKhrOXGlRI3QRkUBbaeju+4B9TdseaFh24JfDx4qoXwt9SD10EREgxjNFNUIXETlbbAO93kPv71Ggi4hAjAO9UKywNpehOxPbX0FEpKNim4b52bLOQRcRaRDbQC8UK+qfi4g0iG2g54tlnYMuItIgtoGuEbqIyNliG+j5onroIiKNYhvoGqGLiJwtloHu7sFZLuqhi4jMi2Wgz5arVGqulouISINYBrqm/YuInCuWgV6/MJdaLiIiZ8Qz0DVCFxE5R0wDPRyhq4cuIjIvloFe76EPaoQuIjIvloGuHrqIyLliGehnznJRoIuI1MUy0PPFMtmM0ZuNZfkiIhdFLBOxUCwz0JvFzKIuRURk1YhloOdnK/pCVESkSSwDvT5CFxGRM2IZ6PlihcE1GqGLiDSKZaAXimUGejRCFxFpFMtAz89qhC4i0iyWga4euojIuWIX6JVqjelSVbNERUSaxC7Qp+Z0pUURkVZiF+j52fDCXLrSoojIWeIX6OGlczVCFxE5W2wDXT10EZGzxS7QdT9REZHWYhfo9WuhD6mHLiJylrYC3cxuNrODZnbIzO5v8frdZjZpZi+Ej3s7X2pAI3QRkdaWTEUzywAPAR8BxoH9ZrbX3V9t2vVb7n7fRajxLCPr1/D33r2Z/h4FuohIo3ZS8QbgkLsfBjCzx4HbgeZAXxEfffcWPvruLVF8tIjIqtZOy+Uy4EjD+ni4rdknzewlM3vSzLa2eiMz22Nmo2Y2Ojk5uYxyRURkIe0EeqvbAnnT+h8A2939WuD/AV9p9Ubu/rC773b33cPDw+dXqYiILKqdQB8HGkfcI8DRxh3c/R13nwtX/ydwfWfKExGRdrUT6PuBnWa2w8xywB3A3sYdzOzShtXbgLHOlSgiIu1Y8ktR
d6+Y2X3AU0AGeNTdXzGzB4FRd98L/KKZ3QZUgBPA3RexZhERacHcm9vhK2P37t0+OjoayWeLiMSVmT3n7rtbvRa7maIiItKaAl1EJCEia7mY2STwxjJ/fBPwdgfL6STVtjyqbXlU2/LEubZt7t7yvO/IAv1CmNnoQj2kqKm25VFty6PalieptanlIiKSEAp0EZGEiGugPxx1AYtQbcuj2pZHtS1PImuLZQ9dRETOFdcRuoiINFGgi4gkROwCfanb4UXJzH5kZn8Z3oYv0usamNmjZnbczF5u2LbBzP7IzP4qfF6/imr7vJm92XAbw1sjqm2rmT1tZmNm9oqZ/Ytwe+THbpHaIj92ZtZrZs+a2Ythbf8+3L7DzJ4Jj9u3wgv8rZbaHjOz1xuO23tXuraGGjNm9ryZfSdcX95xc/fYPAguDvYacAWQA14EdkVdV0N9PwI2RV1HWMsHgPcBLzds+03g/nD5fuA/rqLaPg/8yio4bpcC7wuXB4AfArtWw7FbpLbIjx3BfRP6w+Us8AzwfuAJ4I5w+28DP7eKansM+Jmo/5sL6/pl4BvAd8L1ZR23uI3Q52+H5+4loH47PGni7n9McOXLRrdz5uYjXwF+ekWLCi1Q26rg7sfc/S/C5QLBpaAvYxUcu0Vqi5wHpsLVbPhw4MPAk+H2qI7bQrWtCmY2AnwMeCRcN5Z53OIW6O3eDi8qDvxfM3vOzPZEXUwLm939GAThAFwScT3N7gtvY/hoVO2gRma2HbiOYES3qo5dU22wCo5d2DZ4ATgO/BHBX9On3L0S7hLZv9fm2ty9ftx+PTxuXzCznihqA/4r8G+AWri+kWUet7gFeju3w4vSTe7+PuAW4OfN7ANRFxQjXwSuBN4LHAP+c5TFmFk/8HvAv3T3fJS1NGtR26o4du5edff3EtzV7Abgmla7rWxV4Yc21WZm7wE+C1wN/E1gA/CrK12XmX0cOO7uzzVubrFrW8ctboG+5O3wouTuR8Pn48D/IviPejV5q353qfD5eMT1zHP3t8J/dDWC2xhGduzMLEsQmF93998PN6+KY9eqttV07MJ6TgHfJ+hTrzOz+o10Iv/32lDbzWELyz24feaXiea43QTcZmY/Imghf5hgxL6s4xa3QF/ydnhRMbO1ZjZQXwY+Cry8+E+tuL3Ap8LlTwHfjrCWs9jZtzH8BBEdu7B/+SVgzN3/S8NLkR+7hWpbDcfOzIbNbF24vAb4uwQ9/qeBnwl3i+q4tartQMP/oI2gR73ix83dP+vuI+6+nSDPvufuP8tyj1vU3+4u49vgWwm+3X8N+LdR19NQ1xUEZ928CLwSdW3ANwn+/C4T/GVzD0Fv7rvAX4XPG1ZRbV8D/hJ4iSA8L42otr9N8OftS8AL4ePW1XDsFqkt8mMHXAs8H9bwMvBAuP0K4FngEPC7QM8qqu174XF7GfgdwjNhonoAH+LMWS7LOm6a+i8ikhBxa7mIiMgCFOgiIgmhQBcRSQgFuohIQijQRUQSQoEuIpIQCnQRkYT4/5QrJU47aWgaAAAAAElFTkSuQmCC\n",
|
||
"text/plain": [
|
||
"<Figure size 432x288 with 1 Axes>"
|
||
]
|
||
},
|
||
"metadata": {
|
||
"needs_background": "light"
|
||
},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"plt.plot(L(learn.recorder.values).itemgot(2));"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"...and we can view the final accuracy:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"0.983316957950592"
|
||
]
|
||
},
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"learn.recorder.values[-1][2]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"At this point we have something that is rather magical:\n",
|
||
"\n",
|
||
"1. A function that can solve any problem to any level of accuracy (the neural network) given the correct set of parameters\n",
|
||
"1. A way to find the best set of parameters for any function (stochastic gradient descent)\n",
|
||
"\n",
"This is why deep learning can do things which seem rather magical. Believing that this combination of simple techniques can really solve any problem is one of the biggest steps that we find many students have to take. It seems too good to be true. It seems like things should be more difficult and complicated than this. Our recommendation: try it out! We will take our own recommendation and try this model on the MNIST dataset. Since we are doing everything from scratch ourselves (except for calculating the gradients), you know that there is no special magic hiding behind the scenes…\n",
"\n",
"There is no need to stop at just two linear layers. We can add as many as we want, as long as we add a nonlinearity between each pair of linear layers. As we will learn, however, the deeper the model gets, the harder it is to optimize the parameters in practice. Later in this book we will learn about some simple but brilliantly effective techniques for training deeper models.\n",
"\n",
"We already know that a single nonlinearity with two linear layers is enough to approximate any function. So why would we use deeper models? The reason is performance. With a deeper model (that is, one with more layers) we do not need to use as many parameters; it turns out that we can use smaller matrices, with more layers, and get better results than we would with larger matrices and fewer layers.\n",
"\n",
"That means that we can train the model more quickly, and it will take up less memory. In the 1990s researchers were so focused on the universal approximation theorem that very few were experimenting with more than one nonlinearity. This theoretical (but not practical) foundation held back the field for years. Some researchers, however, did experiment with deep models, and eventually were able to show that these models could perform much better in practice. Eventually, theoretical results were developed which showed why this happens. Today, it is extremely unusual to find anybody using a neural network with just one nonlinearity.\n",
"\n",
"Here is what happens when we train an 18-layer model using the same approach we saw in <<chapter_intro>>:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/html": [
|
||
"<table border=\"1\" class=\"dataframe\">\n",
|
||
" <thead>\n",
|
||
" <tr style=\"text-align: left;\">\n",
|
||
" <th>epoch</th>\n",
|
||
" <th>train_loss</th>\n",
|
||
" <th>valid_loss</th>\n",
|
||
" <th>accuracy</th>\n",
|
||
" <th>time</th>\n",
|
||
" </tr>\n",
|
||
" </thead>\n",
|
||
" <tbody>\n",
|
||
" <tr>\n",
|
||
" <td>0</td>\n",
|
||
" <td>0.125685</td>\n",
|
||
" <td>0.026256</td>\n",
|
||
" <td>0.992640</td>\n",
|
||
" <td>00:11</td>\n",
|
||
" </tr>\n",
|
||
" </tbody>\n",
|
||
"</table>"
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.HTML object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"dls = ImageDataLoaders.from_folder(path)\n",
|
||
"learn = cnn_learner(dls, resnet18, pretrained=False,\n",
|
||
" loss_func=F.cross_entropy, metrics=accuracy)\n",
|
||
"learn.fit_one_cycle(1, 0.1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Nearly 100% accuracy! That's a big difference compared to our simple neural net. But as you'll learn in the remainder of this book, there are just a few little tricks you need to use to get such great results from scratch yourself. You already know the key foundational pieces. (Of course, even once you know all the tricks, you'll nearly always want to work with the pre-built classes provided by PyTorch and fastai, because they save you having to think about all the little details yourself.)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Deep learning"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
"Congratulations: you now know how to create and train a deep neural network from scratch! There have been quite a few steps to get to this point, but you might be surprised at how simple it has turned out to be.\n",
"\n",
"Now that we are at this point, it is a good opportunity to define and review some jargon and concepts.\n",
"\n",
"The neural network contains a lot of numbers, but those numbers are of only two types: numbers that are calculated, and the parameters that these are calculated from. This gives us the two most important pieces of jargon to learn:\n",
"\n",
"- *activations*: numbers that are calculated (both by linear and non-linear layers)\n",
"- *parameters*: numbers that are randomly initialized and then optimized (that is, the numbers that define the model)\n",
"\n",
"We will often talk in this book about activations and parameters. Remember that they have very specific meanings. They are numbers. They are not abstract concepts, but they are actual specific numbers that are in your model. Part of becoming a good deep learning practitioner is getting used to the idea of actually looking at your activations and parameters, plotting them, and testing whether they are behaving correctly.\n",
"\n",
"Our activations and parameters are all contained in tensors, which are simply regularly shaped arrays--a matrix, for example. Matrices have rows and columns; we call these the *axes* or *dimensions*. The number of dimensions of a tensor is its *rank* (the short example below makes these terms concrete). There are some special tensors:\n",
"\n",
"- rank zero: scalar\n",
"- rank one: vector\n",
"- rank two: matrix\n",
"\n",
"A neural network contains a number of layers, each of which is either linear or nonlinear. We generally alternate between these two kinds of layers in a neural network. Sometimes people refer to both a linear layer and its subsequent nonlinearity together as a single *layer*. Yes, this is confusing. Sometimes a nonlinearity is referred to as an *activation function*.\n",
|
||
"\n",
|
||
"TK: Table jargon recap"
|
||
]
|
||
},
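{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since activations and parameters really are just tensors, it helps to look at a few concrete examples. This is only a small sketch (the values are arbitrary, and `simple_net` is the two-layer network we built earlier in this chapter):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rank is just the number of axes (dimensions) of a tensor\n",
"print(tensor(1.5).ndim)                  # rank 0: a scalar\n",
"print(tensor([1.,2.,3.]).ndim)           # rank 1: a vector\n",
"print(tensor([[1.,2.],[3.,4.]]).ndim)    # rank 2: a matrix\n",
"\n",
"# Parameters are concrete tensors that we can inspect directly\n",
"w1 = next(iter(simple_net.parameters()))\n",
"w1.shape, w1.mean()"
]
},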
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### _Choose Your Own Adventure_ reminder"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
"Did you choose to skip over chapters 2 & 3, in your excitement to peek under the hood? Well, here's your reminder to head back to chapter 2 now, because you'll be needing to know that stuff very soon!"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Questionnaire"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"1. How is a greyscale image represented on a computer? How about a color image?\n",
|
||
"1. How are the files and folders in the `MNIST_SAMPLE` dataset structured? Why?\n",
|
||
"1. Explain how the \"pixel similarity\" approach to classifying digits works.\n",
|
||
"1. What is a list comprehension? Create one now that selects odd numbers from a list and doubles them.\n",
|
||
"1. What is a \"rank 3 tensor\"?\n",
|
||
"1. What is the difference between tensor rank and shape? How do you get the rank from the shape?\n",
|
||
"1. What are RMSE and L1 norm?\n",
|
||
"1. How can you apply a calculation on thousands of numbers at once, many thousands of times faster than a Python loop?\n",
|
||
"1. Create a 3x3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom right 4 numbers.\n",
|
||
"1. What is broadcasting?\n",
|
||
"1. Are metrics generally calculated using the training set, or the validation set? Why?\n",
|
||
"1. What is SGD?\n",
"1. Why does SGD use mini-batches?\n",
|
||
"1. What are the 7 steps in SGD for machine learning?\n",
|
||
"1. How do we initialize the weights in a model?\n",
|
||
"1. What is \"loss\"?\n",
|
||
"1. Why can't we always use a high learning rate?\n",
|
||
"1. What is a \"gradient\"?\n",
|
||
"1. Do you need to know how to calculate gradients yourself?\n",
|
||
"1. Why can't we use accuracy as a loss function?\n",
|
||
"1. Draw the sigmoid function. What is special about its shape?\n",
|
||
"1. What is the difference between loss and metric?\n",
|
||
"1. What is the function to calculate new weights using a learning rate?\n",
|
||
"1. What does the `DataLoader` class do?\n",
|
||
"1. Write pseudo-code showing the basic steps taken each epoch for SGD.\n",
|
||
"1. Create a function which, if passed two arguments `[1,2,3,4]` and `'abcd'`, returns `[(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]`. What is special about that output data structure?\n",
|
||
"1. What does `view` do in PyTorch?\n",
|
||
"1. What are the \"bias\" parameters in a neural network? Why do we need them?\n",
"1. What does the `@` operator do in Python?\n",
|
||
"1. What does the `backward` method do?\n",
|
||
"1. Why do we have to zero the gradients?\n",
|
||
"1. What information do we have to pass to `Learner`?\n",
|
||
"1. Show python or pseudo-code for the basic steps of a training loop.\n",
|
||
"1. What is \"ReLU\"? Draw a plot of it for values from `-2` to `+2`.\n",
|
||
"1. What is an \"activation function\"?\n",
|
||
"1. What's the difference between `F.relu` and `nn.ReLU`?\n",
"1. The universal approximation theorem shows that any function can be approximated as closely as needed using just one nonlinearity. So why do we normally use more?"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Further research"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"1. Create your own implementation of `Learner` from scratch, based on the training loop shown in this chapter.\n",
|
||
"1. Complete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just threes and sevens). This is a significant project and will take you quite a bit of time to complete! You'll need to do some of your own research to figure out how to overcome some obstacles you'll meet on the way."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": []
|
||
}
|
||
],
|
||
"metadata": {
|
||
"jupytext": {
|
||
"split_at_heading": true
|
||
},
|
||
"kernelspec": {
|
||
"display_name": "Python 3",
|
||
"language": "python",
|
||
"name": "python3"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 2
|
||
}
|