{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#hide\n",
    "from fastai2.vision.all import *\n",
    "from utils import *\n",
    "\n",
    "matplotlib.rc('image', cmap='Greys')"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "[[chapter_convolutions]]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Convolutional neural networks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The magic of convolutions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In <<chapter_mnist_basics>> we learned how to create a neural network that recognises images. We were able to achieve a bit over 98% accuracy at distinguishing threes from sevens. But we also saw that fastai's built-in classes were able to get close to 100%. Let's start trying to close the gap."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "path = untar_data(URLs.MNIST_SAMPLE)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#hide\n",
    "Path.BASE_PATH = path"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One of the most powerful tools that machine learning practitioners have at their disposal is *feature engineering*. A *feature* is a transformation of the data which is designed to make it easier to model. For instance, the `add_datepart` function that we used for our tabular dataset preprocessing added date features to the Bulldozers dataset. What kind of features might we be able to create from images?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> jargon: Feature engineering: creating new transformations of the input data in order to make it easier to model."
   ]
  },
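  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make that concrete, here is a minimal sketch of date feature engineering in plain pandas. (This is for illustration only; it is not fastai's `add_datepart` itself, just the same idea of deriving model-friendly columns from a raw date, with made-up column names.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Illustrative only: derive simple date features from a raw date column\n",
    "df = pd.DataFrame({'saledate': pd.to_datetime(['2011-11-10', '2012-03-26'])})\n",
    "df['saleYear'] = df.saledate.dt.year\n",
    "df['saleMonth'] = df.saledate.dt.month\n",
    "df['saleDayofweek'] = df.saledate.dt.dayofweek\n",
    "df"
   ]
  },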
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the context of an image, a *feature* will be a visually distinctive attribute of an image. Here's an idea: the number seven is characterised by a horizontal edge near the top of the digit, and a bottom-left to top-right diagonal edge underneath that. On the other hand, the number three is characterised by a diagonal edge in one direction in the top left and bottom right of the digit, the opposite diagonal on the bottom left and top right, a horizontal edge in the middle of the top and the bottom, and so forth. So what if we could extract information about where the edges occur in each image, and then use that as our features, instead of raw pixels?\n",
    "\n",
    "It turns out that finding the edges in an image is a very common task in computer vision, and is surprisingly straightforward. To do it, we use something called a *convolution*. A convolution requires nothing more than multiplication and addition — two operations which are responsible for the vast majority of work that we will see in every single deep learning model in this book!\n",
    "\n",
    "A convolution applies a *kernel* across an image. A kernel is a little matrix, such as the 3x3 matrix in the top right of this image:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"images/chapter9_conv_basic.png\" id=\"basic_conv\" caption=\"Applying a kernel to one location\" alt=\"Applying a kernel to one location\" width=\"700\">"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The grey grid on the left is the *image* we're going to apply the kernel to. The convolution operation multiplies each element of the kernel by the corresponding element of a 3x3 block of the image, and then adds the results together. The diagram above shows an example of applying a kernel to a single location in the image: the 3x3 block around cell 18.\n",
    "\n",
    "Let's do this with code. First, we create a little 3x3 matrix like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "top_edge = tensor([[-1,-1,-1],\n",
    "                   [ 0, 0, 0],\n",
    "                   [ 1, 1, 1]]).float()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We're going to call this our *kernel* (because that's what fancy computer vision researchers call these). And we'll need an image, of course:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAADyElEQVR4nO2aTSg1URjHf1eIS9gQETYWPhOiFLGwkiTJzs7OXpFsWMlKsqEoRT4WFmKhlI+wsWWluCtECIVh3oX3mNd5hzvGneum51d3MzPOee7//p3zPM8Zn2maCBZRPx1ApCGCaIggGiKIhgiiER3k/m/egnx2F8UhGiKIhgiiIYJoiCAaIoiGCKIhgmiIIBrBMlVbHh8fAVhfXwcgPj4egO3tbQCur68BGBkZAaClpQWArKysD8fMzMwEoLm5GYDs7Gw3oX0bcYiGL0jHzPbm0NAQAN3d3SEPKCrq9TeqqKgAoLOzE4DW1lYAUlJSQjWV1DJOcOWQgoICAA4PD23/KC0tDYCamppPJ8/Pzwfg4OCAs7MzADY3N22f3d/fB6C0tPTTMb+AOMQJrnaZra0tAE5OToD/d4TY2FgAEhMTHY/58PAAQGFhIQBHR0fv7s/PzwMhdYgt4hANV2uIF2xsbABQV1f37npcXBzwus4A5OTkhGpKWUMcYZrmZx9PMQzDNAzD7O3tNf1+v+n3+02fz/fuEwgEzEAg4MX0tt9ZHKLhapf5Lip/mZiYAGB4ePjtXkxMDACLi4sApKenhzU2cYhGWB1yfHwMQHFxMQDPz8//PaNqGVUZ+3y2m4FniEM0wuqQ2dlZwN4ZCpWxlpWVAVBfXw9Ae3s7AE1NTQBkZGR4EmNYEzOVjvf39wOwtrYGwOnpqeMx1L/U4OAgAF1dXQAkJCR8NRxJzJzwo6m7ajXe3NxweXkJwMzMDGA1oYLE99aeXFhYAL60CItDnBAxxZ2OKvYGBgYAa735iMnJSQA6OjqcTiEOccKPpO5OqK2tBWB1dRWwmsxLS0u2z6v2wHcRh2hErEMUKu+oqqoCPnZIUVFRaOYLySi/CE8dcnt7C8D09DQAJSUlAFRXVzse4+XlBbCOIXSio1+/QmVlpes4/0UcouGJQ5QzGhoaANjb2wPg/v7e8Rh3d3cAjI2NAVYmqlNeXg5AXl6eu2A1xCEanjhEHYIrZyguLi4A66hTtQsBnp6eABgfHwegp6cHsOodhcqsk5OTAZiamgpp7OIQDU9qmZWVFQAaGxtt76tD8NTU1Ldr5+fnwMeH3YqkpCQAdnZ2AOvA3AVSyzjBE4dcXV0B0NfXB8Do6KibYQArz1Adsra2NgByc3Ndj/kXcYgTPO2HGIYBwO7uLgDLy8uAVXfMzc29PatewlGo9Uc54bMX9lwiDnFCxHbMwoA4xAkiiIYIoiGCaIggGsGq3fC+ixABiEM0RBANEURDBNEQQTREEI0/H3jyQ4wdtXsAAAAASUVORK5CYII=\n",
      "text/plain": [
       "<Figure size 72x72 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "im3 = Image.open(path/'train'/'3'/'12.png')\n",
    "show_image(im3);"
   ]
  },
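  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It can be worth a quick peek at what we're working with: converting the PIL image with `tensor` gives a 28x28 grid of pixel values, which is what we'll be sliding our kernel over (we'll create this tensor properly in a moment):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The PIL image becomes a rank-2 tensor of pixel values (28x28 for MNIST)\n",
    "tensor(im3).shape"
   ]
  },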
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we're going to take the top 3x3 pixel square of our image, and we'll multiply each of those values by the corresponding item in our kernel. Then we'll add them up. Like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0., -0., -0.],\n",
       "        [0., 0., 0.],\n",
       "        [0., 0., 0.]])"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "im3_t = tensor(im3)\n",
    "im3_t[0:3,0:3] * top_edge"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "(im3_t[0:3,0:3] * top_edge).sum()"
   ]
  },
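  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since we'll want to repeat this multiply-and-add at lots of different locations, it's handy to wrap it in a little helper. Here's a minimal sketch (the name `apply_kernel` is our own choice) that applies a kernel to the 3x3 block centred at a given row and column:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def apply_kernel(row, col, kernel):\n",
    "    # Multiply the 3x3 block of the image centred at (row, col) by the\n",
    "    # kernel, element by element, then add the results together\n",
    "    return (im3_t[row-1:row+2, col-1:col+2] * kernel).sum()\n",
    "\n",
    "apply_kernel(1, 1, top_edge)  # the same corner block as above, so also tensor(0.)"
   ]
  },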
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not very interesting so far; the pixels in the top-left corner are all white. But let's pick a couple of more interesting spots:"
   ]
  },
|
|||
|
{
|
|||
|
"cell_type": "code",
|
|||
|
"execution_count": null,
|
|||
|
"metadata": {},
|
|||
|
"outputs": [
|
|||
|
{
|
|||
|
"data": {
|
|||
|
"text/html": [
|
|||
|
"<style type=\"text/css\" >\n",
|
|||
|
" #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #f9f9f9;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #b9b9b9;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #c1c1c1;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #858585;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #777777;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #090909;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #5b5b5b;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #777777;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #777777;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #777777;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #777777;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #919191;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #e1e1e1;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row5_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #727272;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #020202;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #363636;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #9d9d9d;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #dfdfdf;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row6_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #161616;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #535353;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #535353;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #535353;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #535353;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #7c7c7c;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #535353;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #3d3d3d;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #999999;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row7_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #eaeaea;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #d0d0d0;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #eeeeee;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #eeeeee;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #f3f3f3;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #f9f9f9;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #232323;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row8_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col0 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col1 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col2 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col3 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col4 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col5 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col6 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col7 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col8 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col9 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col10 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col11 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col12 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col13 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col14 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #c2c2c2;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col15 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col16 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #000000;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col17 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #080808;\n",
|
|||
|
" color: #f1f1f1;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col18 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #c4c4c4;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" } #T_508423a8_5672_11ea_9acc_8f0047ef1a35row9_col19 {\n",
|
|||
|
" font-size: 6pt;\n",
|
|||
|
" background-color: #ffffff;\n",
|
|||
|
" color: #000000;\n",
|
|||
|
" }</style><table id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35\" ><thead> <tr> <th class=\"blank level0\" ></th> <th class=\"col_heading level0 col0\" >0</th> <th class=\"col_heading level0 col1\" >1</th> <th class=\"col_heading level0 col2\" >2</th> <th class=\"col_heading level0 col3\" >3</th> <th class=\"col_heading level0 col4\" >4</th> <th class=\"col_heading level0 col5\" >5</th> <th class=\"col_heading level0 col6\" >6</th> <th class=\"col_heading level0 col7\" >7</th> <th class=\"col_heading level0 col8\" >8</th> <th class=\"col_heading level0 col9\" >9</th> <th class=\"col_heading level0 col10\" >10</th> <th class=\"col_heading level0 col11\" >11</th> <th class=\"col_heading level0 col12\" >12</th> <th class=\"col_heading level0 col13\" >13</th> <th class=\"col_heading level0 col14\" >14</th> <th class=\"col_heading level0 col15\" >15</th> <th class=\"col_heading level0 col16\" >16</th> <th class=\"col_heading level0 col17\" >17</th> <th class=\"col_heading level0 col18\" >18</th> <th class=\"col_heading level0 col19\" >19</th> </tr></thead><tbody>\n",
|
|||
|
" <tr>\n",
|
|||
|
" <th id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35level0_row0\" class=\"row_heading level0 row0\" >0</th>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col0\" class=\"data row0 col0\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col1\" class=\"data row0 col1\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col2\" class=\"data row0 col2\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col3\" class=\"data row0 col3\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col4\" class=\"data row0 col4\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col5\" class=\"data row0 col5\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col6\" class=\"data row0 col6\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col7\" class=\"data row0 col7\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col8\" class=\"data row0 col8\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col9\" class=\"data row0 col9\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col10\" class=\"data row0 col10\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col11\" class=\"data row0 col11\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col12\" class=\"data row0 col12\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col13\" class=\"data row0 col13\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col14\" class=\"data row0 col14\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col15\" class=\"data row0 col15\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col16\" class=\"data row0 col16\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col17\" class=\"data row0 col17\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col18\" class=\"data row0 col18\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row0_col19\" class=\"data row0 col19\" >0</td>\n",
|
|||
|
" </tr>\n",
|
|||
|
" <tr>\n",
|
|||
|
" <th id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35level0_row1\" class=\"row_heading level0 row1\" >1</th>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col0\" class=\"data row1 col0\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col1\" class=\"data row1 col1\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col2\" class=\"data row1 col2\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col3\" class=\"data row1 col3\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col4\" class=\"data row1 col4\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col5\" class=\"data row1 col5\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col6\" class=\"data row1 col6\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col7\" class=\"data row1 col7\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col8\" class=\"data row1 col8\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col9\" class=\"data row1 col9\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col10\" class=\"data row1 col10\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col11\" class=\"data row1 col11\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col12\" class=\"data row1 col12\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col13\" class=\"data row1 col13\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col14\" class=\"data row1 col14\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col15\" class=\"data row1 col15\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col16\" class=\"data row1 col16\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col17\" class=\"data row1 col17\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col18\" class=\"data row1 col18\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row1_col19\" class=\"data row1 col19\" >0</td>\n",
|
|||
|
" </tr>\n",
|
|||
|
" <tr>\n",
|
|||
|
" <th id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35level0_row2\" class=\"row_heading level0 row2\" >2</th>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col0\" class=\"data row2 col0\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col1\" class=\"data row2 col1\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col2\" class=\"data row2 col2\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col3\" class=\"data row2 col3\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col4\" class=\"data row2 col4\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col5\" class=\"data row2 col5\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col6\" class=\"data row2 col6\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col7\" class=\"data row2 col7\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col8\" class=\"data row2 col8\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col9\" class=\"data row2 col9\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col10\" class=\"data row2 col10\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col11\" class=\"data row2 col11\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col12\" class=\"data row2 col12\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col13\" class=\"data row2 col13\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col14\" class=\"data row2 col14\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col15\" class=\"data row2 col15\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col16\" class=\"data row2 col16\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col17\" class=\"data row2 col17\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col18\" class=\"data row2 col18\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row2_col19\" class=\"data row2 col19\" >0</td>\n",
|
|||
|
" </tr>\n",
|
|||
|
" <tr>\n",
|
|||
|
" <th id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35level0_row3\" class=\"row_heading level0 row3\" >3</th>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col0\" class=\"data row3 col0\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col1\" class=\"data row3 col1\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col2\" class=\"data row3 col2\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col3\" class=\"data row3 col3\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col4\" class=\"data row3 col4\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col5\" class=\"data row3 col5\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col6\" class=\"data row3 col6\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col7\" class=\"data row3 col7\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col8\" class=\"data row3 col8\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col9\" class=\"data row3 col9\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col10\" class=\"data row3 col10\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col11\" class=\"data row3 col11\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col12\" class=\"data row3 col12\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col13\" class=\"data row3 col13\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col14\" class=\"data row3 col14\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col15\" class=\"data row3 col15\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col16\" class=\"data row3 col16\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col17\" class=\"data row3 col17\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col18\" class=\"data row3 col18\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row3_col19\" class=\"data row3 col19\" >0</td>\n",
|
|||
|
" </tr>\n",
|
|||
|
" <tr>\n",
|
|||
|
" <th id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35level0_row4\" class=\"row_heading level0 row4\" >4</th>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col0\" class=\"data row4 col0\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col1\" class=\"data row4 col1\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col2\" class=\"data row4 col2\" >0</td>\n",
|
|||
|
" <td id=\"T_508423a8_5672_11ea_9acc_8f0047ef1a35row4_col3\" class=\"data row4 col3\" >0</td>\n",
"      <!-- remaining rows of the styled pixel-value table elided (output hidden in the book) -->\n",
"    </tr>\n",
"  </tbody></table>"
],
"text/plain": [
"<pandas.io.formats.style.Styler at 0x7fb709e80750>"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#hide_output\n",
"df = pd.DataFrame(im3_t[:10,:20])\n",
"df.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"Top section of a digit\" width=\"490\" src=\"images/att_00059.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There's a top edge at cell 5,7. Let's repeat our calculation there:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(762.)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"(im3_t[4:7,6:9] * top_edge).sum()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There's a right edge at cell 8,18. What does that give us?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(-29.)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"(im3_t[7:10,17:20] * top_edge).sum()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, this little calculation returns a high number where the 3x3 pixel square represents a top edge (i.e. where there are low values at the top of the square, and high values immediately underneath). That's because the `-1` values in our kernel have little impact in that case, but the `1` values have a lot.\n",
"\n",
"Let's look a tiny bit at the math. The filter will take any window of size 3 by 3 in our images, and if we name the pixel values like this:\n",
"\n",
"$$\\begin{matrix} a1 & a2 & a3 \\\\ a4 & a5 & a6 \\\\ a7 & a8 & a9 \\end{matrix}$$\n",
"\n",
"it will return $a7+a8+a9-a1-a2-a3$. If we are in a part of the image where $a1$, $a2$ and $a3$ are roughly the same as $a7$, $a8$ and $a9$, then the terms will cancel each other out and we will get 0. However, if $a7$ is greater than $a1$, $a8$ is greater than $a2$, and $a9$ is greater than $a3$, we will get a bigger number as a result. So this filter detects horizontal edges--more precisely, edges where we go from dark parts of the image at the top to bright parts at the bottom.\n",
"\n",
"Changing our filter to have the row of `1`s at the top and the `-1`s at the bottom would detect horizontal edges that go from bright to dark. Putting the `1`s and `-1`s in columns rather than rows would give us a filter that detects vertical edges. Each set of weights will produce a different kind of outcome.\n",
"\n",
"Let's create a function to do this for one location, and check that it matches our result from before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def apply_kernel(row, col, kernel):\n",
"    return (im3_t[row-1:row+2,col-1:col+2] * kernel).sum()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(762.)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"apply_kernel(5,7,top_edge)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But note that we can't apply it to the corner (such as location 0,0), since there isn't a complete 3x3 square there."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Mapping a convolution kernel"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can map `apply_kernel()` across the coordinate grid. That is, we'll be taking our 3x3 kernel, and applying it to each 3x3 section of our image. For instance, here are the positions a 3x3 kernel can be applied to in the first row of a 5x5 image:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"images/chapter9_nopadconv.svg\" id=\"nopad_conv\" caption=\"Applying a kernel across a grid\" alt=\"Applying a kernel across a grid\" width=\"400\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get a *grid* of coordinates we can use a *nested list comprehension*, like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[(1, 1), (1, 2), (1, 3), (1, 4)],\n",
" [(2, 1), (2, 2), (2, 3), (2, 4)],\n",
" [(3, 1), (3, 2), (3, 3), (3, 4)],\n",
" [(4, 1), (4, 2), (4, 3), (4, 4)]]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"[[(i,j) for j in range(1,5)] for i in range(1,5)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> note: Nested list comprehensions are used a lot in Python, so if you haven't seen them before, take a few minutes to make sure you understand what's happening here, and experiment with writing your own nested list comprehensions. For instance, here's a small one to get you started:"
]
},
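{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch to experiment with (the names are purely illustrative):\n",
"# the outer comprehension builds one row per value of i, the inner one the columns\n",
"[[i*j for j in range(1,4)] for i in range(1,4)]"
]
},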
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's the result of applying our kernel over a coordinate grid."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAE1UlEQVR4nO2c104cSxRFF9iYYHIGk0EkiSQsXuA3+Ak+iI/hAT8ihEBgRI4i2ORsgkn3wdpT0weu1YN75KurWi893dNd01TvPnXOrhIpz8/PeBypf/sG/mv4DjH4DjH4DjH4DjG8/92Xw8PD/9shaGhoKOW1414hBt8hBt8hBt8hBt8hht+OMuLp6QmAk5MTAM7OzgDY39+PnfPjxw8AVlZWALi9vQ20UVJSAsCHDx8Cx9PS0gAoKyuLHfv06RMA5eXlAGRmZoa5zUjwCjGEUoiUMDs7C8DY2BgABwcHoX9ofX099LkZGRkAVFRUANDb2wtAU1MT4NRk1RYFXiEG3yGGUK/M1dUV4CQqKXd1dcXOKSgoAKClpQWAwsJCALKysgA4PT0FIDU1+AwUfBWMAaanpwFYXV0FYGpqKvD7+fn5gf0o8QoxhFJIf38/ALW1tQBUV1cDkJ2d7Rp6/6upd+/eBbZWERZZmEdHR7FjCwsLAIyMjACwu7sLuGB7c3MDQE5OTpjbTwivEEMohZSWlgJQWVkJuCRLKgCXvN3f3wNwd3cHwMPDQ6AtfS8UQ+IVos9VVVUA1NXVBa6RqmxbUeAVYgilEI0Ah4eHAKSnp784R6n7z58/AacQ7Qs93Z2dHcDFi/jzpIzBwcHAvtjY2ABgc3MzzO0nhFeIIZRC9LTji7lEUS4jZUxOTgbabG5ujp2r/EaxQ/mGYouUmgy8QgyhFPInXF9fAzA3NwfAxMRE4HhfXx8AHR0dsWuklr29PcDFHRWIujYZeIUYkq6Q8/NzwOUjNTU1gDOB6uvrAVf7AIyOjgJuNPn27Rvgyv6GhobAvjLYKPAKMSRdIaqMW1tbAZfdpqT8mie6uLgAgnmIMtDj42MAvn79GjhXmbKMo87OTsCpLv53EsUrxJB0hagylkKkGJnLUoOePrjqViNPd3c34DLmmZkZwI1Yqod6enpibTQ2Nr7pfr1CDL5DDEl/ZWQdyqGXpVhUVBTYjzebNATLbtDQrGFYr4YKw+/fvwPuVQPIzc0FXAAOi1eIIekKUVGnQLi4uAg4ZeTl5QFQXFwcu0blvlT08eNHwD112ZJqQwVjPNZ2CItXiCHpCrHIStBWMSbe7FGs0NCpaQchZUg5GtJfI9F1uF4hhsgVoiJOxo+KO01TqCBT7HgNpexabfBvKP2Pjz9CprfaCotXiCEyhUgZy8vLgLMINSK0t7cD4SaXHh8fAVf2yzpUuq91I7ISXmvzrSaSV4ghMoUoS9ze3gbcdKMKNE2DyszRKCPiDWzZjePj44BTwOfPnwFnEKlNrTBS3IC3G9FeIYbIFKL6Q3WGtlLMly9fAJifnwderhu7vLyMfZZ6tMRC0xIyn2UMKYZIGYo5tr1E8AoxRKaQtrY24GVWKWUo+5QRZCe9NHIADAwMAC7+yCLUGjPFDilJ1e7W1tYf/x1eIQbfIYbIXhnJVyW7jBkVXprJl7xlGOm618p/tWHXlGmlgdpSoH5rII3HK8QQmUIULPXkZeZo9ZG2SuETQWm4htW1tTXAGUMyoaLAK8QQeQxZWloCnFI0L6MkS2ayCjXFhXjLT7FA1oFihewAxZBk4BViiNwgssZMogbN38YrxJDi/xlCEK8Qg+8Qg+8Qg+8Qg+8Qg+8Qwz/aP/Y2oVu6fAAAAABJRU5ErkJggg==\n",
"text/plain": [
"<Figure size 72x72 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"rng = range(1,27)\n",
"top_edge3 = tensor([[apply_kernel(i,j,top_edge) for j in rng] for i in rng])\n",
"\n",
"show_image(top_edge3);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking good! Our top edges are black, and bottom edges are white (since they are the *opposite* of top edges). Now that our *image* contains negative numbers too, matplotlib has automatically changed our colors, so that white is the smallest number in the image, black the highest, and zeros appear as grey.\n",
"\n",
"We can try the same thing for left edges:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAEa0lEQVR4nO2cyUosSxCGv9brPM8TDqgLZ1EXIgouRHwZH8dXEcGFS9GFKCoqqOCEA87zrGdx+Dur8nhaqy37Xi75baqpLLOTyD8jIiOrjby/v+MwJP3bA/iv4Qxi4Qxi4Qxi4Qxi8U+sxrGxsf9tCBodHY18dN8pxMIZxMIZxMIZxMIZxCJmlLF5eXkB4PT0FICKiopo2/n5OQBZWVkAnJycANDS0gLA7e0tAI+Pjx/2nZaWFv2cn58PwP39PQBXV1dBhvktnEIsAilkY2MDMDPmVUiYRCK/U4Tc3FzAqE73pUYpKEycQiycQSwCLZmpqSkA2tra/vpMSkoKANnZ2TH7yszMBOD4+BiApCQzN5eXlwAUFRX5/kYOen19HYDt7e0vj/2rOIVYBFLIw8MDYGY/PT092paXlweY8KnZVfj9jLu7u+jnyclJAEZGRgAoLS319amyp1NIAgikkM7OTgBqampCH8jh4WH08+zsLAADAwMAHB0dAZCTkwPAxcVF6N8vnEIsAimkr68PMGv4I56engC4vr6O2dfNzQ0Ae3t7fzyvREz09vYCcHZ2BjiFJJRAClHuILyRQby9vQF/38RplpV2y3coTQcYGhoCoLu7GzDRRSn72tpakGEHwinEIpBCvoOyz7m5OQCam5t97eXl5dHPlZWVADQ2NgKQkZEBwPT0tK+vn8ApxCJhClHpQOtf+yGpwFtK0DZf7OzsACYzLisrA4wP+yyiBcEpxCJhClHuoKv8gh25wOQ5UoYUoAgmBdXV1QFGKbu7u9E+lA8FxSnEImEK0W54cHAQMDOo+ok3+1UUGR8fB0xEkqpsH6O+S0pKovdmZmbiGqdTiIUziEXClozCq9Lvr6Bt/+vrK2CWhq4Kw4WFhYBJ6AAaGhoA2NzcDDROpxCLhClEp3G6qhyp4nKskoLaFhYWAOjo6PC165yovb09es8pJCQSphAbb4HaRkVlu5QwPz8PGJUVFBQApqQg1QEUFxfHNS6nEIsfU4hmVQdRTU1NgCkUHxwcfNqHSgJSivxBV1eX7zklal7VxVsicAqxCF0hUoYigtZ7f38/ACsrK4H7kppUSlQeIqQ6XQEWFxcDjx2cQv4gdIUsLy8DsL+/DxiF1NbWAmZLr/xD/kFX7zPPz8++Z5VnVFdXA5CcnAyYvMT7vki8x5xOIRahK0SzK18xPDwMmNnWu2ba9stPqB1MQUhRQ1Gkp6cHMNt8RRv5Dn03mNwkKE4hFqErRL5AvkNHllKMVwmfkZqaChifUV9fD5i3IXXkKd8Rb2Tx4hRi4QxiEfqS0TskCpVK3ScmJgBobW0FTLKl4o73bFfVdF3lePWMHKYKR6urq77738EpxCJ0hVRVVQGmqOPdkoPZdGl2hcIw+FPwj5BTXVpa8vUZBk4hFqErRGcnKuF99S3EWCiUb21tAT/7YwCnEIsfKxApyqgcKHQO6z1lA5PAedHPUBKJU4hFxP0zBD9OIRbOIBbOIBbOIBbOIBbOIBa/AEQyr63rTKk/AAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 72x72 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"left_edge = tensor([[-1,1,0],\n",
"                    [-1,1,0],\n",
"                    [-1,1,0]]).float()\n",
"\n",
"left_edge3 = tensor([[apply_kernel(i,j,left_edge) for j in rng] for i in rng])\n",
"\n",
"show_image(left_edge3);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This operation of applying a kernel over a grid in this way is called *convolution*. In the paper [A guide to convolution arithmetic for deep learning](https://arxiv.org/abs/1603.07285) there are many great diagrams showing how image kernels can be applied. Here's an example from the paper showing (at bottom) a light blue 4x4 image, with a dark blue 3x3 kernel being applied, creating a 2x2 green output activation map at the top. (We'll be using quite a few images from this paper in this book--when you see images in this style, you'll know they're from this great paper.)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"Result of applying a 3x3 kernel to a 4x4 image\" width=\"782\" caption=\"Result of applying a 3x3 kernel to a 4x4 image\" id=\"three_ex_four_conv\" src=\"images/att_00028.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Look at the shape of the result. If the original image has a height of `h` and a width of `w`, how many 3 by 3 windows can we find? As you can see from the example, there are `h-2` by `w-2` windows, so the image we get as a result has a height of `h-2` and a width of `w-2`."
]
},
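{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can check that arithmetic against what we just computed: our 28x28 image produced a 26x26 map of edge scores, losing one pixel on each side:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 28x28 input, 3x3 kernel, no padding -> 26x26 result\n",
"im3_t.shape, top_edge3.shape"
]
},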
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Convolutions in PyTorch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Convolution is such an important and widely-used operation that PyTorch has it built in. It's called `F.conv2d`. The PyTorch docs tell us that it includes these parameters:\n",
"\n",
"- **input**: input tensor of shape `(minibatch, in_channels, iH, iW)`\n",
"- **weight**: filters of shape `(out_channels, in_channels, kH, kW)`\n",
"\n",
"Here `iH,iW` is the height and width of the image (i.e. `28,28`), and `kH,kW` is the height and width of our kernel (`3,3`). So PyTorch expects rank-4 tensors for both of these arguments, but currently we only have rank-2 tensors (i.e. matrices, or arrays with two axes).\n",
"\n",
"The reason for these extra axes is that PyTorch has a few tricks up its sleeve. The first trick is that PyTorch can apply a convolution to multiple images at the same time. That means we can call it on every item in a batch at once!\n",
"\n",
"The second trick is that PyTorch can apply multiple kernels at the same time. So let's create the diagonal edge kernels too, and then stack all four of our edge kernels into a single tensor:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4, 3, 3])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"diag1_edge = tensor([[ 0,-1, 1],\n",
"                     [-1, 1, 0],\n",
"                     [ 1, 0, 0]]).float()\n",
"diag2_edge = tensor([[ 1,-1, 0],\n",
"                     [ 0, 1,-1],\n",
"                     [ 0, 0, 1]]).float()\n",
"\n",
"edge_kernels = torch.stack([left_edge, top_edge, diag1_edge, diag2_edge])\n",
"edge_kernels.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to test on a mini-batch, we'll need a `DataLoader`, and a sample mini-batch. Let's use the data block API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([64, 1, 28, 28])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mnist = DataBlock((ImageBlock(cls=PILImageBW), CategoryBlock), \n",
"                  get_items=get_image_files, \n",
"                  splitter=GrandparentSplitter(),\n",
"                  get_y=parent_label)\n",
"\n",
"dls = mnist.dataloaders(path)\n",
"xb,yb = first(dls.valid)\n",
"xb.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, fastai puts data on the GPU when using data blocks. Let's move it to the CPU for our examples:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"xb,yb = to_cpu(xb),to_cpu(yb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One batch contains 64 images, each of 1 channel, with 28x28 pixels. `F.conv2d` can handle multi-channel (e.g. colour) images. A *channel* is a single basic color in an image--for regular full color images there are 3 channels: red, green, and blue. PyTorch represents an image as a rank-3 tensor, with dimensions channels x rows x columns.\n",
"\n",
"We'll see how to handle more than one channel later in this chapter. Kernels passed to `F.conv2d` need to be rank-4 tensors: out_channels x in_channels x rows x columns. `edge_kernels` is currently missing one of these: the `1` for `in_channels`. We need to tell PyTorch that the number of input channels in the kernel is one, by inserting an axis of size one (this is known as a *unit axis*) in the second position, since the PyTorch docs show that's where `in_channels` is expected. To insert a unit axis into a tensor, we use the `unsqueeze` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(torch.Size([4, 3, 3]), torch.Size([4, 1, 3, 3]))"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"edge_kernels.shape,edge_kernels.unsqueeze(1).shape"
]
},
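{
"cell_type": "markdown",
"metadata": {},
"source": [
"(As an aside--a small sketch, not part of the main thread--indexing with `None` inserts a unit axis in the same place, an idiom you'll often see in PyTorch code:)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Indexing with None is equivalent to unsqueeze(1) here\n",
"edge_kernels[:,None].shape"
]
},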
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is now the correct shape for `edge_kernels`, so let's make the change permanent and pass everything to `conv2d`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"edge_kernels = edge_kernels.unsqueeze(1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([64, 4, 26, 26])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"batch_features = F.conv2d(xb, edge_kernels)\n",
"batch_features.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output shape shows our 64 images in the mini-batch, 4 kernels, and 26x26 edge maps (we started with 28x28 images, but lost one pixel from each side as discussed earlier). We can see we get the same results as when we did this manually:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAEQAAABECAYAAAA4E5OyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAADdUlEQVR4nO2cyUorQRRAT5wVFJxHVARxgDgguHbjzh9wq1s/x1/RrStXggqCExFRcZFExQEnHBeP25W+rw0x6XQ/Hvesmkp1p7x9UnXrdmPi6+sLw1ER9wD+NSwgCguIwgKisIAoqvJ9uLq6+t8uQSsrK4mgdjNEYQFRWEAUFhCFBURhAVFYQBQWEIUFRGEBUeRN3eNCilaXl5cAvL+/A9Df3w9AQ0MDACMjIwAcHx8D8Pz8XPJ3myGK2A15e3sD4Obmxmt7eXkB4PPzE4BEInAfxuDgoO/ci4uLksdjhihCM+Tu7g5wv+/q6urAfufn5wD09PQAzgKxohDk2l1dXQA0NTUVMeJgzBBFSYaIFQD7+/sATE9PAz8bUggyZ/T19QVeq66uztcvjNVFMEMUJRlye3vrHW9tbQEwOTmZ9xy5u4Lc/YmJCa9NVp7t7W0ARkdHfee0t7cDLl+5urr69dh/wgxRhLbKPDw8AC67HB4eBqC1tRVwq4nkDpIzfHx8AFBfX+9dS1acjY0NwBkic8bQ0BDgDL2/vw/rzzBDNCUZkrv+z87OAlBZWVnUtWTeANjb2wP8qxjA2NgYADU1NQCcnp4W9V35MEMUJRnS3NzsHY+PjwP+Ox2EzA/6NQzJYAHW19cBmJmZ8fWRuUSucXBwUMyw82KGKEJbZXp7ewPbr6+vCzo/lUp5x5lMBoDl5WUAOjs7AWhpaQFgc3MTcCtbmJghCguIIvYC0dPTEwA7Ozte29TUFOBS9GQyCbhCkCR/5cAMUcRuiEymMpECLC4uAi5FF1N2d3eB3xWTfosZoojNENnsyRZfSooA3d3dvraqqj/DPDo68p1bDswQRWyGnJycAC5xW1hY8D6TzdvAwAAAh4eHQHnNEMwQReSGPD4+Am7FkEJSbuovZUhZTcJ4AFUoZogickMk35Ai0NLSEgAdHR1eH9nmS9/c0kC5MUMUkRuSzWYB94qD5BptbW1eHylMr62tRTw6M+QvIjdEHipJgVpyDnk8AW4HHObjhUIxQxQWEEXkPxl5S0Cq9ILUTaHwOmw5MEMUkRtydnYGwPz8vK+9sbHRO5aNXxyYIYrIDZH3Q15fXwGora0FoKLC3RtZduPADFFEbsjc3BzgEjIhnU57x7nvrEaNGaJI2D9D8GOGKCwgCguIwgKisIAoLCCKb79WEcYbcUyrAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 72x72 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"show_image(batch_features[0,0]);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The most important trick that PyTorch has up its sleeve is that it can use the GPU to do all this work in parallel--that is, applying multiple kernels, to multiple images, across multiple channels. Doing lots of work in parallel is critical to getting GPUs to work efficiently; if we did each of these operations one at a time, we'd often run hundreds of times slower (and if we used our manual convolution loop from the previous section, we'd be millions of times slower!). Therefore, to become a strong deep learning practitioner, one skill to practice is giving your GPU plenty of work to do at a time."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Strides and padding"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It would be nice to not lose those two pixels on each axis. The way we do that is to add *padding*, which is simply additional pixels added around the outside of our image. Most commonly, pixels of zeros are added. With appropriate padding, we can ensure that the output activation map is the same size as the original image, which can make things a lot simpler when we construct our architectures."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"images/chapter9_padconv.svg\" id=\"pad_conv\" caption=\"Padding with a convolution\" alt=\"Padding with a convolution\" width=\"600\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With a 5x5 input, a 4x4 kernel, and 2 pixels of padding, we end up with a 6x6 activation map:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"4x4 kernel with 5x5 input and 2 pixels of padding\" width=\"783\" caption=\"4x4 kernel with 5x5 input and 2 pixels of padding\" id=\"four_by_five_conv\" src=\"images/att_00029.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we use a kernel of size `ks` by `ks` (with `ks` an odd number), the necessary padding on each side to keep the same shape is `ks//2`. An even number for `ks` would require different amounts of padding on the top/bottom and left/right, but in practice we almost never use an even filter size.\n",
"\n",
"So far, when we have applied the kernel to the grid, we have moved it one pixel over at a time. But we can jump further; for instance, we could move over two pixels after each kernel application. This is known as a *stride-2* convolution. The most common kernel size in practice is 3x3, and the most common padding is 1. As you'll see, stride-2 convolutions are useful for decreasing the size of our outputs, and stride-1 convolutions are useful for adding layers without changing the output size."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"3x3 kernel with 5x5 input, stride 2 convolution, and 1 pixel of padding\" width=\"774\" caption=\"3x3 kernel with 5x5 input, stride 2 convolution, and 1 pixel of padding\" id=\"three_by_five_conv\" src=\"images/att_00030.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In an image of size `h` by `w` like before, using a padding of 1 and a stride of 2 will give us a result of size `(h+1)//2` by `(w+1)//2`. The general formula for each dimension is `(n + 2*pad - ks)//stride + 1`, where `pad` is the padding, `ks` the size of our kernel, and `stride` the stride."
]
},
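{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's a quick check of that formula against an actual convolution (just a sketch--the random input and kernel are purely illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compare (n + 2*pad - ks)//stride + 1 with the shape F.conv2d actually produces\n",
"n,ks,stride,pad = 28,3,2,1\n",
"x = torch.randn(1, 1, n, n)\n",
"w = torch.randn(1, 1, ks, ks)\n",
"F.conv2d(x, w, stride=stride, padding=pad).shape, (n + 2*pad - ks)//stride + 1"
]
},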
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### CNNs from different viewpoints"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"fast.ai student Matt Kleinsmith came up with the very clever idea of showing [CNNs from different viewpoints](https://medium.com/impactai/cnns-from-different-viewpoints-fab7f52d159c). In fact, it's so clever, and so helpful, we're going to show it here too!\n",
"\n",
"Here's our 3x3 pixel *image*, with each *pixel* labeled with a letter:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"The image\" width=\"75\" src=\"images/att_00032.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"...and our kernel, with each weight labeled with a Greek letter:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"The kernel\" width=\"55\" src=\"images/att_00033.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the filter fits in the image four times, we have four results:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"The activations\" width=\"52\" src=\"images/att_00034.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's how we applied the kernel to each section of the image to yield each result:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"Applying the kernel\" width=\"366\" caption=\"Applying the kernel\" id=\"apply_kernel\" src=\"images/att_00035.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The equation view:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"The equation\" width=\"436\" caption=\"The equation\" id=\"eq_view\" src=\"images/att_00036.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that the bias term, b, is the same for each section of the image. You can consider the bias as part of the filter, just like the weights (α, β, γ, δ) are part of the filter.\n",
"\n",
"The compact equation view:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"The compact equation\" width=\"218\" src=\"images/att_00037.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's an interesting insight -- a convolution can be represented as a special kind of matrix multiplication. The weight matrix is just like the ones from traditional neural networks. However, this weight matrix has two special properties:\n",
"\n",
"1. The zeros shown in gray are untrainable. This means that they'll stay zero throughout the optimization process.\n",
"1. Some of the weights are equal, and while they are trainable (i.e. changeable), they must remain equal. These are called *shared weights*.\n",
"\n",
"The zeros correspond to the pixels that the filter can't touch. Each row of the weight matrix corresponds to one application of the filter."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"Convolution as matrix multiplication\" width=\"683\" caption=\"Convolution as matrix multiplication\" id=\"conv_matmul\" src=\"images/att_00038.png\">"
]
},
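{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here's a minimal sketch that builds such a weight matrix by hand for a 2x2 kernel applied to a 3x3 image (matching the lettered example above), and checks it against `F.conv2d`. The random values are purely illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Express a 2x2 convolution over a 3x3 image as one matrix multiplication\n",
"img  = torch.randn(3, 3)\n",
"kern = torch.randn(2, 2)   # the weights alpha, beta, gamma, delta\n",
"a,b,c,d = kern.flatten().tolist()\n",
"# Each row applies the kernel to one 2x2 window; the zeros are the 'gray' untrainable entries\n",
"W = tensor([[a, b, 0, c, d, 0, 0, 0, 0],\n",
"            [0, a, b, 0, c, d, 0, 0, 0],\n",
"            [0, 0, 0, a, b, 0, c, d, 0],\n",
"            [0, 0, 0, 0, a, b, 0, c, d]])\n",
"conv_res   = F.conv2d(img[None,None], kern[None,None]).flatten()\n",
"matmul_res = W @ img.flatten()\n",
"torch.allclose(conv_res, matmul_res)"
]
},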
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Our first convolutional neural network"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Learning kernels"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There is no reason to believe that these particular edge filters are the most useful kernels for image recognition. Furthermore, we've seen that in later layers convolutional kernels become complex transformations of features from lower levels — we do not have a good idea of how to manually construct these.\n",
"\n",
"Instead, it would be best to learn the values of the kernels. We already know how to do this — SGD! In effect, the model will learn the features that are useful for classification.\n",
"\n",
"When we use convolutions instead of (or in addition to) regular linear layers we create a *convolutional neural network*, or *CNN*."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating the CNN"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's the basic neural network we had in <<chapter_mnist_basics>>:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"simple_net = nn.Sequential(\n",
"    nn.Linear(28*28,30),\n",
"    nn.ReLU(),\n",
"    nn.Linear(30,1)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can view a model's definition:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Sequential(\n",
"  (0): Linear(in_features=784, out_features=30, bias=True)\n",
"  (1): ReLU()\n",
"  (2): Linear(in_features=30, out_features=1, bias=True)\n",
")"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"simple_net"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now want to create a similar architecture to this linear model, but using convolutional layers instead of linear. `nn.Conv2d` is the module equivalent of `F.conv2d`. It's more convenient than `F.conv2d` when creating an architecture, because it creates the weight matrix for us automatically when we instantiate it.\n",
"\n",
"Here's a possible architecture:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"broken_cnn = sequential(\n",
"    nn.Conv2d(1,30, kernel_size=3, padding=1),\n",
"    nn.ReLU(),\n",
"    nn.Conv2d(30,1, kernel_size=3, padding=1)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One thing to note here is that we didn't need to specify \"28x28\" as the input size. That's because a linear layer needs a weight in the weight matrix for every pixel, so it needs to know how many pixels there are; but a convolution is applied over each pixel automatically. The weights only depend on the number of input and output channels, and the kernel size, as we saw in the previous section.\n",
"\n",
"Have a think about what the output shape is going to be.\n",
"\n",
"Let's try it and see:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([64, 1, 28, 28])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"broken_cnn(xb).shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is not something we can use to do classification, since we need a single output activation per image, not a 28x28 map of activations. One way to deal with this is to use enough stride-2 convolutions such that the final layer is size 1. That is, after one stride-2 convolution, the size will be 14x14, after 2 it will be 7x7, then 4x4, 2x2, and finally size 1.\n",
"\n",
"Let's try that now. First, we'll define a function with the basic parameters we'll use in each convolution:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def conv(ni, nf, ks=3, act=True):\n",
"    res = nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)\n",
"    if act: res = nn.Sequential(res, nn.ReLU())\n",
"    return res"
]
},
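{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, here's a quick peek at what this helper builds (a sketch; any `ni` and `nf` values would do):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A stride-2 Conv2d followed by a ReLU, wrapped in a Sequential\n",
"conv(1, 4)"
]
},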
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> important: Refactoring parts of your neural networks like this makes it much less likely you'll get errors due to inconsistencies in your architectures, and makes it more obvious to the reader which parts of your layers are actually changing."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When we use a stride-2 convolution, we often increase the number of features at the same time. This is because we're decreasing the number of activations in the activation map by a factor of 4; we don't want to decrease the capacity of a layer by too much at a time."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> jargon: channels and features: These two terms are largely used interchangeably, and refer to the size of the second axis of a weight matrix, which is, therefore, the number of activations per grid cell after a convolution. *Features* is never used to refer to the input data, but *channels* can refer to either the input data (generally channels are colors) or activations inside the network."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"simple_cnn = sequential(\n",
"    conv(1 ,4),            #14x14\n",
"    conv(4 ,8),            #7x7\n",
"    conv(8 ,16),           #4x4\n",
"    conv(16,32),           #2x2\n",
"    conv(32,2, act=False), #1x1\n",
"    Flatten(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> j: I like to add comments like the above after each convolution to show how large the activation map will be after each layer. The above comments assume that the input size is 28x28."
]
},
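{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can verify those comments by tracing a dummy batch through the layers (a quick sketch; the random batch is just for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print the activation shape after each layer of simple_cnn\n",
"x = torch.randn(64, 1, 28, 28)\n",
"for layer in simple_cnn:\n",
"    x = layer(x)\n",
"    print(type(layer).__name__, x.shape)"
]
},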
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now the network outputs two activations, which map to the two possible classes in our labels:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([64, 2])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"simple_cnn(xb).shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now create our `Learner`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = Learner(dls, simple_cnn, loss_func=F.cross_entropy, metrics=accuracy)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see exactly what's going on in your model, use `summary`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Sequential (Input shape: ['64 x 1 x 28 x 28'])\n",
"================================================================\n",
"Layer (type)         Output Shape         Param #    Trainable \n",
"================================================================\n",
"Conv2d               64 x 4 x 14 x 14     40         True      \n",
"________________________________________________________________\n",
"ReLU                 64 x 4 x 14 x 14     0          False     \n",
"________________________________________________________________\n",
"Conv2d               64 x 8 x 7 x 7       296        True      \n",
"________________________________________________________________\n",
"ReLU                 64 x 8 x 7 x 7       0          False     \n",
"________________________________________________________________\n",
"Conv2d               64 x 16 x 4 x 4      1,168      True      \n",
"________________________________________________________________\n",
"ReLU                 64 x 16 x 4 x 4      0          False     \n",
"________________________________________________________________\n",
"Conv2d               64 x 32 x 2 x 2      4,640      True      \n",
"________________________________________________________________\n",
"ReLU                 64 x 32 x 2 x 2      0          False     \n",
"________________________________________________________________\n",
"Conv2d               64 x 2 x 1 x 1       578        True      \n",
"________________________________________________________________\n",
"Flatten              64 x 2               0          False     \n",
"________________________________________________________________\n",
"\n",
"Total params: 6,722\n",
"Total trainable params: 6,722\n",
"Total non-trainable params: 0\n",
"\n",
"Optimizer used: <function Adam at 0x7fbc9c258cb0>\n",
"Loss function: <function cross_entropy at 0x7fbca9ba0170>\n",
"\n",
"Callbacks:\n",
"  - TrainEvalCallback\n",
"  - Recorder\n",
"  - ProgressCallback"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"learn.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the output of the final Conv2d layer is `64x2x1x1`. We need to remove those extra `1x1` axes; that's what `Flatten` does. It's basically the same as PyTorch's `squeeze` method, but as a module.\n",
"\n",
"Let's see if this trains! Since this is a deeper network than we've built from scratch before, we'll use a lower learning rate and more epochs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: left;\">\n",
"      <th>epoch</th>\n",
"      <th>train_loss</th>\n",
"      <th>valid_loss</th>\n",
"      <th>accuracy</th>\n",
"      <th>time</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <td>0</td>\n",
"      <td>0.072684</td>\n",
"      <td>0.045110</td>\n",
"      <td>0.990186</td>\n",
"      <td>00:05</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>1</td>\n",
"      <td>0.022580</td>\n",
"      <td>0.030775</td>\n",
"      <td>0.990186</td>\n",
"      <td>00:05</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"learn.fit_one_cycle(2, 0.01)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Success! It's getting closer to the resnet-18 result we had, although it's not quite there yet, and it's taking more epochs, and we need to use a lower learning rate. So we've got a few more tricks still to learn--but we're getting closer and closer to being able to create a modern CNN from scratch."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Understanding convolution arithmetic"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see from the summary that we have an input of size `64x1x28x28`. The axes are `batch,channel,height,width`. This is often represented as `NCHW` (where `N` refers to batch size). TensorFlow, on the other hand, uses `NHWC` axis order. The first layer is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Sequential(\n",
"  (0): Conv2d(1, 4, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
"  (1): ReLU()\n",
")"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"m = learn.model[0]\n",
"m"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So we have a 1-channel input, a 4-channel output, and a 3x3 kernel. Let's check the weights of the first convolution:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4, 1, 3, 3])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"m[0].weight.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The summary shows we have 40 parameters, and `4*1*3*3` is 36. What are the other 4 parameters? Let's see what the bias contains:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"m[0].bias.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now use this information to better understand our earlier statement in this section: \"because we're decreasing the number of activations in the activation map by a factor of 4; we don't want to decrease the capacity of a layer by too much at a time\".\n",
"\n",
"There is one bias for each channel. (Sometimes channels are called *features* or *filters* when they are not input channels.) The output shape is `64x4x14x14`, and this will therefore become the input shape to the next layer. The next layer, according to `summary`, has 296 parameters. Let's ignore the batch axis to keep things simple. So for each of the `14*14=196` locations of that layer's input grid we are multiplying `296-8=288` weights (ignoring the bias for simplicity), so that's `196*288=56_448` multiplications at this layer. The next layer will have `7*7*(1168-16)=56_448` multiplications.\n",
"\n",
"So what happened here is that our stride-2 conv halved the *grid size* from `14x14` to `7x7`, and we doubled the *number of filters* from 8 to 16, resulting in no overall change in the amount of computation. If we left the number of channels the same in each stride-2 layer, the amount of computation being done in the net would get less and less as it gets deeper. But we know that the deeper layers have to compute semantically rich features (such as eyes, or fur), so we wouldn't expect that doing *less* compute would make sense."
]
},
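{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's that bookkeeping written out as code (a sketch that just reproduces the arithmetic above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# weights per kernel application (params minus biases), times number of locations\n",
"layer2_mults = 14*14 * (296-8)\n",
"layer3_mults = 7*7 * (1168-16)\n",
"layer2_mults, layer3_mults"
]
},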
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Receptive fields"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another way to think of this is based on *receptive fields*. The \"receptive field\" is the area of an image that is involved in the calculation of a layer. On the book website, you'll find an Excel spreadsheet called `conv-example.xlsx` that shows the calculation of two stride-2 convolutional layers using an MNIST digit. Each layer has a single kernel. If we click on one of the cells in the *conv2* section, which shows the output of the second convolutional layer, and click *trace precedents*, we see this:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"Immediate precedents of conv2 layer\" width=\"308\" caption=\"Immediate precedents of conv2 layer\" id=\"preced1\" src=\"images/att_00068.png\">"
]
},
|
|||
|
{
|
|||
|
"cell_type": "markdown",
|
|||
|
"metadata": {},
|
|||
|
"source": [
|
|||
|
"Here, the green cell is the cell we clicked on, and the blue highlighted cells are its *precedents*--that is, the cells used to calculate its value. These cells are the corresponding 3x3 area of cells from the input layer (on the left), and the cells from the filter (on the right). Let's now click *show precedents* again, to show what cells are used to calculate these inputs, and see what happens:"
|
|||
|
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"Secondary precedents of conv2 layer\" width=\"601\" caption=\"Secondary precedents of conv2 layer\" id=\"preced2\" src=\"images/att_00069.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we just have two convolutional layers, each of stride 2, so this is now tracing right back to the input image. We can see that a 7x7 area of cells in the input layer is used to calculate the single green cell in the Conv2 layer. This 7x7 area is the *receptive field*, in the input, of the green activation in Conv2. We can also see that a second filter kernel is needed now, since we have two layers.\n",
"\n",
"As you see from this example, the deeper we are in the network (specifically, the more stride 2 convs we have before a layer), the larger the receptive field for an activation in that layer. A large receptive field means that a large amount of the input image is used to calculate each activation in that layer. So we know now that in the deeper layers of the network, we have semantically rich features, corresponding to larger receptive fields. Therefore, we'd expect that we'd need more weights for each of our features to handle this increasing complexity. This is another way of seeing the same thing we saw in the previous section: when we introduce a stride 2 conv in our network, we should also increase the number of channels."
]
},
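{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (this little recurrence is our own, not something from fastai), we can compute how the receptive field grows as we stack 3x3 convolutions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Receptive field of one activation after a stack of convolutions.\n",
"# r: receptive field size in input pixels; j: distance in input pixels\n",
"# between adjacent activations at the current layer.\n",
"def receptive_field(layers):\n",
"    r, j = 1, 1\n",
"    for ks, stride in layers:\n",
"        r += (ks - 1) * j\n",
"        j *= stride\n",
"    return r\n",
"\n",
"receptive_field([(3,2), (3,2)])  # two stride 2, 3x3 convs -> 7, as in the spreadsheet"
]
},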
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### A note about Twitter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are not, to say the least, big users of social networks in general. But our goal in this book is to help you become the best deep learning practitioner you can be, and we would be remiss not to mention how important Twitter has been in our own deep learning journeys.\n",
"\n",
"You see, there's another part of Twitter, far away from Donald Trump and the Kardashians, which is the part of Twitter where deep learning researchers and practitioners talk shop every day. As we were writing the section above, Jeremy wanted to double-check that what we were saying about stride 2 convolutions was accurate, so he asked on Twitter:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"twitter 1\" width=\"500\" src=\"images/att_00064.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A few minutes later, this answer popped up:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"twitter 2\" width=\"500\" src=\"images/att_00065.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Christian Szegedy is the first author of [Inception](https://arxiv.org/pdf/1409.4842.pdf), the 2014 ImageNet winner and the source of many key insights used in modern neural networks. Two hours later, this appeared:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"twitter 3\" width=\"500\" src=\"images/att_00066.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Do you recognize that name? We saw a picture of him back in <<chapter_production>>, when we were talking about the Turing Award winners who laid the foundations of today's deep learning!\n",
"\n",
"Jeremy also asked on Twitter for help checking that our description of label smoothing in <<chapter_sizing_and_tta>> was accurate, and again got a response directly from Christian Szegedy (label smoothing was originally introduced in the Inception paper):"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"twitter 4\" width=\"500\" src=\"images/att_00067.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Many of the top people in deep learning today are Twitter regulars, and are very open about interacting with the wider community. One good way to get started is to look at a list of Jeremy's [recent Twitter likes](https://twitter.com/jeremyphoward/likes), or [Sylvain's](https://twitter.com/GuggerSylvain/likes). That way, you can see a list of Twitter users that we thought had interesting and useful things to say.\n",
"\n",
"Twitter is the main way we both stay up to date with interesting papers, software releases, and other deep learning news. For making connections with the deep learning community, we recommend getting involved both in the [fast.ai forums](https://forums.fast.ai) and Twitter."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Colour images"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A colour picture is a rank-3 tensor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([3, 1000, 846])"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"im = image2tensor(Image.open('images/grizzly.jpg'))\n",
"im.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAPQAAAEeCAYAAAC9hziuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy7Sa9tW3bn9ZvlKnZ1invOLd+78V6896J2hNO2CKfTZEoIEiGlhJBoIHcovgE9vgA9Wkj0AClp0ExoICGBUjLglLEdcpQv/Kpbl+eealermNWgsW5GjwgpjIQUuqO7G3vvteZ/jvEvhhIR3tW7ele/G6X///4B7+pdvav/7+odoN/Vu/odqneAflfv6neo3gH6Xb2r36F6B+h39a5+h+odoN/Vu/odKvvrPvwv/8f/XCIDpUDjK9p6wUItON+fUZmKno75fMm96mPGuOYsnrHZrck6QrGIRDKR2rW0yiJW89Hp9/naycegEn/14P9iZluCRF5tnhNSRKNwlSelRG0altWKUgrKKESEkoUxBUbZTT8/CoPsUaIQCjH2+GqGVS3OG1b2AKfgatyQC4jJeGVwrqKxDVlnSimICKEfiCUy5B6Jkdn8kHm1wGqFyECWzEW3JpaMUzUFIZWBWjUoXVBYsokI0JoFferoho7aT99llEcVYZuuAA1ZaAykIhTlcRREMpXRtPUxXYhcp2scGW0qco4UHHM9Z1GvWNUn7LmmD1tKgbEMlFIgKo6XpxxXN1nv3/AyPGbMCS+eyjaMJTCzDUfVEu9aLsZn9GOmbZd888b3aJRmH17x8PI1Z/sLgoy01ZzT1X1C6Xl99ZRGGoJEvLEsm2P6uOdquEKAIiMhRXLM1L5i5pcs/Jx96ujKBZVyNPoQEcUwDFyXK5QovK1QSjDOogRSKiileH91g2V1QMlClxLPNi8REUQEJYVKN9P/diAiWGU5bqBtTomiuByuGGNHF/aoqNjna0RnvJmTVcEbD0VRKQPKolHsxh5rIpWrubG4g1GWddjQjR0xdRjjWFQrxnHEarg1axlyw+PdE0K/Y9msqAxYejIrhtjRy0DjFojSJCK6OAoZ0FRUqJIxVpFVYGaPEQWRgZALkjKbfo3Xmv/iP/yv1W8FaOccqkDxAspSu5qSMs45rPVUSmh0i1E9Ua7Z7S8QbZCs0CJIgaaZ4U2N1ZakMjEnkMD1/oycI8kkYhwIKWCKwRiHZI1SCldXRIkMYcAYReNn1JVnSD0GR8gDWRW0WMa8B9EkDXNbM7czKldRSmbIEW0VTdVQUkaKwlrDdthSWTeBi4RYTYgBbzxJazQKpRSpCDFqgmTCmNFWg84YNMbUOFOhNaSUkKKoXANFyDnhjMUrj9Ya7zzX/QUpJZyqwQox96AaRAt9LqSSwTpq2SBKcNZi8KRSiAJGgXiFtRarYBh7igKlFDkLKUUwsLAtmmsuxzNEDFYUxYEU4XR+By8KJWcMYSRnS+UVbbWksoUuCo+u9rx377scxcjV5gqvFR/e/JhXl89IdaYPa1bmiFk9Y2YqrGmY1we8ePMV+xwpklmtDokxYpVmjD2pdJSSMNaDNmRJUIHqFEZNR7FoQWeoaMiqQwv0ObJQgT6NdDmTiRjlEMkooylGYbImkyhFMN6AccSSSXkgpj27uEOJomiF1Q1aKTQGSkEAbTJWFzSWJAWtoSiHaIcuW5xvad2MIQ4IkThes8kdxnhQDZSeWiKtduwlU7SlKAsaXLHslWGMgTFdUldLSkk4ZfC+wmmHShplpmeQMuz0jto0FAWkkRBHSg7kUn77Di2qYEXRjQXrFW2p2eRr5m4GgLEtJhm2cc92SPRDRHRAi8VVBoxFa40Vh1KaVlucsQxj4eXVOX0Y0SjW3TVZErkEDJnKNtS6wmRLySNj6dBobPFoFI1tOB9eU1IhlpHKLpAM2ihqV6O1xXuPyZaYewoF7cFkAyJkX7janOHNDNPWkDWjypQh4VWF0opSQEShCxQU3dhRiGSVkJIRmS6pyjm8qvDOE13A54xRliFv0Vgq6zBGYbXDiGUIPaSeURdKzjjbUClLKgGFwdsaUHTB0JeBhbck3WJVggCSFbZYdISOLUPpMEVT+5ZMJJaRo/qYpat5tX9KnzqCCAZDrRzHs1PmbkVIV+zynJlpaWmoZjNO21soPWN5o+a9uuWk1mBq7swPqeuamg12ucRI4tnZyMc371NZQZG5Wd1iiDsuN+eQLPuwQyVo1AxrPX3YM5YRESGJx+hEZStSn1FKoZTgtEOU0NgWLZoyJowWxpDoXcfVuCZToYrCAMY5UgqMYY9zFZlC5SyKRMoBrTV92RPzCEmRJFObCmM03jpKhEH2iAheQ9QKJVBEYYyZJoASyTK9K8kOVRRaN4w6oVTEKkfOgcvUMyOikqKtG7RoVFGgagoah6YUQauM5IIgZAno3FDpCqxiLD2FTCRgoqMAznqiicTQQxkZfgNJ/rWAzrGQdaLICMaCShRR1KaiLwWXNBjNMHbEBLFkNBrlQLRCa9DaoyVT8kiSBqUUXnl245YkPSINXewZcsCgcH7689Z6hMxYAqkkLJYh92gBFKQSiSlQ6xaRjHUOoww5F6wYQghklSgUMBqHYxxHohpZby/wrsFajSShpExRCasNfRnRxb49VDCEEa3NNP7kiDOemPYoldDKk2KhbaZOnpMQcsAQJ5A5j+SC1pqQRoyqkAxJFA4NoihFKKag0BRJgEHlzKjK9Cz1knlVEXJCtEIZhfeOLAEliZwjqGlErXRNNpmlOwDZsY07jHHkcU8xkVmpkZjYqQtSHlnNTijRsaw8pzfucVC3CCNWBk6rhOnW+NURJu9oZE+6vsI7w8JsOGlqKtngc6R2c0wFvZ7z4a1P6GLP0zePaIwnlYhzjm5ULNtjdv0Go/x0OSeL0x5nPCH2VFVDo+fUVc0wdmQiqgh93LMZISeLs57a54kipRGtLZgyjepiiDmhlUGVQBctuxTYx3E6NMBYehb2ADKIShhrSSmRS4VTmpgFpRRaNFlnoiSKaEppKSVAyVhtidKQy0BWnpn1hCigB4rKOOWo3YyiIiGDFQW2QjQoDNoaVBFyHElmZCyGlAUhEEkoJUQJtMym55QHlCiSCBr32wM6yIhVGqcrWl2hdEAk048bsq3JIqiYIDm2rMlZEJVodYtVnpQLoQy4eoZSHqUUfdiyGS7ZpS2SNFkltNbUukGjptvQKIY4UNmGLl4RYsT41cS5ckSpiScVk0GDkPDWI2KYVzW1big5cZUuEFNoWWBKzVA6hthR9NQFsiTGFEllpI8dtW2myUQL82rOID1DN+KxjHECc9YJsKQSMfTMqzkhBGKJjERSjGQNxhisaEQLIopMJuiR+PaleVOQVBDR5JzxanrhYxkISlO5ChHFqr2BVY5tvKZnx5gKrvRoW6GsR+PIEsmqUFUVzlYc+QNebL8iFYu1cFTNcBhqNWOkJ/SRmVtxsvyA2fwIL5HD+ZxhuCYOa3TsUaEnXW0wQSBnrp89p8SEW
1TUruZkMSO9+Jyzp+fc/9on6DsB26y420Zwc+bqW2grxJwIIbA8OmSz33H//reoKsdXDz+lqERJiSwJcYaZW2I1mKzxYvGmpQtbMJmrsaOxc0BhlaZPe4yxZAqFRCyZStdQHEopYq4ZciLEiFOeWCLKGiQKQ9jjbQUKSil463G6QiEok4ipB5mojqSBwgJra1RJGCJSpjbZ6AOsqihaU5uGPliwI754jDEoVaAIWjvI0zM3ygAK0ZpEwCiLwlB0R84FrRQocNqRVMaZSVex3lLLEmPMbw9orS2qGPq44a4/xdslzmqGMDDXNaUU+pzJhOmH++kgl1ImoYlMXVUYYyaOlEFXE/+7Wd9ma9ecHL3HUfiQF+efkuMIZqTr18zrQ1TRaOWxVpEkIKPgXIUpGqc8osFaTyHhlcfbBi8VWRJBBZyriDkxxIFoItpqGj/HRjeNU0qzDWvU2zi7iCHlnsZMD9mKQ5mR7dCRcoexDaZYBDC6QinYxzWohtbPIQnWG5RoGtWS9AgUdt2Ouq7JecT
"text/plain": [
"<Figure size 360x360 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"show_image(im);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first axis contains the channels: red, green, and blue:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAp8AAADnCAYAAABYBj2NAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9WawkWXrf9ztLbLnftW5VdVX1vrBnelYuw+EiUxIGJEVLliDR9osBG34zDBiQAQN+MJ8N2K9+NwEZhk0DFk1JpAhJpEgNOcOZYc9M9/RS1V1d213qbrlFxnbO8cMXkdktSy0DFkZS+35AofJmRp44cU5Exv/8v///CxVC4Cqu4iqu4iqu4iqu4iqu4scR+t90B67iKq7iKq7iKq7iKq7i/z9xBT6v4iqu4iqu4iqu4iqu4scWV+DzKq7iKq7iKq7iKq7iKn5scQU+r+IqruIqruIqruIqruLHFlfg8yqu4iqu4iqu4iqu4ip+bGE/7cPmv/6bwV/O0OMhTCao8Zjw5AlEEYxGqOdegpNDwt33oa4hSVhvPx5DXaO+9ouoF9/A/72/g4oiQlHAw4fSBoAxsLcnr72X92czKAr5bD6Xz5wjlBVqNIR+H4ZDUAqmU/m815PvGgMhQFlKn5ZLcA6yDLa3UVoTqgqePpXPQdqJY0hT2W9Zwu7uph/DISSJtDudyvbDobQbAjQNDAbSh4sLyHPpi3NgDOrgAK7dhNMjwmol+zw/hyxDPfciRDHhrTelP9aC1tLXvQPC0yMZx9EY9dVfJFQlfO+PCSfHUBSo69fh+jOEN78r/c9z1J1noTcg3H0XFgtCnsthWgu3bsk4n5zAZCJ9Xi6lP97LsSWJ9MN7iGPUT3xe+nj/LpydbbazVsa7rmEwQF2/KX9fnhNOT2G1AufwswV6NCAsc1S/t2m/qmR8owj17LMQJ4TDJzJ+SsFoBGWJevZ5iGLq3/2HmEGC2t+XYw0Bjo+lrSyT+coymY/RSMa/aeS9Xg8WC3nv4gKiCPs//Jb613Yl/VsSv/XB/xJKV2K0YSuZADCvFlhtmCQTYh1TuILT1SmegEZR+ZpB1Mcoi9WGm/2bbCe73JvdBaDxNRflJbGRa1YrQ9/2CAQUCq0MebOk8Y7YRMyrBQA+OJrg6NseiUmITUwIgcpXACQmQaGITUzta3zwLOscpRRFU9CPeqQmxWpL5SpKV+KCxweHUgqrLEYbnHe44EltSulKfHAkJsEqiyewavJ1v0MIGKUpXElmU4yyBDyVq1FKTocQAoOoT2pTfPCUrgSgdCVaacbxGB885+UFGtW2rUltilV2fSz9qM9Wsk3lSo5Wx6yaHKss/WjAdrLF4+VjAoFlnXPQu0ZqU05XpxSuoGj3aZTMWwiB0pUopUhMwqrJKV2FCw6rLLGJsMrigscozXa6jVGGy2rKsu7mQ45dKYVCE2lLP+oT6Yja1yzrJZWvyBv5jcpMum5PK4NSisbX1L5BobjRv44Pnnk9X89LYhKcdwzjIUYZPpo/QCnFIBqgUSil2jkKxCamaAp6NsMjfatchVYGqw2pkfFfuYJFvcAqy994/j/6TF2z/8nf+X44mxds9RN2hjGT1PJoWmKUYrtv+cqNIT88XvLeyZKicgxSy9m8ZDKIiY1wR7/+xnUOhin/+9uHDGLDonLcO1kSmQ23dHsnQyvwAaxSTEvH5bIiiQzTXK7HonLMi5r9ccYoi9jpy/V+kTcAZJEmizVWKTyBog4sSseqaliWDVv9hGujiKoJFI3nbF5RNg6jZcpia0gjQ+08q6rh2jjlfFFRNZ5RL2KYWFa152S6opdYQpDbgNGKonZYrdkfJVyuapZFg9ZKbt1G88Jejxe2Ux7NSi5XDucDTxcVw9TypRt9ah/41oM5Ze1JIk1kNHv9iK2e5WRRs6wcz0wSfub6FoVz/NGDC85XDXnl+IlrPZ6dZPzue+ckkSavHJ876LGVWb71cMFlXpEXDQHQSvHywQAf4N3DGaMs5vZOxrx0nM5LisoRW40xisQa6sYTR5rXDvqME8P9i5KjWUkI4Hwgiw1aKRrvGSaWyCh2+pbLleNwWlI2jrJyXOYVu8OUZdkwSC1aK4yS6y0vG2rnef3miH6seTytuMwrQoBRFhGCjPGd7YTf++EJwywiiw2NC2wPEo6n8nuwPUiY5RXDLKJynmEaUTuP9wGlFDcmCU/nNYHA6azEec//+Z//5L/0ev1U8Im16F4q4CJNBcSMRmtQwmJOuLwgrFaowUCAgtECnkAAWFMT5hdy49/aEoAQAuszy1ppvwUqTCYCFpyD5ZKwKlBxRCgrfO0wHfDsQE8cC5hpGmlzNtsAE5D/td7sD+DyUgBmB1S1lnbiWNocj6W9ppG2OnBzciLHrrWA4o/vI4qkH94LEC0K+ZdlhBBQ3hGqCpVlhIuLDdDc3Sd8/zsCxFrAqp59DuIUlgtUmglgHYxAGzh+CC+8ito7gFUOL7+BSvtwekI4OkS9/BpcfwbqCnXnOcKP3gJAbW3JmDon+2oaGY+mkT5389UB6jgWUKqUjFVZyt/eb4C4c5tFhLVtm5owm8l8t23q0QC0FuD58fEKQebdGEJdo/pDGeeiEADaNPLZ9AJ18zZ2eyjvddG24S9n6G4uOtDanVtd/7Nsfc6EssJPl/+Kk//fzeiAmRZYSOFWAsS0AK9lk1O6kiY4UpNSuAKjDD4EUiMjolDkzZLSlfRsRhFkzEMILegz1L5Zg8vEJBhlUVpAow8OT8AHj1byW2C0tN2EBqMskbY0wRHrmGWdE2kBbalNKZpi3d8O4K5cQdEUawAMAvhiHVOGksxm1L4hhIBWZv152axovMNqg/MNSilckJvEZsw8Udu/whUoFKHtf+1r2YcrqX1NpCOstpyuTnG+IShDbCL6tk9AgLUA2kCsYxlbPAe9Axpfo1BMki0iHXNZXXJWnHFzcIPMZEQ6Zjvd5oPZhwCM4xGNdzKeweNCQ6wEwMc6pvGO0I6xQrdj7jBK44KjdBWNr9dzqhXrsenmRKFofMOqKdbgPm3n0wf3iXHqxlQhALjyFbGOsToiuIrC1QLu28VCYhIiHbVzrgXIoqh9Q+MbrJa+dMCz8Y7M9lqAW9OzfXzw63Nv3sz/dVwi/9ZFGhnKxpFFmmnpGCQC9Pux4e75iqN5tQaeRe2IrFxTWWxxPpA3DdNVzbLypFZzOC2pG4/VGmsUxmiqJjArGhofuDaMSa0iiQyzVc2qci3ACVit2erHTDKL0bAsBaz1Y43zgRDgsmxIraZoPNYofIDEGgLyG5xazUXerIFnCAjYijSDxHCRe8a9mNoFmhZgKRRZrDmZlVijqZ0nLxt6iaXxcg5GVnOR1zgf6KcW7wPTvMakso/aB+aljONF3uC9gONeZPmn96c4H0gjgwuB1w96aAXff5Lz/G5K7QL7/QitYFHX/OKdbTyBeVVzvZ8RGc3nrhe8dZTzk7cHjGJLz1peu5bxB+8VBGBnmAgX1e63cdLP2gWK2tE4T2xlHLVWD
FLL6byUfhay/dmy/gTo9u19zflAZNR6vC/yhrwUAJ7FAjbltZHbaukYZnI8AMM0Yl46itozTAyLQvPRyZzTyLAzTMhiy73TFYPUrs+tVdWglOw7L5tPLGaGaUQayd/aKlZVQwhgjWJVeRZFTd34Tz3vP/3+2wGiqkL1eoTpFDUYEJyTzxdTmM1wswK1qjD9VJjJLBOAlqaQpHD6RICktcIcNo20m6YbwDedCjvVAg7KkrBcokZDwmKBylJM1H5PqQ0QiiIBF+fn8r2dHWlzOt0AQrP5wQ15Ltu3r1WSyPZZJmCl3xdQBRs2tKo+waaGukGlifS3GyMQVnAwkH1qvX5ftcfTAS3iWN6/fkMAZFVtWLrdXQGeVcv8bu+ixluo175KKFdw/Vn0tVtgIkI+QyU9Ql3Ca19Ebe/BredRox1QipD1UGkG774tx+Gc9KHrs/cbcL1YyPtbW7Jf79fbhbMT6X8H8rtzAzZj1TSwWhKmFwLMu+26ee6+04H9jgGOItnWOajKjy1sFrLPrS3Ic8L
"text/plain": [
"<Figure size 864x288 with 3 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"_,axs = subplots(1,3)\n",
"for bear,ax,color in zip(im,axs,('Reds','Greens','Blues')):\n",
"    show_image(255-bear, ax=ax, cmap=color)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We saw what the convolution operation was for one filter on one channel of the image (our examples were done on a square). A convolutional layer will take an image with a certain number of channels (3 for the first layer for regular RGB color images) and output an image with a different number of channels. Like the hidden size that represented the number of neurons in a linear layer, we can decide to have as many filters as we want, and each of them will be able to specialize: some to detect horizontal edges, others to detect vertical edges, and so forth, to give something like what we studied in <<chapter_production>>.\n",
"\n",
"On one sliding window, we have a certain number of channels and we need as many filters (we don't use the same kernel for all the channels). So our kernel doesn't have a size of 3 by 3, but `ch_in` (for channels in) by 3 by 3. On each channel, we multiply the elements of our window by the elements of the corresponding filter and sum the results (as we saw before); then we add up the results over all the channels. In the following example, the result of our conv layer on that window is $y_{R} + y_{G} + y_{B}$."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"images/chapter9_rgbconv.svg\" id=\"rgbconv\" caption=\"Convolution over an RGB image\" alt=\"Convolution over an RGB image\" width=\"550\">"
]
},
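{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a minimal sketch of that window computation in plain PyTorch (the `window` and `kernel` tensors are made-up examples, not data from the book): summing the per-channel products gives the same number as `F.conv2d`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"window = torch.randn(3, 3, 3)  # a made-up ch_in x 3 x 3 image patch\n",
"kernel = torch.randn(3, 3, 3)  # one 3x3 filter per input channel\n",
"# y_R + y_G + y_B: elementwise multiply per channel, then sum everything\n",
"manual = (window * kernel).sum()\n",
"# The same computation via conv2d (adding batch and ch_out axes)\n",
"conv = F.conv2d(window[None], kernel[None])\n",
"torch.allclose(manual, conv.squeeze())"
]
},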
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So, in order to apply a convolution to a colour picture, we require a kernel tensor whose first axis matches the number of input channels. At each location, the corresponding parts of the kernel and the image patch are multiplied together.\n",
"\n",
"These are then all added together to produce a single number for each grid location, for each output feature:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"images/chapter9_rgb_conv_stack.svg\" id=\"rgbconv2\" caption=\"Adding the RGB filters\" alt=\"Adding the RGB filters\" width=\"500\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we have `ch_out` filters like this, so in the end, the result of our convolutional layer will be a batch of images with `ch_out` channels and a height and width given by the formula above. This gives us `ch_out` tensors of size `ch_in x ks x ks` that we represent in one big tensor of 4 dimensions. In PyTorch, the order of the dimensions for those weights is `ch_out x ch_in x ks x ks`.\n",
"\n",
"Additionally, we may want to have a bias for each filter. In the example above, the final result for our convolutional layer would be $y_{R} + y_{G} + y_{B} + b$ in that case. Like in a linear layer, there are as many biases as we have kernels, so the bias is a vector of size `ch_out`.\n",
"\n",
"There are no special mechanisms required when setting up a CNN for training with color images. Just make sure your first layer has 3 inputs.\n",
"\n",
"There are lots of ways of processing color images. For instance, you can change them to black and white, or change from RGB to HSV color space, and so forth. In general, it turns out experimentally that changing the encoding of colors won't make any difference to your model results, as long as you don't lose information in the transformation. So transforming to black and white is a bad idea, since it removes the color information entirely (and this can be critical; for instance a pet breed may have a distinctive color); but converting to HSV generally won't make any difference.\n",
"\n",
"Now you know what those pictures in <<chapter_intro>> of \"what a neural net learns\" from the Zeiler and Fergus paper mean! This is their picture of some of the layer 1 weights which we showed:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img alt=\"Layer 1 kernels found by Zeiler and Fergus\" width=\"120\" src=\"images/att_00031.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This shows the 3 slices of the convolutional kernel, for each output feature, displayed as images. We can see that even though the creators of the neural net never explicitly created kernels to find edges, for instance, the neural net automatically discovered these features using SGD."
]
},
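{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check of those shapes, here is a minimal sketch using PyTorch's `nn.Conv2d` (the layer sizes are arbitrary examples):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as nn  # already available via fastai's star import\n",
"\n",
"# An example layer: 3 input channels (RGB), 8 filters, 3x3 kernels\n",
"conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)\n",
"# Weights are ch_out x ch_in x ks x ks; the bias is a vector of size ch_out\n",
"conv.weight.shape, conv.bias.shape"
]
},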
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We've seen that convolutions are just a type of matrix multiplication, with two constraints on the weight matrix: some elements are always zero, and some elements are tied (forced to always have the same value). In <<chapter_intro>> we saw the eight requirements from the 1986 book *Parallel Distributed Processing*; one of them was \"A pattern of connectivity among units\". That's exactly what these constraints do: they enforce a certain pattern of connectivity.\n",
"\n",
"These constraints allow us to use far fewer parameters in our model, without sacrificing the ability to represent complex visual features. That means we can train deeper models faster, with less over-fitting. Although the universal approximation theorem shows that it should be *possible* to represent anything with a fully connected network with just one hidden layer, we've seen now that in *practice* we can train much better models by being thoughtful about network architecture.\n",
"\n",
"Convolutions are by far the most common pattern of connectivity we see in neural nets (along with regular linear layers, which we refer to as *fully connected*), but it's likely that many more will be discovered."
]
},
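{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see this concretely, here is a minimal sketch (the sizes are made-up examples) that builds the equivalent weight matrix for a small convolution, with zeros for the missing connections and the same nine kernel values tied across every row, and checks it against `F.conv2d`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"# A 3x3 kernel over a 4x4 image (no padding) gives a 2x2 output, so the\n",
"# equivalent matrix multiplication is a 4x16 matrix times the flattened image.\n",
"img    = torch.randn(4, 4)\n",
"kernel = torch.randn(3, 3)\n",
"\n",
"W = torch.zeros(4, 16)  # one row per output location; mostly zeros\n",
"for row in range(2):\n",
"    for col in range(2):\n",
"        for i in range(3):\n",
"            for j in range(3):\n",
"                # tied weights: every row reuses the same 9 kernel values\n",
"                W[row*2 + col, (row+i)*4 + (col+j)] = kernel[i, j]\n",
"\n",
"matmul_res = (W @ img.flatten()).view(2, 2)\n",
"conv_res   = F.conv2d(img[None, None], kernel[None, None]).squeeze()\n",
"torch.allclose(matmul_res, conv_res, atol=1e-6)"
]
},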
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Questionnaire"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. What is a \"feature\"?\n",
"1. Write out the convolutional kernel matrix for a top edge detector.\n",
"1. Write out the mathematical operation applied by a 3 x 3 kernel to a single pixel in an image.\n",
"1. What is the value of a convolutional kernel applied to a 3 x 3 matrix of zeros?\n",
"1. What is padding?\n",
"1. What is stride?\n",
"1. Create a nested list comprehension to complete any task that you choose.\n",
"1. What are the shapes of the input and weight parameters to PyTorch's 2D convolution?\n",
"1. What is a channel?\n",
"1. What is the relationship between a convolution and a matrix multiplication?\n",
"1. What is a convolutional neural network?\n",
"1. What is the benefit of refactoring parts of your neural network definition?\n",
"1. What is `Flatten`? Where does it need to be included in the MNIST CNN? Why?\n",
"1. What does \"NCHW\" mean?\n",
"1. Why does the third layer of the MNIST CNN have `7*7*(1168-16)` multiplications?\n",
"1. What is a receptive field?\n",
"1. What is the size of the receptive field of an activation after two stride 2 convolutions? Why?\n",
"1. Run `conv-example.xlsx` yourself and experiment with \"trace precedents\".\n",
"1. Have a look at Jeremy or Sylvain's list of recent Twitter \"like\"s, and see if you find any interesting resources or ideas there.\n",
"1. How is a color image represented as a tensor?\n",
"1. How does a convolution work with a color input?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Further research"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. What features other than edge detectors have been used in computer vision (especially before deep learning became popular)?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"jupytext": {
"split_at_heading": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}