diff --git a/14_resnet.ipynb b/14_resnet.ipynb
index 130af48..c8ddf26 100644
--- a/14_resnet.ipynb
+++ b/14_resnet.ipynb
@@ -117,7 +117,7 @@
    "source": [
     "When we looked at MNIST we were dealing with 28×28-pixel images. For Imagenette we are going to be training with 128×128-pixel images. Later, we would like to be able to use larger images as well—at least as big as 224×224 pixels, the ImageNet standard. Do you recall how we managed to get a single vector of activations for each image out of the MNIST convolutional neural network?\n",
     "\n",
-    "The approach we used was to ensure that there were enough stride-2convolutions such that the final layer would have a grid size of 1. Then we just flattened out the unit axes that we ended up with, to get a vector for each image (so, a matrix of activations for a mini-batch). We could do the same thing for Imagenette, but that's would cause two problems:\n",
+    "The approach we used was to ensure that there were enough stride-2 convolutions such that the final layer would have a grid size of 1. Then we just flattened out the unit axes that we ended up with, to get a vector for each image (so, a matrix of activations for a mini-batch). We could do the same thing for Imagenette, but that would cause two problems:\n",
     "\n",
     "- We'd need lots of stride-2 layers to make our grid 1×1 at the end—perhaps more than we would otherwise choose.\n",
     "- The model would not work on images of any size other than the size we originally trained on.\n",
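
For context on the paragraph being corrected, here is a minimal sketch of the approach it describes, assuming plain PyTorch; the conv helper, channel sizes, and layer count are illustrative and not the book's exact MNIST architecture.

import torch
import torch.nn as nn

def conv(ni, nf):
    # Hypothetical helper (not from the notebook): a stride-2 convolution
    # followed by ReLU, which halves the spatial grid each time it is applied.
    return nn.Sequential(nn.Conv2d(ni, nf, kernel_size=3, stride=2, padding=1),
                         nn.ReLU())

# The MNIST-style head: keep halving 28x28 until the grid is 1x1, then
# flatten away the unit axes to get one vector of activations per image.
mnist_style_cnn = nn.Sequential(
    conv(1, 8),    # 28x28 -> 14x14
    conv(8, 16),   # 14x14 -> 7x7
    conv(16, 32),  # 7x7   -> 4x4
    conv(32, 64),  # 4x4   -> 2x2
    conv(64, 10),  # 2x2   -> 1x1
    nn.Flatten(),  # (bs, 10, 1, 1) -> (bs, 10)
)

x = torch.randn(64, 1, 28, 28)   # a mini-batch of 28x28 images
print(mnist_style_cnn(x).shape)  # torch.Size([64, 10])

# With 128x128 inputs the same five stride-2 layers only get down to a 4x4
# grid, so Flatten yields (bs, 160) rather than one 10-activation vector per
# image -- which is why the text goes on to look for a different head.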