"Now that we know how to build up pretty much anything from scratch, let's use that knowledge to create entirely new (and very useful!) functionality: the *class activation map*. In the process, we'll learn about one handy feature of PyTorch we haven't seen before, the *hook*, and we'll apply many of the concepts classes we've learned in the rest of the book. If you want to really test out your understanding of the material in this book, after you've finished this chapter, try putting the book aside, and recreate the ideas here yourself from scratch (no peaking!)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CAM and hooks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Class Activation Mapping (or CAM) was introduced by Zhou et al. in [Learning Deep Features for Discriminative Localization](https://arxiv.org/abs/1512.04150). It uses the output of the last convolutional layer (just before our average pooling) together with the predictions to give us some heatmap visulaization of why the model made its decision.\n",
"\n",
"More precisely, at each position of our final convolutional layer we have has many filters as the last linear layer. We can then compute the dot product of those activations by the final weights to have, for each location on our feature map, the score of the feature that was used to make a decision.\n",
"\n",
"We're going to need a way to get access to the activations inside the model while it's training. In PyTorch this can be done with a *hook*. Hooks are PyTorch's equivalent of fastai's *callbacks*. However rather than allowing you to inject code to the training loop like a fastai Learner callback, hooks allow you to inject code into the forward and backward calculations themselves. We can attach a hook to any layer of the model, and it will be executed when we compute the outputs (forward hook) or during backpropagation (backward hook). A forward hook has to be a function that takes three things: a module, its input and its output, and it can perform any behavior you want. (fastai also provides a handy `HookCallback` that we won't cover here, so take a look at the fastai docs; it makes working with hooks a little easier.)\n",
"\n",
"We'll use the same cats and dogs model we trained in <<chapter_intro>>:"
"For CAM we want to store the activations of the last convolutional layer. We put our hook function in a class so it has a state that we can access later, and just store a copy of the output:"
"So our model is very confident this was a picture of a cat."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To do the dot product of our weight matrix (2 by number of activations) with the activations (batch size by activations by rows by cols) we use a custom einsum:"
"For each image in our batch, and for each class, we get a 7 by 7 feature map that tells us where the activations were higher vs lower. This will let us see which area of the pictures made the model take its decision.\n",
"\n",
"For instance, the model decided this animal was a cat based on those areas (note that we need to `decode` the input `x` since it's been normalized by the `DataLoader`, and we need to cast to `TensorImage` since at the time this book is written PyTorch does not maintain types when indexing--this may be fixed by the time you are reading this):"
"That's why it's usually a good idea to have the `Hook` class be a *context manager*, registering the hook when you enter it and removing it when you exit. A \"context manager\" is a Python construct that calls `__enter__` when the object is created in a `with` clause, and `__exit__` at the end of the `with` clause. For instance, this is how Python handles the `with open(...) as f:` construct that you'll often see for opening files in Python, and not requiring an explicit `close(f)` at the end."
" with torch.no_grad(): output = learn.model.eval()(x.cuda())\n",
" act = hook.stored"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"fastai provides this `Hook` class for you, as well as some other handy classes to make working with hooks easier."
]
},
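{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, fastai's `hook_output` function (in `fastai.callback.hook`) wraps this pattern. A sketch of how it might be used--treat the exact signature as an assumption and check the docs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.callback.hook import hook_output\n",
"\n",
"# hook_output registers a forward hook on the module and is usable as a\n",
"# context manager, storing the layer's output in `hook.stored`\n",
"with hook_output(learn.model[0]) as hook:\n",
"    with torch.no_grad(): learn.model.eval()(x.cuda())\n",
"    act = hook.stored"
]
},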
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Gradient CAM"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The method we just saw only lets us compute a heatmap with the last activations, since once we have our features, we have to multiply them by the last weight matrix. This won't work for inner layers in the network. A variant introduced in the paper [Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1611.07450) in 2016 uses the gradients of the final activation for the desired class: if you remember a little bit about the backward pass, the gradients of the output of the last layer with respect to the input of that layer is equal to the layer weights, since it is a linear layer.\n",
"With deeper layers, we still want the gradients, but they won't just be equal to the weights any more. We have to calculate them. The gradients of every layer are calculated for us by PyTorch during the backward pass, but they're not stored (except for tensors where `requires_grad` is `True`). We can, however, register a hook on the *backward* pass, which PyTorch will give the gradients to as a parameter, so we can store them there. We'll use a `HookBwd` class that will work like `Hook`, but intercepts and stores gradients, instead of activations:"
" def hook_func(self, m, gi, go): self.stored = go[0].detach().clone()\n",
" def __enter__(self, *args): return self\n",
" def __exit__(self, *args): self.hook.remove()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then for the class index 1 (for `True`, which is 'cat') we intercept the features of the last convolutional layer\n",
", as before, and compute the gradients of the output activation of our class. We can't just call `output.backward()`, because gradients only make sense with respect to a *scalar* (which is normally our *loss*), but `output` is a rank-2 tensor. But if we pick a single image (we'll use 0), and a single class (we'll use 1), then we *can* calculate the gradients of any weight or activation we like, with respect to that single value, using `output[0,cls].backward()`. Our hook intercepts the gradients that we'll use as weights."
"1. Look at the source code of `ActivationStats` class and see how it uses hooks.\n",
"1. Write a hook that stores the activation of a given layer in a model (without peaking, if possible).\n",
"1. Why do we call `eval` before getting the activations? Why do we use `no_grad`?\n",
"1. Use `torch.einsum` to compute the \"dog\" or \"cat\" score of each of the locations in the last activation of the body of the model.\n",
"1. How do you check which orders the categories are in (i.e. the correspondence of index->category)?\n",
"1. Why are we using `decode` when displaying the input image?\n",
"1. What is a \"context manager\"? What special methods need to be defined to create one?\n",
"1. Why can't we use plain CAM for the inner layers of a network?\n",
"1. Why do we need to hook the backward pass in order to do GradCAM?\n",
"1. Why can't we call `output.backward()` when `output` is a rank-2 tensor of output activations per image per class?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Further research"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Try removing `keepdim` and see what happens. Look up this parameter in the PyTorch docs. Why do we need it in this notebook?\n",
"1. Create a notebook like this one, but for NLP, and use it to find which words in a movie review are most significant in assessing sentiment of a particular movie review."