Update 17_foundations.ipynb
parent 62ac21d085
commit 7d2ae8e167
@@ -1774,7 +1774,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We've seen that PyTorch computes all the gradient we need with a magic call to `loss.backward`, but let's explore what's happening behind the scenes.\n",
+    "We've seen that PyTorch computes all the gradients we need with a magic call to `loss.backward`, but let's explore what's happening behind the scenes.\n",
     "\n",
     "Now comes the part where we need to compute the gradients of the loss with respect to all the weights of our model, so all the floats in `w1`, `b1`, `w2`, and `b2`. For this, we will need a bit of math—specifically the *chain rule*. This is the rule of calculus that guides how we can compute the derivative of a composed function:\n",
     "\n",
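For context, the one-word fix above ("gradient" to "gradients") sits in a cell describing backpropagation: `loss.backward` walks the computation graph and applies the chain rule, (g ∘ f)'(x) = g'(f(x)) · f'(x), at each step. Below is a minimal sketch of what that cell is talking about, using the parameter names `w1`, `b1`, `w2`, `b2` from the notebook text; the shapes, dummy data, and loss here are illustrative assumptions, not the notebook's own code.

import torch

# Dummy inputs and targets, just to have something to differentiate.
x = torch.randn(200, 100)
y = torch.randn(200)

# A tiny two-layer model: the leaf tensors whose gradients we want.
w1 = torch.randn(100, 50, requires_grad=True)
b1 = torch.zeros(50, requires_grad=True)
w2 = torch.randn(50, 1, requires_grad=True)
b2 = torch.zeros(1, requires_grad=True)

# Forward pass: linear layer, ReLU, linear layer, then MSE loss.
l1 = x @ w1 + b1
l2 = l1.clamp(min=0.)          # ReLU
out = l2 @ w2 + b2
loss = ((out.squeeze(-1) - y) ** 2).mean()

# The "magic call": autograd applies the chain rule backward through
# the graph, filling in .grad for every leaf that requires it.
loss.backward()
print(w1.grad.shape, b1.grad.shape, w2.grad.shape, b2.grad.shape)

After `loss.backward()` returns, each parameter's `.grad` attribute holds the derivative of the loss with respect to that parameter, which is exactly the plural "gradients" the commit's wording fix points at.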
|