fix inofmration to information
Vijayabhaskar 2020-03-15 21:17:52 +05:30 committed by GitHub
parent b2f1c12d4c
commit e140ccd217

@@ -3621,7 +3621,7 @@
"source": [
"Now that we have a loss function which is suitable to drive SGD, we can consider some of the details involved in the next phase of the learning process, which is to *step* (i.e., change or update) the weights based on the gradients. This is called an optimisation step.\n",
"\n",
"In order to take an optimiser step we need to calculate the loss over one or more data items. How many should we use? We could calculate it for the whole dataset, and take the average, or we could calculate it for a single data item. But neither of these is ideal. Calculating it for the whole dataset would take a very long time. Calculating it for a single item would not use much inofmration, and so it would result in a very imprecise and unstable gradient. That is, you'd be going to the trouble of updating the weights but taking into account only how that would improve the model's performance on that single item.\n",
"In order to take an optimiser step we need to calculate the loss over one or more data items. How many should we use? We could calculate it for the whole dataset, and take the average, or we could calculate it for a single data item. But neither of these is ideal. Calculating it for the whole dataset would take a very long time. Calculating it for a single item would not use much information, and so it would result in a very imprecise and unstable gradient. That is, you'd be going to the trouble of updating the weights but taking into account only how that would improve the model's performance on that single item.\n",
"\n",
"So instead we take a compromise between the two: we calculate the average loss for a few data items at a time. This is called a *mini-batch*. The number of data items in the mini-batch is called the *batch size*. A larger batch size means that you will get a more accurate and stable estimate of your dataset's gradient on the loss function, but it will take longer, and you will get fewer mini-batches per epoch. Choosing a good batch size is one of the decisions you need to make as a deep learning practitioner to train your model quickly and accurately. We will talk about how to make this choice throughout this book.\n",
"\n",