mirror of https://github.com/fastai/fastbook.git
synced 2025-04-04 18:00:48 +00:00

commit 9dcac4a83a (parent ca670e712b)

    fix
@@ -36,7 +36,7 @@
     "\n",
     "You will see bits in the text like this: \"TK: figure showing bla here\" or \"TK: expand introduction\". \"TK\" is used to mark places where we know something is missing and we will add it. This does not alter any of the core content, as those are usually small parts/figures that are relatively independent from the flow and self-explanatory.\n",
     "\n",
-    "Throughout the book, the version of the fastai library used is version 2. That version is not yet officially released and is for now separate from the main project. You can find it [here](https://github.com/fastai/fastai2). TK book website mention here also https://book.fast.ai"
+    "Throughout the book, the version of the fastai library used is version 2. That version is not yet officially released and is for now separate from the main project. You can find it [here](https://github.com/fastai/fastai2)."
    ]
   },
   {
@@ -159,13 +159,6 @@
     "An MIT professor named Marvin Minsky (who was a grade behind Rosenblatt at the same high school!) along with Seymour Papert wrote a book, called \"Perceptrons\", about Rosenblatt's invention. They showed that a single layer of these devices was unable to learn some simple, critical mathematical functions (such as XOR). In the same book, they also showed that using multiple layers of the devices would allow these limitations to be addressed. Unfortunately, only the first of these insights was widely recognized, as a result of which the global academic community nearly entirely gave up on neural networks for the next two decades."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "`<<<<<<< HEAD`"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -1569,7 +1562,7 @@
     "\n",
     "There are many different architectures in fastai, which we will be learning about in this book, as well as discussing how to create your own. Most of the time, however, picking an architecture isn't a very important part of the deep learning process. It's something that academics love to talk about, but in practice it is unlikely to be something you need to spend much time on. There are some standard architectures that work most of the time, and in this case we're using one called _ResNet_ that we'll be learning a lot about during the book; it is both fast and accurate for many datasets and problems. The \"34\" in `resnet34` refers to the number of layers in this variant of the architecture (other options are \"18\", \"50\", \"101\", and \"152\"). Models using architectures with more layers take longer to train, and are more prone to overfitting (i.e. you can't train them for as many epochs before the accuracy on the validation set starts getting worse). On the other hand, when using more data, they can be quite a bit more accurate.\n",
     "\n",
-    "A *metric* is a function that is called to measure how good the model is, using the validation set, and will be printed at the end of each *epoch*. In this case, we're using `error_rate`, which is a function provided by fastai which does just what it says: tells you what percentage of images in the validation set are being classified incorrectly. Another common metric for classification is `accuracy` (which is just `1.0 - error_rate`). fastai provides many more, which will be discussed throughout this book. TK? worth mentioning an error in label in validation set will show as if in error_rate of model?"
+    "A *metric* is a function that is called to measure how good the model is, using the validation set, and will be printed at the end of each *epoch*. In this case, we're using `error_rate`, which is a function provided by fastai which does just what it says: tells you what percentage of images in the validation set are being classified incorrectly. Another common metric for classification is `accuracy` (which is just `1.0 - error_rate`). fastai provides many more, which will be discussed throughout this book."
    ]
   },
   {
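The hunk above describes the relationship between `error_rate` and `accuracy` on the validation set. As a minimal sketch of that relationship (a hypothetical pure-Python stand-in, not fastai's actual tensor-based implementation, which operates on prediction and target tensors):

```python
def error_rate(preds, targs):
    """Fraction of items whose highest-scoring predicted class differs from the label.

    preds: list of per-class score lists, one per validation item.
    targs: list of integer class labels.
    """
    wrong = sum(
        1
        for scores, label in zip(preds, targs)
        if max(range(len(scores)), key=scores.__getitem__) != label
    )
    return wrong / len(targs)


# Four validation items, two classes; one prediction is wrong.
preds = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]]
targs = [0, 1, 1, 1]
print(error_rate(preds, targs))        # 0.25
print(1.0 - error_rate(preds, targs))  # accuracy: 0.75
```

This also illustrates why a mislabeled validation item inflates `error_rate`: the metric compares predictions against the stored labels, so a correct prediction on a wrongly labeled item counts as an error.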
@@ -2822,8 +2815,8 @@
     "1. What were the two theoretical misunderstandings that held back the field of neural networks?\n",
     "1. What is a GPU?\n",
     "1. Open a notebook and execute a cell containing: `1+1`. What happens?\n",
-    "1. Follow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, anticipate what will happen.\n",
-    "1. Complete the Jupyter Notebook online appendix. TK link?\n",
+    "1. Follow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, guess what will happen.\n",
+    "1. Complete the Jupyter Notebook online appendix.\n",
     "1. Why is it hard to use a traditional computer program to recognize images in a photo?\n",
     "1. What did Samuel mean by \"Weight Assignment\"?\n",
     "1. What term do we normally use in deep learning for what Samuel called \"Weights\"?\n",