Merge pull request #112 from alvarotap/patch-11

Some typos in chapter 10
Sylvain Gugger 2020-04-16 09:08:21 -04:00 committed by GitHub
commit f37af7f8fb

@@ -103,7 +103,7 @@
 "\n",
 "- **Tokenization**:: convert the text into a list of words (or characters, or substrings, depending on the granularity of your model)\n",
 "- **Numericalization**:: make a list of all of the unique words which appear (the vocab), and convert each word into a number, by looking up its index in the vocab\n",
-"- **Language model data loader** creation:: fastai provides an `LMDataLoader` class which automatically handles creating a dependent variable which is offset from the independent variable buy one token. It also handles some important details, such as how to shuffle the training data in such a way that the dependent and independent variables maintain their structure as required\n",
+"- **Language model data loader** creation:: fastai provides an `LMDataLoader` class which automatically handles creating a dependent variable which is offset from the independent variable by one token. It also handles some important details, such as how to shuffle the training data in such a way that the dependent and independent variables maintain their structure as required\n",
 "- **Language model** creation:: we need a special kind of model which does something we haven't seen before: handles input lists which could be arbitrarily big or small. There are a number of ways to do this; in this chapter we will be using a *recurrent neural network*. We will get to the details of this in the <<chapter_nlp_dive>>, but for now, you can think of it as just another deep neural network.\n",
 "\n",
 "Let's take a look at how each step works in detail."
@@ -347,7 +347,7 @@
 "\n",
 "Here is a brief summary of what each does:\n",
 "\n",
-"- `fix_html`:: replace special HTML characters by a readable version (IMDb reviwes have quite a few of them for instance) ;\n",
+"- `fix_html`:: replace special HTML characters by a readable version (IMDb reviews have quite a few of them for instance) ;\n",
 "- `replace_rep`:: replace any character repeated three times or more by a special token for repetition (xxrep), the number of times it's repeated, then the character ;\n",
 "- `replace_wrep`:: replace any word repeated three times or more by a special token for word repetition (xxwrep), the number of times it's repeated, then the word ;\n",
 "- `spec_add_spaces`:: add spaces around / and # ;\n",
@@ -1276,7 +1276,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As we have seen at the beginning of this chapter to train a state-of-the-art text classifier using transfer learning will take two steps: first we need to fine-tune our langauge model pretrained on Wikipedia to the corpus of IMDb reviews, then we can use that model to train a classifier.\n",
+"As we have seen at the beginning of this chapter to train a state-of-the-art text classifier using transfer learning will take two steps: first we need to fine-tune our language model pretrained on Wikipedia to the corpus of IMDb reviews, then we can use that model to train a classifier.\n",
 "\n",
 "As usual, let's start with assembling our data."
 ]
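The two steps described in the corrected cell map onto fastai's high-level text API roughly as follows. This is a sketch under assumptions: the argument values are illustrative and the encoder name 'finetuned' is hypothetical, so the notebook's actual cells may differ.

    from fastai.text.all import *

    path = untar_data(URLs.IMDB)

    # Step 1: fine-tune the Wikipedia-pretrained language model on IMDb text.
    dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
    learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=accuracy)
    learn_lm.fit_one_cycle(1, 2e-2)
    learn_lm.save_encoder('finetuned')

    # Step 2: train a classifier that reuses the fine-tuned encoder.
    dls_clas = TextDataLoaders.from_folder(path, valid='test', text_vocab=dls_lm.vocab)
    learn_clas = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
    learn_clas.load_encoder('finetuned')
    learn_clas.fit_one_cycle(1, 2e-2)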
@@ -1515,7 +1515,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can them finetune the model after unfreezing:"
+"We can then finetune the model after unfreezing:"
 ]
 },
 {
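In fastai, the unfreeze-then-finetune step this cell introduces looks roughly like the following, continuing from the hypothetical learn_lm sketched earlier (the epoch count and learning rate are illustrative):

    # Unfreeze every layer group, then train the whole model further.
    learn_lm.unfreeze()
    learn_lm.fit_one_cycle(10, 2e-3)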
@@ -2251,6 +2251,18 @@
 "display_name": "Python 3",
 "language": "python",
 "name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.7.4"
 }
 },
 "nbformat": 4,