From 040e5cb4e270c2db8785fd2f032238081199ad92 Mon Sep 17 00:00:00 2001
From: alvarotap <61787129+alvarotap@users.noreply.github.com>
Date: Wed, 4 Mar 2020 13:40:32 +0100
Subject: [PATCH 1/4] "In order to"

Changed "other" for "order" in: "The program's inputs are values that it processes in other to products its results"
---
 01_intro.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/01_intro.ipynb b/01_intro.ipynb
index b0a720e..18daa3b 100644
--- a/01_intro.ipynb
+++ b/01_intro.ipynb
@@ -947,7 +947,7 @@
     "\n",
     "Let us take these concepts one by one, in order to understand how they fit together in practice. First, we need to understand what Samuel means by a *weight assignment*.\n",
     "\n",
-    "Weights are just variables, and a weight assignment is a particular choice of values for those variables. The program's inputs are values that it processes in other to products its results -- for instance, taking image pixels as inputs, and returning the classification \"dog\" as a result. But the program's weight assignments are other values which define how the program will operate.\n",
+    "Weights are just variables, and a weight assignment is a particular choice of values for those variables. The program's inputs are values that it processes in order to products its results -- for instance, taking image pixels as inputs, and returning the classification \"dog\" as a result. But the program's weight assignments are other values which define how the program will operate.\n",
     "\n",
     "Since they will affect the program they are in a sense another kind of input, so we will update our basic picture of <> and replace it with <> in order to take this into account:"
    ]

From 53b3de1c4fb133fe426a46b8fd21b999ab6809c2 Mon Sep 17 00:00:00 2001
From: alvarotap <61787129+alvarotap@users.noreply.github.com>
Date: Wed, 4 Mar 2020 15:37:29 +0100
Subject: [PATCH 2/4] Some typos

Some typos in chapter 1.
---
 01_intro.ipynb | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/01_intro.ipynb b/01_intro.ipynb
index b0a720e..6313999 100644
--- a/01_intro.ipynb
+++ b/01_intro.ipynb
@@ -1273,7 +1273,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "It's not too hard to imagine what the model might look like for a checkers program. There might be a range of checkers strategies encoded, and some kind of search mechanism, and then the weights could vary how strategies are selected, what parts of the board are focused on during a search, and so forth. But it's not at all obvious what the model might look like for an image recognition program, or for understanding text, or for many other interestings problems we might imagein.\n",
+    "It's not too hard to imagine what the model might look like for a checkers program. There might be a range of checkers strategies encoded, and some kind of search mechanism, and then the weights could vary how strategies are selected, what parts of the board are focused on during a search, and so forth. But it's not at all obvious what the model might look like for an image recognition program, or for understanding text, or for many other interesting problems we might imagine.\n",
     "\n",
     "What we would like is some kind of function that is so flexible that it could be used to solve any given problem, just by varying its weights. Amazingly enough, this function actually exists! It's the neural network, which we already discussed. That is, if you regard a neural network as a mathematical function, it turns out to be a function which is extremely flexible depending on its weights. A mathematical proof called the *universal approximation theorem* shows that this function can solve any problem to any level of accuracy, in theory. The fact that neural networks are so flexible means that, in practice, they are often a suitable kind of model, and you can focus your effort on the process of training them, that is, of finding good weight assignments.\n",
     "\n",
@@ -1297,7 +1297,7 @@
     "\n",
     "Let's now try to fit our image classification problem into Samuel's framework.\n",
     "\n",
-    "Our inputs, those are the images. Our weights, those are the weights in the neural net. Our model is a neural net. Ou results those are the values that are calculated by the neural net.\n",
+    "Our inputs, those are the images. Our weights, those are the weights in the neural net. Our model is a neural net. Our results those are the values that are calculated by the neural net.\n",
     "\n",
     "So now we just need some *automatic means of testing the effectiveness of any current weight assignment in terms of actual performance*. Well that's easy enough: we can see how accurate our model is at predicting the correct answers! So put this all together, and we have an image recognizer."
    ]
   },
@@ -1315,7 +1315,7 @@
    "source": [
     "Our picture is almost complete.\n",
     "\n",
-    "All that remains is to add this last concept, of measuring a model's performance by comparing wit the correct answer, and to update some of its terminology to match the usage of 2020 instead of 1961.\n",
+    "All that remains is to add this last concept, of measuring a model's performance by comparing with the correct answer, and to update some of its terminology to match the usage of 2020 instead of 1961.\n",
     "\n",
     "Here is the modern deep learning terminology for all the pieces we have discussed:\n",
     "\n",
@@ -2184,7 +2184,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "This model is using the IMDb dataset from the paper [Learning Word Vectors for Sentiment Analysis]((https://ai.stanford.edu/~amaas/data/sentiment/)). It works well with movie reviews of many thousands of words. But let's test it out on a very short one, to see it do its thing:"
+    "This model is using the IMDb dataset from the paper [Learning Word Vectors for Sentiment Analysis]((https://ai.stanford.edu/~amaas/data/sentiment/)). It works well with movie reviews of many thousands of words. But let's test it out on a very short one, to see it does its thing:"
    ]
   },
   {

From a343b3f185726508fc44a790d16a812ef08a70d6 Mon Sep 17 00:00:00 2001
From: Jordi Villar
Date: Wed, 4 Mar 2020 16:59:33 +0100
Subject: [PATCH 3/4] Fix nlp minor typo

---
 10_nlp.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/10_nlp.ipynb b/10_nlp.ipynb
index 4077abf..2945da1 100644
--- a/10_nlp.ipynb
+++ b/10_nlp.ipynb
@@ -880,7 +880,7 @@
    "source": [
     "#hide\n",
     "stream = \"In this chapter, we will go back over the example of classifying movie reviews we studied in chapter 1 and dig deeper under the surface. First we will look at the processing steps necessary to convert text into numbers and how to customize it. By doing this, we'll have another example of the PreProcessor used in the data block API.\\nThen we will study how we build a language model and train it for a while.\"\n",
-    "tokens = tfm(stream)\n",
+    "tokens = tkn(stream)\n",
     "bs,seq_len = 6,15\n",
     "d_tokens = np.array([tokens[i*seq_len:(i+1)*seq_len] for i in range(bs)])\n",
     "df = pd.DataFrame(d_tokens)\n",

From 6eaae1338c1b7207a14f5c379f3dc240333685d7 Mon Sep 17 00:00:00 2001
From: Jordi Villar
Date: Wed, 4 Mar 2020 17:00:19 +0100
Subject: [PATCH 4/4] Add missing requirement

---
 requirements.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/requirements.txt b/requirements.txt
index 98b3041..2cc571b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6,3 +6,4 @@ nbdev>=0.2.12
 pandas
 scikit_learn
 azure-cognitiveservices-search-imagesearch
+sentencepiece