diff --git a/01_intro.ipynb b/01_intro.ipynb
index b0a720e..6313999 100644
--- a/01_intro.ipynb
+++ b/01_intro.ipynb
@@ -1273,7 +1273,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "It's not too hard to imagine what the model might look like for a checkers program. There might be a range of checkers strategies encoded, and some kind of search mechanism, and then the weights could vary how strategies are selected, what parts of the board are focused on during a search, and so forth. But it's not at all obvious what the model might look like for an image recognition program, or for understanding text, or for many other interestings problems we might imagein.\n",
+    "It's not too hard to imagine what the model might look like for a checkers program. There might be a range of checkers strategies encoded, and some kind of search mechanism, and then the weights could vary how strategies are selected, what parts of the board are focused on during a search, and so forth. But it's not at all obvious what the model might look like for an image recognition program, or for understanding text, or for many other interesting problems we might imagine.\n",
     "\n",
     "What we would like is some kind of function that is so flexible that it could be used to solve any given problem, just by varying its weights. Amazingly enough, this function actually exists! It's the neural network, which we already discussed. That is, if you regard a neural network as a mathematical function, it turns out to be a function which is extremely flexible depending on its weights. A mathematical proof called the *universal approximation theorem* shows that this function can solve any problem to any level of accuracy, in theory. The fact that neural networks are so flexible means that, in practice, they are often a suitable kind of model, and you can focus your effort on the process of training them, that is, of finding good weight assignments.\n",
     "\n",
@@ -1297,7 +1297,7 @@
     "\n",
     "Let's now try to fit our image classification problem into Samuel's framework.\n",
     "\n",
-    "Our inputs, those are the images. Our weights, those are the weights in the neural net. Our model is a neural net. Ou results those are the values that are calculated by the neural net.\n",
+    "Our inputs, those are the images. Our weights, those are the weights in the neural net. Our model is a neural net. Our results, those are the values that are calculated by the neural net.\n",
     "\n",
     "So now we just need some *automatic means of testing the effectiveness of any current weight assignment in terms of actual performance*. Well that's easy enough: we can see how accurate our model is at predicting the correct answers! So put this all together, and we have an image recognizer."
    ]
@@ -1315,7 +1315,7 @@
    "source": [
     "Our picture is almost complete.\n",
     "\n",
-    "All that remains is to add this last concept, of measuring a model's performance by comparing wit the correct answer, and to update some of its terminology to match the usage of 2020 instead of 1961.\n",
+    "All that remains is to add this last concept, of measuring a model's performance by comparing with the correct answer, and to update some of its terminology to match the usage of 2020 instead of 1961.\n",
     "\n",
     "Here is the modern deep learning terminology for all the pieces we have discussed:\n",
     "\n",
@@ -2184,7 +2184,7 @@