Mirror of https://github.com/fastai/fastbook.git, synced 2025-04-04 01:40:44 +00:00

commit 111e6c5b1c ("clean")
parent 2f153dd6e7
@@ -564,7 +564,7 @@
    "split_at_heading": true
   },
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
    "name": "python3"
   }
@@ -121,7 +121,6 @@
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "key = os.environ.get('AZURE_SEARCH_KEY', 'XXX')"
   ]
  },
@@ -701,4 +700,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 4
 }
@@ -620,7 +620,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "def mse(preds, targets): return ((preds-targets)**2).mean().sqrt()"
+   "def mse(preds, targets): return ((preds-targets)**2).mean()"
   ]
  },
  {
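The hunk above drops the stray `.sqrt()`, so `mse` now returns the mean squared error its name promises rather than the RMSE. A quick standalone check in plain PyTorch (the tensor values are made up for illustration):

```python
import torch

def mse(preds, targets):
    # mean squared error, as in the corrected cell
    return ((preds - targets) ** 2).mean()

preds = torch.tensor([2.0, 4.0])
targets = torch.tensor([1.0, 2.0])
# squared errors are 1 and 4, so the mean is 2.5
print(mse(preds, targets).item())  # → 2.5
```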
@@ -975,7 +975,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "weights[0] *= 1.0001"
+   "with torch.no_grad(): weights[0] *= 1.0001"
   ]
  },
  {
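The hunk above wraps the in-place weight update in `torch.no_grad()`: mutating a leaf tensor that requires gradients is an error unless autograd tracking is suspended. A minimal sketch of the pattern, using a made-up two-element stand-in for the notebook's `weights`:

```python
import torch

# a tiny stand-in for the notebook's parameter tensor
weights = torch.tensor([1.0, 2.0], requires_grad=True)

# an in-place update of a leaf tensor that requires grad must happen
# outside autograd tracking, hence the no_grad context
with torch.no_grad():
    weights[0] *= 1.0001

print(weights[0].item())  # close to 1.0001; gradient tracking is untouched
```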
@@ -20,7 +20,6 @@
   "source": [
    "#hide\n",
    "from fastbook import *\n",
-   "from kaggle import api\n",
    "from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype\n",
    "from fastai.tabular.all import *\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
@@ -95,7 +94,8 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "path = URLs.path('bluebook')\n",
+   "comp = 'bluebook-for-bulldozers'\n",
+   "path = URLs.path(comp)\n",
    "path"
   ]
  },
@@ -115,10 +115,12 @@
   "metadata": {},
   "outputs": [],
   "source": [
+   "from kaggle import api\n",
+   "\n",
    "if not path.exists():\n",
    "    path.mkdir(parents=True)\n",
-   "    api.competition_download_cli('bluebook-for-bulldozers', path=path)\n",
-   "    file_extract(path/'bluebook-for-bulldozers.zip')\n",
+   "    api.competition_download_cli(comp, path=path)\n",
+   "    shutil.unpack_archive(str(path/f'{comp}.zip'), str(path))\n",
    "\n",
    "path.ls(file_type='text')"
   ]
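The hunk above swaps fastai's `file_extract` for the standard-library `shutil.unpack_archive`. A self-contained sketch of that call, building a throwaway zip in a temp directory instead of downloading from Kaggle (all paths here are made up for illustration):

```python
import shutil, tempfile, zipfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# build a small zip to stand in for the Kaggle download
src = tmp/'data.txt'
src.write_text('hello')
with zipfile.ZipFile(tmp/'comp.zip', 'w') as zf:
    zf.write(src, 'data.txt')

# unpack the archive into a target directory, as the updated cell does
dest = tmp/'extracted'
dest.mkdir()
shutil.unpack_archive(str(tmp/'comp.zip'), str(dest))
print((dest/'data.txt').read_text())  # → hello
```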
@@ -1398,7 +1400,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
    "name": "python3"
   }
@@ -668,7 +668,7 @@
    "split_at_heading": true
   },
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
    "name": "python3"
   }
@@ -701,7 +701,7 @@
   "source": [
    "1. If the dataset for your project is so big and complicated that working with it takes a significant amount of time, what should you do?\n",
    "1. Why do we concatenate the documents in our dataset before creating a language model?\n",
-   "1. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make to ou model?\n",
+   "1. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make to our model?\n",
    "1. How can we share a weight matrix across multiple layers in PyTorch?\n",
    "1. Write a module that predicts the third word given the previous two words of a sentence, without peeking.\n",
    "1. What is a recurrent neural network?\n",
@@ -725,13 +725,13 @@
    "1. Why does it help to have two hidden states in the LSTM architecture? What is the purpose of each one?\n",
    "1. What are these two states called in an LSTM?\n",
    "1. What is tanh, and how is it related to sigmoid?\n",
-   "1. What is the purpose of this code in `LSTMCell`: `h = torch.stack([h, input], dim=1)`\n",
+   "1. What is the purpose of this code in `LSTMCell`: `h = torch.cat([h, input], dim=1)`\n",
    "1. What does `chunk` do in PyTorch?\n",
    "1. Study the refactored version of `LSTMCell` carefully to ensure you understand how and why it does the same thing as the non-refactored version.\n",
    "1. Why can we use a higher learning rate for `LMModel6`?\n",
    "1. What are the three regularization techniques used in an AWD-LSTM model?\n",
    "1. What is \"dropout\"?\n",
-   "1. Why do we scale the weights with dropout? Is this applied during training, inference, or both?\n",
+   "1. Why do we scale the activations with dropout? Is this applied during training, inference, or both?\n",
    "1. What is the purpose of this line from `Dropout`: `if not self.training: return x`\n",
    "1. Experiment with `bernoulli_` to understand how it works.\n",
    "1. How do you set your model in training mode in PyTorch? In evaluation mode?\n",
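The corrected question above refers to `torch.cat`, which joins tensors along an existing dimension, whereas `torch.stack` creates a new one; joining the hidden state with the input in an LSTM cell therefore needs `cat`. A quick shape check, with made-up batch and feature sizes:

```python
import torch

h = torch.zeros(64, 50)    # hypothetical batch of hidden states
inp = torch.zeros(64, 30)  # hypothetical batch of embedded inputs

# cat joins along an existing dim: (64, 50) + (64, 30) -> (64, 80)
joined = torch.cat([h, inp], dim=1)
print(joined.shape)        # torch.Size([64, 80])

# stack would instead insert a new dim, and requires matching shapes
stacked = torch.stack([h, h], dim=1)
print(stacked.shape)       # torch.Size([64, 2, 50])
```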
@@ -753,7 +753,7 @@
   "source": [
    "1. In `LMModel2`, why can `forward` start with `h=0`? Why don't we need to say `h=torch.zeros(...)`?\n",
    "1. Write the code for an LSTM from scratch (you may refer to <<lstm>>).\n",
-   "1. Search the internet for the GRU architecture and implement it from scratch, and try training a model. See if you can get results similar to those we saw in this chapter. Compare you results to the results of PyTorch's built in `GRU` module.\n",
+   "1. Search the internet for the GRU architecture and implement it from scratch, and try training a model. See if you can get results similar to those we saw in this chapter. Compare your results to the results of PyTorch's built in `GRU` module.\n",
    "1. Take a look at the source code for AWD-LSTM in fastai, and try to map each of the lines of code to the concepts shown in this chapter."
   ]
  },
@@ -770,7 +770,7 @@
    "split_at_heading": true
   },
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
    "name": "python3"
   }
@@ -757,8 +757,8 @@
   "source": [
    "def conv(ni, nf, ks=3, act=True):\n",
    "    layers = [nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)]\n",
-   "    layers.append(nn.BatchNorm2d(nf))\n",
    "    if act: layers.append(nn.ReLU())\n",
+   "    layers.append(nn.BatchNorm2d(nf))\n",
    "    return nn.Sequential(*layers)"
   ]
  },
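The hunk above moves `nn.BatchNorm2d` to after the activation. A standalone version of the updated `conv` helper with a shape check on random input (the batch and channel sizes are arbitrary):

```python
import torch
from torch import nn

def conv(ni, nf, ks=3, act=True):
    # stride-2 conv halves the spatial size; BatchNorm now follows the activation
    layers = [nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)]
    if act: layers.append(nn.ReLU())
    layers.append(nn.BatchNorm2d(nf))
    return nn.Sequential(*layers)

x = torch.randn(8, 1, 28, 28)  # e.g. a batch of MNIST-sized images
out = conv(1, 4)(x)
print(out.shape)               # torch.Size([8, 4, 14, 14])
```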
@@ -789,15 +789,6 @@
    "learn = fit(5, lr=0.1)"
   ]
  },
- {
-  "cell_type": "code",
-  "execution_count": null,
-  "metadata": {},
-  "outputs": [],
-  "source": [
-   "learn = fit(5, lr=0.1)"
-  ]
- },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -156,7 +156,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "head = create_head(512*4, 2, ps=0.5)"
+   "head = create_head(512*2, 2, ps=0.5)"
   ]
  },
  {
@@ -409,7 +409,7 @@
    "1. How can you get the list of events available to you when writing a callback?\n",
    "1. Write the `ModelResetter` callback (without peeking).\n",
    "1. How can you access the necessary attributes of the training loop inside a callback? When can you use or not use the shortcuts that go with them?\n",
-   "1. How can a callback influence the control flow of the training loop.\n",
+   "1. How can a callback influence the control flow of the training loop?\n",
    "1. Write the `TerminateOnNaN` callback (without peeking, if possible).\n",
    "1. How do you make sure your callback runs after or before another callback?"
   ]
@@ -427,7 +427,7 @@
   "source": [
    "1. Look up the \"Rectified Adam\" paper, implement it using the general optimizer framework, and try it out. Search for other recent optimizers that work well in practice, and pick one to implement.\n",
    "1. Look at the mixed-precision callback with the documentation. Try to understand what each event and line of code does.\n",
-   "1. Implement your own version of ther learning rate finder from scratch. Compare it with fastai's version.\n",
+   "1. Implement your own version of the learning rate finder from scratch. Compare it with fastai's version.\n",
    "1. Look at the source code of the callbacks that ship with fastai. See if you can find one that's similar to what you're looking to do, to get some inspiration."
   ]
  },
|
@ -12,16 +12,6 @@
|
||||
"fastbook.setup_book()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#hide\n",
|
||||
"from fastai.gen_doc.nbdoc import *"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|