fastbook/clean/10_nlp.ipynb

{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"!pip install -Uqq fastbook\n",
"import fastbook\n",
"fastbook.setup_book()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#hide\n",
"from fastbook import *\n",
"from IPython.display import display,HTML"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NLP Deep Dive: RNNs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Text Preprocessing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tokenization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Word Tokenization with fastai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text.all import *\n",
"path = untar_data(URLs.IMDB)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"files = get_text_files(path, folders = ['train', 'test', 'unsup'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"txt = files[0].open().read(); txt[:75]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"spacy = WordTokenizer()\n",
"toks = first(spacy([txt]))\n",
"print(coll_repr(toks, 30))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"first(spacy(['The U.S. dollar $1 is $1.00.']))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tkn = Tokenizer(spacy)\n",
"print(coll_repr(tkn(txt), 31))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"defaults.text_proc_rules"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"coll_repr(tkn('© Fast.ai www.fast.ai/INDEX'), 31)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Subword Tokenization"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"txts = L(o.open().read() for o in files[:2000])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def subword(sz):\n",
" sp = SubwordTokenizer(vocab_sz=sz)\n",
" sp.setup(txts)\n",
" return ' '.join(first(sp([txt]))[:40])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subword(1000)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subword(200)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subword(10000)"
]
},
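{
"cell_type": "markdown",
"metadata": {},
"source": [
"A rough way to see the trade-off: tokenize the same review at each vocabulary size and count the tokens. Larger vocabularies have longer pieces, so the count should shrink as `vocab_sz` grows (this sketch retrains the tokenizer three times, so it takes a little while)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: number of subword tokens for the same review at each vocab size\n",
"for sz in (200, 1000, 10000):\n",
"    sp = SubwordTokenizer(vocab_sz=sz)\n",
"    sp.setup(txts)\n",
"    print(sz, len(first(sp([txt]))))"
]
},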
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Numericalization with fastai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"toks = tkn(txt)\n",
"print(coll_repr(tkn(txt), 31))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"toks200 = txts[:200].map(tkn)\n",
"toks200[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num = Numericalize()\n",
"num.setup(toks200)\n",
"coll_repr(num.vocab,20)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nums = num(toks)[:20]; nums"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"' '.join(num.vocab[o] for o in nums)"
]
},
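{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal check of what happens to out-of-vocabulary tokens: `num` was set up on only 200 reviews, so any token missing from its vocab should map to the special `xxunk` token (the nonsense token below is made up purely for illustration)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a made-up token that is surely absent from the vocab\n",
"unk_idx = int(num(['xyzzyfrobnicate'])[0])\n",
"unk_idx, num.vocab[unk_idx]"
]
},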
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Putting Our Texts into Batches for a Language Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stream = \"In this chapter, we will go back over the example of classifying movie reviews we studied in chapter 1 and dig deeper under the surface. First we will look at the processing steps necessary to convert text into numbers and how to customize it. By doing this, we'll have another example of the PreProcessor used in the data block API.\\nThen we will study how we build a language model and train it for a while.\"\n",
"tokens = tkn(stream)\n",
"bs,seq_len = 6,15\n",
"d_tokens = np.array([tokens[i*seq_len:(i+1)*seq_len] for i in range(bs)])\n",
"df = pd.DataFrame(d_tokens)\n",
"display(HTML(df.to_html(index=False,header=None)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bs,seq_len = 6,5\n",
"d_tokens = np.array([tokens[i*15:i*15+seq_len] for i in range(bs)])\n",
"df = pd.DataFrame(d_tokens)\n",
"display(HTML(df.to_html(index=False,header=None)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bs,seq_len = 6,5\n",
"d_tokens = np.array([tokens[i*15+seq_len:i*15+2*seq_len] for i in range(bs)])\n",
"df = pd.DataFrame(d_tokens)\n",
"display(HTML(df.to_html(index=False,header=None)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bs,seq_len = 6,5\n",
"d_tokens = np.array([tokens[i*15+10:i*15+15] for i in range(bs)])\n",
"df = pd.DataFrame(d_tokens)\n",
"display(HTML(df.to_html(index=False,header=None)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nums200 = toks200.map(num)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dl = LMDataLoader(nums200)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x,y = first(dl)\n",
"x.shape,y.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"' '.join(num.vocab[o] for o in x[0][:20])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"' '.join(num.vocab[o] for o in y[0][:20])"
]
},
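{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dependent variable should be the independent variable offset by one token, which we can check directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the targets are the inputs shifted one position to the left\n",
"(x[0][1:] == y[0][:-1]).all()"
]
},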
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training a Text Classifier"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Language Model Using DataBlock"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"get_imdb = partial(get_text_files, folders=['train', 'test', 'unsup'])\n",
"\n",
"dls_lm = DataBlock(\n",
" blocks=TextBlock.from_folder(path, is_lm=True),\n",
" get_items=get_imdb, splitter=RandomSplitter(0.1)\n",
").dataloaders(path, path=path, bs=128, seq_len=80)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dls_lm.show_batch(max_n=2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fine-Tuning the Language Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = language_model_learner(\n",
" dls_lm, AWD_LSTM, drop_mult=0.3, \n",
" metrics=[accuracy, Perplexity()]).to_fp16()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.fit_one_cycle(1, 2e-2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Saving and Loading Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.save('1epoch')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = learn.load('1epoch')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.unfreeze()\n",
"learn.fit_one_cycle(10, 2e-3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.save_encoder('finetuned')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Text Generation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"TEXT = \"I liked this movie because\"\n",
"N_WORDS = 40\n",
"N_SENTENCES = 2\n",
"preds = [learn.predict(TEXT, N_WORDS, temperature=0.75) \n",
" for _ in range(N_SENTENCES)]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"\\n\".join(preds))"
]
},
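{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `temperature` argument controls how random the sampling is. As a rough experiment, a lower value should stay closer to the model's most confident predictions, at the cost of more repetitive text:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# same prompt with lower temperature: expect more conservative,\n",
"# more repetitive output\n",
"print(learn.predict(TEXT, N_WORDS, temperature=0.1))"
]
},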
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating the Classifier DataLoaders"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dls_clas = DataBlock(\n",
" blocks=(TextBlock.from_folder(path, vocab=dls_lm.vocab),CategoryBlock),\n",
" get_y = parent_label,\n",
" get_items=partial(get_text_files, folders=['train', 'test']),\n",
" splitter=GrandparentSplitter(valid_name='test')\n",
").dataloaders(path, path=path, bs=128, seq_len=72)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dls_clas.show_batch(max_n=3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nums_samp = toks200[:10].map(num)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nums_samp.map(len)"
]
},
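{
"cell_type": "markdown",
"metadata": {},
"source": [
"These lengths all differ, so batching them for classification requires padding each document to the longest one in its batch; fastai does this automatically with the special `xxpad` token. A minimal sketch of the idea (assuming `xxpad` is in `num.vocab`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: pad every numericalized review in the sample to the length\n",
"# of the longest, using the index of the xxpad token\n",
"pad_idx = list(num.vocab).index('xxpad')\n",
"max_len = max(nums_samp.map(len))\n",
"padded = [list(o) + [pad_idx]*(max_len-len(o)) for o in nums_samp]\n",
"set(len(p) for p in padded)"
]
},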
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, \n",
" metrics=accuracy).to_fp16()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn = learn.load_encoder('finetuned')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fine-Tuning the Classifier"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.fit_one_cycle(1, 2e-2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.freeze_to(-2)\n",
"learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.freeze_to(-3)\n",
"learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.unfreeze()\n",
"learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3))"
]
},
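{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the fine-tuned classifier, we can predict on a made-up review; `predict` returns the decoded label, its index, and the class probabilities:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learn.predict(\"I really liked that movie!\")"
]
},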
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Disinformation and Language Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Questionnaire"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. What is \"self-supervised learning\"?\n",
"1. What is a \"language model\"?\n",
"1. Why is a language model considered self-supervised?\n",
"1. What are self-supervised models usually used for?\n",
"1. Why do we fine-tune language models?\n",
"1. What are the three steps to create a state-of-the-art text classifier?\n",
"1. How do the 50,000 unlabeled movie reviews help us create a better text classifier for the IMDb dataset?\n",
"1. What are the three steps to prepare your data for a language model?\n",
"1. What is \"tokenization\"? Why do we need it?\n",
"1. Name three different approaches to tokenization.\n",
"1. What is `xxbos`?\n",
"1. List four rules that fastai applies to text during tokenization.\n",
"1. Why are repeated characters replaced with a token showing the number of repetitions and the character that's repeated?\n",
"1. What is \"numericalization\"?\n",
"1. Why might there be words that are replaced with the \"unknown word\" token?\n",
"1. With a batch size of 64, the first row of the tensor representing the first batch contains the first 64 tokens for the dataset. What does the second row of that tensor contain? What does the first row of the second batch contain? (Careful—students often get this one wrong! Be sure to check your answer on the book's website.)\n",
"1. Why do we need padding for text classification? Why don't we need it for language modeling?\n",
"1. What does an embedding matrix for NLP contain? What is its shape?\n",
"1. What is \"perplexity\"?\n",
"1. Why do we have to pass the vocabulary of the language model to the classifier data block?\n",
"1. What is \"gradual unfreezing\"?\n",
"1. Why is text generation always likely to be ahead of automatic identification of machine-generated texts?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Further Research"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. See what you can learn about language models and disinformation. What are the best language models today? Take a look at some of their outputs. Do you find them convincing? How could a bad actor best use such a model to create conflict and uncertainty?\n",
"1. Given the limitation that models are unlikely to be able to consistently recognize machine-generated texts, what other approaches may be needed to handle large-scale disinformation campaigns that leverage deep learning?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"jupytext": {
"split_at_heading": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}