From dcaf783f7ce180e779983885faff168936cf50c7 Mon Sep 17 00:00:00 2001
From: Hiroya Chiba
Date: Sat, 30 Sep 2017 22:09:21 +0900
Subject: [PATCH] typos

---
 6.1-one-hot-encoding-of-words-or-characters.ipynb | 2 +-
 6.1-using-word-embeddings.ipynb                   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/6.1-one-hot-encoding-of-words-or-characters.ipynb b/6.1-one-hot-encoding-of-words-or-characters.ipynb
index 7538c0183e..2138492d0a 100644
--- a/6.1-one-hot-encoding-of-words-or-characters.ipynb
+++ b/6.1-one-hot-encoding-of-words-or-characters.ipynb
@@ -155,7 +155,7 @@
     "samples = ['The cat sat on the mat.', 'The dog ate my homework.']\n",
     "\n",
     "# We create a tokenizer, configured to only take\n",
-    "# into account the top-1000 most common on words\n",
+    "# into account the top-1000 most common words\n",
     "tokenizer = Tokenizer(num_words=1000)\n",
     "# This builds the word index\n",
     "tokenizer.fit_on_texts(samples)\n",
diff --git a/6.1-using-word-embeddings.ipynb b/6.1-using-word-embeddings.ipynb
index 48baf32924..8596b2a218 100644
--- a/6.1-using-word-embeddings.ipynb
+++ b/6.1-using-word-embeddings.ipynb
@@ -589,7 +589,7 @@
     "Additionally, we freeze the embedding layer (we set its `trainable` attribute to `False`), following the same rationale as what you are \n",
     "already familiar with in the context of pre-trained convnet features: when parts of a model are pre-trained (like our `Embedding` layer), \n",
     "and parts are randomly initialized (like our classifier), the pre-trained parts should not be updated during training to avoid forgetting \n",
-    "what they already know. The large gradient updated triggered by the randomly initialized layers would be very disruptive to the already \n",
+    "what they already know. The large gradient update triggered by the randomly initialized layers would be very disruptive to the already \n",
     "learned features."
    ]
   },
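
For reference, the rationale corrected in the second hunk (freeze a pre-trained `Embedding` layer so that large gradient updates from randomly initialized layers do not destroy its weights) corresponds to a pattern like the minimal Keras sketch below. The vocabulary size, embedding dimension, sequence length, classifier layers, and the placeholder `embedding_matrix` are illustrative assumptions, not values taken from the patched notebooks:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Embedding, Flatten, Dense

    max_words = 10000     # vocabulary size (assumed for illustration)
    embedding_dim = 100   # dimensionality of the pre-trained vectors (assumed)
    maxlen = 100          # padded sequence length (assumed)

    # Placeholder for a pre-built matrix of pre-trained word vectors, e.g. from GloVe.
    embedding_matrix = np.zeros((max_words, embedding_dim))

    model = Sequential()
    model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
    model.add(Flatten())
    model.add(Dense(32, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))

    # Load the pre-trained vectors into the Embedding layer, then freeze it:
    # the large gradient updates triggered by the randomly initialized Dense
    # layers should not overwrite what the embeddings already encode.
    model.layers[0].set_weights([embedding_matrix])
    model.layers[0].trainable = False

    model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])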