
Commit

add nlp tutorial
HwangJaeYoung committed Apr 18, 2022
1 parent 25eaa4e commit d086b93
Showing 3 changed files with 3 additions and 0 deletions.
1 change: 1 addition & 0 deletions nlp/IMDB_for_text_classfication.ipynb
@@ -0,0 +1 @@
{"cells":[{"cell_type":"code","execution_count":null,"metadata":{"id":"PQQBUxGeOeFl"},"outputs":[],"source":["import numpy as np\n","import matplotlib.pyplot as plt\n","from tensorflow.keras.datasets import imdb"]},{"cell_type":"markdown","metadata":{"id":"WC2xyYXVch9M"},"source":["- IMDB 리뷰 데이터는 기존 데이터 셋과는 달리 이미 훈련 데이터와 테스트 데이터를 50:50 비율로 구분해서 제공\n","- imdb.data_load()의 인자로 num_words를 사용하면 이 데이터에서 등장 빈도 순위로 몇 등까지의 단어를 사용할 것인지를 의미한다.\n","- 예를들어 10,000을 넣으면, 등장 빈도 순위가 1~10,000에 해당하는 단어만 사용한다."]},{"cell_type":"code","execution_count":null,"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"elapsed":4772,"status":"ok","timestamp":1650125301372,"user":{"displayName":"JaeYoung Hwang","userId":"08071223562055378805"},"user_tz":-540},"id":"QxF2Y1sBOjX-","outputId":"f31b56ba-810c-4bbf-b5e8-74aeb248142d"},"outputs":[{"output_type":"stream","name":"stdout","text":["Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz\n","17465344/17464789 [==============================] - 0s 0us/step\n","17473536/17464789 [==============================] - 0s 0us/step\n","훈련용 리뷰 개수: 25000\n","테스트용 리뷰 개수: 25000\n","카테고리: 2\n"]}],"source":["(X_train, y_train), (X_test, y_test) = imdb.load_data()\n","\n","print('훈련용 리뷰 개수: {}'.format(len(X_train)))\n","print('테스트용 리뷰 개수: {}'.format(len(X_test)))\n","num_classes = len(set(y_train))\n","print('카테고리: {}'.format(num_classes))"]},{"cell_type":"code","execution_count":null,"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"elapsed":8,"status":"ok","timestamp":1650125301373,"user":{"displayName":"JaeYoung Hwang","userId":"08071223562055378805"},"user_tz":-540},"id":"KMZ_ULG_ezYz","outputId":"e83f8662-34d5-49d1-e72f-501f954a8c2c"},"outputs":[{"output_type":"stream","name":"stdout","text":["첫번째 훈련용 리뷰 : [1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 22665, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 21631, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 19193, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 10311, 8, 4, 107, 117, 5952, 15, 256, 4, 31050, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 12118, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]\n","첫번째 훈련용 리뷰의 레이블 : 1\n"]}],"source":["print('첫번째 훈련용 리뷰 :', X_train[0])\n","print('첫번째 훈련용 리뷰의 레이블 :', y_train[0])"]},{"cell_type":"markdown","source":["- 케라스의 Embedding()은 단어 각각에 대해 정수로 변환된 입력에 대해서 임베딩 작업을 수행한다.\n","\n","- 단어 각각에 정수를 부여하는 방법으로는 단어를 빈도수 순대로 정렬하고 순차적으로 정수를 부여하는 방법이 있다. 로이터 뉴스와 IMDB 리뷰 데이터는 방법을 사용하였으며 이미 이 작업이 끝난 상태이다.\n","\n","- 등장 빈도 순으로 단어를 정렬하여 정수를 부여하였을 때의 장점은 등장 빈도수가 적은 단어의 제거이다. 예를 들어서 25,000개의 단어가 있다고 가정하고, 해당 단어를 등장 빈도수 순가 높은 순서로 1부터 25,000까지 정수를 부여했다고 하자. 
- Keras' Embedding() performs the embedding step on input in which each word has already been converted to an integer.
- One way to assign an integer to each word is to sort the words by frequency and number them in order. The Reuters news and IMDB review datasets use this method, and that work has already been done.
- The advantage of assigning integers by frequency rank is that low-frequency words become easy to remove. For example, suppose there are 25,000 words, numbered from 1 to 25,000 in order of decreasing frequency. Because these integers are in effect frequency ranks, removing every word mapped to an integer greater than 1,000 during preprocessing leaves exactly the 1,000 most frequent words.

```python
# Limit the vocabulary size to 10,000 and pad the reviews to a maximum length of 500
import re
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, GRU, Embedding
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.models import load_model

vocab_size = 10000
max_len = 500

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocab_size)

X_train = pad_sequences(X_train, maxlen=max_len)
X_test = pad_sequences(X_test, maxlen=max_len)
```
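To make the padding behaviour concrete, here is a small illustrative snippet (not from the original notebook): sequences shorter than maxlen are left-padded with zeros, and longer ones are truncated from the front, which are the Keras defaults (padding='pre', truncating='pre').

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

toy = [[1, 2, 3], [4, 5, 6, 7, 8]]
# The short sequence gains leading zeros; the long one loses its first element.
print(pad_sequences(toy, maxlen=4))
# [[0 1 2 3]
#  [5 6 7 8]]
```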
```python
embedding_dim = 100
hidden_units = 128

model = Sequential()
model.add(Embedding(vocab_size, embedding_dim))
model.add(GRU(hidden_units))
model.add(Dense(1, activation='sigmoid'))

es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=4)
mc = ModelCheckpoint('GRU_model.h5', monitor='val_acc', mode='max', verbose=1, save_best_only=True)

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(X_train, y_train, epochs=15, callbacks=[es, mc], batch_size=64, validation_split=0.2)
```

Output:

```
Epoch 1/15
312/313 [============================>.] - ETA: 0s - loss: 0.5099 - acc: 0.7586
Epoch 1: val_acc improved from -inf to 0.76940, saving model to GRU_model.h5
313/313 [==============================] - 16s 31ms/step - loss: 0.5098 - acc: 0.7585 - val_loss: 0.4884 - val_acc: 0.7694
Epoch 2/15
312/313 [============================>.] - ETA: 0s - loss: 0.3308 - acc: 0.8698
Epoch 2: val_acc improved from 0.76940 to 0.87560, saving model to GRU_model.h5
313/313 [==============================] - 9s 29ms/step - loss: 0.3306 - acc: 0.8698 - val_loss: 0.3097 - val_acc: 0.8756
Epoch 3/15
311/313 [============================>.] - ETA: 0s - loss: 0.2543 - acc: 0.9031
Epoch 3: val_acc improved from 0.87560 to 0.88160, saving model to GRU_model.h5
313/313 [==============================] - 9s 29ms/step - loss: 0.2543 - acc: 0.9032 - val_loss: 0.3290 - val_acc: 0.8816
Epoch 4/15
311/313 [============================>.] - ETA: 0s - loss: 0.2096 - acc: 0.9218
Epoch 4: val_acc improved from 0.88160 to 0.88700, saving model to GRU_model.h5
313/313 [==============================] - 10s 30ms/step - loss: 0.2097 - acc: 0.9216 - val_loss: 0.2762 - val_acc: 0.8870
Epoch 5/15
312/313 [============================>.] - ETA: 0s - loss: 0.1634 - acc: 0.9416
Epoch 5: val_acc did not improve from 0.88700
313/313 [==============================] - 9s 29ms/step - loss: 0.1639 - acc: 0.9414 - val_loss: 0.3014 - val_acc: 0.8750
Epoch 6/15
311/313 [============================>.] - ETA: 0s - loss: 0.1326 - acc: 0.9523
Epoch 6: val_acc improved from 0.88700 to 0.88940, saving model to GRU_model.h5
313/313 [==============================] - 9s 29ms/step - loss: 0.1327 - acc: 0.9522 - val_loss: 0.2905 - val_acc: 0.8894
Epoch 7/15
312/313 [============================>.] - ETA: 0s - loss: 0.1038 - acc: 0.9631
Epoch 7: val_acc improved from 0.88940 to 0.89380, saving model to GRU_model.h5
313/313 [==============================] - 9s 30ms/step - loss: 0.1037 - acc: 0.9632 - val_loss: 0.2838 - val_acc: 0.8938
Epoch 8/15
312/313 [============================>.] - ETA: 0s - loss: 0.0764 - acc: 0.9742
Epoch 8: val_acc did not improve from 0.89380
313/313 [==============================] - 9s 29ms/step - loss: 0.0764 - acc: 0.9742 - val_loss: 0.3494 - val_acc: 0.8840
Epoch 8: early stopping
```

```python
loaded_model = load_model('GRU_model.h5')
print("\n Test accuracy: %.4f" % (loaded_model.evaluate(X_test, y_test)[1]))
```

Output:

```
782/782 [==============================] - 8s 9ms/step - loss: 0.3145 - acc: 0.8847

 Test accuracy: 0.8847
```
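A natural follow-up, not included in this commit, is to score a raw-text review with the loaded model. The sketch below assumes the same default index_from=3 offset as above; predict_sentiment is a hypothetical helper, and its crude regex tokenizer only approximates the preprocessing used to build the dataset:

```python
def predict_sentiment(text):
    # Hypothetical helper (not in the original notebook).
    word_index = imdb.get_word_index()
    tokens = re.sub(r'[^a-z0-9 ]', '', text.lower()).split()
    encoded = [1]  # start-of-sequence token
    for word in tokens:
        index = word_index.get(word, -1) + 3  # missing words fall through to 2 below
        encoded.append(index if 2 < index < vocab_size else 2)
    padded = pad_sequences([encoded], maxlen=max_len)
    # Sigmoid output: values near 1 mean positive, near 0 mean negative.
    return float(loaded_model.predict(padded)[0][0])

print(predict_sentiment('this movie was a wonderful surprise'))
```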