Commit

Merge branch 'master' into estimator_example
bozhou committed May 23, 2019
2 parents c55b534 + 417f466 commit 09de6f7
Showing 8 changed files with 5,949 additions and 0 deletions.
425 changes: 425 additions & 0 deletions keras/2.1-a-first-look-at-a-neural-network.ipynb

Large diffs are not rendered by default.

1,740 changes: 1,740 additions & 0 deletions keras/3.5-classifying-movie-reviews.ipynb

Large diffs are not rendered by default.

554 changes: 554 additions & 0 deletions keras/3.6-classifying-newswires.ipynb

Large diffs are not rendered by default.

797 changes: 797 additions & 0 deletions keras/3.7-predicting-house-prices.ipynb

Large diffs are not rendered by default.

711 changes: 711 additions & 0 deletions keras/4.4-overfitting-and-underfitting.ipynb

Large diffs are not rendered by default.

300 changes: 300 additions & 0 deletions keras/5.1-introduction-to-convnets.ipynb
@@ -0,0 +1,300 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First of all, set environment variables and initialize spark context:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"env: SPARK_DRIVER_MEMORY=8g\n",
"env: PYSPARK_PYTHON=/usr/bin/python3.5\n",
"env: PYSPARK_DRIVER_PYTHON=/usr/bin/python3.5\n"
]
}
],
"source": [
"%env SPARK_DRIVER_MEMORY=8g\n",
"%env PYSPARK_PYTHON=/usr/bin/python3.5\n",
"%env PYSPARK_DRIVER_PYTHON=/usr/bin/python3.5\n",
"\n",
"from zoo.common.nncontext import *\n",
"sc = init_nncontext(init_spark_conf().setMaster(\"local[4]\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 5.1 - Introduction to convnets\n",
"\n",
"\n",
"----\n",
"\n",
"First, let's take a practical look at a very simple convnet example. We will use our convnet to classify MNIST digits, a task that you've already been \n",
"through in Chapter 2, using a densely-connected network (our test accuracy then was 97.8%). Even though our convnet will be very basic, its \n",
"accuracy will still blow out of the water that of the densely-connected model from Chapter 2.\n",
"\n",
"The 6 lines of code below show you what a basic convnet looks like. It's a stack of `Conv2D` and `MaxPooling2D` layers. We'll see in a \n",
"minute what they do concretely.\n",
"Importantly, a convnet takes as input tensors of shape `(image_height, image_width, image_channels)` (not including the batch dimension). \n",
"In our case, we will configure our convnet to process inputs of size `(28, 28, 1)`, which is the format of MNIST images. We do this via \n",
"passing the argument `input_shape=(28, 28, 1)` to our first layer."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"creating: createZooKerasSequential\n",
"creating: createZooKerasConvolution2D\n",
"creating: createZooKerasMaxPooling2D\n",
"creating: createZooKerasConvolution2D\n",
"creating: createZooKerasMaxPooling2D\n",
"creating: createZooKerasConvolution2D\n"
]
}
],
"source": [
"from zoo.pipeline.api.keras import layers\n",
"from zoo.pipeline.api.keras import models\n",
"\n",
"model = models.Sequential()\n",
"model.add(layers.Conv2D(32, nb_col=3, nb_row=3, activation='relu', input_shape=(1,28,28)))\n",
"model.add(layers.MaxPooling2D((2, 2)))\n",
"model.add(layers.Conv2D(64, nb_col=3, nb_row=3, activation='relu'))\n",
"model.add(layers.MaxPooling2D((2, 2)))\n",
"model.add(layers.Conv2D(64, nb_col=3, nb_row=3, activation='relu'))\n",
"\n",
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_In Keras one could see model summary directly in output, in Keras API of Analytics Zoo, summary is printed in console, the same as INFO._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From summary you can see that the output of every `Conv2D` and `MaxPooling2D` layer is a 3D tensor of shape `(height, width, channels)`. The width \n",
"and height dimensions tend to shrink as we go deeper in the network. The number of channels is controlled by the first argument passed to \n",
"the `Conv2D` layers (e.g. 32 or 64).\n",
"\n",
"The next step would be to feed our last output tensor (of shape `(3, 3, 64)`) into a densely-connected classifier network like those you are \n",
"already familiar with: a stack of `Dense` layers. These classifiers process vectors, which are 1D, whereas our current output is a 3D tensor. \n",
"So first, we will have to flatten our 3D outputs to 1D, and then add a few `Dense` layers on top:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"creating: createZooKerasFlatten\n",
"creating: createZooKerasDense\n",
"creating: createZooKerasDense\n"
]
},
{
"data": {
"text/plain": [
"<zoo.pipeline.api.keras.models.Sequential at 0x7f81700d54a8>"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.add(layers.Flatten())\n",
"model.add(layers.Dense(64, activation='relu'))\n",
"model.add(layers.Dense(10, activation='softmax'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network \n",
"looks like:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, our `(3, 3, 64)` outputs were flattened into vectors of shape `(576,)`, before going through two `Dense` layers.\n",
"\n",
"Now, let's train our convnet on the MNIST digits. We will reuse a lot of the code we have already covered in the MNIST example from Chapter \n",
"2."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### CNN input shape\n",
"_Once we get the dataset, we need to reshape the images. In Keras the shape of the dataset is `(sample_size, height, width, channel)`, like the Keras code below:\n",
" \n",
" train_images = train_images.reshape((60000, 28, 28, 1))\n",
"In Keras API of Analytics Zoo, the default order is theano-style NCHW `(sample_size, channel, height, width)`, so you can process data like following:\n",
"\n",
"Alternatively, you can also use tensorflow-style NHWC as Keras default just by setting `Convolution2D(dim_ordering=\"tf\")`"
]
},
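{
"cell_type": "markdown",
"metadata": {},
"source": [
"_A minimal sketch of the NHWC alternative mentioned above, assuming `dim_ordering=\"tf\"` is passed to each layer as described; the names `model_tf` and `train_images_tf` are purely illustrative._"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative channels-last (NHWC) sketch; assumes dim_ordering='tf' is accepted as noted above.\n",
"from zoo.pipeline.api.keras import layers\n",
"from zoo.pipeline.api.keras import models\n",
"\n",
"model_tf = models.Sequential()\n",
"# Channels-last MNIST input: (height, width, channels) = (28, 28, 1)\n",
"model_tf.add(layers.Conv2D(32, nb_col=3, nb_row=3, activation='relu',\n",
"                           input_shape=(28, 28, 1), dim_ordering='tf'))\n",
"model_tf.add(layers.MaxPooling2D((2, 2), dim_ordering='tf'))\n",
"\n",
"# The data would then be reshaped channels-last rather than channels-first:\n",
"# train_images_tf = train_images.reshape((60000, 28, 28, 1))\n"
]
},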
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using TensorFlow backend.\n"
]
}
],
"source": [
"from keras.datasets import mnist\n",
"(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n",
"\n",
"train_images = train_images.reshape((60000, 1, 28, 28))\n",
"train_images = train_images.astype('float32') / 255\n",
"\n",
"test_images = test_images.reshape((10000, 1, 28, 28))\n",
"test_images = test_images.astype('float32') / 255"
]
},
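{
"cell_type": "markdown",
"metadata": {},
"source": [
"_An optional sanity check: both splits should now be channel-first `float32` arrays._"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Expected: (60000, 1, 28, 28) float32 and (10000, 1, 28, 28) float32\n",
"print(train_images.shape, train_images.dtype)\n",
"print(test_images.shape, test_images.dtype)\n"
]
},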
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"creating: createRMSprop\n",
"creating: createZooKerasSparseCategoricalCrossEntropy\n",
"creating: createZooKerasSparseCategoricalAccuracy\n"
]
}
],
"source": [
"model.compile(optimizer='rmsprop',\n",
" loss='sparse_categorical_crossentropy',\n",
" metrics=['acc'])\n",
"\n",
"model.fit(train_images, train_labels, nb_epoch=5, batch_size=64)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Trained 64 records in 0.03212866 seconds. Throughput is 1991.9911 records/second. Loss is 0.0023578003."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"test_loss, test_acc = model.evaluate(test_images, test_labels)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.9912999868392944"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"test_acc"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"While our densely-connected network from Chapter 2 had a test accuracy of 97.8%, our basic convnet has a test accuracy of 99.1%: we \n",
"decreased our error rate by over 50% (relative). Not bad! "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}