{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "4embtkV0pNxM"
},
"source": [
"Deep Learning with TensorFlow\n",
"=============\n",
"\n",
"Credits: Forked from [TensorFlow](https://github.com/tensorflow/tensorflow) by Google\n",
"\n",
"Setup\n",
"------------\n",
"\n",
"Refer to the [setup instructions](https://github.com/donnemartin/data-science-ipython-notebooks/tree/feature/deep-learning/deep-learning/tensor-flow-exercises/README.md).\n",
"\n",
"Exercise 4\n",
"------------\n",
"\n",
"Previously in `2_fullyconnected.ipynb` and `3_regularization.ipynb`, we trained fully connected networks to classify [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) characters.\n",
"\n",
"The goal of this exercise is make the neural network convolutional."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"collapsed": true,
"id": "tm2CQN_Cpwj0"
},
"outputs": [],
"source": [
"# These are all the modules we'll be using later. Make sure you can import them\n",
"# before proceeding further.\n",
"import cPickle as pickle\n",
"import numpy as np\n",
"import tensorflow as tf"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"colab_type": "code",
"collapsed": false,
"executionInfo": {
"elapsed": 11948,
"status": "ok",
"timestamp": 1446658914837,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"id": "y3-cj1bpmuxc",
"outputId": "016b1a51-0290-4b08-efdb-8c95ffc3cd01"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training set (200000, 28, 28) (200000,)\n",
"Validation set (10000, 28, 28) (10000,)\n",
"Test set (18724, 28, 28) (18724,)\n"
]
}
],
"source": [
"pickle_file = 'notMNIST.pickle'\n",
"\n",
"with open(pickle_file, 'rb') as f:\n",
" save = pickle.load(f)\n",
" train_dataset = save['train_dataset']\n",
" train_labels = save['train_labels']\n",
" valid_dataset = save['valid_dataset']\n",
" valid_labels = save['valid_labels']\n",
" test_dataset = save['test_dataset']\n",
" test_labels = save['test_labels']\n",
" del save # hint to help gc free up memory\n",
" print 'Training set', train_dataset.shape, train_labels.shape\n",
" print 'Validation set', valid_dataset.shape, valid_labels.shape\n",
" print 'Test set', test_dataset.shape, test_labels.shape"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "L7aHrm6nGDMB"
},
"source": [
"Reformat into a TensorFlow-friendly shape:\n",
"- convolutions need the image data formatted as a cube (width by height by #channels)\n",
"- labels as float 1-hot encodings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"colab_type": "code",
"collapsed": false,
"executionInfo": {
"elapsed": 11952,
"status": "ok",
"timestamp": 1446658914857,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"id": "IRSyYiIIGIzS",
"outputId": "650a208c-8359-4852-f4f5-8bf10e80ef6c"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training set (200000, 28, 28, 1) (200000, 10)\n",
"Validation set (10000, 28, 28, 1) (10000, 10)\n",
"Test set (18724, 28, 28, 1) (18724, 10)\n"
]
}
],
"source": [
"image_size = 28\n",
"num_labels = 10\n",
"num_channels = 1 # grayscale\n",
"\n",
"import numpy as np\n",
"\n",
"def reformat(dataset, labels):\n",
" dataset = dataset.reshape(\n",
" (-1, image_size, image_size, num_channels)).astype(np.float32)\n",
" labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
" return dataset, labels\n",
"train_dataset, train_labels = reformat(train_dataset, train_labels)\n",
"valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n",
"test_dataset, test_labels = reformat(test_dataset, test_labels)\n",
"print 'Training set', train_dataset.shape, train_labels.shape\n",
"print 'Validation set', valid_dataset.shape, valid_labels.shape\n",
"print 'Test set', test_dataset.shape, test_labels.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"collapsed": true,
"id": "AgQDIREv02p1"
},
"outputs": [],
"source": [
"def accuracy(predictions, labels):\n",
" return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n",
" / predictions.shape[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "5rhgjmROXu2O"
},
"source": [
"Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"collapsed": true,
"id": "IZYv70SvvOan"
},
"outputs": [],
"source": [
"batch_size = 16\n",
"patch_size = 5\n",
"depth = 16\n",
"num_hidden = 64\n",
"\n",
"graph = tf.Graph()\n",
"\n",
"with graph.as_default():\n",
"\n",
" # Input data.\n",
" tf_train_dataset = tf.placeholder(\n",
" tf.float32, shape=(batch_size, image_size, image_size, num_channels))\n",
" tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" layer1_weights = tf.Variable(tf.truncated_normal(\n",
" [patch_size, patch_size, num_channels, depth], stddev=0.1))\n",
" layer1_biases = tf.Variable(tf.zeros([depth]))\n",
" layer2_weights = tf.Variable(tf.truncated_normal(\n",
" [patch_size, patch_size, depth, depth], stddev=0.1))\n",
" layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))\n",
" layer3_weights = tf.Variable(tf.truncated_normal(\n",
" [image_size / 4 * image_size / 4 * depth, num_hidden], stddev=0.1))\n",
" layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))\n",
" layer4_weights = tf.Variable(tf.truncated_normal(\n",
" [num_hidden, num_labels], stddev=0.1))\n",
" layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))\n",
" \n",
" # Model.\n",
" def model(data):\n",
" conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')\n",
" hidden = tf.nn.relu(conv + layer1_biases)\n",
" conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')\n",
" hidden = tf.nn.relu(conv + layer2_biases)\n",
" shape = hidden.get_shape().as_list()\n",
" reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])\n",
" hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n",
" return tf.matmul(hidden, layer4_weights) + layer4_biases\n",
" \n",
" # Training computation.\n",
" logits = model(tf_train_dataset)\n",
" loss = tf.reduce_mean(\n",
" tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n",
" \n",
" # Optimizer.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(model(tf_valid_dataset))\n",
" test_prediction = tf.nn.softmax(model(tf_test_dataset))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 37
}
]
},
"colab_type": "code",
"collapsed": false,
"executionInfo": {
"elapsed": 63292,
"status": "ok",
"timestamp": 1446658966251,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"id": "noKFb2UovVFR",
"outputId": "28941338-2ef9-4088-8bd1-44295661e628"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Initialized\n",
"Minibatch loss at step 0 : 3.51275\n",
"Minibatch accuracy: 6.2%\n",
"Validation accuracy: 12.8%\n",
"Minibatch loss at step 50 : 1.48703\n",
"Minibatch accuracy: 43.8%\n",
"Validation accuracy: 50.4%\n",
"Minibatch loss at step 100 : 1.04377\n",
"Minibatch accuracy: 68.8%\n",
"Validation accuracy: 67.4%\n",
"Minibatch loss at step 150 : 0.601682\n",
"Minibatch accuracy: 68.8%\n",
"Validation accuracy: 73.0%\n",
"Minibatch loss at step 200 : 0.898649\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 77.8%\n",
"Minibatch loss at step 250 : 1.3637\n",
"Minibatch accuracy: 56.2%\n",
"Validation accuracy: 75.4%\n",
"Minibatch loss at step 300 : 1.41968\n",
"Minibatch accuracy: 62.5%\n",
"Validation accuracy: 76.0%\n",
"Minibatch loss at step 350 : 0.300648\n",
"Minibatch accuracy: 81.2%\n",
"Validation accuracy: 80.2%\n",
"Minibatch loss at step 400 : 1.32092\n",
"Minibatch accuracy: 56.2%\n",
"Validation accuracy: 80.4%\n",
"Minibatch loss at step 450 : 0.556701\n",
"Minibatch accuracy: 81.2%\n",
"Validation accuracy: 79.4%\n",
"Minibatch loss at step 500 : 1.65595\n",
"Minibatch accuracy: 43.8%\n",
"Validation accuracy: 79.6%\n",
"Minibatch loss at step 550 : 1.06995\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 81.2%\n",
"Minibatch loss at step 600 : 0.223684\n",
"Minibatch accuracy: 100.0%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 650 : 0.619602\n",
"Minibatch accuracy: 87.5%\n",
"Validation accuracy: 81.8%\n",
"Minibatch loss at step 700 : 0.812091\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 82.4%\n",
"Minibatch loss at step 750 : 0.276302\n",
"Minibatch accuracy: 87.5%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 800 : 0.450241\n",
"Minibatch accuracy: 81.2%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 850 : 0.137139\n",
"Minibatch accuracy: 93.8%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 900 : 0.52664\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 82.2%\n",
"Minibatch loss at step 950 : 0.623835\n",
"Minibatch accuracy: 87.5%\n",
"Validation accuracy: 82.1%\n",
"Minibatch loss at step 1000 : 0.243114\n",
"Minibatch accuracy: 93.8%\n",
"Validation accuracy: 82.9%\n",
"Test accuracy: 90.0%\n"
]
}
],
"source": [
"num_steps = 1001\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" tf.initialize_all_variables().run()\n",
" print \"Initialized\"\n",
" for step in xrange(num_steps):\n",
" offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n",
" batch_data = train_dataset[offset:(offset + batch_size), :, :, :]\n",
" batch_labels = train_labels[offset:(offset + batch_size), :]\n",
" feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n",
" _, l, predictions = session.run(\n",
" [optimizer, loss, train_prediction], feed_dict=feed_dict)\n",
" if (step % 50 == 0):\n",
" print \"Minibatch loss at step\", step, \":\", l\n",
" print \"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels)\n",
" print \"Validation accuracy: %.1f%%\" % accuracy(\n",
" valid_prediction.eval(), valid_labels)\n",
" print \"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "KedKkn4EutIK"
},
"source": [
"---\n",
"Problem 1\n",
"---------\n",
"\n",
"The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides a max pooling operation (`nn.max_pool()`) of stride 2 and kernel size 2.\n",
"\n",
"---"
]
},
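{
"cell_type": "markdown",
"metadata": {},
"source": [
"A possible starting point for Problem 1 (a sketch, not the official solution): keep both convolutions at stride 1 and follow each one with a `tf.nn.max_pool()` of kernel size 2 and stride 2, so the feature maps are still reduced to `image_size // 4` by `image_size // 4` and the rest of the graph is unchanged. The sketch assumes the `layerN_weights` / `layerN_biases` variables defined above and is meant to replace `model` inside the `with graph.as_default():` block; `model_with_pooling` is just an illustrative name."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Sketch for Problem 1: stride-1 convolutions, each followed by 2x2 max pooling\n",
"# with stride 2. Intended as a drop-in replacement for `model` in the graph cell above.\n",
"def model_with_pooling(data):\n",
"  conv = tf.nn.conv2d(data, layer1_weights, [1, 1, 1, 1], padding='SAME')\n",
"  hidden = tf.nn.relu(conv + layer1_biases)\n",
"  pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n",
"  conv = tf.nn.conv2d(pool, layer2_weights, [1, 1, 1, 1], padding='SAME')\n",
"  hidden = tf.nn.relu(conv + layer2_biases)\n",
"  pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n",
"  shape = pool.get_shape().as_list()\n",
"  reshape = tf.reshape(pool, [shape[0], shape[1] * shape[2] * shape[3]])\n",
"  hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n",
"  return tf.matmul(hidden, layer4_weights) + layer4_biases"
]
},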
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "klf21gpbAgb-"
},
"source": [
"---\n",
"Problem 2\n",
"---------\n",
"\n",
"Try to get the best performance you can using a convolutional net. Look for example at the classic [LeNet5](http://yann.lecun.com/exdb/lenet/) architecture, adding Dropout, and/or adding learning rate decay.\n",
"\n",
"---"
]
},
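{
"cell_type": "markdown",
"metadata": {},
"source": [
"A possible starting point for Problem 2 (a sketch under stated assumptions, not a tuned solution): apply dropout to the fully connected hidden layer during training only, and let the learning rate decay with `tf.train.exponential_decay`. The snippet reuses the `layerN_weights` / `layerN_biases` variables and the `loss` tensor from the graph cell above; `model_with_dropout`, `keep_prob`, and `global_step` are names introduced here for illustration, and for dropout to take effect the training `logits` would also need to be rebuilt with `model_with_dropout(tf_train_dataset, train=True)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Sketch for Problem 2: dropout on the hidden fully connected layer (training only)\n",
"# and exponential learning rate decay.\n",
"keep_prob = 0.5  # fraction of activations kept during training\n",
"\n",
"def model_with_dropout(data, train=False):\n",
"  conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')\n",
"  hidden = tf.nn.relu(conv + layer1_biases)\n",
"  conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')\n",
"  hidden = tf.nn.relu(conv + layer2_biases)\n",
"  shape = hidden.get_shape().as_list()\n",
"  reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])\n",
"  hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n",
"  if train:\n",
"    hidden = tf.nn.dropout(hidden, keep_prob)  # dropout only on the training path\n",
"  return tf.matmul(hidden, layer4_weights) + layer4_biases\n",
"\n",
"with graph.as_default():\n",
"  # Decaying learning rate: start at 0.05 and halve every 1000 steps.\n",
"  global_step = tf.Variable(0)\n",
"  learning_rate = tf.train.exponential_decay(0.05, global_step, 1000, 0.5, staircase=True)\n",
"  optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(\n",
"    loss, global_step=global_step)"
]
}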
],
"metadata": {
"colabVersion": "0.3.2",
"colab_default_view": {},
"colab_views": {},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}