Mirror of https://github.com/donnemartin/data-science-ipython-notebooks.git, synced 2024-03-22 13:30:56 +08:00
Add TensorFlow regularization notebook.
This commit is contained in:
parent 4c9ffb83e7
commit a0fe5bc62e
@@ -135,6 +135,7 @@ IPython Notebook(s) demonstrating deep learning functionality.
| [deep dream](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/deep-dream/dream.ipynb) | Caffe-based computer vision program which uses a convolutional neural network to find and enhance patterns in images. |
| [ts-not-mnist](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/tensor-flow-exercises/1_notmnist.ipynb) | Learn simple data curation by creating a pickle with formatted datasets for training, development and testing in TensorFlow. |
| [ts-fully-connected](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/tensor-flow-exercises/2_fullyconnected.ipynb) | Progressively train deeper and more accurate models using logistic regression and neural networks in TensorFlow. |
| [ts-regularization](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/tensor-flow-exercises/3_regularization.ipynb) | Explore regularization techniques by training fully connected networks to classify notMNIST characters in TensorFlow. |
<br/>
<p align="center">
323 deep-learning/tensor-flow-exercises/3_regularization.ipynb Normal file
@@ -0,0 +1,323 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "kR-4eNdK6lYS"
},
"source": [
"Deep Learning with TensorFlow\n",
"=============\n",
"\n",
"Credits: Forked from [TensorFlow](https://github.com/tensorflow/tensorflow) by Google\n",
"\n",
"Setup\n",
"------------\n",
"\n",
"Refer to the [setup instructions](https://github.com/donnemartin/data-science-ipython-notebooks/tree/feature/deep-learning/deep-learning/tensor-flow-exercises/README.md).\n",
"\n",
"Exercise 3\n",
"------------\n",
"\n",
"Previously in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.\n",
"\n",
"The goal of this exercise is to explore regularization techniques."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"collapsed": true,
"id": "JLpLa8Jt7Vu4"
},
"outputs": [],
"source": [
"# These are all the modules we'll be using later. Make sure you can import them\n",
"# before proceeding further.\n",
"import pickle  # the declared kernel is Python 3, so use pickle rather than cPickle\n",
"import numpy as np\n",
"import tensorflow as tf"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "1HrCK6e17WzV"
},
"source": [
"First reload the data we generated in _1_notmnist.ipynb_."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"colab_type": "code",
"collapsed": false,
"executionInfo": {
"elapsed": 11777,
"status": "ok",
"timestamp": 1449849322348,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"id": "y3-cj1bpmuxc",
"outputId": "e03576f1-ebbe-4838-c388-f1777bcc9873"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training set (200000, 28, 28) (200000,)\n",
"Validation set (10000, 28, 28) (10000,)\n",
"Test set (18724, 28, 28) (18724,)\n"
]
}
],
"source": [
"pickle_file = 'notMNIST.pickle'\n",
"\n",
"with open(pickle_file, 'rb') as f:\n",
"  save = pickle.load(f)\n",
"  train_dataset = save['train_dataset']\n",
"  train_labels = save['train_labels']\n",
"  valid_dataset = save['valid_dataset']\n",
"  valid_labels = save['valid_labels']\n",
"  test_dataset = save['test_dataset']\n",
"  test_labels = save['test_labels']\n",
"  del save  # hint to help gc free up memory\n",
"  print('Training set', train_dataset.shape, train_labels.shape)\n",
"  print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
"  print('Test set', test_dataset.shape, test_labels.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "L7aHrm6nGDMB"
},
"source": [
"Reformat into a shape that's more adapted to the models we're going to train:\n",
"- data as a flat matrix,\n",
"- labels as float 1-hot encodings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"colab_type": "code",
"collapsed": false,
"executionInfo": {
"elapsed": 11728,
"status": "ok",
"timestamp": 1449849322356,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"id": "IRSyYiIIGIzS",
"outputId": "3f8996ee-3574-4f44-c953-5c8a04636582"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training set (200000, 784) (200000, 10)\n",
"Validation set (10000, 784) (10000, 10)\n",
"Test set (18724, 784) (18724, 10)\n"
]
}
],
"source": [
"image_size = 28\n",
"num_labels = 10\n",
"\n",
"def reformat(dataset, labels):\n",
"  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n",
"  # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]\n",
"  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
"  return dataset, labels\n",
"\n",
"train_dataset, train_labels = reformat(train_dataset, train_labels)\n",
"valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n",
"test_dataset, test_labels = reformat(test_dataset, test_labels)\n",
"print('Training set', train_dataset.shape, train_labels.shape)\n",
"print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
"print('Test set', test_dataset.shape, test_labels.shape)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"collapsed": true,
"id": "RajPLaL_ZW6w"
},
"outputs": [],
"source": [
"def accuracy(predictions, labels):\n",
"  # Percentage of samples whose argmax prediction matches the argmax label.\n",
"  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n",
"          / predictions.shape[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "sgLbUAQ1CW-1"
},
"source": [
"---\n",
"Problem 1\n",
"---------\n",
"\n",
"Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.\n",
"\n",
"---"
]
},
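{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of one way to start Problem 1 (not a reference solution): a logistic regression graph whose loss adds an L2 penalty on the weights via `tf.nn.l2_loss`. The `batch_size` and penalty weight `beta` below are illustrative assumptions; tune `beta` on the validation set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 128\n",
"beta = 1e-3  # assumed L2 penalty weight; tune on the validation set\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"  # Minibatch inputs.\n",
"  tf_train_dataset = tf.placeholder(tf.float32,\n",
"                                    shape=(batch_size, image_size * image_size))\n",
"  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
"\n",
"  # Logistic regression parameters.\n",
"  weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))\n",
"  biases = tf.Variable(tf.zeros([num_labels]))\n",
"\n",
"  logits = tf.matmul(tf_train_dataset, weights) + biases\n",
"  # Cross-entropy plus an L2 penalty on the norm of the weights.\n",
"  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n",
"          + beta * tf.nn.l2_loss(weights))\n",
"\n",
"  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
"  train_prediction = tf.nn.softmax(logits)"
]
},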
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "na8xX2yHZzNF"
},
"source": [
"---\n",
"Problem 2\n",
"---------\n",
"Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n",
"\n",
"---"
]
},
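{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the Problem 2 setup, assuming the minibatch training loop from `2_fullyconnected.ipynb` and the `graph` built above: cycling over only a few fixed batches lets the model memorize them, so minibatch accuracy climbs toward 100% while validation accuracy stalls. `num_batches = 3` is an arbitrary illustrative choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num_batches = 3  # assumed restriction: reuse just a few batches\n",
"num_steps = 3001\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
"  tf.initialize_all_variables().run()\n",
"  for step in range(num_steps):\n",
"    # Cycle through the same few batches instead of the full training set.\n",
"    offset = (step % num_batches) * batch_size\n",
"    batch_data = train_dataset[offset:(offset + batch_size), :]\n",
"    batch_labels = train_labels[offset:(offset + batch_size), :]\n",
"    feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}\n",
"    _, l, predictions = session.run([optimizer, loss, train_prediction],\n",
"                                    feed_dict=feed_dict)\n",
"    if step % 500 == 0:\n",
"      print('Minibatch loss at step %d: %f' % (step, l))\n",
"      print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))"
]
},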
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "ww3SCBUdlkRc"
},
"source": [
"---\n",
"Problem 3\n",
"---------\n",
"Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.\n",
"\n",
"What happens to our extreme overfitting case?\n",
"\n",
"---"
]
},
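{
"cell_type": "markdown",
"metadata": {},
"source": [
"One hedged sketch for Problem 3, assuming a single hidden layer of `num_hidden` ReLU units and a keep probability of 0.5 (both illustrative): `tf.nn.dropout` is applied only on the training path, while the prediction op bypasses dropout so evaluation stays deterministic."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num_hidden = 1024  # assumed hidden-layer width\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"  tf_train_dataset = tf.placeholder(tf.float32,\n",
"                                    shape=(batch_size, image_size * image_size))\n",
"  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
"\n",
"  w1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden]))\n",
"  b1 = tf.Variable(tf.zeros([num_hidden]))\n",
"  w2 = tf.Variable(tf.truncated_normal([num_hidden, num_labels]))\n",
"  b2 = tf.Variable(tf.zeros([num_labels]))\n",
"\n",
"  hidden = tf.nn.relu(tf.matmul(tf_train_dataset, w1) + b1)\n",
"  # Dropout on the hidden layer, training path only (keep probability 0.5).\n",
"  logits = tf.matmul(tf.nn.dropout(hidden, 0.5), w2) + b2\n",
"  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n",
"  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
"\n",
"  # Evaluation path: no dropout, so results are deterministic.\n",
"  train_prediction = tf.nn.softmax(tf.matmul(hidden, w2) + b2)"
]
},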
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "-b1hTz3VWZjw"
},
"source": [
"---\n",
"Problem 4\n",
"---------\n",
"\n",
"Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).\n",
"\n",
"One avenue you can explore is to add multiple layers.\n",
"\n",
"Another one is to use learning rate decay:\n",
"\n",
"    global_step = tf.Variable(0)  # count the number of steps taken.\n",
"    learning_rate = tf.train.exponential_decay(0.5, global_step, ...)\n",
"    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)\n",
"\n",
"---"
]
},
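{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the decay snippet above concrete, here is one way it could be wired inside a graph that already defines `loss`; the decay interval (1000 steps) and decay rate (0.65) are illustrative assumptions, not tuned values. Passing `global_step` to `minimize` is what advances the step counter and hence the schedule."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Goes inside `with graph.as_default():`, after `loss` is defined.\n",
"global_step = tf.Variable(0)  # incremented once per optimizer step\n",
"# Assumed schedule: start at 0.5, multiply by 0.65 every 1000 steps.\n",
"learning_rate = tf.train.exponential_decay(0.5, global_step, 1000, 0.65, staircase=True)\n",
"optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(\n",
"    loss, global_step=global_step)"
]
}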
],
"metadata": {
"colabVersion": "0.3.2",
"colab_default_view": {},
"colab_views": {},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}