{
" cells " : [
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " D7tqLMoKF6uq "
} ,
" source " : [
" Deep Learning with TensorFlow \n " ,
" ============= \n " ,
" \n " ,
" Credits: Forked from [TensorFlow](https://github.com/tensorflow/tensorflow) by Google \n " ,
" \n " ,
" Setup \n " ,
" ------------ \n " ,
" \n " ,
" Refer to the [setup instructions](https://github.com/donnemartin/data-science-ipython-notebooks/tree/feature/deep-learning/deep-learning/tensor-flow-exercises/README.md). \n " ,
" \n " ,
" Exercise 6 \n " ,
" ------------ \n " ,
" \n " ,
" After training a skip-gram model in `5_word2vec.ipynb`, the goal of this exercise is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
}
} ,
" colab_type " : " code " ,
" collapsed " : true ,
" id " : " MvEblsgEXxrd "
} ,
" outputs " : [ ] ,
" source " : [
" # These are all the modules we ' ll be using later. Make sure you can import them \n " ,
" # before proceeding further. \n " ,
" import os \n " ,
" import numpy as np \n " ,
" import random \n " ,
" import string \n " ,
" import tensorflow as tf \n " ,
" import urllib \n " ,
" import zipfile "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
} ,
" output_extras " : [
{
" item_id " : 1
}
]
} ,
" colab_type " : " code " ,
" collapsed " : false ,
" executionInfo " : {
" elapsed " : 5993 ,
" status " : " ok " ,
" timestamp " : 1445965582896 ,
" user " : {
" color " : " #1FA15D " ,
" displayName " : " Vincent Vanhoucke " ,
" isAnonymous " : false ,
" isMe " : true ,
" permissionId " : " 05076109866853157986 " ,
" photoUrl " : " //lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg " ,
" sessionId " : " 6f6f07b359200c46 " ,
" userId " : " 102167687554210253930 "
} ,
" user_tz " : 420
} ,
" id " : " RJ-o3UBUFtCw " ,
" outputId " : " d530534e-0791-4a94-ca6d-1c8f1b908a9e "
} ,
" outputs " : [
{
" name " : " stdout " ,
" output_type " : " stream " ,
" text " : [
" Found and verified text8.zip \n "
]
}
] ,
" source " : [
" url = ' http://mattmahoney.net/dc/ ' \n " ,
" \n " ,
" def maybe_download(filename, expected_bytes): \n " ,
" \" \" \" Download a file if not present, and make sure it ' s the right size. \" \" \" \n " ,
" if not os.path.exists(filename): \n " ,
" filename, _ = urllib.urlretrieve(url + filename, filename) \n " ,
" statinfo = os.stat(filename) \n " ,
" if statinfo.st_size == expected_bytes: \n " ,
" print ' Found and verified ' , filename \n " ,
" else: \n " ,
" print statinfo.st_size \n " ,
" raise Exception( \n " ,
" ' Failed to verify ' + filename + ' . Can you get to it with a browser? ' ) \n " ,
" return filename \n " ,
" \n " ,
" filename = maybe_download( ' text8.zip ' , 31344016) "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
} ,
" output_extras " : [
{
" item_id " : 1
}
]
} ,
" colab_type " : " code " ,
" collapsed " : false ,
" executionInfo " : {
" elapsed " : 5982 ,
" status " : " ok " ,
" timestamp " : 1445965582916 ,
" user " : {
" color " : " #1FA15D " ,
" displayName " : " Vincent Vanhoucke " ,
" isAnonymous " : false ,
" isMe " : true ,
" permissionId " : " 05076109866853157986 " ,
" photoUrl " : " //lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg " ,
" sessionId " : " 6f6f07b359200c46 " ,
" userId " : " 102167687554210253930 "
} ,
" user_tz " : 420
} ,
" id " : " Mvf09fjugFU_ " ,
" outputId " : " 8f75db58-3862-404b-a0c3-799380597390 "
} ,
" outputs " : [
{
" name " : " stdout " ,
" output_type " : " stream " ,
" text " : [
" Data size 100000000 \n "
]
}
] ,
" source " : [
" def read_data(filename): \n " ,
" f = zipfile.ZipFile(filename) \n " ,
" for name in f.namelist(): \n " ,
" return f.read(name) \n " ,
" f.close() \n " ,
" \n " ,
" text = read_data(filename) \n " ,
" print \" Data size \" , len(text) "
]
} ,
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " ga2CYACE-ghb "
} ,
" source " : [
" Create a small validation set. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
} ,
" output_extras " : [
{
" item_id " : 1
}
]
} ,
" colab_type " : " code " ,
" collapsed " : false ,
" executionInfo " : {
" elapsed " : 6184 ,
" status " : " ok " ,
" timestamp " : 1445965583138 ,
" user " : {
" color " : " #1FA15D " ,
" displayName " : " Vincent Vanhoucke " ,
" isAnonymous " : false ,
" isMe " : true ,
" permissionId " : " 05076109866853157986 " ,
" photoUrl " : " //lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg " ,
" sessionId " : " 6f6f07b359200c46 " ,
" userId " : " 102167687554210253930 "
} ,
" user_tz " : 420
} ,
" id " : " w-oBpfFG-j43 " ,
" outputId " : " bdb96002-d021-4379-f6de-a977924f0d02 "
} ,
" outputs " : [
{
" name " : " stdout " ,
" output_type " : " stream " ,
" text " : [
" 99999000 ons anarchists advocate social relations based upon voluntary as \n " ,
" 1000 anarchism originated as a term of abuse first used against earl \n "
]
}
] ,
" source " : [
" valid_size = 1000 \n " ,
" valid_text = text[:valid_size] \n " ,
" train_text = text[valid_size:] \n " ,
" train_size = len(train_text) \n " ,
" print train_size, train_text[:64] \n " ,
" print valid_size, valid_text[:64] "
]
} ,
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " Zdw6i4F8glpp "
} ,
" source " : [
" Utility functions to map characters to vocabulary IDs and back. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
} ,
" output_extras " : [
{
" item_id " : 1
}
]
} ,
" colab_type " : " code " ,
" collapsed " : false ,
" executionInfo " : {
" elapsed " : 6276 ,
" status " : " ok " ,
" timestamp " : 1445965583249 ,
" user " : {
" color " : " #1FA15D " ,
" displayName " : " Vincent Vanhoucke " ,
" isAnonymous " : false ,
" isMe " : true ,
" permissionId " : " 05076109866853157986 " ,
" photoUrl " : " //lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg " ,
" sessionId " : " 6f6f07b359200c46 " ,
" userId " : " 102167687554210253930 "
} ,
" user_tz " : 420
} ,
" id " : " gAL1EECXeZsD " ,
" outputId " : " 88fc9032-feb9-45ff-a9a0-a26759cc1f2e "
} ,
" outputs " : [
{
" name " : " stdout " ,
" output_type " : " stream " ,
" text " : [
" 1 26 0 Unexpected character: ï \n " ,
" 0 \n " ,
" a z \n "
]
}
] ,
" source " : [
" vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' \n " ,
" first_letter = ord(string.ascii_lowercase[0]) \n " ,
" \n " ,
" def char2id(char): \n " ,
" if char in string.ascii_lowercase: \n " ,
" return ord(char) - first_letter + 1 \n " ,
" elif char == ' ' : \n " ,
" return 0 \n " ,
" else: \n " ,
" print ' Unexpected character: ' , char \n " ,
" return 0 \n " ,
" \n " ,
" def id2char(dictid): \n " ,
" if dictid > 0: \n " ,
" return chr(dictid + first_letter - 1) \n " ,
" else: \n " ,
" return ' ' \n " ,
" \n " ,
" print char2id( ' a ' ), char2id( ' z ' ), char2id( ' ' ), char2id( ' ï ' ) \n " ,
" print id2char(1), id2char(26), id2char(0) "
]
} ,
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " lFwoyygOmWsL "
} ,
" source " : [
" Function to generate a training batch for the LSTM model. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
} ,
" output_extras " : [
{
" item_id " : 1
}
]
} ,
" colab_type " : " code " ,
" collapsed " : false ,
" executionInfo " : {
" elapsed " : 6473 ,
" status " : " ok " ,
" timestamp " : 1445965583467 ,
" user " : {
" color " : " #1FA15D " ,
" displayName " : " Vincent Vanhoucke " ,
" isAnonymous " : false ,
" isMe " : true ,
" permissionId " : " 05076109866853157986 " ,
" photoUrl " : " //lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg " ,
" sessionId " : " 6f6f07b359200c46 " ,
" userId " : " 102167687554210253930 "
} ,
" user_tz " : 420
} ,
" id " : " d9wMtjy5hCj9 " ,
" outputId " : " 3dd79c80-454a-4be0-8b71-4a4a357b3367 "
} ,
" outputs " : [
{
" name " : " stdout " ,
" output_type " : " stream " ,
" text " : [
" [ ' ons anarchi ' , ' when milita ' , ' lleria arch ' , ' abbeys and ' , ' married urr ' , ' hel and ric ' , ' y and litur ' , ' ay opened f ' , ' tion from t ' , ' migration t ' , ' new york ot ' , ' he boeing s ' , ' e listed wi ' , ' eber has pr ' , ' o be made t ' , ' yer who rec ' , ' ore signifi ' , ' a fierce cr ' , ' two six ei ' , ' aristotle s ' , ' ity can be ' , ' and intrac ' , ' tion of the ' , ' dy to pass ' , ' f certain d ' , ' at it will ' , ' e convince ' , ' ent told hi ' , ' ampaign and ' , ' rver side s ' , ' ious texts ' , ' o capitaliz ' , ' a duplicate ' , ' gh ann es d ' , ' ine january ' , ' ross zero t ' , ' cal theorie ' , ' ast instanc ' , ' dimensiona ' , ' most holy m ' , ' t s support ' , ' u is still ' , ' e oscillati ' , ' o eight sub ' , ' of italy la ' , ' s the tower ' , ' klahoma pre ' , ' erprise lin ' , ' ws becomes ' , ' et in a naz ' , ' the fabian ' , ' etchy to re ' , ' sharman ne ' , ' ised empero ' , ' ting in pol ' , ' d neo latin ' , ' th risky ri ' , ' encyclopedi ' , ' fense the a ' , ' duating fro ' , ' treet grid ' , ' ations more ' , ' appeal of d ' , ' si have mad ' ] \n " ,
" [ ' ists advoca ' , ' ary governm ' , ' hes nationa ' , ' d monasteri ' , ' raca prince ' , ' chard baer ' , ' rgical lang ' , ' for passeng ' , ' the nationa ' , ' took place ' , ' ther well k ' , ' seven six s ' , ' ith a gloss ' , ' robably bee ' , ' to recogniz ' , ' ceived the ' , ' icant than ' , ' ritic of th ' , ' ight in sig ' , ' s uncaused ' , ' lost as in ' , ' cellular ic ' , ' e size of t ' , ' him a stic ' , ' drugs confu ' , ' take to co ' , ' the priest ' , ' im to name ' , ' d barred at ' , ' standard fo ' , ' such as es ' , ' ze on the g ' , ' e of the or ' , ' d hiver one ' , ' y eight mar ' , ' the lead ch ' , ' es classica ' , ' ce the non ' , ' al analysis ' , ' mormons bel ' , ' t or at lea ' , ' disagreed ' , ' ing system ' , ' btypes base ' , ' anguages th ' , ' r commissio ' , ' ess one nin ' , ' nux suse li ' , ' the first ' , ' zi concentr ' , ' society ne ' , ' elatively s ' , ' etworks sha ' , ' or hirohito ' , ' litical ini ' , ' n most of t ' , ' iskerdoo ri ' , ' ic overview ' , ' air compone ' , ' om acnm acc ' , ' centerline ' , ' e than any ' , ' devotional ' , ' de such dev ' ] \n " ,
" [ ' a ' ] \n " ,
" [ ' an ' ] \n "
]
}
] ,
" source " : [
" batch_size=64 \n " ,
" num_unrollings=10 \n " ,
" \n " ,
" class BatchGenerator(object): \n " ,
" def __init__(self, text, batch_size, num_unrollings): \n " ,
" self._text = text \n " ,
" self._text_size = len(text) \n " ,
" self._batch_size = batch_size \n " ,
" self._num_unrollings = num_unrollings \n " ,
" segment = self._text_size / batch_size \n " ,
" self._cursor = [ offset * segment for offset in xrange(batch_size)] \n " ,
" self._last_batch = self._next_batch() \n " ,
" \n " ,
" def _next_batch(self): \n " ,
" \" \" \" Generate a single batch from the current cursor position in the data. \" \" \" \n " ,
" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) \n " ,
" for b in xrange(self._batch_size): \n " ,
" batch[b, char2id(self._text[self._cursor[b]])] = 1.0 \n " ,
" self._cursor[b] = (self._cursor[b] + 1) % s elf._text_size \n " ,
" return batch \n " ,
" \n " ,
" def next(self): \n " ,
" \" \" \" Generate the next array of batches from the data. The array consists of \n " ,
" the last batch of the previous array, followed by num_unrollings new ones. \n " ,
" \" \" \" \n " ,
" batches = [self._last_batch] \n " ,
" for step in xrange(self._num_unrollings): \n " ,
" batches.append(self._next_batch()) \n " ,
" self._last_batch = batches[-1] \n " ,
" return batches \n " ,
" \n " ,
" def characters(probabilities): \n " ,
" \" \" \" Turn a 1-hot encoding or a probability distribution over the possible \n " ,
" characters back into its (mostl likely) character representation. \" \" \" \n " ,
" return [id2char(c) for c in np.argmax(probabilities, 1)] \n " ,
" \n " ,
" def batches2string(batches): \n " ,
" \" \" \" Convert a sequence of batches back into their (most likely) string \n " ,
" representation. \" \" \" \n " ,
" s = [ ' ' ] * batches[0].shape[0] \n " ,
" for b in batches: \n " ,
" s = [ ' ' .join(x) for x in zip(s, characters(b))] \n " ,
" return s \n " ,
" \n " ,
" train_batches = BatchGenerator(train_text, batch_size, num_unrollings) \n " ,
" valid_batches = BatchGenerator(valid_text, 1, 1) \n " ,
" \n " ,
" print batches2string(train_batches.next()) \n " ,
" print batches2string(train_batches.next()) \n " ,
" print batches2string(valid_batches.next()) \n " ,
" print batches2string(valid_batches.next()) "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
}
} ,
" colab_type " : " code " ,
" collapsed " : true ,
" id " : " KyVd8FxT5QBc "
} ,
" outputs " : [ ] ,
" source " : [
" def logprob(predictions, labels): \n " ,
" \" \" \" Log-probability of the true labels in a predicted batch. \" \" \" \n " ,
" predictions[predictions < 1e-10] = 1e-10 \n " ,
" return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] \n " ,
" \n " ,
" def sample_distribution(distribution): \n " ,
" \" \" \" Sample one element from a distribution assumed to be an array of normalized \n " ,
" probabilities. \n " ,
" \" \" \" \n " ,
" r = random.uniform(0, 1) \n " ,
" s = 0 \n " ,
" for i in xrange(len(distribution)): \n " ,
" s += distribution[i] \n " ,
" if s >= r: \n " ,
" return i \n " ,
" return len(distribution) - 1 \n " ,
" \n " ,
" def sample(prediction): \n " ,
" \" \" \" Turn a (column) prediction into 1-hot encoded samples. \" \" \" \n " ,
" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) \n " ,
" p[0, sample_distribution(prediction[0])] = 1.0 \n " ,
" return p \n " ,
" \n " ,
" def random_distribution(): \n " ,
" \" \" \" Generate a random column of probabilities. \" \" \" \n " ,
" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) \n " ,
" return b/np.sum(b, 1)[:,None] "
]
} ,
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " K8f67YXaDr4C "
} ,
" source " : [
" Simple LSTM Model. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
}
} ,
" colab_type " : " code " ,
" collapsed " : true ,
" id " : " Q5rxZK6RDuGe "
} ,
" outputs " : [ ] ,
" source " : [
" num_nodes = 64 \n " ,
" \n " ,
" graph = tf.Graph() \n " ,
" with graph.as_default(): \n " ,
" \n " ,
" # Parameters: \n " ,
" # Input gate: input, previous output, and bias. \n " ,
" ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) \n " ,
" im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) \n " ,
" ib = tf.Variable(tf.zeros([1, num_nodes])) \n " ,
" # Forget gate: input, previous output, and bias. \n " ,
" fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) \n " ,
" fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) \n " ,
" fb = tf.Variable(tf.zeros([1, num_nodes])) \n " ,
" # Memory cell: input, state and bias. \n " ,
" cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) \n " ,
" cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) \n " ,
" cb = tf.Variable(tf.zeros([1, num_nodes])) \n " ,
" # Output gate: input, previous output, and bias. \n " ,
" ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) \n " ,
" om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) \n " ,
" ob = tf.Variable(tf.zeros([1, num_nodes])) \n " ,
" # Variables saving state across unrollings. \n " ,
" saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) \n " ,
" saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) \n " ,
" # Classifier weights and biases. \n " ,
" w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) \n " ,
" b = tf.Variable(tf.zeros([vocabulary_size])) \n " ,
" \n " ,
" # Definition of the cell computation. \n " ,
" def lstm_cell(i, o, state): \n " ,
" \" \" \" Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf \n " ,
" Note that in this formulation, we omit the various connections between the \n " ,
" previous state and the gates. \" \" \" \n " ,
" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) \n " ,
" forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) \n " ,
" update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb \n " ,
" state = forget_gate * state + input_gate * tf.tanh(update) \n " ,
" output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) \n " ,
" return output_gate * tf.tanh(state), state \n " ,
" \n " ,
" # Input data. \n " ,
" train_data = list() \n " ,
" for _ in xrange(num_unrollings + 1): \n " ,
" train_data.append( \n " ,
" tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) \n " ,
" train_inputs = train_data[:num_unrollings] \n " ,
" train_labels = train_data[1:] # labels are inputs shifted by one time step. \n " ,
" \n " ,
" # Unrolled LSTM loop. \n " ,
" outputs = list() \n " ,
" output = saved_output \n " ,
" state = saved_state \n " ,
" for i in train_inputs: \n " ,
" output, state = lstm_cell(i, output, state) \n " ,
" outputs.append(output) \n " ,
" \n " ,
" # State saving across unrollings. \n " ,
" with tf.control_dependencies([saved_output.assign(output), \n " ,
" saved_state.assign(state)]): \n " ,
" # Classifier. \n " ,
" logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) \n " ,
" loss = tf.reduce_mean( \n " ,
" tf.nn.softmax_cross_entropy_with_logits( \n " ,
" logits, tf.concat(0, train_labels))) \n " ,
" \n " ,
" # Optimizer. \n " ,
" global_step = tf.Variable(0) \n " ,
" learning_rate = tf.train.exponential_decay( \n " ,
" 10.0, global_step, 5000, 0.1, staircase=True) \n " ,
" optimizer = tf.train.GradientDescentOptimizer(learning_rate) \n " ,
" gradients, v = zip(*optimizer.compute_gradients(loss)) \n " ,
" gradients, _ = tf.clip_by_global_norm(gradients, 1.25) \n " ,
" optimizer = optimizer.apply_gradients( \n " ,
" zip(gradients, v), global_step=global_step) \n " ,
" \n " ,
" # Predictions. \n " ,
" train_prediction = tf.nn.softmax(logits) \n " ,
" \n " ,
" # Sampling and validation eval: batch 1, no unrolling. \n " ,
" sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) \n " ,
" saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) \n " ,
" saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) \n " ,
" reset_sample_state = tf.group( \n " ,
" saved_sample_output.assign(tf.zeros([1, num_nodes])), \n " ,
" saved_sample_state.assign(tf.zeros([1, num_nodes]))) \n " ,
" sample_output, sample_state = lstm_cell( \n " ,
" sample_input, saved_sample_output, saved_sample_state) \n " ,
" with tf.control_dependencies([saved_sample_output.assign(sample_output), \n " ,
" saved_sample_state.assign(sample_state)]): \n " ,
" sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" cellView " : " both " ,
" colab " : {
" autoexec " : {
" startup " : false ,
" wait_interval " : 0
} ,
" output_extras " : [
{
" item_id " : 41
} ,
{
" item_id " : 80
} ,
{
" item_id " : 126
} ,
{
" item_id " : 144
}
]
} ,
" colab_type " : " code " ,
" collapsed " : false ,
" executionInfo " : {
" elapsed " : 199909 ,
" status " : " ok " ,
" timestamp " : 1445965877333 ,
" user " : {
" color " : " #1FA15D " ,
" displayName " : " Vincent Vanhoucke " ,
" isAnonymous " : false ,
" isMe " : true ,
" permissionId " : " 05076109866853157986 " ,
" photoUrl " : " //lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg " ,
" sessionId " : " 6f6f07b359200c46 " ,
" userId " : " 102167687554210253930 "
} ,
" user_tz " : 420
} ,
" id " : " RD9zQCZTEaEm " ,
" outputId " : " 5e868466-2532-4545-ce35-b403cf5d9de6 "
} ,
" outputs " : [
{
" name " : " stdout " ,
" output_type " : " stream " ,
" text " : [
" Initialized \n " ,
" Average loss at step 0 : 3.29904174805 learning rate: 10.0 \n " ,
" Minibatch perplexity: 27.09 \n " ,
" ================================================================================ \n " ,
" srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh \n " ,
" lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o \n " ,
" meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet \n " ,
" unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw \n " ,
" ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 19.99 \n " ,
" Average loss at step 100 : 2.59553678274 learning rate: 10.0 \n " ,
" Minibatch perplexity: 9.57 \n " ,
" Validation set perplexity: 10.60 \n " ,
" Average loss at step 200 : 2.24747137785 learning rate: 10.0 \n " ,
" Minibatch perplexity: 7.68 \n " ,
" Validation set perplexity: 8.84 \n " ,
" Average loss at step 300 : 2.09438110709 learning rate: 10.0 \n " ,
" Minibatch perplexity: 7.41 \n " ,
" Validation set perplexity: 8.13 \n " ,
" Average loss at step 400 : 1.99440989017 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.46 \n " ,
" Validation set perplexity: 7.58 \n " ,
" Average loss at step 500 : 1.9320810616 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.30 \n " ,
" Validation set perplexity: 6.88 \n " ,
" Average loss at step 600 : 1.90935629249 learning rate: 10.0 \n " ,
" Minibatch perplexity: 7.21 \n " ,
" Validation set perplexity: 6.91 \n " ,
" Average loss at step 700 : 1.85583009005 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.13 \n " ,
" Validation set perplexity: 6.60 \n " ,
" Average loss at step 800 : 1.82152368546 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.01 \n " ,
" Validation set perplexity: 6.37 \n " ,
" Average loss at step 900 : 1.83169809818 learning rate: 10.0 \n " ,
" Minibatch perplexity: 7.20 \n " ,
" Validation set perplexity: 6.23 \n " ,
" Average loss at step 1000 : 1.82217029214 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.73 \n " ,
" ================================================================================ \n " ,
" le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co \n " ,
" le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes \n " ,
" hian andoris ret the ecause bistory l pidect one eight five lack du that the ses \n " ,
" aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in \n " ,
" mer miter y sught esfectur of the upission vain is werms is vul ugher compted by \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 6.07 \n " ,
" Average loss at step 1100 : 1.77301145077 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.03 \n " ,
" Validation set perplexity: 5.89 \n " ,
" Average loss at step 1200 : 1.75306463003 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.50 \n " ,
" Validation set perplexity: 5.61 \n " ,
" Average loss at step 1300 : 1.72937195778 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.00 \n " ,
" Validation set perplexity: 5.60 \n " ,
" Average loss at step 1400 : 1.74773373723 learning rate: 10.0 \n " ,
" Minibatch perplexity: 6.48 \n " ,
" Validation set perplexity: 5.66 \n " ,
" Average loss at step 1500 : 1.7368799901 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.22 \n " ,
" Validation set perplexity: 5.44 \n " ,
" Average loss at step 1600 : 1.74528762937 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.85 \n " ,
" Validation set perplexity: 5.33 \n " ,
" Average loss at step 1700 : 1.70881183743 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.33 \n " ,
" Validation set perplexity: 5.56 \n " ,
" Average loss at step 1800 : 1.67776108027 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.33 \n " ,
" Validation set perplexity: 5.29 \n " ,
" Average loss at step 1900 : 1.64935536742 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.29 \n " ,
" Validation set perplexity: 5.15 \n " ,
" Average loss at step 2000 : 1.69528644681 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.13 \n " ,
" ================================================================================ \n " ,
" vers soqually have one five landwing to docial page kagan lower with ther batern \n " ,
" ctor son alfortmandd tethre k skin the known purated to prooust caraying the fit \n " ,
" je in beverb is the sournction bainedy wesce tu sture artualle lines digra forme \n " ,
" m rousively haldio ourso ond anvary was for the seven solies hild buil s to te \n " ,
" zall for is it is one nine eight eight one neval to the kime typer oene where he \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 5.25 \n " ,
" Average loss at step 2100 : 1.68808053017 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.17 \n " ,
" Validation set perplexity: 5.01 \n " ,
" Average loss at step 2200 : 1.68322490931 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.09 \n " ,
" Validation set perplexity: 5.15 \n " ,
" Average loss at step 2300 : 1.64465074301 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.51 \n " ,
" Validation set perplexity: 5.00 \n " ,
" Average loss at step 2400 : 1.66408578038 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.86 \n " ,
" Validation set perplexity: 4.80 \n " ,
" Average loss at step 2500 : 1.68515402555 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.75 \n " ,
" Validation set perplexity: 4.82 \n " ,
" Average loss at step 2600 : 1.65405208349 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.38 \n " ,
" Validation set perplexity: 4.85 \n " ,
" Average loss at step 2700 : 1.65706222177 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.46 \n " ,
" Validation set perplexity: 4.78 \n " ,
" Average loss at step 2800 : 1.65204829812 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.06 \n " ,
" Validation set perplexity: 4.64 \n " ,
" Average loss at step 2900 : 1.65107253551 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.00 \n " ,
" Validation set perplexity: 4.61 \n " ,
" Average loss at step 3000 : 1.6495274055 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.53 \n " ,
" ================================================================================ \n " ,
" ject covered in belo one six six to finsh that all di rozial sime it a the lapse \n " ,
" ble which the pullic bocades record r to sile dric two one four nine seven six f \n " ,
" originally ame the playa ishaps the stotchational in a p dstambly name which as \n " ,
" ore volum to bay riwer foreal in nuily operety can and auscham frooripm however \n " ,
" kan traogey was lacous revision the mott coupofiteditey the trando insended frop \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 4.76 \n " ,
" Average loss at step 3100 : 1.63705502152 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.50 \n " ,
" Validation set perplexity: 4.76 \n " ,
" Average loss at step 3200 : 1.64740695596 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.84 \n " ,
" Validation set perplexity: 4.67 \n " ,
" Average loss at step 3300 : 1.64711504817 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.39 \n " ,
" Validation set perplexity: 4.57 \n " ,
" Average loss at step 3400 : 1.67113256454 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.56 \n " ,
" Validation set perplexity: 4.71 \n " ,
" Average loss at step 3500 : 1.65637169957 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.03 \n " ,
" Validation set perplexity: 4.80 \n " ,
" Average loss at step 3600 : 1.66601825476 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.63 \n " ,
" Validation set perplexity: 4.52 \n " ,
" Average loss at step 3700 : 1.65021387935 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.50 \n " ,
" Validation set perplexity: 4.56 \n " ,
" Average loss at step 3800 : 1.64481814981 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.60 \n " ,
" Validation set perplexity: 4.54 \n " ,
" Average loss at step 3900 : 1.642069453 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.91 \n " ,
" Validation set perplexity: 4.54 \n " ,
" Average loss at step 4000 : 1.65179730773 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.77 \n " ,
" ================================================================================ \n " ,
" k s rasbonish roctes the nignese at heacle was sito of beho anarchys and with ro \n " ,
" jusar two sue wletaus of chistical in causations d ow trancic bruthing ha laters \n " ,
" de and speacy pulted yoftret worksy zeatlating to eight d had to ie bue seven si \n " ,
" s fiction of the feelly constive suq flanch earlied curauking bjoventation agent \n " ,
" quen s playing it calana our seopity also atbellisionaly comexing the revideve i \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 4.58 \n " ,
" Average loss at step 4100 : 1.63794238806 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.47 \n " ,
" Validation set perplexity: 4.79 \n " ,
" Average loss at step 4200 : 1.63822438836 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.30 \n " ,
" Validation set perplexity: 4.54 \n " ,
" Average loss at step 4300 : 1.61844664574 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.69 \n " ,
" Validation set perplexity: 4.54 \n " ,
" Average loss at step 4400 : 1.61255454302 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.67 \n " ,
" Validation set perplexity: 4.54 \n " ,
" Average loss at step 4500 : 1.61543365479 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.83 \n " ,
" Validation set perplexity: 4.69 \n " ,
" Average loss at step 4600 : 1.61607327104 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.18 \n " ,
" Validation set perplexity: 4.64 \n " ,
" Average loss at step 4700 : 1.62757282495 learning rate: 10.0 \n " ,
" Minibatch perplexity: 4.24 \n " ,
" Validation set perplexity: 4.66 \n " ,
" Average loss at step 4800 : 1.63222063541 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.30 \n " ,
" Validation set perplexity: 4.53 \n " ,
" Average loss at step 4900 : 1.63678096652 learning rate: 10.0 \n " ,
" Minibatch perplexity: 5.43 \n " ,
" Validation set perplexity: 4.64 \n " ,
" Average loss at step 5000 : 1.610340662 learning rate: 1.0 \n " ,
" Minibatch perplexity: 5.10 \n " ,
" ================================================================================ \n " ,
" in b one onarbs revieds the kimiluge that fondhtic fnoto cre one nine zero zero \n " ,
" of is it of marking panzia t had wap ironicaghni relly deah the omber b h menba \n " ,
" ong messified it his the likdings ara subpore the a fames distaled self this int \n " ,
" y advante authors the end languarle meit common tacing bevolitione and eight one \n " ,
" zes that materly difild inllaring the fusts not panition assertian causecist bas \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 4.69 \n " ,
" Average loss at step 5100 : 1.60593637228 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.69 \n " ,
" Validation set perplexity: 4.47 \n " ,
" Average loss at step 5200 : 1.58993269444 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.65 \n " ,
" Validation set perplexity: 4.39 \n " ,
" Average loss at step 5300 : 1.57930587292 learning rate: 1.0 \n " ,
" Minibatch perplexity: 5.11 \n " ,
" Validation set perplexity: 4.39 \n " ,
" Average loss at step 5400 : 1.58022856832 learning rate: 1.0 \n " ,
" Minibatch perplexity: 5.19 \n " ,
" Validation set perplexity: 4.37 \n " ,
" Average loss at step 5500 : 1.56654450059 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.69 \n " ,
" Validation set perplexity: 4.33 \n " ,
" Average loss at step 5600 : 1.58013380885 learning rate: 1.0 \n " ,
" Minibatch perplexity: 5.13 \n " ,
" Validation set perplexity: 4.35 \n " ,
" Average loss at step 5700 : 1.56974959254 learning rate: 1.0 \n " ,
" Minibatch perplexity: 5.00 \n " ,
" Validation set perplexity: 4.34 \n " ,
" Average loss at step 5800 : 1.5839582932 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.88 \n " ,
" Validation set perplexity: 4.31 \n " ,
" Average loss at step 5900 : 1.57129439116 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.66 \n " ,
" Validation set perplexity: 4.32 \n " ,
" Average loss at step 6000 : 1.55144061089 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.55 \n " ,
" ================================================================================ \n " ,
" utic clositical poopy stribe addi nixe one nine one zero zero eight zero b ha ex \n " ,
" zerns b one internequiption of the secordy way anti proble akoping have fictiona \n " ,
" phare united from has poporarly cities book ins sweden emperor a sass in origina \n " ,
" quulk destrebinist and zeilazar and on low and by in science over country weilti \n " ,
" x are holivia work missincis ons in the gages to starsle histon one icelanctrotu \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 4.30 \n " ,
" Average loss at step 6100 : 1.56450940847 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.77 \n " ,
" Validation set perplexity: 4.27 \n " ,
" Average loss at step 6200 : 1.53433164835 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.77 \n " ,
" Validation set perplexity: 4.27 \n " ,
" Average loss at step 6300 : 1.54773445129 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.76 \n " ,
" Validation set perplexity: 4.25 \n " ,
" Average loss at step 6400 : 1.54021131516 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.56 \n " ,
" Validation set perplexity: 4.24 \n " ,
" Average loss at step 6500 : 1.56153374553 learning rate: 1.0 \n " ,
" Minibatch perplexity: 5.43 \n " ,
" Validation set perplexity: 4.27 \n " ,
" Average loss at step 6600 : 1.59556478739 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.92 \n " ,
" Validation set perplexity: 4.28 \n " ,
" Average loss at step 6700 : 1.58076951623 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.77 \n " ,
" Validation set perplexity: 4.30 \n " ,
" Average loss at step 6800 : 1.6070714438 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.98 \n " ,
" Validation set perplexity: 4.28 \n " ,
" Average loss at step 6900 : 1.58413293839 learning rate: 1.0 \n " ,
" Minibatch perplexity: 4.61 \n " ,
" Validation set perplexity: 4.29 \n " ,
" Average loss at step 7000 : 1.57905534983 learning rate: 1.0 \n " ,
" Minibatch perplexity: 5.08 \n " ,
" ================================================================================ \n " ,
" jague are officiencinels ored by film voon higherise haik one nine on the iffirc \n " ,
" oshe provision that manned treatists on smalle bodariturmeristing the girto in s \n " ,
" kis would softwenn mustapultmine truativersakys bersyim by s of confound esc bub \n " ,
" ry of the using one four six blain ira mannom marencies g with fextificallise re \n " ,
" one son vit even an conderouss to person romer i a lebapter at obiding are iuse \n " ,
" ================================================================================ \n " ,
" Validation set perplexity: 4.25 \n "
]
}
] ,
" source " : [
" num_steps = 7001 \n " ,
" summary_frequency = 100 \n " ,
" \n " ,
" with tf.Session(graph=graph) as session: \n " ,
" tf.global_variables_initializer().run() \n " ,
" print ' Initialized ' \n " ,
" mean_loss = 0 \n " ,
" for step in xrange(num_steps): \n " ,
" batches = train_batches.next() \n " ,
" feed_dict = dict() \n " ,
" for i in xrange(num_unrollings + 1): \n " ,
" feed_dict[train_data[i]] = batches[i] \n " ,
" _, l, predictions, lr = session.run( \n " ,
" [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) \n " ,
" mean_loss += l \n " ,
" if step % s ummary_frequency == 0: \n " ,
" if step > 0: \n " ,
" mean_loss = mean_loss / summary_frequency \n " ,
" # The mean loss is an estimate of the loss over the last few batches. \n " ,
" print ' Average loss at step ' , step, ' : ' , mean_loss, ' learning rate: ' , lr \n " ,
" mean_loss = 0 \n " ,
" labels = np.concatenate(list(batches)[1:]) \n " ,
" print ' Minibatch perplexity: %.2f ' % f loat( \n " ,
" np.exp(logprob(predictions, labels))) \n " ,
" if step % (summary_frequency * 10) == 0: \n " ,
" # Generate some samples. \n " ,
" print ' = ' * 80 \n " ,
" for _ in xrange(5): \n " ,
" feed = sample(random_distribution()) \n " ,
" sentence = characters(feed)[0] \n " ,
" reset_sample_state.run() \n " ,
" for _ in xrange(79): \n " ,
" prediction = sample_prediction.eval( { sample_input: feed}) \n " ,
" feed = sample(prediction) \n " ,
" sentence += characters(feed)[0] \n " ,
" print sentence \n " ,
" print ' = ' * 80 \n " ,
" # Measure validation set perplexity. \n " ,
" reset_sample_state.run() \n " ,
" valid_logprob = 0 \n " ,
" for _ in xrange(valid_size): \n " ,
" b = valid_batches.next() \n " ,
" predictions = sample_prediction.eval( { sample_input: b[0]}) \n " ,
" valid_logprob = valid_logprob + logprob(predictions, b[1]) \n " ,
" print ' Validation set perplexity: %.2f ' % f loat(np.exp( \n " ,
" valid_logprob / valid_size)) "
]
} ,
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " pl4vtmFfa5nn "
} ,
" source " : [
" --- \n " ,
" Problem 1 \n " ,
" --------- \n " ,
" \n " ,
" You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger. \n " ,
" \n " ,
" --- "
]
} ,
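{
" cell_type " : " markdown " ,
" metadata " : { } ,
" source " : [
" One possible direction for Problem 1 (a hedged sketch, not the official solution): concatenate the four per-gate input matrices into a single `[vocabulary_size, 4 * num_nodes]` matrix and the four recurrent matrices into a single `[num_nodes, 4 * num_nodes]` matrix, do one matrix multiply per operand, and slice the result into the four gate pre-activations. The names `x_weights`, `o_weights`, `gate_biases` and `fused_lstm_cell` below are illustrative, and the cell assumes `vocabulary_size` and `num_nodes` from the cells above. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" collapsed " : true
} ,
" outputs " : [ ] ,
" source " : [
" # Sketch for Problem 1 (illustrative names): fuse the four gate matmuls into one. \n " ,
" fused_graph = tf.Graph() \n " ,
" with fused_graph.as_default(): \n " ,
"   # One weight matrix for the input and one for the previous output, each 4x wider. \n " ,
"   x_weights = tf.Variable(tf.truncated_normal([vocabulary_size, 4 * num_nodes], -0.1, 0.1)) \n " ,
"   o_weights = tf.Variable(tf.truncated_normal([num_nodes, 4 * num_nodes], -0.1, 0.1)) \n " ,
"   gate_biases = tf.Variable(tf.zeros([1, 4 * num_nodes])) \n " ,
" \n " ,
"   def fused_lstm_cell(i, o, state): \n " ,
"     # A single matmul per operand, then slice into input/forget/update/output blocks. \n " ,
"     all_gates = tf.matmul(i, x_weights) + tf.matmul(o, o_weights) + gate_biases \n " ,
"     input_gate = tf.sigmoid(all_gates[:, 0:num_nodes]) \n " ,
"     forget_gate = tf.sigmoid(all_gates[:, num_nodes:2 * num_nodes]) \n " ,
"     update = all_gates[:, 2 * num_nodes:3 * num_nodes] \n " ,
"     output_gate = tf.sigmoid(all_gates[:, 3 * num_nodes:4 * num_nodes]) \n " ,
"     state = forget_gate * state + input_gate * tf.tanh(update) \n " ,
"     return output_gate * tf.tanh(state), state "
]
} ,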
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " 4eErTCTybtph "
} ,
" source " : [
" --- \n " ,
" Problem 2 \n " ,
" --------- \n " ,
" \n " ,
" We want to train a LSTM over bigrams, that is pairs of consecutive characters like ' ab ' instead of single characters like ' a ' . Since the number of possible bigrams is large, feeding them directly to the LSTM using 1-hot encodings will lead to a very sparse representation that is very wasteful computationally. \n " ,
" \n " ,
" a- Introduce an embedding lookup on the inputs, and feed the embeddings to the LSTM cell instead of the inputs themselves. \n " ,
" \n " ,
" b- Write a bigram-based LSTM, modeled on the character LSTM above. \n " ,
" \n " ,
" c- Introduce Dropout. For best practices on how to use Dropout in LSTMs, refer to this [article](http://arxiv.org/abs/1409.2329). \n " ,
" \n " ,
" --- "
]
} ,
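{
" cell_type " : " markdown " ,
" metadata " : { } ,
" source " : [
" A hedged starting point for Problem 2a (not a complete solution): encode each bigram as a single integer id, keep a trainable embedding matrix, and use `tf.nn.embedding_lookup` so the LSTM consumes dense embedding vectors instead of 27 * 27 = 729-dimensional 1-hot vectors. The names `bigram2id`, `bigram_vocabulary_size` and `embedding_size` below are illustrative, and the cell assumes `vocabulary_size`, `char2id` and `batch_size` from the cells above. Part c can reuse `tf.nn.dropout` on the non-recurrent connections, as recommended in the referenced article. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" collapsed " : true
} ,
" outputs " : [ ] ,
" source " : [
" # Sketch for Problem 2a (illustrative names): dense embeddings for bigram inputs. \n " ,
" bigram_vocabulary_size = vocabulary_size * vocabulary_size  # 27 * 27 bigrams \n " ,
" embedding_size = 32  # assumed embedding width \n " ,
" \n " ,
" def bigram2id(first_char, second_char): \n " ,
"   # Encode a pair of characters as one integer in [0, bigram_vocabulary_size). \n " ,
"   return char2id(first_char) * vocabulary_size + char2id(second_char) \n " ,
" \n " ,
" bigram_graph = tf.Graph() \n " ,
" with bigram_graph.as_default(): \n " ,
"   embeddings = tf.Variable( \n " ,
"     tf.random_uniform([bigram_vocabulary_size, embedding_size], -1.0, 1.0)) \n " ,
"   # A batch of bigram ids; the LSTM cell would take embedded_inputs instead of 1-hots. \n " ,
"   bigram_ids = tf.placeholder(tf.int32, shape=[batch_size]) \n " ,
"   embedded_inputs = tf.nn.embedding_lookup(embeddings, bigram_ids) \n " ,
"   # For part c, dropout on the non-recurrent connections (Zaremba et al., 2014): \n " ,
"   dropped_inputs = tf.nn.dropout(embedded_inputs, 0.9) "
]
} ,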
{
" cell_type " : " markdown " ,
" metadata " : {
" colab_type " : " text " ,
" id " : " Y5tapX3kpcqZ "
} ,
" source " : [
" --- \n " ,
" Problem 3 \n " ,
" --------- \n " ,
" \n " ,
" (difficult!) \n " ,
" \n " ,
" Write a sequence-to-sequence LSTM which mirrors all the words in a sentence. For example, if your input is: \n " ,
" \n " ,
" the quick brown fox \n " ,
" \n " ,
" the model should attempt to output: \n " ,
" \n " ,
" eht kciuq nworb xof \n " ,
" \n " ,
" Reference: http://arxiv.org/abs/1409.3215 \n " ,
" \n " ,
" --- "
]
}
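,
{
" cell_type " : " markdown " ,
" metadata " : { } ,
" source " : [
" A small hedged sketch of the data side of Problem 3: the target string for each training sentence is obtained by reversing every word in place, and a sequence-to-sequence model (an encoder LSTM feeding a decoder LSTM, as in the referenced paper) would then be trained to map the input characters to the mirrored characters. The helper `mirror_words` below is purely illustrative. "
]
} ,
{
" cell_type " : " code " ,
" execution_count " : null ,
" metadata " : {
" collapsed " : true
} ,
" outputs " : [ ] ,
" source " : [
" # Sketch for Problem 3 (illustrative helper): build the mirrored target string. \n " ,
" def mirror_words(sentence): \n " ,
"   # Reverse each space-separated word while keeping the word order. \n " ,
"   return ' '.join(word[::-1] for word in sentence.split(' ')) \n " ,
" \n " ,
" print mirror_words('the quick brown fox')  # eht kciuq nworb xof "
]
}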
] ,
" metadata " : {
" colabVersion " : " 0.3.2 " ,
" colab_default_view " : { } ,
" colab_views " : { } ,
" kernelspec " : {
" display_name " : " Python 2 " ,
" language " : " python " ,
" name " : " python2 "
} ,
" language_info " : {
" codemirror_mode " : {
" name " : " ipython " ,
" version " : 2
} ,
" file_extension " : " .py " ,
" mimetype " : " text/x-python " ,
" name " : " python " ,
" nbconvert_exporter " : " python " ,
" pygments_lexer " : " ipython2 " ,
" version " : " 2.7.12 "
}
} ,
" nbformat " : 4 ,
" nbformat_minor " : 0
}