
I am trying to add each layer's weights and biases to TensorBoard. Here is what I tried:

tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_outputs])


layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons, 
                                 activation=tf.nn.leaky_relu, use_peepholes = True)
         for layer in range(n_layers)]
# for i, layer in enumerate(layers):
#     tf.summary.histogram('layer{0}'.format(i), tf.convert_to_tensor(layer))


multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
for index,one_lstm_cell in enumerate(multi_layer_cell):
    one_kernel, one_bias = one_lstm_cell.variables
    # I think TensorBoard handles summaries with the same name fine.
    tf.summary.histogram("Kernel", one_kernel)
    tf.summary.histogram("Bias", one_bias)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons]) 
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
outputs = outputs[:,n_steps-1,:] # keep only last output of sequence

But I got the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-43-761df6e116a7> in <module>()
     44 
     45 multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
---> 46 for index,one_lstm_cell in enumerate(multi_layer_cell):
     47     one_kernel, one_bias = one_lstm_cell.variables
     48     # I think TensorBoard handles summaries with the same name fine.

TypeError: 'MultiRNNCell' object is not iterable

I would like to know what I have missed, so that I can add these variables to TensorBoard for visualization. Kindly help me.


1 Answer


MultiRNNCell is indeed not iterable. There are two things to fix in your case. First, the RNN variables are not created until you call tf.nn.dynamic_rnn, so you should retrieve them after that call. Second, with use_peepholes=True each cell has more variables than just a kernel and a bias. You can access all of the RNN variables at once through multi_layer_cell.variables, or each layer's own set through the cell objects stored in layers:

multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
for index, one_lstm_cell in enumerate(layers):
    one_kernel, one_bias, *one_peepholes = one_lstm_cell.variables
    tf.summary.histogram("Kernel", one_kernel)
    tf.summary.histogram("Bias", one_bias)

12 Comments

How can I have the weights displayed on TensorBoard, and is the output layer-wise or something else? Kindly enlighten me, sir.
@JafferWilson Well, that's another matter; it depends on what exactly you want to display and how. tf.summary.histogram will make a histogram out of all the values in the tensor, and after a few plots you would get something like what is shown here.
Ok, I got it. Just one more query: I see there are three different weights, viz. w_f_diag, w_i_diag and w_o_diag. Which of these weights should be considered for visualization, so that I can firmly decide whether my model is going well or not?
Also, sir, I would like to know: if I wanted to see what is going on inside my LSTM model, behind the wall, is there anything I can look at? Like a real-time view of how the associations are made within my multi-cell RNN model? Sir, kindly enlighten me, please.
@JafferWilson Well, in this case you really have 5 weights per layer: the kernel, the bias, and the three you mention, which are for the "peepholes" (if you don't set use_peepholes=True they will not appear). The naming for these, I suppose, follows the convention in Understanding LSTMs (see "Variants on Long Short Term Memory"). Here is a nice discussion on how to use histograms to figure out whether your training is working.
