I want to change the loss function in the ptb_word_lm.py example to tf.nn.nce_loss. Looking at the tf.nn.nce_loss implementation:

def nce_loss(weights, biases, inputs, labels, num_sampled, num_classes,
         num_true=1,
         sampled_values=None,
         remove_accidental_hits=False,
         partition_strategy="mod",
         name="nce_loss"):

I think:

  • the 3rd parameter (inputs) is the logits of the language model,
  • the 4th parameter (labels) is the next word (self._targets) of the language model,
  • num_classes is the vocab_size

But I do not know what the first two parameters, weights and biases, are. How can I adapt tf.nn.nce_loss to the language model? Thanks.

########UPDATES

@Aaron:

Thanks, I have tried the following:

loss = tf.reduce_mean(
    tf.nn.nce_loss(softmax_w, softmax_b, logits,
                   tf.reshape(self._targets, [-1, 1]),
                   64, vocab_size))

According to the documentation:

  • weights: A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings.

  • biases: A Tensor of shape [num_classes]. The class biases.

  • inputs: A Tensor of shape [batch_size, dim]. The forward activations of the input network.

  • labels: A Tensor of type int64 and shape [batch_size, num_true]. The target classes.

  • num_sampled: An int. The number of classes to randomly sample per batch.

  • num_classes: An int. The number of possible classes.
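
For reference, here is a minimal, self-contained sketch of a call matching the signature quoted above. The shapes and values below are illustrative placeholders, not values from the PTB config:

import tensorflow as tf

batch_size, dim, num_classes, num_sampled = 32, 128, 10000, 64

# weights: [num_classes, dim] -- one row of class embeddings per class
nce_w = tf.Variable(tf.truncated_normal([num_classes, dim], stddev=0.1))
# biases: [num_classes] -- one bias per class
nce_b = tf.Variable(tf.zeros([num_classes]))
# inputs: [batch_size, dim] -- forward activations of the network, not logits
hidden = tf.placeholder(tf.float32, [batch_size, dim])
# labels: [batch_size, num_true], int64 -- the target class ids
labels = tf.placeholder(tf.int64, [batch_size, 1])

loss = tf.reduce_mean(
    tf.nn.nce_loss(nce_w, nce_b, hidden, labels, num_sampled, num_classes))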

So,

  • weights is the softmax_w tensor, which has shape (hidden_size, vocab_size)
  • biases is softmax_b, which has shape (vocab_size,)
  • inputs is logits, which has shape (batch_size * num_steps, vocab_size)
  • labels is self._targets, which has shape (batch_size, num_steps); thus we need to reshape it with tf.reshape(self._targets, [-1, 1]) (a quick shape check is sketched right after this list)
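
As a quick sanity check against the documented shapes, the static shapes can be printed before building the loss. This is a sketch; the concrete numbers assume the small PTB config, where size = 200 and vocab_size = 10000:

# Sketch: print the static shapes being fed to nce_loss
# (numbers assume size=200, vocab_size=10000, i.e. the small PTB config)
print(softmax_w.get_shape())  # (200, 10000) -- docs expect [num_classes, dim]
print(softmax_b.get_shape())  # (10000,)     -- matches [num_classes]
print(logits.get_shape())     # (batch_size * num_steps, 10000) -- docs expect [batch_size, dim]
print(tf.reshape(self._targets, [-1, 1]).get_shape())  # (batch_size * num_steps, 1)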

My PTBModel looks like this:

class PTBModel(object):
    def __init__(self, is_training, config):
        self.batch_size = batch_size = config.batch_size
        self.num_steps = num_steps = config.num_steps
        size = config.hidden_size
        vocab_size = config.vocab_size
        self._input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
        self._targets = tf.placeholder(tf.int32, [batch_size, num_steps])

        lstm_cell = rnn_cell.BasicLSTMCell(size, forget_bias=0.0)
        if is_training and config.keep_prob < 1:
            lstm_cell = rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=config.keep_prob)
        cell = rnn_cell.MultiRNNCell([lstm_cell] * config.num_layers)
        self._initial_state = cell.zero_state(batch_size, tf.float32)
        with tf.device("/cpu:0"):
            embedding = tf.get_variable("embedding", [vocab_size, size])
            inputs = tf.nn.embedding_lookup(embedding, self._input_data)
        if is_training and config.keep_prob < 1:
            inputs = tf.nn.dropout(inputs, config.keep_prob)

        outputs = []
        states = []
        state = self._initial_state
        with tf.variable_scope("RNN"):
            for time_step in range(num_steps):
                if time_step > 0: tf.get_variable_scope().reuse_variables()
                (cell_output, state) = cell(inputs[:, time_step, :], state)
                outputs.append(cell_output)
                states.append(state)
        output = tf.reshape(tf.concat(1, outputs), [-1, size])
        softmax_w = tf.get_variable("softmax_w", [size, vocab_size])
        softmax_b = tf.get_variable("softmax_b", [vocab_size])
        logits = tf.matmul(output, softmax_w) + softmax_b

        '''
        #minimize the average negative log probability using sequence_loss_by_example
        loss = seq2seq.sequence_loss_by_example([logits],
                                                [tf.reshape(self._targets, [-1])],
                                                [tf.ones([batch_size * num_steps])],
                                                vocab_size)

        loss = tf.reduce_mean(
            tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
                                         num_sampled, vocabulary_size))
        '''
        loss = tf.reduce_mean(
            tf.nn.nce_loss(softmax_w, softmax_b, logits,
                           tf.reshape(self._targets, [-1, 1]),
                           64, vocab_size))


        self._cost = cost = tf.reduce_sum(loss) / batch_size
        self._final_state = states[-1]
        if not is_training:
            return
        self._lr = tf.Variable(0.0, trainable=False)
        tvars = tf.trainable_variables()
        grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
                                          config.max_grad_norm)
        optimizer = tf.train.GradientDescentOptimizer(self.lr)
        self._train_op = optimizer.apply_gradients(zip(grads, tvars))

However, I got the following error:

Epoch: 1 Learning rate: 1.000
W tensorflow/core/common_runtime/executor.cc:1102] 0x528c980 Compute status: Invalid argument: Index 9971 at offset 0 in Tindices is out of range
     [[Node: model/nce_loss/embedding_lookup = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](model/softmax_w/read, model/nce_loss/concat)]]
W tensorflow/core/common_runtime/executor.cc:1102] 0x528c980 Compute status: Invalid argument: Index 9971 at offset 0 in Tindices is out of range
     [[Node: model/nce_loss/embedding_lookup = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](model/softmax_w/read, model/nce_loss/concat)]]
     [[Node: _send_model/RNN/concat_19_0 = _Send[T=DT_FLOAT, client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1438650956868917036, tensor_name="model/RNN/concat_19:0", _device="/job:localhost/replica:0/task:0/cpu:0"](model/RNN/concat_19)]]
Traceback (most recent call last):
  File "/home/user/works/workspace/python/ptb_word_lm/ptb_word_lm.py", line 235, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "/home/user/works/workspace/python/ptb_word_lm/ptb_word_lm.py", line 225, in main
    verbose=True)
  File "/home/user/works/workspace/python/ptb_word_lm/ptb_word_lm.py", line 189, in run_epoch
    m.initial_state: state})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 315, in run
    return self._run(None, fetches, feed_dict)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 511, in _run
    feed_dict_string)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 564, in _do_run
    target_list)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 586, in _do_call
    e.code)
tensorflow.python.framework.errors.InvalidArgumentError: Index 9971 at offset 0 in Tindices is out of range
     [[Node: model/nce_loss/embedding_lookup = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](model/softmax_w/read, model/nce_loss/concat)]]
Caused by op u'model/nce_loss/embedding_lookup', defined at:
  File "/home/user/works/workspace/python/ptb_word_lm/ptb_word_lm.py", line 235, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "/home/user/works/workspace/python/ptb_word_lm/ptb_word_lm.py", line 214, in main
    m = PTBModel(is_training=True, config=config)
  File "/home/user/works/workspace/python/ptb_word_lm/ptb_word_lm.py", line 122, in __init__
    64, vocab_size))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn.py", line 798, in nce_loss
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn.py", line 660, in _compute_sampled_logits
    weights, all_ids, partition_strategy=partition_strategy)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/embedding_ops.py", line 86, in embedding_lookup
    validate_indices=validate_indices)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 447, in gather
    validate_indices=validate_indices, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2040, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1087, in __init__
    self._traceback = _extract_stack()

Did I miss anything here? Thanks again.

1 Answer

Weights and biases are the weight matrix and bias vector for the output layer of your language model.

https://www.tensorflow.org/versions/r0.8/api_docs/python/nn.html#nce_loss
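
For the PTB model above, that means the weights argument should be the output-layer matrix in [num_classes, dim] orientation. A sketch (not a tested drop-in): softmax_w is stored as [size, vocab_size], so it has to be transposed first:

# Sketch: the output-layer parameters in the orientation nce_loss expects.
# softmax_w has shape [size, vocab_size]; nce_loss wants
# weights of shape [num_classes, dim] = [vocab_size, size].
nce_weights = tf.transpose(softmax_w)  # [vocab_size, size]
nce_biases = softmax_b                 # [vocab_size]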

2 Comments

Hi Aaron, I got an exception; could you take a look at it when you have the chance? Thanks
I think you should be passing output as the third argument instead of logits
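
Combining that comment with the transpose of softmax_w gives the following sketch of the corrected loss (output is the [batch_size * num_steps, size] tensor of hidden activations built just before the softmax layer):

# Sketch: pass the hidden activations (output), not logits, and give
# nce_loss the weight matrix as [num_classes, dim] = [vocab_size, size].
loss = tf.reduce_mean(
    tf.nn.nce_loss(tf.transpose(softmax_w), softmax_b,
                   output,
                   tf.reshape(self._targets, [-1, 1]),
                   64, vocab_size))

With the weights in [vocab_size, size] orientation, the embedding_lookup inside nce_loss gathers one row per sampled class id, so an index like 9971 is in range; the original [size, vocab_size] matrix has only size rows, which is exactly what the "Index 9971 ... out of range" error above is complaining about.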
