
I'm building a model in Keras and using some TensorFlow functions (reduce_sum and l2_normalize) in the last layer, when I encountered this problem. I have searched for a solution, but everything I found relates to "Keras tensors".

Here is my code:

import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.applications import VGG16
from tensorflow.python.keras.layers import MaxPooling2D, Conv2D, Dropout, Activation
from tensorflow.python.keras.models import Model

vgg16_model = VGG16(weights='imagenet', include_top=False, input_shape=input_shape)

fire8 = extract_layer_from_model(vgg16_model, layer_name='block4_pool')

pool8 = MaxPooling2D((3, 3), strides=(2, 2), name='pool8')(fire8.output)

fc1 = Conv2D(64, (6, 6), strides=(1, 1), padding='same', name='fc1')(pool8)
fc1 = Dropout(rate=0.5)(fc1)

fc2 = Conv2D(3, (1, 1), strides=(1, 1), padding='same', name='fc2')(fc1)
fc2 = Activation('relu')(fc2)
fc2 = Conv2D(3, (15, 15), padding='valid', name='fc_pooling')(fc2)

fc2_norm = K.l2_normalize(fc2, axis=3)

est = tf.reduce_sum(fc2_norm, axis=(1, 2))
est = K.l2_normalize(est)

FC_model = Model(inputs=vgg16_model.input, outputs=est)

and then the error:

ValueError: Output tensors to a Model must be the output of a TensorFlow Layer (thus holding past layer metadata). Found: Tensor("l2_normalize_3:0", shape=(?, 3), dtype=float32)

I noticed that without passing fc2 through these functions, the model works fine:

FC_model = Model(inputs=vgg16_model.input, outputs=fc2)

Can someone please explain this problem and suggest how to fix it?

2 Answers


I have found a workaround for the problem. For anyone who encounters the same issue: you can use a Lambda layer to wrap your TensorFlow operations. This is what I did:

from tensorflow.python.keras.layers import Lambda

def norm(fc2):
    fc2_norm = K.l2_normalize(fc2, axis=3)
    illum_est = tf.reduce_sum(fc2_norm, axis=(1, 2))
    illum_est = K.l2_normalize(illum_est)
    return illum_est

illum_est = Lambda(norm)(fc2)
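
With the TensorFlow ops wrapped in a Lambda layer, the output tensor carries the layer metadata Keras needs, so the model builds cleanly (a minimal sketch continuing the code above):

# illum_est is now the output of a Keras layer, so Model() accepts it
FC_model = Model(inputs=vgg16_model.input, outputs=illum_est)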

1 Comment

Had the same problem when using the Concatenate layer incorrectly: the correct form is Concatenate()([<previous_layers>]). If you forget the leading (), you'll get this error.
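
A minimal sketch of the difference (with hypothetical inputs x1 and x2, not from the question):

from tensorflow.python.keras.layers import Input, Concatenate

x1 = Input(shape=(8,))
x2 = Input(shape=(8,))

merged = Concatenate()([x1, x2])  # correct: instantiate the layer, then call it on the list
# merged = Concatenate([x1, x2])  # wrong: the list goes to the constructor, not to a layer call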

I had this issue because I was adding two tensors with x1 + x2 somewhere in my model instead of using Add()([x1, x2]).

That solved the problem.
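
A minimal sketch of the fix (with hypothetical inputs x1 and x2):

from tensorflow.python.keras.layers import Input, Add

x1 = Input(shape=(8,))
x2 = Input(shape=(8,))

# summed = x1 + x2        # plain tensor op; in older Keras the result loses layer metadata
summed = Add()([x1, x2])  # layer op; the result keeps the metadata Model() requires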

1 Comment

Something similar happened to me; not adding, but slicing.
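
In the same spirit, slicing a tensor directly (e.g. x[:, :5]) yields a plain tensor; wrapping the slice in a Lambda layer keeps it a layer output (a sketch with a hypothetical input x):

from tensorflow.python.keras.layers import Input, Lambda

x = Input(shape=(10,))

sliced = Lambda(lambda t: t[:, :5])(x)  # the Lambda layer attaches the metadata Model() needs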
