
I'm trying to use the output of model.predict as the input to another model. This is for debugging purposes, which is why I'm not using get_layer.output or a single model that unifies these two models.

I'm running into this error:

TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("input_1:0", shape=(?, 10, 10, 2048), dtype=float32) is not an element of this graph.

Here is my current code:

I'm using the function below as a bottleneck generator:

def test_model_predictions(params):
    batch_size = params['batch_size']  # params just contains model parameters
    base_model = create_model()  # creates an instance of the Inception model with the final layers cut off
    train, _ = create_generators(params)  # creates a generator for image files

    while True:
        for x, y in train:
            predict1 = base_model.predict(x)
            yield (predict1, y)
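Keras aside, the data flow in the generator above is: precompute stage-one features for each batch, then yield (features, labels) for the second model to train on. Here is a minimal plain-Python sketch of that pattern, where `fake_backbone` is a hypothetical stand-in for `base_model.predict`:

```python
def fake_backbone(batch):
    # hypothetical stand-in for base_model.predict:
    # maps each input sample to a small feature vector
    return [[x * 2.0, x + 1.0] for x in batch]

def bottleneck_generator(batches):
    # batches: iterable of (inputs, labels) pairs, like the `train` generator
    while True:
        for x, y in batches:
            features = fake_backbone(x)  # stage-one "predictions"
            yield features, y            # these become inputs for the second model

gen = bottleneck_generator([([1.0, 2.0], [0, 1])])
features, labels = next(gen)
print(features)  # [[2.0, 2.0], [4.0, 3.0]]
```

The second model never sees the raw images, only the precomputed feature arrays, which is exactly the relationship between `test_model_predictions` and `model2` in the question.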




def training_bottleneck():
    bottleneck_train = test_model_predictions(params)  # generator

    def bottleneck_model_1():
        bottleneck_input = Input(shape=(10, 10, 2048))
        x = GlobalAveragePooling2D()(bottleneck_input)
        x = Dense(1024, activation='relu')(x)
        predictions = Dense(params['classes'], activation='softmax')(x)
        model = Model(inputs=bottleneck_input, outputs=predictions)
        return model

    model2 = bottleneck_model_1()
    model2.compile(optimizer=optimizers.SGD(lr=0.0001, momentum=0.99),
                   loss='categorical_crossentropy', metrics=['accuracy'])

    print('Starting model training')
    history = model2.fit_generator(bottleneck_train,
                                   steps_per_epoch=1000,
                                   epochs=85,
                                   shuffle=False,
                                   verbose=2)
    return history

Any clues as to how to make this work?

Thank you.

EDIT: There seems to be some confusion as to why I'm doing it this way, so I'll add some more information.

I was specifically using predict because I noticed a discrepancy when saving the model.predict values (bottleneck values) to an hdf5 file and then loading those values into another model (the second half of the original model),

versus just loading the entire model and freezing the top half (so the first half can't train). Despite using the same hyperparameters and the models being essentially identical, the full model trains properly and converges, while the model loading the bottleneck values wasn't really improving. So I was trying to see if using model.predict to save bottleneck values was the cause of the disparity between the two models.
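One sanity check for the debugging comparison described above is that the bottleneck values survive the save/load round trip unchanged before blaming model.predict itself. A minimal sketch of that round trip, using pickle as a stand-in for the HDF5 file (the function names here are hypothetical, not part of the original code):

```python
import os
import pickle
import tempfile

def save_bottleneck(features, labels, path):
    # stand-in for writing bottleneck values to an hdf5 file
    with open(path, "wb") as f:
        pickle.dump({"features": features, "labels": labels}, f)

def load_bottleneck(path):
    # stand-in for reading the bottleneck values back for the second model
    with open(path, "rb") as f:
        data = pickle.load(f)
    return data["features"], data["labels"]

features = [[0.1, 0.2], [0.3, 0.4]]  # pretend these came from base_model.predict
labels = [0, 1]
path = os.path.join(tempfile.mkdtemp(), "bottleneck.pkl")
save_bottleneck(features, labels, path)
loaded_features, loaded_labels = load_bottleneck(path)
print(loaded_features == features)  # True
```

If the values come back identical, the disparity between the two training runs has to come from somewhere else (e.g. preprocessing, shuffling, or graph scope), not from the serialization step.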

  • Basically, it means that the variable is not defined in the same graph scope as the one you are running right now. Try to define them in the same graph scope. Commented Jun 13, 2018 at 15:58
  • I'm a little confused as to how to do this properly while still using model.predict specifically as the input for the second model (see updated OP). Can we not have two separate graphs work sequentially? The first model.predict outputs a numpy array and should be a separate graph. Since the output is an array and not a tensor, I thought it shouldn't have any connection to the second model, which should be a separate graph. So I'm confused as to why everything needs to be within one graph. Is there a way to create a single graph and still use model.predict as an input to the second model? Commented Jun 14, 2018 at 16:39

1 Answer


So basically you want to use the prediction of one model as an input for a second model? In your code you mix up Tensors and plain Python data structures, which cannot work, because you have to build the whole computation graph with Tensors.

I guess you want to use the "prediction" of the first model and add some other features to make the prediction with the second model? In that case you can do something like this:

from keras.models import Model
from keras.layers import Input, Dense, concatenate

input_1 = Input(shape=(32,))
input_2 = Input(shape=(64,))
out_1 = Dense(48)(input_1)               # your model 1
x = concatenate([out_1, input_2])        # stack both feature vectors
x = Dense(1, activation='sigmoid')(x)    # your model 2
model = Model(inputs=[input_1, input_2], outputs=[x])  # add out_1 to outputs if you also want the output of model 1


history = model.fit([x_train, x_meta_train], y_train, ...)

1 Comment

Thank you. I added more information in the OP as to why I'm trying to use model.predict specifically as the input.
