I'm trying to use the output of model.predict as the input to another model.
This is for debugging purposes, which is why I'm not using get_layer.output or one global model that unifies these two models.
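(For reference, that unified alternative would look roughly like the sketch below; create_model() is my helper that returns Inception with the final layers cut off, and the Dense layers match the second model further down.)

# Rough sketch of the single, unified model I'm deliberately not using here
base_model = create_model()                      # Inception with the final layers cut off
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(params['classes'], activation='softmax')(x)
unified_model = Model(inputs=base_model.input, outputs=predictions)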
I'm running into this error:
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("input_1:0", shape=(?, 10, 10, 2048), dtype=float32) is not an element of this graph.
Here is my current code:
I'm using the function below as a bottleneck generator:
from keras.layers import Input, Dense, GlobalAveragePooling2D
from keras.models import Model
from keras import optimizers

def test_model_predictions(params):
    batch_size = params['batch_size']      # params just contains model parameters
    base_model = create_model()            # an instance of Inception with the final layers cut off
    train, _ = create_generators(params)   # creates a generator over the image files
    while True:
        for x, y in train:
            predict1 = base_model.predict(x)
            yield (predict1, y)
def training_bottleneck():
    bottleneck_train = test_model_predictions(params)  # generator

    def bottleneck_model_1():
        bottleneck_input = Input(shape=(10, 10, 2048))
        x = GlobalAveragePooling2D()(bottleneck_input)
        x = Dense(1024, activation='relu')(x)
        predictions = Dense(params['classes'], activation='softmax')(x)
        model = Model(inputs=bottleneck_input, outputs=predictions)
        return model

    model2 = bottleneck_model_1()
    model2.compile(optimizer=optimizers.SGD(lr=0.0001, momentum=0.99),
                   loss='categorical_crossentropy', metrics=['accuracy'])

    print('Starting model training')
    history = model2.fit_generator(bottleneck_train,
                                   steps_per_epoch=1000,
                                   epochs=85,
                                   shuffle=False,
                                   verbose=2)
    return history
Any clues as to how to make this work?
Thank you.
EDIT: There seems to be some confusion about why I'm doing it this way, so I'll add some more information.
I was specifically using predict because I noticed a discrepancy between saving the model.predict values (bottleneck values) to an HDF5 file and then loading those values into another model (the second half of the original model), versus loading the entire model and just freezing the top half (so the first half can't train). Despite using the same hyperparameters and being essentially the same model, the full model trains properly and converges, while the model loading the bottleneck values wasn't really improving.
So I was trying to see whether using model.predict to save the bottleneck values is the cause of the disparity between the two models.
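Roughly, the HDF5-based pipeline I was comparing against looks like the sketch below (the file name, step count, and labels dataset are placeholders for my actual setup; base_model, train, and model2 are the same objects as above):

import h5py
import numpy as np

# Save the bottleneck features once (placeholder file name and step count)...
feats, labels = [], []
for _ in range(1000):                      # same number of batches as steps_per_epoch
    x, y = next(train)
    feats.append(base_model.predict(x))
    labels.append(y)
with h5py.File('bottleneck_features.h5', 'w') as f:
    f.create_dataset('features', data=np.concatenate(feats))
    f.create_dataset('labels', data=np.concatenate(labels))

# ...then load them back and train the small top model directly on the arrays.
with h5py.File('bottleneck_features.h5', 'r') as f:
    features = f['features'][:]
    labels = f['labels'][:]
history = model2.fit(features, labels, epochs=85, batch_size=params['batch_size'])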