
I have a simple convolutional network (an autoencoder), and I want to split the model into two parts, an encoder and a decoder. Between the encoder and the decoder I add a random image to the output of the encoder and then send the result to the decoder part, but when I try to build a model for the decoder it produces the following error:

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_2:0", shape=(?, 28, 28, 1), dtype=float32) at layer "input_2". The following previous layers were accessed without issue: []

The error is raised when I create the decoder model. I cannot understand why it is produced. Please help me with this error.

from keras.layers import Input, Concatenate, GaussianNoise, Dropout
from keras.layers import Conv2D
from keras.models import Model
from keras.datasets import mnist
from keras.callbacks import TensorBoard
from keras import backend as K
from keras import layers
import matplotlib.pyplot as plt
import tensorflow as tf
import keras as Kr
import numpy as np
import pylab as pl
import matplotlib.cm as cm

#-----------------building w train---------------------------------------------
w_main = np.random.randint(2,size=(1,4,4,1))
w_main=w_main.astype(np.float32)
w_expand=np.zeros((1,28,28,1),dtype='float32')
w_expand[:,0:4,0:4]=w_main
w_expand.reshape(1,28,28,1)
w_expand=np.repeat(w_expand,49999,0)

#-----------------building w validation---------------------------------------------
w_valid = np.random.randint(2,size=(1,4,4,1))
w_valid=w_valid.astype(np.float32)
wv_expand=np.zeros((1,28,28,1),dtype='float32')
wv_expand[:,0:4,0:4]=w_valid
wv_expand.reshape(1,28,28,1)
wv_expand=np.repeat(wv_expand,9999,0)

#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,4,4,1))
w_test=w_test.astype(np.float32)
wt_expand=np.zeros((1,28,28,1),dtype='float32')
wt_expand[:,0:4,0:4]=w_test
wt_expand.reshape(1,28,28,1)
wt_expand=np.repeat(wt_expand,10000,0)

#-----------------------encoder------------------------------------------------
#------------------------------------------------------------------------------
wtm=Input((28,28,1))
image = Input((28, 28, 1))
conv1 = Conv2D(16, (3, 3), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(32, (3, 3), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(8, (3, 3), activation='relu', padding='same', name='convl3e')(conv2)
DrO1=Dropout(0.25)(conv3)
encoded =  Conv2D(1, (3, 3), activation='relu', padding='same',name='reconstructed_I')(DrO1)


#-----------------------adding w---------------------------------------
#add_const = Kr.layers.Lambda(lambda x: x + Kr.backend.constant(w_expand))
#encoded_merged=Kr.layers.Add()([encoded,wtm])

add_const = Kr.layers.Lambda(lambda x: x + wtm)
encoded_merged = add_const(encoded)

encoder=Model(inputs=image, outputs=encoded_merged)
encoder.summary()

#-----------------------decoder------------------------------------------------
#------------------------------------------------------------------------------

#encoded_merged = Input((28, 28, 2))
deconv1 = Conv2D(16, (3, 3), activation='relu', padding='same', name='convl1d')(encoded_merged)
deconv2 = Conv2D(32, (3, 3), activation='relu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(8, (3, 3), activation='relu',padding='same', name='convl3d')(deconv2)
DrO2=Dropout(0.25)(deconv3)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same', name='decoder_output')(DrO2) 

decoder=Model(inputs=encoded_merged, outputs=decoded)  # <-- this line raises the Graph disconnected error
#decoder.summary()

NEW CODE (edited after the suggestions in the answer below):

from keras.layers import Input, Concatenate, GaussianNoise, Dropout
from keras.layers import Conv2D
from keras.models import Model
from keras.datasets import mnist
from keras.callbacks import TensorBoard
from keras import backend as K
from keras import layers
import matplotlib.pyplot as plt
import tensorflow as tf
import keras as Kr
import numpy as np
import pylab as pl
import matplotlib.cm as cm
import keract
from keras import optimizers
from keras import regularizers
from keras.callbacks import EarlyStopping

from tensorflow.python.keras.layers import Lambda

#-----------------building w train---------------------------------------------
w_main = np.random.randint(2,size=(1,14,14,1))
w_main=w_main.astype(np.float32)
w_expand=np.zeros((1,28,28,1),dtype='float32')
w_expand[:,0:14,0:14]=w_main
w_expand.reshape(1,28,28,1)
w_expand=np.repeat(w_expand,49999,0)

#-----------------building w validation---------------------------------------------
w_valid = np.random.randint(2,size=(1,14,14,1))
w_valid=w_valid.astype(np.float32)
wv_expand=np.zeros((1,28,28,1),dtype='float32')
wv_expand[:,0:14,0:14]=w_valid
wv_expand.reshape(1,28,28,1)
wv_expand=np.repeat(wv_expand,9999,0)

#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,14,14,1))
w_test=w_test.astype(np.float32)
wt_expand=np.zeros((1,28,28,1),dtype='float32')
wt_expand[:,0:14,0:14]=w_test
wt_expand.reshape(1,28,28,1)
#wt_expand=np.repeat(wt_expand,10000,0)

#-----------------------encoder------------------------------------------------
#------------------------------------------------------------------------------
wtm=Input((28,28,1))
image = Input((28, 28, 1))
conv1 = Conv2D(16, (3, 3), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(32, (3, 3), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(8, (3, 3), activation='relu', padding='same', name='convl3e')(conv2)
#conv3 = Conv2D(8, (3, 3), activation='relu', padding='same', name='convl3e', kernel_initializer='Orthogonal',bias_initializer='glorot_uniform')(conv2)
DrO1=Dropout(0.25)(conv3)
encoded =  Conv2D(1, (3, 3), activation='relu', padding='same',name='reconstructed_I')(DrO1)


#-----------------------adding watermark---------------------------------------
#add_const = Kr.layers.Lambda(lambda x: x + Kr.backend.constant(w_expand))
#encoded_merged=Kr.layers.Add()([encoded,wtm])

add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
encoded_merged = add_const([encoded,wtm])
encoder=Model(inputs=[image,wtm], outputs= encoded_merged)
encoder.summary()

#-----------------------decoder------------------------------------------------
#------------------------------------------------------------------------------
deconv_input=Input((28,28,1))
#encoded_merged = Input((28, 28, 2))
deconv1 = Conv2D(16, (3, 3), activation='relu', padding='same', name='convl1d',kernel_regularizer=regularizers.l2(0.001), kernel_initializer='Orthogonal')(deconv_input)
deconv2 = Conv2D(32, (3, 3), activation='relu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(8, (3, 3), activation='relu',padding='same', name='convl3d')(deconv2)
DrO2=Dropout(0.25)(deconv3)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same', name='decoder_output')(DrO2) 

decoder=Model(inputs=deconv_input, outputs=decoded)
#decoder.summary()
encoded_merged = encoder([image,wtm])
decoded = decoder(encoded_merged)

model=Model(inputs=[image,wtm],outputs=decoded)
#----------------------w extraction------------------------------------
convw1 = Conv2D(16, (3,3), activation='relu', padding='same', name='conl1w',kernel_regularizer=regularizers.l2(0.001), kernel_initializer='Orthogonal')(decoded)
convw2 = Conv2D(32, (3, 3), activation='relu', padding='same', name='convl2w')(convw1)
convw3 = Conv2D(8, (3, 3), activation='relu', padding='same', name='conl3w')(convw2)
DrO3=Dropout(0.25)(convw3)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W')(DrO3)  
# reconsider activation (is W positive?)
# should be filter=1 to match W
watermark_extraction=Model(inputs=[image,wtm],outputs=[decoded,pred_w])


#----------------------training the model--------------------------------------
#------------------------------------------------------------------------------
#----------------------Data preparesion----------------------------------------

(x_train, _), (x_test, _) = mnist.load_data()
x_validation=x_train[1:10000,:,:]
x_train=x_train[10001:60000,:,:]
#
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_validation = x_validation.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_validation = np.reshape(x_validation, (len(x_validation), 28, 28, 1))

#---------------------compile and train the model------------------------------
# is accuracy sensible metric for this model?
adadelta=optimizers.Adadelta(lr=1.0,decay=1/1000)
watermark_extraction.compile(optimizer=adadelta, loss={'decoder_output':'mse','reconstructed_W':'mse'}, metrics=['mae'])
watermark_extraction.fit([x_train,w_expand], [x_train,w_expand],
          epochs=10,
          batch_size=32, 
          validation_data=([x_validation,wv_expand], [x_validation,wv_expand]),
          callbacks=[TensorBoard(log_dir='E:/tmp/AutewithW200', histogram_freq=0, write_graph=False),EarlyStopping(monitor='val_loss', patience=10,min_delta=0)])
model.summary()

NEW ERROR:

ValueError: Unknown entry in loss dictionary: "decoder_output". Only expected the following keys: ['model_14', 'reconstructed_W']

1 Answer

The Lambda layer is bypassing Keras's graph tracking here: you are using the tensor wtm inside the lambda expression instead of passing it to the layer as an input. Pass both tensors to the layer instead:

add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
encoded_merged = add_const([encoded,wtm])

Or simply use the Add layer (imported from keras.layers):

encoded_merged = Add()([encoded,wtm])

You must also make wtm an input of the encoder model:

encoder = Model(inputs=[image,wtm], outputs = encoded_merged)

The decoder model should start from its own Input tensor, not from a tensor taken from the middle of another graph:

deconv_inputs = Input((28, 28, 1))  # the shape of encoded_merged
deconv1 =  Conv2D(16, (3, 3), activation='relu', padding='same', name='convl1d')(deconv_inputs)
....

decoder = Model(inputs=deconv_inputs, outputs=decoded)

You can then create the autoencoder:

wtm=Input((28,28,1))
image = Input((28, 28, 1))    

encoded_merged = encoder([image,wtm])
decoded = decoder(encoded_merged)

autoencoder = Model([image,wtm], decoded)
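
Putting the pieces together, here is a minimal sketch of the corrected wiring (the layer configurations are copied from the question's code; the sub-model names are just illustrative):

from keras.layers import Input, Conv2D, Dropout, Add
from keras.models import Model

# encoder: takes the image and the watermark, outputs encoded + wtm
image = Input((28, 28, 1))
wtm = Input((28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(image)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = Dropout(0.25)(x)
encoded = Conv2D(1, (3, 3), activation='relu', padding='same')(x)
encoded_merged = Add()([encoded, wtm])          # both tensors are passed to the layer
encoder = Model(inputs=[image, wtm], outputs=encoded_merged, name='encoder')

# decoder: starts from its own Input with the shape of encoded_merged
deconv_input = Input((28, 28, 1))
y = Conv2D(16, (3, 3), activation='relu', padding='same')(deconv_input)
y = Conv2D(32, (3, 3), activation='relu', padding='same')(y)
y = Conv2D(8, (3, 3), activation='relu', padding='same')(y)
y = Dropout(0.25)(y)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(y)
decoder = Model(inputs=deconv_input, outputs=decoded, name='decoder')

# autoencoder: chain the two sub-models
autoencoder = Model([image, wtm], decoder(encoder([image, wtm])))
autoencoder.summary()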

8 Comments

I changed my code based on your suggestions and put it above, but now it produces a new error. I do not know the reason, but I think it is related to these changes. Could you please tell me why this error is produced and how to solve it? I have put the new code and the error above.
I'm waiting for your help; I really need it. Please answer me.
Well... the error is quite clear. You used decoder_output in the loss dictionary, while it only accepts model_14. (I suggest you name your models and check the error message again to see which key is expected in the loss.)
That is just the name of the output tensor. Before, the name was decoder_output (as you defined in the last Conv2D), but since you're now creating a model from the output of another model, the final automatic name is model_## (the name of that model). If you give your models names, like decoder = Model(inputs=..., outputs=..., name='decoder'), the error message will show the correct name as an option instead of 'model_14'.
Also, for the loss argument of compile you don't need to use the names at all. You can simply pass a list in the same order as the outputs of watermark_extraction, like loss=['mse','mse']. Or, since you're using the same loss for both outputs, simply loss='mse'.
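
For example, assuming the decoder model is given name='decoder' when it is built, any of the following compile calls would be accepted (this is just a sketch that reuses the variables defined in the question's new code):

# option 1: build the decoder with an explicit name (and rebuild the chained
# models so the output picks up the new name), then keep a loss dictionary
decoder = Model(inputs=deconv_input, outputs=decoded, name='decoder')
watermark_extraction.compile(optimizer=adadelta,
                             loss={'decoder': 'mse', 'reconstructed_W': 'mse'},
                             metrics=['mae'])

# option 2: pass the losses as a list, in the same order as the outputs
watermark_extraction.compile(optimizer=adadelta, loss=['mse', 'mse'], metrics=['mae'])

# option 3: one loss shared by all outputs
watermark_extraction.compile(optimizer=adadelta, loss='mse', metrics=['mae'])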
