I wrote an architecture similar to the code below without using the Sequential API, but it's returning a ValueError.
from tensorflow.keras import layers
from tensorflow.keras.models import Model

# Input Memory Representation.
input_story = layers.Input(shape=(story_maxlen,), dtype='int32')
input_story_0 = layers.Embedding(input_dim=vocab_size, output_dim=64)(input_story)
input_story_1 = layers.Dropout(0.3)(input_story_0)
input_question = layers.Input(shape=(query_maxlen,), dtype='int32')
input_question_0 = layers.Embedding(input_dim=vocab_size, output_dim=64)(input_question)
input_question_1 = layers.Dropout(0.3)(input_question_0)
match = layers.dot([input_story_1, input_question_1], axes=(2, 2))
match = layers.Activation('softmax')(match)
# Output Memory Representation.
input_story_11 = layers.Input(shape=(story_maxlen,), dtype='int32')
input_story_12 = layers.Embedding(input_dim=vocab_size, output_dim=query_maxlen)(input_story_11)
input_story_13 = layers.Dropout(0.3)(input_story_12)
add = layers.add([match, input_story_13])
add = layers.Permute((2, 1))(add)
# Generating Final Predictions
x = layers.concatenate([add, input_question_1])
x = layers.LSTM(32)(x)
x = layers.Dropout(0.3)(x)
x = layers.Dense(vocab_size)(x)
x = layers.Activation('softmax')(x)
model = Model(inputs=[input_story, input_question], outputs=x)
This is the error I was getting:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_143:0", shape=(None, 552), dtype=int32) at layer "input_143". The following previous layers were accessed without issue: ['input_142', 'input_141', 'embedding_141', 'embedding_140', 'dropout_152']
I've re-checked all the input layers and their sizes and everything seems fine, but I can't figure out why I'm getting the ValueError. Can anyone help out?
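For reference, the same class of error can be reproduced with a minimal standalone sketch. This is just my assumption of what generally triggers "Graph disconnected" (a tensor in the graph that can't be reached from the declared inputs); the variable names here are illustrative and not from my model:

from tensorflow.keras import layers
from tensorflow.keras.models import Model

# Two Input layers; the output depends on both of them.
a = layers.Input(shape=(4,), dtype='float32')
b = layers.Input(shape=(4,), dtype='float32')
summed = layers.add([a, b])
out = layers.Dense(1)(summed)

# Passing only `a` as an input leaves `b` unreachable from the declared
# inputs, so Keras raises ValueError: Graph disconnected: cannot obtain
# value for tensor ... at layer "input_...".
model = Model(inputs=[a], outputs=out)

This snippet fails with the same ValueError for me, but I still can't see which tensor is disconnected in my actual model.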