
I wrote an architecture similar to this code here without using a Sequential model, but it's returning a ValueError:

# Input Memory Representation.
input_story = layers.Input(shape=(story_maxlen,), dtype='int32')
input_story_0 = layers.Embedding(input_dim=vocab_size, output_dim=64)(input_story)
input_story_1 = layers.Dropout(0.3)(input_story_0)

input_question = layers.Input(shape=(query_maxlen,), dtype='int32')
input_question_0 = layers.Embedding(input_dim=vocab_size, output_dim=64)(input_question)
input_question_1 = layers.Dropout(0.3)(input_question_0)

match = layers.dot([input_story_1, input_question_1], axes=(2, 2))
match = layers.Activation('softmax')(match)

# Output Memory Representation.
input_story_11 = layers.Input(shape=(story_maxlen,), dtype='int32')
input_story_12 = layers.Embedding(input_dim=vocab_size, output_dim=query_maxlen)(input_story_11)
input_story_13 = layers.Dropout(0.3)(input_story_12)

add = layers.add([match, input_story_13])
add = layers.Permute((2, 1))(add)

# Generating Final Predictions
x = layers.concatenate([add, input_question_1])
x = layers.LSTM(32)(x)
x = layers.Dropout(0.3)(x)
x = layers.Dense(vocab_size)(x)
x = layers.Activation('softmax')(x)

model = Model(inputs=[input_story, input_question], outputs=x)

This is the error I was getting:

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_143:0", shape=(None, 552), dtype=int32) at layer "input_143". The following previous layers were accessed without issue: ['input_142', 'input_141', 'embedding_141', 'embedding_140', 'dropout_152']

I've re-checked all the input layers and sizes and everything seems fine, but I can't figure out why I'm getting this ValueError. Can anyone help out?

1 Answer

Your code has three Input layers, but you pass only two of them to the model's inputs.

# Output Memory Representation.
input_story_11 = layers.Input(shape=(story_maxlen,), dtype='int32')

This layer is never added to the model's inputs, hence the graph disconnected error.

Add this layer to your inputs:

model = Model(inputs=[input_story, input_question, input_story_11], outputs=x)
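For context, a stripped-down example reproduces the same failure mode (layer names and dimensions here are illustrative, not from the question):

```python
from tensorflow.keras import layers, Model

a = layers.Input(shape=(4,))
b = layers.Input(shape=(4,))
c = layers.Input(shape=(4,))  # a third Input that the graph below uses

# The output tensor depends on all three inputs.
x = layers.add([layers.Dense(8)(a), layers.Dense(8)(b), layers.Dense(8)(c)])

# Listing only two of the three inputs reproduces the error:
disconnected = None
try:
    Model(inputs=[a, b], outputs=x)
except ValueError as err:
    # e.g. "Graph disconnected: cannot obtain value for tensor ..."
    disconnected = str(err)

# Listing all three inputs fixes it:
model = Model(inputs=[a, b, c], outputs=x)
```

Keras walks backwards from the output tensor and must reach every `Input` it finds along the way via `inputs`; any `Input` that is used in the graph but not listed triggers the ValueError.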

4 Comments

Thank you, but if you look at the architecture from this post, they pass only two layers to the model inputs. I was trying to replicate that. Can you check again?
In that post I see only two Input layers, input_sequence and question, while you have three.
So am I doing something wrong here? I'm just trying to replicate that exact architecture without using Sequential. What do I need to change to make the two match?
@user_12 I didn't go through the entire code, but you are passing a third input, input_story_13, to the add layer, while in the blog it is different. You might want to start there.
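Following up on the comments: in the standard end-to-end memory-network layout the third embedding (the output memory representation) reuses the same story input, so only two Input layers exist. A functional-API sketch of that two-input variant (the vocab/length values are toy numbers, not from the question):

```python
from tensorflow.keras import layers, Model

vocab_size, story_maxlen, query_maxlen = 22, 68, 4  # illustrative sizes

input_story = layers.Input(shape=(story_maxlen,), dtype='int32')
input_question = layers.Input(shape=(query_maxlen,), dtype='int32')

# Input memory representation (embedding A): (batch, story_maxlen, 64)
story_m = layers.Dropout(0.3)(
    layers.Embedding(input_dim=vocab_size, output_dim=64)(input_story))

# Question encoding (embedding B): (batch, query_maxlen, 64)
question_u = layers.Dropout(0.3)(
    layers.Embedding(input_dim=vocab_size, output_dim=64)(input_question))

# Output memory representation (embedding C) REUSES input_story,
# so no third Input layer is created: (batch, story_maxlen, query_maxlen)
story_c = layers.Dropout(0.3)(
    layers.Embedding(input_dim=vocab_size, output_dim=query_maxlen)(input_story))

# Attention over story positions: (batch, story_maxlen, query_maxlen)
match = layers.Activation('softmax')(
    layers.dot([story_m, question_u], axes=(2, 2)))

# Combine attention with the output memory, then align axes with the question.
response = layers.add([match, story_c])
response = layers.Permute((2, 1))(response)  # (batch, query_maxlen, story_maxlen)

# Final prediction head.
answer = layers.concatenate([response, question_u])
answer = layers.LSTM(32)(answer)
answer = layers.Dropout(0.3)(answer)
answer = layers.Dense(vocab_size)(answer)
answer = layers.Activation('softmax')(answer)

model = Model(inputs=[input_story, input_question], outputs=answer)
```

Because every tensor in the graph traces back to `input_story` or `input_question`, passing just those two inputs builds without the graph disconnected error.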

