I have an auto encoder defined like this
inputs = Input(batch_shape=(1, timesteps, input_dim))
encoded = LSTM(4, return_sequences=True)(inputs)
encoded = LSTM(3, return_sequences=True)(encoded)
encoded = LSTM(2)(encoded)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(3, return_sequences=True)(decoded)
decoded = LSTM(4, return_sequences=True)(decoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
I want the encoder to be connected to an LSTM layer like this:
f_input = Input(batch_shape=(1, timesteps, input_dim))
encoder_input = encoder(inputs=f_input)
single_lstm_layer = LSTM(50, kernel_initializer=RandomUniform(minval=-0.05, maxval=0.05))(encoder_input)
drop_1 = Dropout(0.33)(single_lstm_layer)
output_layer = Dense(12, name="Output_Layer")(drop_1)
final_model = Model(inputs=[f_input], outputs=[output_layer])
But it gives me a dimension error:
Input 0 is incompatible with layer lstm_3: expected ndim=3, found ndim=2
How can I do this properly?
The output of the encoder (i.e. encoder_input) has shape (1, 2), because the last LSTM layer of the encoder does not return sequences. That's why it cannot be fed to the following LSTM layer, which expects a 3-D input. You need to reshape it to (1, ?, 2) to be able to feed it to an LSTM layer. One way to achieve this is to apply a RepeatVector layer to encoder_input. However, this may or may not be an appropriate thing to do depending on the problem you are working on and the results you want to achieve.
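A minimal sketch of the fix, assuming tensorflow.keras and placeholder values for timesteps and input_dim (which the question does not specify). RepeatVector tiles the (1, 2) encoder output along a new time axis, giving the (1, timesteps, 2) tensor the next LSTM expects:

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, RepeatVector, Dropout, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import RandomUniform

timesteps, input_dim = 10, 5  # hypothetical values; the question leaves these undefined

# Encoder from the question: the final LSTM(2) has return_sequences=False,
# so the encoder output is 2-D with shape (batch, 2).
inputs = Input(batch_shape=(1, timesteps, input_dim))
encoded = LSTM(4, return_sequences=True)(inputs)
encoded = LSTM(3, return_sequences=True)(encoded)
encoded = LSTM(2)(encoded)
encoder = Model(inputs, encoded)

# Downstream model: insert RepeatVector between the encoder and the LSTM.
f_input = Input(batch_shape=(1, timesteps, input_dim))
encoder_output = encoder(f_input)                   # shape (1, 2) — 2-D, caused the error
repeated = RepeatVector(timesteps)(encoder_output)  # shape (1, timesteps, 2) — 3-D
single_lstm_layer = LSTM(
    50, kernel_initializer=RandomUniform(minval=-0.05, maxval=0.05)
)(repeated)
drop_1 = Dropout(0.33)(single_lstm_layer)
output_layer = Dense(12, name="Output_Layer")(drop_1)
final_model = Model(inputs=f_input, outputs=output_layer)

pred = final_model.predict(np.random.rand(1, timesteps, input_dim))
print(pred.shape)  # (1, 12)
```

Note that RepeatVector simply copies the same 2-vector at every timestep; if you instead want a sequence of per-timestep codes, change the encoder's last LSTM to return_sequences=True rather than repeating its final state.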