
I realize now that implementing it like this would have been a good idea. However, I already have a trained and fine-tuned autoencoder that looks like this:

Model: "autoencoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
user_input (InputLayer)      [(None, 5999)]            0         
_________________________________________________________________
user_e00 (Dense)             (None, 64)                384000    
_________________________________________________________________
user_e01 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_e02 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_e03 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_out (Dense)             (None, 32)                2080      
_________________________________________________________________
emb_dropout (Dropout)        (None, 32)                0         
_________________________________________________________________
user_d00 (Dense)             (None, 64)                2112      
_________________________________________________________________
user_d01 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_d02 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_d03 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_res (Dense)             (None, 5999)              389935    
=================================================================
Total params: 803,087
Trainable params: 0
Non-trainable params: 803,087
_________________________________________________________________

Now I want to split it into encoder and decoder. I believe I already found the right way for the encoder, which would be:

from tensorflow.keras.models import Model

encoder_in = model.input
encoder_out = model.get_layer(name='user_out').output  # bottleneck layer
encoder = Model(encoder_in, encoder_out, name='encoder')
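
This standalone encoder then produces embeddings directly, e.g. (with a hypothetical random batch, just to illustrate the shapes):

import numpy as np

batch = np.random.rand(10, 5999).astype('float32')  # hypothetical batch of 10 user vectors
embeddings = encoder.predict(batch)
print(embeddings.shape)  # (10, 32)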

For the decoder I would like to do something like:

decoder_in = model.get_layer("user_d00").input
decoder_out = model.output
decoder = Model(decoder_in, decoder_out, name='decoder')

but that throws:

WARNING:tensorflow:Functional inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "decoder" was not an Input tensor, it was generated by layer emb_dropout.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: emb_dropout/cond_3/Identity:0 

I believe I have to create an Input layer with the shape of emb_dropout's output and feed it into user_d00 (the Dropout layer is no longer needed once training has ended). Does anyone know how to do this correctly?
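
What I have in mind is roughly the following (a minimal sketch, assuming the layer names from the summary above; emb_dropout is skipped because it is a no-op at inference time):

from tensorflow.keras import Input

# New standalone input matching the embedding size (the output shape of
# user_out / emb_dropout, i.e. 32 units).
decoder_in = Input(shape=(32,), name='decoder_input')

# Re-chain the trained decoder layers onto the new input. Calling the
# existing layers reuses their weights; nothing is copied or retrained.
x = decoder_in
for name in ['user_d00', 'user_d01', 'user_d02', 'user_d03', 'user_res']:
    x = model.get_layer(name)(x)

decoder = Model(decoder_in, x, name='decoder')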

  • Please see: stackoverflow.com/questions/52800025/… The same would apply to tensorflow.keras. As a side comment: this is just a pain in tf/keras; I strongly recommend rewriting your solution if possible. Commented Feb 6, 2021 at 9:33
  • Thanks, do you by any chance know if I can take user_d00 directly? The dropout layer should not contain any weights... or does it? (See the quick check after these comments.) Commented Feb 6, 2021 at 10:34
  • Note: since the network is very fast to train, I ended up reimplementing it correctly and doing that instead. Commented Feb 6, 2021 at 12:18
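
A quick way to verify the assumption in the second comment (a sketch, assuming `model` is the loaded autoencoder from the summary): Dropout layers hold no weights at all, so nothing is lost by skipping emb_dropout.

model.get_layer('emb_dropout').weights
# -> []  (empty list: Dropout has no trainable or non-trainable variables)
model.get_layer('user_d00').weights
# -> [kernel (32, 64), bias (64,)]  (the Dense layer keeps its trained weights)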
