After having trained an AutoEncoder with PyTorch, how can I extract the low-dimensional embeddings of input features at some hidden-level?

1 Answer

You can define your model so that it optionally returns the intermediate tensor computed during the forward pass. Simple example:

class Autoencoder(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 3))  # bottleneck: reduce to 3 dimensions

        self.decoder = nn.Sequential(
            nn.Linear(3, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, input_size),
            nn.ReLU())  # reconstruct the original input size

    def forward(self, x, return_encoding=False):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)

        if return_encoding:
            return decoded, encoded
        return decoded
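Once trained, you can pull out the embeddings by calling the model with `return_encoding=True` (or by calling `model.encoder` directly). A minimal runnable sketch, repeating the class so the snippet is self-contained; the layer sizes and batch size are made up for illustration:

```python
import torch
import torch.nn as nn

# Same Autoencoder as above, repeated so this snippet runs standalone.
class Autoencoder(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 3))  # 3-dimensional bottleneck
        self.decoder = nn.Sequential(
            nn.Linear(3, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, input_size),
            nn.ReLU())

    def forward(self, x, return_encoding=False):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        if return_encoding:
            return decoded, encoded
        return decoded

# Hypothetical sizes, purely for illustration.
model = Autoencoder(input_size=20, hidden_size=10)
model.eval()  # inference mode: we only want the embeddings

x = torch.randn(5, 20)  # a batch of 5 input vectors
with torch.no_grad():
    decoded, encoded = model(x, return_encoding=True)

print(encoded.shape)  # the low-dimensional embeddings: torch.Size([5, 3])
print(decoded.shape)  # the reconstructions: torch.Size([5, 20])
```

Calling `model.encoder(x)` on its own gives the same embeddings without running the decoder, which is cheaper if you never need the reconstruction.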

5 Comments

That's great, sorry I'm pretty new to PyTorch, hence trying to wrap my head around it. Many thanks :) Did you mean to leave out another nn.ReLU() in the encoder after the final linear layer?
Sounds good. If this works go ahead and accept the answer to help keep SO organized, thanks!
accepted! Can you just clarify whether you forgot an extra nn.ReLU() in the encoder, or left it out deliberately?
I just copied this network architecture from your other question. It's likely that you'd want an activation function after the final fully connected layer in the encoder
got it, so I guess I had forgotten to put a final activation function in the encoder part..Many thanks!!
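For reference, the encoder variant discussed in these comments, with an activation after the final linear layer, could look like this (layer sizes are hypothetical, and whether you want the extra activation depends on your data and loss):

```python
import torch
import torch.nn as nn

# Hypothetical encoder with an nn.ReLU() after the final linear layer,
# as suggested in the comment thread above.
encoder = nn.Sequential(
    nn.Linear(20, 10),
    nn.ReLU(),
    nn.Linear(10, 3),
    nn.ReLU())  # final activation applied to the 3-d embedding

x = torch.randn(4, 20)
embedding = encoder(x)
print(embedding.shape)  # torch.Size([4, 3])
```

Note that a trailing ReLU constrains the embedding to non-negative values, which may or may not be desirable for downstream use.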
