After having trained an AutoEncoder with PyTorch, how can I extract the low-dimensional embeddings of input features at some hidden-level?
1 Answer
You can define your model so that it optionally returns the intermediate tensor computed during the forward pass. Simple example:
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 3))  # bottleneck: reduce to 3 dimensions
        self.decoder = nn.Sequential(
            nn.Linear(3, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, input_size),
            nn.ReLU())  # expand back to the input size

    def forward(self, x, return_encoding=False):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        if return_encoding:
            return decoded, encoded
        return decoded
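For example, after training you could pull out the embeddings like this (a minimal sketch; the input and hidden sizes below are made up for illustration):

```python
import torch
import torch.nn as nn

# same Autoencoder as in the answer above
class Autoencoder(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, 3))
        self.decoder = nn.Sequential(
            nn.Linear(3, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, input_size), nn.ReLU())

    def forward(self, x, return_encoding=False):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        if return_encoding:
            return decoded, encoded
        return decoded

model = Autoencoder(input_size=20, hidden_size=10)  # example sizes
x = torch.randn(8, 20)                              # a batch of 8 samples

# at inference time, ask for the encoding as well
model.eval()
with torch.no_grad():
    reconstruction, embedding = model(x, return_encoding=True)

print(embedding.shape)  # torch.Size([8, 3]) -- the 3-d embeddings
```

Since `return_encoding` defaults to `False`, existing training code that calls `model(x)` keeps working unchanged.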
5 Comments
James Arten
That's great, sorry, I'm pretty new to PyTorch, hence trying to wrap my head around it. Many thanks :) Did you miss another nn.ReLU() in the encoder after the final linear layer?
DerekG
Sounds good. If this works go ahead and accept the answer to help keep SO organized, thanks!
James Arten
accepted! Could you just clarify whether you forgot an extra nn.ReLU() in the encoder or left it out on purpose?
DerekG
I just copied this network architecture from your other question. It's likely that you'd want an activation function after the final fully connected layer in the encoder.
James Arten
got it, so I guess I had forgotten to put a final activation function in the encoder part. Many thanks!!