
I have a Linear() layer in PyTorch after a few Conv() layers. All the images in my dataset are black and white. However, most of the images in my test set have different dimensions than the images in my training set. Apart from resizing the images themselves, is there any way to define the Linear() layer so that it accepts a variable input dimension? For example, something similar to view(-1)?

  • You have to pre-process your image array before passing it into the torch.nn.Linear() layer. I don't think there is a way to pass a variable input size the way you want. Commented Dec 25, 2019 at 12:09

1 Answer


Well, it doesn't make sense to have a Linear() layer with a variable input size, because it is in fact a learnable matrix of shape [n_in, n_out], and matrix multiplication is not defined for an input whose feature dimension != n_in.
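For instance (a toy illustration; the sizes here are arbitrary, not from the question):

```python
import torch

fc = torch.nn.Linear(100, 10)  # a fixed learnable matrix: n_in = 100, n_out = 10

print(fc(torch.randn(1, 100)).shape)  # OK: feature dimension matches n_in = 100

try:
    fc(torch.randn(1, 144))  # feature dimension 144 != 100
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied
```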

What you can do is apply pooling from the functional API. You'll need to specify kernel_size and stride such that the resulting output has feature dimension n_in, as in the sketch below.
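A minimal sketch (the conv stack and sizes are placeholder assumptions, not from the question). Instead of computing kernel_size and stride by hand, this uses F.adaptive_avg_pool2d, which picks them for you so the output feature size is fixed regardless of the input resolution:

```python
import torch
import torch.nn.functional as F

# Placeholder conv stack; 1 input channel since the images are black and white.
conv = torch.nn.Sequential(
    torch.nn.Conv2d(1, 32, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(32, 64, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
fc = torch.nn.Linear(64, 10)  # n_in = 64 matches the pooled feature size

def forward(x):
    feats = conv(x)  # [B, 64, H, W] -- H and W vary with the input image size
    pooled = F.adaptive_avg_pool2d(feats, output_size=1)  # [B, 64, 1, 1] for any H, W
    return fc(pooled.view(x.size(0), -1))  # flatten to [B, 64], a valid input for Linear

# The same model now accepts images of different dimensions:
print(forward(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 10])
print(forward(torch.randn(2, 1, 64, 48)).shape)  # torch.Size([2, 10])
```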
