
I've used PyTorch for a few months, and I recently wanted to create a customized pooling layer similar to the "Max-Pooling Dropout" layer. I think PyTorch provides enough tools to build such a layer. Here is my approach:

  1. use MaxPool2d with return_indices=True
  2. set the entries of the input tensor at those indices to zero
  3. ideally, this should behave like torch.take (without flattening), if possible.

Here is how to get the "index tensor" (I think that is the right name; correct me if I'm wrong):

import torch
import torch.nn as nn

input1 = torch.randn(1, 1, 6, 6)
m = nn.MaxPool2d(2, 2, return_indices=True)
val, indx = m(input1)

indx is the "index tensor", which can be used directly:

torch.take(input1, indx)

No flattening needed, and no dimension argument needed. That makes sense, since indx was generated from input1.
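A quick sketch confirming this behavior (note that here batch and channel are both 1, so the indices MaxPool2d returns per spatial plane happen to coincide with flat indices into the whole tensor):

```python
import torch
import torch.nn as nn

input1 = torch.randn(1, 1, 6, 6)
m = nn.MaxPool2d(2, 2, return_indices=True)
val, indx = m(input1)

# torch.take treats input1 as flattened, so the values gathered at
# indx are exactly the pooled maxima.
assert torch.equal(torch.take(input1, indx), val)
```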

Question: how do I set the values of input1 pointed to by indx to 0, in the torch.take style? I saw answers like Indexing a multi-dimensional tensor with a tensor in PyTorch, but I don't think the index tensor PyTorch returns here can be applied that way directly. (Maybe I'm wrong.)

Is there something like

torch.set_value(input1, indx, 0) ?
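There is in fact an in-place counterpart of torch.take: Tensor.put_, which also treats the tensor as flattened, so the index tensor works with it directly. A minimal sketch (with the same caveat as above: the per-plane indices only line up with flat indices because batch and channel are 1):

```python
import torch
import torch.nn as nn

input1 = torch.randn(1, 1, 6, 6)
m = nn.MaxPool2d(2, 2, return_indices=True)
val, indx = m(input1)

# put_ is the in-place counterpart of torch.take: it treats input1
# as flattened and writes the source values at the given indices.
input1.put_(indx, torch.zeros(indx.numel()))
```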
  • Generating a mask tensor and multiplying by it can do the trick, but you cannot avoid flattening, since the values of the index tensor are based on the flattened input: mask = torch.ones_like(input1); mask.view(-1)[indx.view(-1)] = 0; input1 * mask – Commented Jan 25, 2023 at 8:09
  • Thanks Hayoung, this is a working solution. So far I don't see how powerful the "index tensor" is; maybe it is only used in special scenarios, and torch.take is just one of them... – Commented Jan 26, 2023 at 8:12
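The mask-based approach from the comment above, written out as a runnable sketch:

```python
import torch
import torch.nn as nn

input1 = torch.randn(1, 1, 6, 6)
m = nn.MaxPool2d(2, 2, return_indices=True)
val, indx = m(input1)

# Build a mask of ones, zero it at the pooled indices via a
# flattened view (the indices refer to the flattened input), and
# multiply element-wise.
mask = torch.ones_like(input1)
mask.view(-1)[indx.view(-1)] = 0
out = input1 * mask

# Every pooled maximum is now zeroed in `out`.
```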
