
I have a tensor T of shape (d1 x d2 x ... x dk) and an index tensor I of shape (p x q), where q < k. Each column of I corresponds to one dimension of T and holds coordinates along that dimension. I have another tensor V of shape (p x di x ... x dj), where (di, ..., dj) are the sizes of the k - q dimensions of T not covered by I. I need to perform the assignment T[I] = V.

A specific example of this problem using NumPy arrays was posted here [1].

The solution [2] uses fancy indexing [3], which relies on numpy.index_exp. No such option exists in PyTorch. Is there an alternative way to mimic this in PyTorch without using loops or casting tensors to NumPy arrays?

Below is a demo:

import torch
t = torch.randn((32, 16, 60, 64)) # tensor

i0 = torch.randint(0, 32, (10, 1)).to(dtype=torch.long) # indexes for dim=0
i2 = torch.randint(0, 60, (10, 1)).to(dtype=torch.long) # indexes for dim=2

i = torch.cat((i0, i2), 1) # indexes
v = torch.randn((10, 16, 64)) # to be assigned

# t[i0, :, i2, :] = v ?? Obviously this does not work
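For context, the NumPy approach from [2] can be sketched as follows (same shapes as the demo above; the key observation is that np.index_exp[:] is just a tuple containing slice(None), so the full index can be assembled by tuple concatenation):

```python
import numpy as np

t = np.random.randn(32, 16, 60, 64)
i0 = np.random.randint(0, 32, 10)   # indices for dim 0
i2 = np.random.randint(0, 60, 10)   # indices for dim 2
v = np.random.randn(10, 16, 64)

# np.index_exp[:] == (slice(None, None, None),), so index tuples
# can be concatenated to interleave coordinates and full slices
idx = (i0,) + np.index_exp[:] + (i2,) + np.index_exp[:]
t[idx] = v
```

Because the two advanced indices are separated by a slice, the broadcast index dimension is moved to the front, so t[idx] has shape (10, 16, 64), matching v.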

[1] Slice numpy array using list of coordinates

[2] https://stackoverflow.com/a/42538465/6422069

[3] https://numpy.org/doc/stable/reference/generated/numpy.s_.html

  • Can you provide a minimal reproducible example? Commented Mar 15, 2022 at 11:17
  • @Ivan added an example Commented Mar 15, 2022 at 13:08
  • You can use slice(None) instead of np.index_exp[:]. Commented Mar 15, 2022 at 13:11
  • @aretor I am dealing with PyTorch tensors and want to avoid casting to NumPy. If the tensor is on the GPU, casting means transferring it to RAM and then transferring it back to the GPU. Commented Mar 15, 2022 at 13:14
  • Can't you use something like t[(i0, slice(None), i2, slice(None))]? If I got it correctly, it should make the job. Everything still resides in the GPU. Let me know Commented Mar 15, 2022 at 13:15

1 Answer


After some discussion in the comments, we arrived at the following solution:

import torch
t = torch.randn((32, 16, 60, 64)) # tensor

# indices
i0 = torch.randint(0, 32, (10,)) # indices for dim=0 (randint already returns torch.long)
i2 = torch.randint(0, 60, (10,)) # indices for dim=2

v = torch.randn((10, 16, 64)) # to be assigned

t[(i0, slice(None), i2, slice(None))] = v
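The same idea generalizes to the arbitrary (p x q) index tensor I from the question: place a column of I at each indexed dimension and slice(None) everywhere else. A minimal sketch (the helper name assign_at and the dims argument are my own illustration, not from the thread):

```python
import torch

def assign_at(t, I, dims, v):
    """Perform t[I] = v, where I is (p, q) and `dims` lists the q
    dimensions of t that the columns of I index (hypothetical helper)."""
    index = tuple(I[:, dims.index(d)] if d in dims else slice(None)
                  for d in range(t.dim()))
    t[index] = v

t = torch.randn(32, 16, 60, 64)
I = torch.stack((torch.randint(0, 32, (10,)),
                 torch.randint(0, 60, (10,))), dim=1)  # (p=10, q=2)
v = torch.randn(10, 16, 64)
assign_at(t, I, [0, 2], v)
```

Everything stays on whatever device t lives on, since only native PyTorch indexing is used.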