I’m currently running a deep learning program using PyTorch and want to free the GPU memory used by a specific tensor.
I’ve considered `del` and `torch.cuda.empty_cache()`, but `del` doesn’t seem to work properly (I’m not even sure it frees memory at all), and `torch.cuda.empty_cache()` releases all unused cached memory, while I only want to free the memory held by one specific tensor.
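For context, here’s roughly how I’ve been checking whether the memory is released, using `torch.cuda.memory_allocated()` to measure (this assumes a CUDA device is available; the tensor name is just for illustration):

```python
import torch

if torch.cuda.is_available():
    t = torch.empty(1024, 1024, device="cuda")  # ~4 MB float32 allocation
    before = torch.cuda.memory_allocated()
    del t  # drops the (only) Python reference to the tensor
    after = torch.cuda.memory_allocated()
    # I expected `after` to be smaller than `before` once the last
    # reference is gone, even without calling empty_cache()
    print(before, after)
```

My understanding is that `del` only removes one reference, so if the tensor is still referenced elsewhere (e.g. in a list, the autograd graph, or another variable), the memory wouldn’t actually be released, but I’d like to confirm that.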
Does PyTorch provide any functionality for this?
Thanks in advance.