Like many others, I'm getting a RuntimeError: CUDA out of memory, but for some reason PyTorch has reserved a large amount of the memory itself.
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 6.00 GiB total capacity; 4.31 GiB already allocated; 844.80 KiB free; 4.71 GiB reserved in total by PyTorch)
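If I'm reading the numbers in the error right, the confusing part is the gap between "reserved" and "allocated". As I understand it (this is my interpretation, not anything from the error itself), PyTorch's caching allocator holds on to freed blocks as "reserved" so it can reuse them without going back to the driver, so that gap is cached rather than truly free. Doing the arithmetic with the figures from the message:

```python
# Memory figures copied from the RuntimeError above, in GiB.
total_capacity = 6.00
already_allocated = 4.31
reserved_by_pytorch = 4.71

# Reserved-but-unallocated memory is cached by PyTorch's allocator;
# it isn't available to other processes, and a fragmented cache can
# still fail a small allocation like the 2.00 MiB one above.
cached_but_unallocated = reserved_by_pytorch - already_allocated
outside_pytorch = total_capacity - reserved_by_pytorch

print(f"cached but unallocated: {cached_but_unallocated:.2f} GiB")
print(f"outside PyTorch's pool: {outside_pytorch:.2f} GiB")
```

So roughly 0.40 GiB sits in the cache and about 1.29 GiB was never claimed by PyTorch at all, yet only ~845 KiB is reported free.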
I've tried torch.cuda.empty_cache(), but that isn't working, and none of the other CUDA out-of-memory posts have helped me either.
When I check my GPU usage with nvidia-smi before running my Python program, there is plenty of memory free.