
I am using CUDA and PyTorch 1.4.0.

When I try to increase batch_size, I've got the following error:

CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.74 GiB already allocated; 7.80 MiB free; 2.96 GiB reserved in total by PyTorch)

I haven't found any documentation on how PyTorch manages GPU memory.

Also, I don't understand why only 7.80 MiB is free.

Should I just use a video card with better performance, or can I free some memory? FYI, I have a GTX 1050 Ti, Python 3.7, torch==1.4.0, and my OS is Windows 10.
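As a starting point, you can ask PyTorch itself how much memory it has allocated versus reserved, which matches the numbers in the error message. A minimal sketch (assuming `torch` is importable; it degrades gracefully on a machine without CUDA):

```python
import torch

def report_gpu_memory() -> str:
    # Report memory as PyTorch sees it; falls back to a message without CUDA.
    if not torch.cuda.is_available():
        return "CUDA not available"
    allocated = torch.cuda.memory_allocated() / 1024 ** 2  # MiB actively used by tensors
    reserved = torch.cuda.memory_reserved() / 1024 ** 2    # MiB held by PyTorch's caching allocator
    return f"allocated: {allocated:.2f} MiB, reserved: {reserved:.2f} MiB"

print(report_gpu_memory())
```

The gap between "reserved" and "allocated" is memory PyTorch's caching allocator is holding but not currently using, which is why the "free" figure in the error can look so small.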

Please don't post error messages as images. They can't be searched for by future visitors. Commented Feb 18, 2020 at 8:36

1 Answer


I had the same problem, the following worked for me:

import torch

torch.cuda.empty_cache()
# start training from here

If you still get the error after this, you should decrease the batch_size.
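If you want to automate that, one way is to halve the batch size until a training step fits. This is a sketch, not part of the original answer; `train_step` is a hypothetical callable that runs one forward/backward pass for a given batch size:

```python
import torch

def find_max_batch_size(train_step, start_batch_size: int) -> int:
    # Halve the batch size until one training step fits in GPU memory.
    # `train_step(batch_size)` is a hypothetical function assumed to raise
    # a RuntimeError containing "out of memory" when the batch is too big.
    bs = start_batch_size
    while bs >= 1:
        try:
            train_step(bs)
            return bs
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise  # unrelated error: re-raise
            torch.cuda.empty_cache()  # release cached blocks before retrying
            bs //= 2
    raise RuntimeError("even batch_size=1 does not fit in GPU memory")
```

Note that catching the OOM and continuing in the same process is best-effort; fragmentation can mean the retry still fails even when a fresh process with the smaller batch size would succeed.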


1 Comment

To complement: you can check GPU memory usage with the nvidia-smi command in a terminal. Also, if you're storing tensors on the GPU, you can move them back to the CPU with tensor.cpu(). I solve most of my memory problems with these commands.
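The tensor.cpu() suggestion above can be sketched as follows; the tensor names here are illustrative, and the snippet runs even on a machine without CUDA:

```python
import torch

x = torch.randn(1024, 1024)   # created on the CPU by default
if torch.cuda.is_available():
    x = x.cuda()              # move to GPU memory

x_cpu = x.cpu()               # copy back to host RAM
del x                         # drop the (possibly GPU-resident) reference
torch.cuda.empty_cache()      # return cached blocks to the driver
```

Note that `.cpu()` returns a copy; the GPU memory is only reclaimable once no reference to the GPU tensor remains, which is why the `del` matters.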
