
Currently I can't use a GPU, but I have to load several pretrained BERTopic GPU models on the CPU. I tried adding map_location=torch.device("cpu") as suggested, but without results. I keep getting the same error: "Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU." How can I resolve it?

1 Answer

import torch

# Load the GPU-trained model onto the CPU, then re-save it as a CPU checkpoint
model = torch.load('path/to/the/gpu/model.pt', map_location=torch.device('cpu'))

torch.save(model, "GPUToCPU.pt")

In my environment:
:~$ pip show torch
Name: torch
Version: 1.9.0

Then use as usual:
    import torch.nn.functional as F

    model = torch.load('path/to/the/new/GPUToCPU.pt').cpu().float()
    results = model(im)               # im is your input tensor
    prob = F.softmax(results, dim=1)  # class probabilities
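
If you can't re-save the model this way, a commonly suggested workaround for BERTopic specifically is to make CPU mapping the default for torch.load before calling BERTopic.load, since BERTopic's pickle-based loading goes through torch.load internally. This is only a sketch: the path "my_bertopic_model" is a placeholder, and it assumes the model was saved with BERTopic.save using the default pickle serialization.

    import functools
    import torch
    from bertopic import BERTopic

    # Monkey-patch torch.load so every call maps CUDA tensors to the CPU.
    # This is a workaround, not an official torch/BERTopic API.
    torch.load = functools.partial(torch.load, map_location=torch.device('cpu'))

    # "my_bertopic_model" is a placeholder path for a model saved with BERTopic.save()
    topic_model = BERTopic.load("my_bertopic_model")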