Currently I can't use a GPU, but I have to load several pretrained BERTopic GPU models on CPU. I tried to add map_location=torch.device("cpu") as suggested, but without results. I keep getting the same error: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. How can I resolve it?
1 Answer
Load the GPU model on CPU with map_location, then re-save it so later loads work without remapping:
model = torch.load('path/to/the/gpu/model.pt', map_location=torch.device('cpu'))
torch.save(model, "GPUToCPU.pt")
Tested with:
:~$ pip show torch
Name: torch
Version: 1.9.0
Then use it as usual:
import torch.nn.functional as F

model = torch.load('path/to/the/new/GPUToCPU.pt')['model'].cpu().float()  # ['model'] assumes the checkpoint is a dict with a 'model' key
results = model(im)
prob = F.softmax(results, dim=1)  # class probabilities
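As a minimal self-contained sketch of the map_location trick, here is the same flow with a toy nn.Linear standing in for the real GPU-trained model (the model, file name, and input shape are placeholders, not from the question):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a model trained on GPU.
model = nn.Linear(4, 3)
torch.save(model.state_dict(), "GPUToCPU.pt")

# map_location=torch.device('cpu') remaps any CUDA storages to CPU at
# load time, so this works on a machine without a GPU.
state = torch.load("GPUToCPU.pt", map_location=torch.device("cpu"))

loaded = nn.Linear(4, 3)
loaded.load_state_dict(state)
loaded.eval()

im = torch.randn(1, 4)
with torch.no_grad():
    prob = F.softmax(loaded(im), dim=1)  # rows sum to 1
```

Saving the state_dict (rather than the whole pickled module, as in the answer above) is the more portable option; either way, map_location is what moves CUDA tensors to CPU during deserialization.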