
I need to use the TensorFlow Object Detection API to do some classification connected with recognition.

My problem is that detection with a pretrained COCO model through the API takes too much time and clearly does not use the GPU. I checked my tensorflow-gpu installation with several different scripts and it works fine, but when I use this model for detection I only see an increase in CPU usage.
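For reference, this is roughly the kind of sanity check I ran (a minimal sketch using the TF 1.x API, not my exact scripts):

import tensorflow as tf

# Quick check: does TensorFlow report an available GPU at all?
print(tf.test.is_gpu_available())

# Force a small op onto the GPU; this fails loudly if no GPU device exists
with tf.device('/GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

with tf.Session() as sess:
    print(sess.run(b))  # runs on the GPU, or raises if placement is impossible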

I tried different versions of TensorFlow (1.12, 1.14) and different combinations of CUDA Toolkit (9.0, 10.0) and cuDNN (7.4.2, 7.5.1, 7.6.1), but it is all the same. I also tried it on both Windows 7 and Ubuntu 16.04, with no difference. My project, however, requires much faster detection times.

System information:

- OS: Windows 7, Ubuntu 16.04
- TensorFlow: 1.12, 1.14
- GPU: GTX 970

1 Answer


Run the following Python code. If it detects the GPU, then you can use the GPU for training; otherwise there is some problem.

from tensorflow.python.client import device_lib

# Lists every device TensorFlow can see; a working setup shows a /device:GPU:0 entry
print(device_lib.list_local_devices())

One more thing: just because your CPU is being utilized does not mean the GPU is not at work. The CPU will always be busy; the GPU should also spike when you are training.
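If you want to see exactly where each op lands, you can also turn on device placement logging; a minimal sketch with the TF 1.x session config:

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)

# log_device_placement prints each op's assigned device (CPU or GPU) to the console
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(b))

If MatMul shows up on /device:GPU:0 in the log, TensorFlow is placing work on the GPU.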

Paste the output of the above code in a comment if you are not sure about it.

Edit: After chatting with the OP in the comments, I looked at the code in question: it uses a pretrained model, so no training is happening here. You are using a model, not training a new one, so no GPU is being used.
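One more note on the detection time itself: the very first sess.run on a freshly loaded graph pays a one-time setup cost, so time only the runs after a warm-up. A rough sketch, assuming a frozen graph exported by the Object Detection API (the path here is hypothetical; the tensor names follow the tutorial's conventions):

import time
import numpy as np
import tensorflow as tf

PATH_TO_FROZEN_GRAPH = 'ssd_mobilenet_v1_coco/frozen_inference_graph.pb'  # hypothetical path

# Load the frozen inference graph
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # dummy input frame

with tf.Session(graph=detection_graph) as sess:
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

    sess.run(boxes, feed_dict={image_tensor: image})  # warm-up run, excluded from timing

    start = time.time()
    for _ in range(10):
        sess.run(boxes, feed_dict={image_tensor: image})
    print('average per detection: %.3f s' % ((time.time() - start) / 10))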


Comments

Thanks for your answer. [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 12868789687935921631 , name: "/device:GPU:0" device_type: "GPU" memory_limit: 3546435584 locality { bus_id: 1 links { } } incarnation: 5695922347637991756 physical_device_desc: "device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0, compute capability: 5.2" ]
I also monitored the GPU with the GPU-Z monitoring software and did not notice even a small spike in load or VRAM utilisation.
Yes, the output is correct; it shows that you can use the GPU. I'm not sure what is wrong. If possible, can you paste your code? Also try Google Colab and check whether you face the same issue there.
Right now I only use the example code delivered with the library, the Jupyter Notebook object_detection_tutorial.ipynb. I just wanted to use it to check how long the detections take, since that is crucial for my project. I see some people on the internet reporting something like 10 detections per second, while I get 1 detection in 5 seconds.
@Makintosz Ah, you are not training, you are using a pretrained model. Look at my answer; I have edited it. You are not training a model here, so no GPU will be required. Please accept the answer if I have solved your problem.
