
I am a beginner in TensorFlow. Using transfer learning, I created a custom object-detection model from the "ssd_resnet101_v1_fpn_keras" pre-trained model.

I followed the documentation below for custom training:

https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html

I noticed an issue: when I use the model for detection, it consumes a lot of RAM and does not release it.

Here is the code snippet that allocates the memory and never releases it:

detect_fn = tf.saved_model.load(visa_icon_model)

visa_icon_detections = detect_fn(input_tensor)

Memory profiler info:

301   1675.0 MiB    191.8 MiB           1           visa_icon_detections = detect_fn(input_tensor)

As you can see, the call takes 191.8 MiB of RAM and does not release it after the process completes.
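As an aside, a similar measurement can be taken without `memory_profiler` using Python's built-in `tracemalloc`. This is only a rough sketch: the dummy allocation below stands in for the `detect_fn(input_tensor)` call, and note that `tracemalloc` only sees Python-level allocations, not TensorFlow's internal C++ buffers.

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.get_traced_memory()[0]  # current traced bytes

# placeholder for: visa_icon_detections = detect_fn(input_tensor)
buf = [0] * 1_000_000  # dummy allocation standing in for the model call

after = tracemalloc.get_traced_memory()[0]
print(f"allocated by the call: {(after - before) / 1e6:.1f} MB")
tracemalloc.stop()
```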

I tried gc.collect() and tf.keras.backend.clear_session() to release the memory, but neither works for me.

Can anyone help me solve this problem?

1 Answer

For me, the solution was the following:

# load the model once, grab the serving signature, and wrap it in tf.function
model = tf.saved_model.load(visa_icon_model)
detect_fn = tf.function(model.signatures['serving_default'])

# when predicting
visa_icon_detections = detect_fn(input_tensor)

I did a stress test with about 100 requests (the model runs inside a Docker container): memory peaked at about 3 GB after allocation, then stayed stable at around 1.9-2.5 GB.
