
I have a fairly simple task: I have a video, and I first have to slice it into frames, run a prediction on each frame using TensorFlow, and then combine the predicted frames back into a single video. Here's the output I got:

Slicing ... Time elapsed since last slice : 0s. Frame-1
.
Slicing ... Time elapsed since last slice : 2.0517446994781494s. Frame-2
.
Slicing ... Time elapsed since last slice : 0.8912339210510254s. Frame-3
.
Slicing ... Time elapsed since last slice : 0.8657193183898926s. Frame-4
.
Slicing ... Time elapsed since last slice : 0.8655312061309814s. Frame-5
.
Slicing ... Time elapsed since last slice : 0.8827650547027588s. Frame-6
.
Slicing ... Time elapsed since last slice : 0.8690683841705322s. Frame-7
.
Slicing ... Time elapsed since last slice : 0.8906550407409668s. Frame-8
.
Slicing ... Time elapsed since last slice : 0.8798754215240479s. Frame-9
.
Slicing ... Time elapsed since last slice : 0.9341959953308105s. Frame-10

Each frame takes about 0.8 seconds. Say I have a 5-second video at 30 fps: that's 150 frames, so it will take about 120 seconds to process.

I am thinking about parallelizing this, e.g. with multithreading. Is this even possible? Where should I start? Thank you.

  • Not easily. Python can't truly multithread because of the Global Interpreter Lock (GIL). It can multi-task/multi-thread, but that doesn't speed anything up, because the threads can't run at the same time; they just keep alternating turns. Here is a good article about the GIL: Grok the GIL: How to write fast and thread-safe Python Commented Sep 26, 2019 at 16:16
  • The article above also goes over what is needed for Parallelism and how it works. Commented Sep 26, 2019 at 16:22
  • @Error-SyntacticalRemorse That's not exactly true – multiple threads can run native code (or anything that releases the GIL). Only Python code is run in a quasi-cooperative-multitasking sort of manner. Commented Sep 26, 2019 at 16:31
  • @AKX I tried to emphasize that with the "Not easily". It is possible, but not easy, to multithread across multiple cores. It is possible to truly multithread (as NumPy does), but not easily. Any native code that uses the GIL, though, will not use multiple cores unless you make it so each thread gets a GIL, which the link emphasizes. Commented Sep 26, 2019 at 16:33
  • 1
    For OP's usecase, they can easily use multiprocessing to spread out frame processing to multiple processes instead. Commented Sep 27, 2019 at 6:43

1 Answer


You can try out the multiprocessing package in Python. For more details, refer to: https://docs.python.org/3/library/multiprocessing.html.
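
For example, here is a minimal sketch of how the per-frame prediction could be spread across worker processes with multiprocessing.Pool. It assumes a Keras model saved at "model.h5" and an input video "input.mp4" (both hypothetical paths); each worker loads its own copy of the model once, then predicts the frames handed to it.

import multiprocessing as mp

import cv2
import numpy as np
import tensorflow as tf

_model = None  # per-process model handle


def _init_worker(model_path):
    # Runs once in every worker process; loading the model here avoids
    # having to pickle it when sending work to the pool.
    global _model
    _model = tf.keras.models.load_model(model_path)


def _predict_frame(frame):
    # Placeholder per-frame prediction; adapt the pre/post-processing
    # to whatever your model actually expects.
    batch = np.expand_dims(frame, axis=0)
    return _model.predict(batch)[0]


def predict_video(video_path, model_path, workers=4):
    # Slice the video into frames first, then farm them out to the pool.
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    with mp.Pool(processes=workers, initializer=_init_worker,
                 initargs=(model_path,)) as pool:
        return pool.map(_predict_frame, frames)


if __name__ == "__main__":
    predicted_frames = predict_video("input.mp4", "model.h5")

Note that each worker loads its own copy of the model, so memory use grows with the number of processes; on a CPU-bound machine a handful of workers is usually enough.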

The other thing you can try is to maximize TensorFlow performance on CPU through configuration. The primary goal of this optimization is to make use of all the cores available on the machine and thereby speed up the process. This can be achieved by setting intra_op_parallelism_threads, inter_op_parallelism_threads, OMP_NUM_THREADS, etc.

Please refer to: https://software.intel.com/en-us/articles/maximize-tensorflow-performance-on-cpu-considerations-and-recommendations-for-inference
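
As a rough illustration, these knobs can be set like this in TensorFlow 2.x (the thread counts below are just placeholders to tune for your core count); TensorFlow 1.x uses tf.ConfigProto instead.

import os

# OpenMP threads; set before TensorFlow is imported/initialized.
os.environ["OMP_NUM_THREADS"] = "4"

import tensorflow as tf

# Threads used inside a single op (e.g. one large matmul).
tf.config.threading.set_intra_op_parallelism_threads(4)
# Number of independent ops that may run concurrently.
tf.config.threading.set_inter_op_parallelism_threads(2)

# TensorFlow 1.x equivalent:
# config = tf.ConfigProto(intra_op_parallelism_threads=4,
#                         inter_op_parallelism_threads=2)
# sess = tf.Session(config=config)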

Hope this helps.
