I have a fairly simple task: take a video, slice it into individual frames, run each frame through a TensorFlow prediction, and combine the predicted frames back into a single video. Here's the output I got:
Slicing ... Time elapsed since last slice : 0s. Frame-1
.
Slicing ... Time elapsed since last slice : 2.0517446994781494s. Frame-2
.
Slicing ... Time elapsed since last slice : 0.8912339210510254s. Frame-3
.
Slicing ... Time elapsed since last slice : 0.8657193183898926s. Frame-4
.
Slicing ... Time elapsed since last slice : 0.8655312061309814s. Frame-5
.
Slicing ... Time elapsed since last slice : 0.8827650547027588s. Frame-6
.
Slicing ... Time elapsed since last slice : 0.8690683841705322s. Frame-7
.
Slicing ... Time elapsed since last slice : 0.8906550407409668s. Frame-8
.
Slicing ... Time elapsed since last slice : 0.8798754215240479s. Frame-9
.
Slicing ... Time elapsed since last slice : 0.9341959953308105s. Frame-10
Each frame takes about 0.8 seconds. For a 5-second video at 30 fps, that's 150 frames, so it would take about 120 seconds to process.
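For context, the loop that produced the log above looks roughly like this (a minimal sketch assuming OpenCV for frame reading; the model and video file names are placeholders for my actual files):

```python
import time
import cv2
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")  # placeholder model file
cap = cv2.VideoCapture("input.mp4")                # placeholder video file

frame_no = 0
last = time.time()
while True:
    ok, frame = cap.read()                  # slice one frame off the video
    if not ok:
        break
    frame_no += 1
    pred = model.predict(frame[None, ...])  # per-frame prediction, ~0.8 s each
    now = time.time()
    print(f"Slicing ... Time elapsed since last slice : {now - last}s. Frame-{frame_no}")
    last = now

cap.release()
```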
I'm thinking about parallelizing this, for example with multithreading. Is that even possible, and where should I start? A rough sketch of what I'm imagining is below. Thank you.
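Something like this is what I'm imagining: a rough sketch using Python's concurrent.futures, where process_frame is a hypothetical stand-in for my per-frame TensorFlow call:

```python
from concurrent.futures import ThreadPoolExecutor
import cv2

def read_frames(path):
    """Read every frame of the video into a list."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def process_frame(frame):
    # hypothetical placeholder: the TensorFlow prediction would run here
    return frame

frames = read_frames("input.mp4")  # placeholder path

# run predictions concurrently; pool.map preserves frame order
with ThreadPoolExecutor(max_workers=4) as pool:
    predicted = list(pool.map(process_frame, frames))

# combine the predicted frames back into a single video
h, w = predicted[0].shape[:2]
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
for f in predicted:
    out.write(f)
out.release()
```

I'm not sure whether plain threads would actually speed up the TensorFlow part given Python's GIL, which is partly why I'm asking where to start.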