
I'm working on object detection from a live video stream using OpenCV in Python. The program runs on a single thread, and because of that the resulting video shown on screen doesn't even look like a video: the detection step introduces a delay. So I'm trying to re-implement it using multiple threads: one thread for reading frames, another for showing the detection result, and about 5 threads to run the detection algorithm on multiple frames at once. I have written the following code, but the result is no different from the single-threaded program. I'm new to Python, so any help is appreciated.

import threading, time
import cv2
import queue


def detect_object():
    while True:
        print("get")
        frame = input_buffer.get()
        if frame is not None:
            time.sleep(1)
            detection_buffer.put(frame)
        else:
            break
    return


def show():
    while True:
        print("show")
        frame = detection_buffer.get()
        if frame is not None:
            cv2.imshow("Video", frame)
        else:
            break
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    return


if __name__ == "__main__":

    input_buffer = queue.Queue()
    detection_buffer = queue.Queue()

    cap = cv2.VideoCapture(0)

    for i in range(5):
        t = threading.Thread(target=detect_object)
        t.start()

    t1 = threading.Thread(target=show)
    t1.start()

    while True:
        ret, frame = cap.read()
        if ret:
            input_buffer.put(frame)
            time.sleep(0.025)
        else:
            break

    print("program ended")

  • Well, the first (in fact the only) thing your detect_object threads do after they get a frame from the queue is sleep for 1 second... hence 5 of them won't do anything better than 5 frames per second. Commented May 17, 2021 at 11:29
  • @DanMašek Thanks for your reply. Yes, I get that. I put that in to simplify the code. The detection algorithm I have takes about 0.7 seconds, so I thought sleeping for 1 second would simulate the delay there. Commented May 17, 2021 at 12:08

2 Answers


Working on the assumption that the detection algorithm is CPU-intensive, you need to use multiprocessing instead of multithreading: multiple threads will not run Python bytecode in parallel, due to contention for the Global Interpreter Lock. You should also get rid of all the calls to sleep. There is another problem: with multiple threads or processes running the way you have them, nothing guarantees that the frames will be output in the correct order. That is, the processing of the second frame could complete before the processing of the first frame, so the second frame would be written to detection_buffer first.

The following uses a processing pool of 6 processes (there is no longer any need for an explicit input queue).

from multiprocessing import Pool, Queue
import time
import cv2

# initialize global variables for the pool processes:
def init_pool(d_b):
    global detection_buffer
    detection_buffer = d_b


def detect_object(frame):
    time.sleep(1)
    detection_buffer.put(frame)


def show():
    while True:
        print("show")
        frame = detection_buffer.get()
        if frame is not None:
            cv2.imshow("Video", frame)
        else:
            break
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    return


# required for Windows:
if __name__ == "__main__":

    detection_buffer = Queue()
    # 6 workers: 1 for the show task and 5 to process frames:
    pool = Pool(6, initializer=init_pool, initargs=(detection_buffer,))
    # run the "show" task:
    show_future = pool.apply_async(show)

    cap = cv2.VideoCapture(0)

    futures = []
    while True:
        ret, frame = cap.read()
        if ret:
            f = pool.apply_async(detect_object, args=(frame,))
            futures.append(f)
            time.sleep(0.025)
        else:
            break
    # wait for all the frame-putting tasks to complete:
    for f in futures:
        f.get()
    # signal the "show" task to end by placing None in the queue
    detection_buffer.put(None)
    show_future.get()
    print("program ended")
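
The ordering issue mentioned above is left unsolved by this code: nothing stops a later frame from reaching detection_buffer first. One way to handle it (a sketch of my own, not part of the answer's code; `drain_in_order` and the `(index, frame)` tagging are hypothetical names) is to number each frame when it is submitted and have the display side reorder results with a min-heap:

```python
import heapq
import queue

def drain_in_order(detection_buffer):
    """Yield (index, frame) pairs in frame order, even if workers
    finished them out of order. Stops at a None sentinel."""
    next_index = 0
    pending = []  # min-heap of (index, frame), keyed by frame index
    while True:
        item = detection_buffer.get()
        if item is None:
            break
        heapq.heappush(pending, item)
        # Release every frame whose turn has come.
        while pending and pending[0][0] == next_index:
            yield heapq.heappop(pending)
            next_index += 1

# Simulate workers finishing out of order:
buf = queue.Queue()
for item in [(2, "f2"), (0, "f0"), (1, "f1"), (3, "f3"), None]:
    buf.put(item)

ordered = [idx for idx, _ in drain_in_order(buf)]
print(ordered)
```

With this approach the workers would call `detection_buffer.put((index, frame))` instead of `detection_buffer.put(frame)`, and the show loop would iterate over `drain_in_order(detection_buffer)`.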



For me, what I've done is build two threads for two functions and use one queue:

  1. one to get the frame and process it
  2. one to display it

The cap variable was inside my process function.

import threading
import queue

import cv2

q = queue.Queue()

def process():
    cap = cv2.VideoCapture(filename)
    ret, frame = cap.read()
    while ret:
        ret, frame = cap.read()
        # detection part: in my case I use TensorFlow here, then...
        # ...end of detection part
        q.put(result_of_detection)

def Display():
    while True:
        if not q.empty():
            frame = q.get()
            cv2.imshow("frame1", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

if __name__ == '__main__':
    # start threads
    p1 = threading.Thread(target=process)
    p2 = threading.Thread(target=Display)
    p1.start()
    p2.start()

It works just fine for me.

Hope I helped :D

Also, I think this page might help: https://pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
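
The pattern behind that link can be sketched roughly like this (the class and method names here are my own invention, not the article's): a background thread keeps reading from the capture and holds only the newest frame, so the consumer never works through a backlog of stale frames:

```python
import queue
import threading

class ThreadedReader:
    """Background-thread frame reader that keeps only the newest frame,
    so a slow consumer never falls behind the camera.
    (A sketch of the pattern from the linked article, with made-up names.)"""

    def __init__(self, cap):
        self.cap = cap  # anything with a cv2-style read() -> (ret, frame)
        self.latest = queue.Queue(maxsize=1)
        self.stopped = False
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while not self.stopped:
            ret, frame = self.cap.read()
            if not ret:
                self.stopped = True
                break
            # Throw away the stale frame, if any, then store the new one.
            try:
                self.latest.get_nowait()
            except queue.Empty:
                pass
            self.latest.put(frame)

    def read(self, timeout=1.0):
        """Block until a frame is available, then return it."""
        return self.latest.get(timeout=timeout)
```

With a real camera this would be `ThreadedReader(cv2.VideoCapture(0))`, and the display loop would call `reader.read()` instead of `cap.read()`.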

Comments

How would this work if I had to do processing on the frame? My thought is that I would use 3 threads: one for getting the frame, one for processing it, and one for showing the image. Wouldn't that create a concurrency issue? The "showing" thread would not be on the same frame as the "image processing" thread.
