
I have a use case in which I am simulating an IP camera using Python and OpenCV.

I am playing a video with OpenCV and sending the frame bytes to an application that streams them on port 8080.

The problem is that as soon as the video finishes, I have nothing left to send to the application streaming this fake simulated camera on port 8080, so it treats the silence as a timeout and stops working.

My question is: how can I send some fake bytes, let's say black-screen noise, just to keep alive the application that is listening to my fake simulated camera on port 8080?

Edit 1: Adding code

app.py

from camera import VideoCamera
from flask import Flask, render_template, Response
import time

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        try:
            frame = camera.get_frame()
        except Exception:
            print("Video is finished or empty")
            #return None
            frame = camera.get_heartbeat()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(debug=True)

camera.py

import cv2

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0. If you have trouble capturing
        # from a webcam, comment the line below out and use a video file
        # instead.
        #self.video = cv2.VideoCapture(0)
        # If you decide to use video.mp4, you must have this file in the folder
        # as the main.py.
        # self.video = cv2.VideoCapture('suits_hd.mp4') 
        self.video = cv2.VideoCapture('nature.mp4')

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        # When the video has finished, read() returns success=False and image=None,
        # so imencode() below raises; gen() in app.py catches that exception and
        # falls back to get_heartbeat().
        # We are using Motion JPEG, but OpenCV defaults to capturing raw frames,
        # so we must encode them as JPEG to display the video stream correctly.
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()

    def get_heartbeat(self):
        # jpeg = cv2.imread('noise-black.jpg')
        image = cv2.imread('noise-green.jpg')
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
  • If you show your code that sends the "normal" bytes, it should become clear how to send fake bytes. Commented Jul 21, 2018 at 10:19
  • Added code, please have a look Commented Jul 21, 2018 at 10:52
  • So, when you init the VideoCamera, get the width and height of the video frames in the file and remember them. Then, if self.video.read() fails, just use numpy to create a random array the same size as a video frame and imencode() and send that. Commented Jul 21, 2018 at 11:14
  • `just use numpy to create a random array the same size as a video frame and imencode() and send that`, how do I do this? Commented Jul 21, 2018 at 11:16
  • image = np.random.randint(0,256,(320,240,3), dtype=np.uint8) for a 320x240 frame in colour (see the sketch below). Commented Jul 21, 2018 at 11:18
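
A minimal sketch of that suggestion, assuming a 320x240 frame (substitute your video's real dimensions); it only uses np.random.randint and cv2.imencode:

import cv2
import numpy as np

# Assumed frame size; read the real width/height from your video instead.
width, height = 320, 240

# Random colour noise, one byte per channel (OpenCV uses BGR order).
noise = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)

# Encode to JPEG exactly like get_frame() does, then stream the bytes.
ok, jpeg = cv2.imencode('.jpg', noise)
frame_bytes = jpeg.tobytes()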

2 Answers


So, when you init the VideoCamera, get the width and height of the video frames in the file and remember them. Then, if self.video.read() fails, just use numpy to create a random array the same size as a video frame and imencode() and send that.

Make a random green-ish frame with:

import numpy as np

# Make the Green channel out of intensities in range 200-255
G=np.random.randint(200,256,(320,240,1), dtype=np.uint8)
# Make Red and Blue channel out of intensities in range 0-49
X=np.random.randint(0,50,(320,240,1), dtype=np.uint8)
# Merge into a 3-channel image, use BGR order with OpenCV - although it won't matter because G is in the middle in both!
image=np.concatenate((X,G,X),axis=2)
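
One way to wire this into the question's VideoCamera, as a sketch: remember the frame size at init and fall back to the noise frame inside get_frame(), so the try/except fallback to get_heartbeat() in app.py is no longer needed.

import cv2
import numpy as np

class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture('nature.mp4')
        # Remember the frame size so fallback frames match the real ones.
        self.width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH))
        self.height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT))

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        if not success:
            # Video finished: synthesize a green-ish noise frame of the same size.
            G = np.random.randint(200, 256, (self.height, self.width, 1), dtype=np.uint8)
            X = np.random.randint(0, 50, (self.height, self.width, 1), dtype=np.uint8)
            image = np.concatenate((X, G, X), axis=2)
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()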



If you are able to recognize the end of the video file, you could just send a black image, which you could read from a file with cv2.imread('black.jpg') and then send through the socket.
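
If you would rather not keep a black.jpg on disk, a minimal sketch that builds the black frame in memory (the 320x240 size is an assumption; match your video's frames):

import cv2
import numpy as np

# All-zero pixels give a black frame; shape is (height, width, channels).
black = np.zeros((240, 320, 3), dtype=np.uint8)
ok, jpeg = cv2.imencode('.jpg', black)
frame_bytes = jpeg.tobytes()  # send these bytes once the video has finished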

2 Comments

For the color green in the np array you'd need to generate the values with a corresponding RGB code.
I did this, but the application that reads a real IP camera (in my case, reading my simulated fake IP camera) exits when fed a black image for more than 4 seconds. A green image works, though.
