
I have written an OpenGL game and I want to allow remote playing of the game through a canvas element. Input is easy, but video is hard.

What I'm doing right now is launching the game via node.js, and in my rendering loop I write a base64-encoded stream of bitmap data for the current frame to stdout. The base64 frame is sent over a WebSocket to the client page and rendered (painstakingly slowly) pixel by pixel. Obviously this can't stand.
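For context, the server side is roughly this shape (a simplified sketch rather than my exact code; the game binary name, the one-frame-per-line protocol, and the use of the ws module are stand-ins):

    // Rough sketch of the relay: the game writes one base64-encoded frame per
    // line to stdout, and node.js forwards the latest frame to every client.
    var spawn = require('child_process').spawn;
    var WebSocketServer = require('ws').Server;

    var game = spawn('./game');                     // hypothetical game binary
    var wss = new WebSocketServer({ port: 8080 });

    var clients = [];
    wss.on('connection', function (socket) {
      clients.push(socket);
      socket.on('close', function () {
        clients.splice(clients.indexOf(socket), 1);
      });
    });

    var pending = '';
    game.stdout.on('data', function (chunk) {
      pending += chunk;                             // base64 is plain ASCII
      var frames = pending.split('\n');
      pending = frames.pop();                       // keep any partial frame
      var latest = frames[frames.length - 1];
      if (latest) {
        clients.forEach(function (socket) {
          socket.send(latest);                      // whole frame as base64
        });
      }
    });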

I've been kicking around the idea of generating a video stream, which I could then easily render onto a canvas through a &lt;video&gt; tag (à la http://mrdoob.github.com/three.js/examples/materials_video.html).
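If that worked, the client side would be trivial; something like this sketch, assuming the stream is already playing in a &lt;video&gt; element:

    // Copy the current video frame onto the canvas on every animation tick.
    var video = document.getElementById('stream');  // <video> fed by the stream
    var ctx = document.getElementById('screen').getContext('2d');

    (function draw() {
      ctx.drawImage(video, 0, 0);
      requestAnimationFrame(draw);
    })();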

The problem I'm having with this idea is that I don't know enough about codecs/streaming to determine, at a high level, whether it's actually possible. I'm not sure whether the codec is even the part I need to worry about in order to have the content generated dynamically, with at most a few frames rendered ahead.

Other ideas I've had:

  • Trying to create an HTMLImageElement from the base64 frame (sketched after this list)
  • Attempting to optimize compression / redraw regions so that the pixel bandwidth is much lower (it seems unrealistic to get the kind of performance I'd need for 20+ fps this way).
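For the first idea, I'm picturing something along these lines (a sketch that assumes the server sends a compressed image such as PNG/JPEG rather than raw bitmap bytes, and that socket is the existing WebSocket):

    // Let the browser's image decoder do the heavy lifting: wrap each base64
    // frame in a data URI and draw it in one call instead of pixel by pixel.
    var ctx = document.getElementById('screen').getContext('2d');

    socket.onmessage = function (event) {
      var img = new Image();
      img.onload = function () {
        ctx.drawImage(img, 0, 0);
      };
      img.src = 'data:image/png;base64,' + event.data;
    };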

Then there's always the option of going Flash... but I'd really prefer to avoid it. I'm looking for opinions on technologies to pursue. Ideas?

2 Answers


Try transforming RGB into the YCbCr color space and streaming the pixel values as:

Y1 Y2 Y3 Y4 Y5 .... Cb1 Cb2 Cb3 Cb4 Cb5 .... Cr1 Cr2 Cr3 Cr4 Cr5 ...

There will be many repeating patterns, so any compression algorithm will compress it better than an RGBRGBRGB sequence.

http://en.wikipedia.org/wiki/YCbCr
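Roughly like this (a minimal sketch using full-range BT.601-style coefficients; the function name is just illustrative, and you could add chroma subsampling on top):

    // Convert an RGBA buffer to planar Y/Cb/Cr so that a generic compressor
    // (zlib, for example) sees long runs of similar bytes within each plane.
    function toPlanarYCbCr(rgba, width, height) {
      var n = width * height;
      // Uint8ClampedArray rounds and clamps to 0..255 on assignment.
      var out = new Uint8ClampedArray(n * 3);       // [Y... | Cb... | Cr...]
      for (var i = 0; i < n; i++) {
        var r = rgba[i * 4], g = rgba[i * 4 + 1], b = rgba[i * 4 + 2];
        out[i]         =  0.299    * r + 0.587    * g + 0.114    * b;        // Y
        out[n + i]     = -0.168736 * r - 0.331264 * g + 0.5      * b + 128;  // Cb
        out[2 * n + i] =  0.5      * r - 0.418688 * g - 0.081312 * b + 128;  // Cr
      }
      return out;
    }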


3 Comments

Also, check github.com/pkrumins/node-video. It seems that this library could do all the dirty work of video encoding.
Would it really compress better than RRRRRGGGGGBBBBB?
Just tested with a 1920x1080 real-life PNG photo. Compressed RGB..RGB size is 392 KB, YCbCr..YCbCr is 309 KB. Compressed RR..GG..BB is 409 KB and YY..CbCb..CrCr is 218 KB. Really better :) You can also try splitting the image into N squares of M pixels per side and writing the YY..CbCb..CrCr sequences for each square linearly; I think that would compress better, I'll check it later. But don't forget that you can lose some color information in YCbCr - check the wiki for details. And sorry for the late answer ;)
  1. Why base64-encode the data? I think you can push raw bytes over a WebSocket.

  2. If you've got a linear array of RGBA values in the right format you can dump those straight into an ImageData object for subsequent use with a single ctx.putImageData() call.
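Roughly, on the client (a sketch assuming a browser that supports binary WebSocket messages and a server that sends exactly width * height * 4 bytes of RGBA per frame):

    var canvas = document.getElementById('screen');
    var ctx = canvas.getContext('2d');
    var frame = ctx.createImageData(canvas.width, canvas.height);

    var socket = new WebSocket('ws://localhost:8080');
    socket.binaryType = 'arraybuffer';

    socket.onmessage = function (event) {
      // Copy the raw RGBA bytes into the ImageData's backing store and blit
      // the whole frame in one call - no per-pixel JavaScript loop.
      frame.data.set(new Uint8Array(event.data));
      ctx.putImageData(frame, 0, 0);
    };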

2 Comments

1. Googling around seems to indicate otherwise, but I never tried (because I googled around before trying). 2. Now that I think about it... you're right, that combined with redraw regions/compression might be fast enough. I'll experiment with it.
Yeah, on further thought the WebSocket can probably receive a raw string, but you'd still have to unpack that into a local array to be able to use it with putImageData().
