
In Python, I want to convert an image that has been processed using OpenCV into a particular image format for transmission (TIFF in this case, but it could be BMP, JPEG, PNG, ...).

For this purpose it will suffice to encode the OpenCV image into a memory buffer. The problem is that when I use cv2.imencode() to do this, the returned object still looks like a numpy array:

import cv2
img_cv2_aka_nparr = cv2.imread('test.jpg')                      # a numpy array
my_format = '.tiff'
retval, im_buffer = cv2.imencode(my_format, img_cv2_aka_nparr)
print(type(im_buffer))                                          # <class 'numpy.ndarray'>

im_buffer is just another numpy array - it is not at all a TIFF-encoded bytestream! As far as I can tell, OpenCV images in Python always behave like numpy arrays, and even look like numpy arrays via type().

In fact, if you want to create a dummy "OpenCV image", you have to use numpy - see e.g. https://stackoverflow.com/a/22921648/1021819

Why is this, and how do I fix it? That is, how do I obtain an actual TIFF-encoded bytestream rather than another numpy array?

Now I love numpy, but in this case I need the image to be readable by non-python services, so it needs to be in a commonly-available (preferably lossless) format (see list above).

(I've gone round the houses of embedding numpy within JSON and decided against it.)

I could use PIL/Pillow, scipy and so on, but I am trying to minimize dependencies (i.e. so far only cv2, numpy and built-ins).

Thanks!

  • "it is not at all a TIFF-encoded bytestream" -- Sure it is, contained in a single-row numpy array. Commented Aug 24, 2018 at 9:55
  • @DanMašek Interesting... So if I return this over a RESTful interface I can just open it as a TIFF client-side? Commented Aug 24, 2018 at 10:22
  • Ya. You will want to call tostring() or tobytes() first. Commented Aug 24, 2018 at 10:30
  • @DanMašek Thanks - that does it - please could you post an answer that I can accept? I do feel this falls in the cracks of the OpenCV/numpy/SO literature to date. (Also: Why..?!) Commented Aug 24, 2018 at 11:08
  • That's cool, feel free to accept your self-answer. Glad I could help you solve the problem :) Commented Aug 24, 2018 at 13:34

1 Answer


Following https://stackoverflow.com/a/50630390/1021819 and building (but not very much) on the comment by Dan Mašek, the required extra step is simply to call im_buffer.tobytes(), which returns a bytestring that can be sent through a stream.

Python OpenCV images do seem to be represented as pure numpy arrays. As Dan Mašek points out, those arrays can be transformed with cv2.imencode() into TIFF, PNG, BMP, JPEG, etc.
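
For concreteness, here is a minimal sketch of the whole encode-to-bytes flow (the file name 'test.jpg' and the '.tiff' format are just the examples from the question; the variable names are mine):

import cv2

img = cv2.imread('test.jpg')                # an OpenCV image, i.e. a numpy array of shape (H, W, 3) in BGR order
ok, im_buffer = cv2.imencode('.tiff', img)  # numpy array of uint8 holding the encoded bytes
if not ok:
    raise RuntimeError('imencode failed')

tiff_bytes = im_buffer.tobytes()            # plain bytes: an actual TIFF-encoded bytestream
# tiff_bytes can now be written to a socket, an HTTP response body, a file, etc.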

There is obviously some debate around the choice of format and compression. In the case above there was a preference for lossless compression, implying TIFF, BMP or PNG (there will still be debate). The Python bindings to OpenCV don't appear to expose tunable parameters for TIFF compression (unlike, perhaps, the C++ API), so it's not easy to discover what is being done there. The compression level could apparently be better with other libraries, but my aim was to minimize dependencies (see above).

A BMP-encoded image was no smaller than a TIFF one. PNG compression is tunable in Python OpenCV, and setting

cv2.imencode('.png', nparr, [int(cv2.IMWRITE_PNG_COMPRESSION), 1])  # compression level 1 (range 0-9; higher is smaller but slower)

has given the best size-speed trade-off so far. Thanks.
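
As a sanity check of the round trip, a receiving side that also happens to have Python and cv2 available (purely for illustration; png_bytes below stands for whatever bytes were transmitted) can decode the bytestream back into an image with cv2.imdecode():

import cv2
import numpy as np

arr = np.frombuffer(png_bytes, dtype=np.uint8)  # hypothetical png_bytes: the transmitted bytestring
img_back = cv2.imdecode(arr, cv2.IMREAD_COLOR)  # back to an (H, W, 3) BGR numpy array
assert img_back is not None                     # imdecode returns None if the buffer is not a valid image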
