In Python, I want to convert an image that has been processed using OpenCV into a particular image format (TIFF in this case, but it could be BMP, JPEG, PNG, ...) for transmission.
For this purpose it will suffice to encode the OpenCV image into a memory buffer. The problem is that when I use cv2.imencode() to do this, the returned object still looks like a numpy array:
import cv2

# cv2.imread() returns the image as a numpy ndarray (BGR channel order)
img_cv2_aka_nparr = cv2.imread('test.jpg')
my_format = '.tiff'

# Encode the image into the chosen format, in memory
retval, im_buffer = cv2.imencode(my_format, img_cv2_aka_nparr)
print(type(im_buffer))
im_buffer is just another numpy array - it is not at all a TIFF-encoded bytestream! As far as I can tell, OpenCV images in Python always behave like numpy arrays, and even report themselves as numpy arrays via type().
In fact, if you want to create a dummy "OpenCV image", you have to use numpy - see e.g. https://stackoverflow.com/a/22921648/1021819
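For illustration (my own minimal sketch, not taken from the linked answer): a blank "OpenCV image" is nothing more than a zero-filled uint8 numpy array.

import numpy as np

# A dummy 100x100 "OpenCV image": just a zero-filled HxWx3 uint8 array (BGR)
dummy_img = np.zeros((100, 100, 3), dtype=np.uint8)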
Why is this, and how do I fix it? That is, how do I obtain an actual TIFF-encoded bytestream rather than another numpy array?
Now I love numpy, but in this case I need the image to be readable by non-python services, so it needs to be in a commonly-available (preferably lossless) format (see list above).
(I've gone round the houses of embedding numpy within JSON and decided against it.)
I could use PIL/Pillow, SciPy, and others, but I am trying to minimize dependencies (i.e. so far only cv2, numpy and intrinsics).
Thanks!
Use tostring() or tobytes() first.
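A minimal sketch of that suggestion, reusing the same test.jpg as above: the array returned by cv2.imencode() already holds the encoded file contents as uint8 values, so calling tobytes() (or the older, now-deprecated tostring()) on it yields a plain Python bytes object that can be transmitted as-is.

import cv2

img = cv2.imread('test.jpg')
ok, im_buffer = cv2.imencode('.tiff', img)
assert ok

# im_buffer is a 1-D uint8 numpy array containing the encoded TIFF data;
# tobytes() copies it into an ordinary Python bytes object
tiff_bytes = im_buffer.tobytes()

# Sanity check: TIFF files begin with b'II*\x00' (little-endian) or b'MM\x00*' (big-endian)
print(tiff_bytes[:4])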