
How can I get the coordinates of the produced bounding boxes using the inference script of Google's Object Detection API? I know that printing boxes[0][i] returns the predictions for the ith detection in an image, but what exactly do these returned numbers mean? Is there a way I can get xmin, ymin, xmax, ymax? Thanks in advance.

  • If you are happy with my answer, feel free to mark it as the accepted one. Commented Nov 20, 2019 at 8:52

2 Answers


Google's Object Detection API returns bounding boxes in the format [ymin, xmin, ymax, xmax], in normalised coordinates, i.e. values between 0 and 1 relative to the image dimensions (full explanation here). To get the (x, y) pixel coordinates, multiply the results by the width and height of the image. First get the width and height of your image:

width, height = image.size  # PIL Image: .size is (width, height); for a NumPy array use height, width = image.shape[:2]

Then, extract ymin, xmin, ymax, xmax from the boxes object and multiply by the image dimensions to get the pixel coordinates:

ymin = boxes[0][i][0]*height
xmin = boxes[0][i][1]*width
ymax = boxes[0][i][2]*height
xmax = boxes[0][i][3]*width

Finally, print the coordinates of the box corners:

print('Top left:', (xmin, ymin))
print('Bottom right:', (xmax, ymax))
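
Putting the steps together, here is a minimal sketch of a full conversion loop. It assumes image is a PIL Image and that the inference script also returned a scores array in the usual [1, N] layout alongside boxes; the 0.5 threshold is just an illustrative value:

width, height = image.size  # PIL Image gives (width, height)

for i in range(boxes.shape[1]):   # boxes has shape [1, N, 4]
    if scores[0][i] < 0.5:        # skip low-confidence detections (illustrative threshold)
        continue
    ymin, xmin, ymax, xmax = boxes[0][i]
    left, top = xmin * width, ymin * height
    right, bottom = xmax * width, ymax * height
    print('Detection %d: top left (%.1f, %.1f), bottom right (%.1f, %.1f)'
          % (i, left, top, right, bottom))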

7 Comments

Any explanation for why this is done? Your link is dead. Is it because the input images get resized to a standard size, and normalised coordinates work for any input size?
Is image a numpy array? If so, image.size gives the number of elements in the array, and image.shape gives the dimensions of the image. But I thought it gives the number of rows, then the number of columns for a matrix, i.e. height, width = image.shape.
@CMCDragonkai, yes that would make sense. Lots of sizing and resizing in neural networks.
@KolaB Expect the docs to keep moving for some time to come. tensorflow.org/api_guides/python/…
@Gal_M Thanks for the updated link. My comment was about the line in your answer that says width, height = image.size. I think this should be height, width = image.shape[:2]. I still think so after reading the updated link. The very first section, "Encoding and Decoding", says "Encoded images are represented by scalar string Tensors, decoded images by 3-D uint8 tensors of shape [height, width, channels]." It would be great if you could clarify why you use width, height = image.size.
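
To make the distinction discussed in these comments concrete, here is a small sketch contrasting the two conventions (assuming Pillow and NumPy are installed; the file name is just a placeholder):

from PIL import Image
import numpy as np

img = Image.open('example.jpg')   # placeholder file name
width, height = img.size          # PIL Image: .size is (width, height)

arr = np.array(img)               # NumPy array: shape is (height, width, channels)
height, width = arr.shape[:2]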

The boxes array that you mention contains this information: it is an [N, 4] array where each row has the format [ymin, xmin, ymax, xmax], in normalized coordinates relative to the size of the input image.
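
For example, the whole [N, 4] array can be converted to pixel coordinates in one step with NumPy (a minimal sketch; to_pixel_coords is just an illustrative helper name):

import numpy as np

def to_pixel_coords(boxes, height, width):
    # boxes: [N, 4] array of [ymin, xmin, ymax, xmax] in normalized coordinates
    scale = np.array([height, width, height, width])
    return boxes * scale  # [ymin, xmin, ymax, xmax] in pixels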
