
I am trying to convert the white background of the input image to black using Python OpenCV, but not all of the white pixels are being converted to black. I have attached the input and output images.

Input image:

[input image shown in a window]

Output image:

[output image shown in a window]

I have used the following code for conversion:

img[np.where((img == [255, 255, 255]).all(axis=2))] = [0, 0, 0]

What should I do?

2 Comments

  • detect the elliptical region and mask everything outside of it – Commented Jul 15, 2018 at 10:03
  • or, for example, you could first mask a "near white" image and then treat as background only those pixels that are connected to the image border – Commented Jul 15, 2018 at 10:22

2 Answers


I know this has already been answered, but here is a working Python solution.

First, I found this thread explaining how to remove white pixels.

The result:

[result image]

Another test image:

[second result image]
Edit: This is a much better and shorter method. I looked into it after @ZdaR commented on looping over an image's matrix.

[Updated Code]

import cv2

img = cv2.imread("Images/test.png")

# threshold the grayscale image to locate near-white pixels
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

# set every thresholded pixel to black in one vectorized step
img[thresh == 255] = 0

# erode to clean up the remaining white fringe around the edges
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
erosion = cv2.erode(img, kernel, iterations=1)

cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", erosion)
cv2.waitKey(0)
cv2.destroyAllWindows()

Source

[Old Code]

import cv2
import numpy as np

img = cv2.imread("Images/test.png")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

black_px = np.asarray([0, 0, 0])

# walk the single-channel threshold mask pixel by pixel (slow)
(row, col) = thresh.shape
img_array = np.array(img)

for r in range(row):
    for c in range(col):
        if thresh[r][c] == 255:
            img_array[r][c] = black_px

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
erosion = cv2.erode(img_array, kernel, iterations=1)

cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", erosion)
cv2.waitKey(0)
cv2.destroyAllWindows()

Other Sources used: OpenCV Morphological Transformations


1 Comment

When using OpenCV, it is never a good idea to iterate over the image matrix pixel by pixel with nested for loops in Python; using NumPy syntax is always preferable.
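The speed difference aside, the two forms are interchangeable: the nested loop in the old code and the boolean-mask assignment in the updated code produce byte-for-byte identical results. A minimal check on a synthetic array (not OP's image):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (100, 100, 3), dtype=np.uint8)

# stand-in for cv2.threshold: mark pixels whose max channel exceeds 240
thresh = (img.max(axis=2) > 240).astype(np.uint8) * 255

# loop version (slow)
loop_out = img.copy()
rows, cols = thresh.shape
for r in range(rows):
    for c in range(cols):
        if thresh[r, c] == 255:
            loop_out[r, c] = 0

# vectorized version (fast) -- same result
vec_out = img.copy()
vec_out[thresh == 255] = 0
```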

I think not all the "white" pixels in the image are exactly [255, 255, 255]. Instead, use a threshold: try treating [220, 220, 220] and above as white, and convert those pixels to [0, 0, 0].

5 Comments

  • this might affect the bright yellow region in the centre!
  • conversion to HSV may help in segmenting the image
  • As Jeru Luke said, the threshold [220, 220, 220] affects the yellow region in some images.
  • Use trial and error: try [230, 230, 230] next; if it still affects the yellow, go for [235, 235, 235], and so on.
  • This approach does not scale well. Whatever you set the threshold to, any occurrence inside the image will also be affected. The question is under-specified: the OP does not want all lighter pixels converted to black, only those around the main image.
