16

Is there any way to subtract two images in Python with OpenCV (cv2)?

  • Image 1 : Any image (eg. a house Image) (static image)
  • Image 2 : The same Image with an Object (In house, a person is standing...) (static image + dynamic objects)
  • Image 3 = Image 2 - Image 1

If we subtract Image 1 from Image 2, Image 3 should contain only the object (the person).

6 Answers

44

Try background subtraction.

Use cv2.subtract(img1, img2) instead of the plain arithmetic operator; cv2.subtract performs a saturating subtraction, so negative results are clipped to 0 instead of wrapping around.
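
A minimal sketch of that call (the file names below are placeholders, not from the original answer):

import cv2

# load both images; the paths are placeholders
background = cv2.imread("house.jpg")
scene = cv2.imread("house_with_person.jpg")

# saturating subtraction: negative pixel values are clipped to 0
difference = cv2.subtract(scene, background)

cv2.imwrite("difference.jpg", difference)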

1 Comment

Background subtraction is for video processing. It typically expects several frames of an "idle" scene to learn the appearance of the background.
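
For reference, a minimal sketch of the video-based background subtraction this comment refers to, using OpenCV's built-in cv2.createBackgroundSubtractorMOG2 (the video path is a placeholder):

import cv2

cap = cv2.VideoCapture("scene.mp4")  # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # the subtractor learns the background over many frames
    # and returns a foreground mask for each new frame
    mask = subtractor.apply(frame)
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
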
12

If the background in the two images is exactly the same, you can subtract them as you mention in your post.

import cv2

image1 = cv2.imread("/path/to/image1")
image2 = cv2.imread("/path/to/image2")
image3 = image1 - image2

2 Comments

You should not use the - operator because it doesn't handle negative values (which don't make sense in images). Use this answer provided by @viki instead.
Good luck with that on uint8 images; the result will be totally wrong.
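
A quick numeric demonstration of the wraparound problem these comments describe (the values are just illustrative):

import numpy as np
import cv2

a = np.array([[10]], dtype=np.uint8)
b = np.array([[20]], dtype=np.uint8)

print(a - b)               # [[246]] -- plain uint8 subtraction wraps around
print(cv2.subtract(a, b))  # [[0]]   -- saturating subtraction clips to 0
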
9

@dvigneshwr's answer does a subtraction where the resulting negative values are clamped to 0. @Taha Anwar M-Holmes' answer preserves the negatives but changes the data type of the resulting array, so it's no longer a conventional image type.

For those wanting to identify a foreground from a background image based on the absolute difference in values and return an array of the same data type as the inputs (that's how I ended up here), use absdiff.

Assuming the arrays are the same width and height...

import cv2 as cv

image3 = cv.absdiff(image1, image2)
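
A possible follow-up step (an addition, not part of the original answer) is to threshold the absolute difference to get a binary foreground mask; the threshold value 30 below is arbitrary and assumes grayscale inputs:

import cv2 as cv

diff = cv.absdiff(image1, image2)
# mark pixels whose absolute difference exceeds 30 as foreground (255)
_, mask = cv.threshold(diff, 30, 255, cv.THRESH_BINARY)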

It's worth noting that the OP did not provide any details about the images being subtracted here; depending on what the contents of the images are, all of these approaches may answer the OP's question.

Comments

1

cv2.subtract does not work for this; it just clamps the values to the 0-255 range. If you want to keep negative values, convert the images from uint8 to int32 or int64. Note that uint8 can only take values from 0 to 255, so it cannot represent negative values.

import numpy as np

image1 = np.int32(image1)
image2 = np.int32(image2)
image3 = image1 - image2
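
If you later need a displayable 8-bit image from the signed result, one option (an assumption on my part, not part of the original answer) is to take the absolute value and convert back:

import numpy as np

# map the signed difference back to uint8 for display or saving
image3_display = np.clip(np.abs(image3), 0, 255).astype(np.uint8)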

4 Comments

In the context of identifying foreground objects, why would you want negative values after subtracting 2 images? (I'm wondering if you're using some object detection algorithm I'm not familiar with.)
You can't guarantee that subtracting the background from the foreground leaves you with positive foreground values; the foreground intensity values can easily be lower than the background that was there before. Thus you go for the absolute difference between foreground and background pixels.
Yup, I'm with you on calculating the absolute difference, but OpenCV already provides a function for this (see my answer). I was more so wondering if there was any reason to preserve the negatives (i.e. calculating the difference, rather than the absolute difference) as your answer does.
Gray background: the foreground object may be white (positive difference) or black (negative difference). This may be relevant for further processing (say, classification). Negative differences have to be handled anyway, be it by preserving them, clipping (saturating math), or taking the absolute value.
0

When working with grayscale images, one mistake I made was forgetting that 0 is black and 255 is white, the opposite of what you expect on printed media.

A proper subtraction would involve a bitwise_not() done at each step:

image3 = cv2.bitwise_not(cv2.bitwise_not(image1) - cv2.bitwise_not(image2))

Comments

-2
# Find moving objects by subtracting consecutive frames.
#
# Running the program pops up a window to watch the video.
# The program's video window shows the first monitor,
# so watch the program's video window on a second, extended monitor.

import cv2
import numpy as np

# Open the video source (device index 1 here).
# I made the capture 1280x720 to speed the program up on my computer
# (RTX 3060, OBS Studio at 60 fps).
cap = cv2.VideoCapture(
    1,
    apiPreference=cv2.CAP_ANY,
    params=[cv2.CAP_PROP_FRAME_WIDTH, 1280, cv2.CAP_PROP_FRAME_HEIGHT, 720],
)

# Used as counter variable
count = 1

# checks whether frames were extracted
success = 1

# create frame_1 and frame_2 so the frames can be reused across loop iterations
frame_1 = 0
frame_2 = 0

# 0 while the first two frames are being collected, 1 once subtraction has started
count_subtraction = 0

while success:

    # extract the next frame; stop if the capture fails
    success, image = cap.read()
    if not success:
        break

    if count_subtraction == 0:
        if count <= 2:
            # Saves the frames with frame-count
            cv2.imwrite("frame_%d.jpg" % count, image, [int(cv2.IMWRITE_JPEG_QUALITY), 100])  # jpg 100% quality

            count += 1

        if count == 3:

            frame_1 = cv2.imread("frame_1.jpg", 0)
            frame_2 = cv2.imread("frame_2.jpg", 0)

            # use the frames below

            # Create the sharpening kernel
            kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])

            # Sharpen the image
            frame_1 = cv2.filter2D(frame_1, -1, kernel)
            frame_2 = cv2.filter2D(frame_2, -1, kernel)

            # subtract the images
            subtracted = cv2.subtract(frame_2, frame_1)

            subtracted_sharpened = cv2.filter2D(subtracted, -1, kernel)

            # To show the output
            cv2.imshow("image", subtracted_sharpened)

            # the else branch needs a previous frame to subtract from
            frame_1 = frame_2

            count = 1
            count_subtraction = 1

    else:

        cv2.imwrite("frame_2.jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), 100])  # jpg 100% quality

        frame_2 = cv2.imread("frame_2.jpg", 0)

        # use the frames below

        # Create the sharpening kernel
        kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])

        # Sharpen the image
        frame_2 = cv2.filter2D(frame_2, -1, kernel)

        # subtract the images
        subtracted = cv2.subtract(frame_2, frame_1)

        subtracted_sharpened = cv2.filter2D(subtracted, -1, kernel)

        # To show the output
        cv2.imshow("image", subtracted_sharpened)

        # the second frame becomes a first frame
        frame_1 = frame_2


    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

Comments
