
Sorry for the lengthy explanation; as I am new to OpenCV, I want to give more details with an example.

My requirement is to find the delta of two static images. For this I am using the following technique:

cv::Mat prevImg = cv::imread("prev.bmp");
cv::Mat currImg = cv::imread("curr.bmp");

cv::Mat deltaImg;
cv::absdiff(prevImg, currImg, deltaImg);

cv::namedWindow("image", cv::WINDOW_NORMAL);
cv::imshow("image", deltaImg);

In deltaImg I am getting the difference between the images, but it also includes the background of the first image. I know I have to remove the background using BackgroundSubtractorMOG2, but I am unable to understand how to use this class, as most of the examples are based on webcam captures.

Please note that my images are static (screenshots of desktop activity).

Please guide me in resolving this issue; some sample code would be helpful.

Note: I want to calculate the delta in RGB.

Detailed explanation:

Images are at : https://picasaweb.google.com/105653560142316168741/OpenCV?authkey=Gv1sRgCLesjvLEjNXzZg#

prev.bmp: the previous screenshot of my desktop

curr.bmp: the current screenshot of my desktop

The delta between prev.bmp and curr.bmp should be the start menu image only; please find the image below:

The delta image should contain only the start menu, but it also contains the background image from prev.bmp; this background I want to remove.

Thanks in advance.

  • How do you define "delta"? Where are some examples? Have you read up on vision theory regarding this? Commented Jan 27, 2014 at 6:47
  • hi scap3y, edited with more details and image links, please review...thanks Commented Jan 27, 2014 at 7:50
  • uploading two working screenshots (same size, same location, and bmp/png instead of jpg) would be very helpful... for now I can only tell you what I would do (instead of posting tested code): cv::Mat mask = deltaImg > 0; currImg.copyTo(deltaImg, mask); should work. Commented Jan 27, 2014 at 9:03
  • one problem I see without testing: since prevImg and currImg are multi channel images, you have to convert the mask to grayscale. Commented Jan 27, 2014 at 9:17
  • hi micka, thanks for the quick response; I didn't understand your first comment regarding the upload. I tried your solution, and the output is uploaded at: picasaweb.google.com/105653560142316168741/… . This has removed some of the background, but not all of it; please suggest. Commented Jan 27, 2014 at 9:20

1 Answer


After computing cv::absdiff, your image contains non-zero values for each pixel that changed its value. So you want to use all image regions that changed.

cv::Mat deltaImg;
cv::absdiff(currImg, prevImg, deltaImg);

// collapse the 3-channel difference to a single channel
cv::Mat grayscale;
cv::cvtColor(deltaImg, grayscale, cv::COLOR_BGR2GRAY);

// create a mask that includes all pixels that changed their value
cv::Mat mask = grayscale > 0;

cv::Mat output;
currImg.copyTo(output, mask);

Here are sample images:

previous: [image]

current: [image]

mask: [image]

output: [image]

and here is an additional image of deltaImg before computing the mask: [image]

Problems occur if foreground pixels have the same value as the background pixels but belong to some other 'object'. You can use the cv::dilate operator (followed by cv::erode) to fill single-pixel gaps. Or you might want to extract just the rectangle of the start menu if you are not interested in the other parts of the image that changed, too.


3 Comments

hi micka, thanks for your solution. But how do I fix the pixel-matching problem you mentioned (matching pixels in the background and foreground images)? I am currently facing this issue. Thanks.
hi micka, can you guide me on how to find the rect of the mask, so that I can grab that rect from my currImg, which will be my changed region w.r.t. my prevImg?
Depending on the quality of your subtracted background, it can be easy or a bit harder. In my example, since not only the start menu appeared but other window elements also changed, the foreground mask contains the complete sidebar etc. If that were not the case, just finding the minimum/maximum coordinates of the white pixels in the mask would give the correct rectangle. But since the sidebar and other things are contained, I would try to detect the rectangle in different ways, maybe starting with edge detection on the mask image.
