I'm trying to remove the foreground from two images; here's a sample pair of images:
As you can see, the Budweiser bottle is removed from the scene before the second shot is taken.
These photos were captured with a pinhole camera (an iPhone). The tricky part is that I'm hand-holding the camera, so the images cannot be guaranteed to be aligned pixel by pixel, and a simple subtract-and-threshold method will not work.
Then I decided to perform image registration using findHomography and warpPerspective from OpenCV. Here's the resulting image:
This image is warped with the matrix I got from findHomography. It somewhat improved the alignment quality, but it's still not aligned well enough for a simple subtraction to remove the foreground.
So, finally, I decided to implement a "fuzzy-minus" algorithm: for every pixel in image1, I look through a 7x7 neighbourhood in image2 (a 7-by-7 kernel), take the minimal difference in grayscale as the result of the subtraction, and threshold the result into a binary image. Here's what I've got:
And the result is still not good. Notice the white holes in the bottle; they are produced by the similar grayscale values of the foreground and background. So I'm not sure what to do now.
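For reference, the fuzzy-minus step boils down to roughly this (a simplified sketch: radius 3 gives the 7x7 neighbourhood, and the threshold value of 30 is just a guess):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdlib>

// "Fuzzy minus": for each pixel of gray1, take the minimal absolute
// grayscale difference over a (2*radius+1)^2 neighbourhood of gray2.
cv::Mat fuzzyMinus(const cv::Mat& gray1, const cv::Mat& gray2, int radius = 3)
{
    cv::Mat diff(gray1.size(), CV_8UC1, cv::Scalar(255));
    for (int y = 0; y < gray1.rows; ++y)
        for (int x = 0; x < gray1.cols; ++x) {
            int best = 255;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= gray2.rows || xx < 0 || xx >= gray2.cols)
                        continue;
                    best = std::min(best,
                        std::abs(gray1.at<uchar>(y, x) - gray2.at<uchar>(yy, xx)));
                }
            diff.at<uchar>(y, x) = static_cast<uchar>(best);
        }
    cv::Mat mask;
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY); // 30 is a guess
    return mask;
}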
I can think of two ways to solve the problem: the first is to get a better-aligned pair of images and simply subtract them; the second is to use a more robust way to extract the foreground.
Can anyone give me some advice on how to deal with this kind of problem? I believe there should be some state-of-the-art algorithms or processing pipelines, but after googling around, I found nothing.
I'm using OpenCV with C++; it would be fantastic if you could tell me how to do it with these tools in hand.
Big big thanks in advance!
The problem is not in your algorithm. You are having problems because the two scenes were not taken from exactly the same angle, as shown in the animation below. This slight difference highlights the edges in the subtraction.
You need a static camera in order to apply this approach.
I suggest using mathematical morphology on the mask you got to get rid of the artifacts.
Try applying both opening and closing to remove the small black and white regions.
Mathematical Morphology
Mathematical Morphology in OpenCV
The difference between the two pictures is pretty large, so you will need to use a large structuring element, but I don't think you will be able to get rid of the shadow.
For the two large strips in the background, you may try a horizontally shaped structuring element as well.
Edit
Is it possible to produce a grayscale image instead of a binary image? If yes, you may try to experiment with the top-hat method for the shadow, but I am not sure about this point.
This is what I got using two different structuring elements, for closing THEN opening:
#include <opencv2/opencv.hpp>
using namespace cv;

Mat mask = imread("mask.jpg", IMREAD_GRAYSCALE);
// Close first to fill the small black holes, then open to remove the small white blobs.
morphologyEx(mask, mask, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(50, 10)));
morphologyEx(mask, mask, MORPH_OPEN, getStructuringElement(MORPH_ELLIPSE, Size(10, 50)));
imshow("open", mask);
imwrite("maskopenclose.jpg", mask);
waitKey(0);
I would suggest optical flow for alignment and OpenCV's background subtraction algorithm:
http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html
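A minimal sketch of that background-subtraction API (file names are hypothetical, and feeding only two stills to an algorithm designed for video is a crude use of it; learning is frozen for the second frame):

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical file names: the empty scene acts as the "background",
    // the shot with the bottle as the "foreground" frame.
    cv::Mat background = cv::imread("scene_empty.jpg");
    cv::Mat withBottle = cv::imread("scene_bottle.jpg");

    cv::Ptr<cv::BackgroundSubtractorMOG2> sub = cv::createBackgroundSubtractorMOG2();
    cv::Mat fgMask;
    sub->apply(background, fgMask);      // learn the background model
    sub->apply(withBottle, fgMask, 0);   // learning rate 0: just classify
    cv::imwrite("foreground_mask.jpg", fgMask);
    return 0;
}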
I suggest that instead of using findHomography you try some of OpenCV's stereo correspondence functions: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
There is sample code here: https://github.com/Itseez/opencv/blob/master/samples/cpp/stereo_calib.cpp
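For illustration only, a minimal sketch of block matching on the pair (hypothetical file names; in practice the images must be calibrated and rectified first, which is what the linked sample does, and the 64/21 parameters are arbitrary starting values):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left = cv::imread("shot1.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("shot2.jpg", cv::IMREAD_GRAYSCALE);
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disparity;
    bm->compute(left, right, disparity);
    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16)); // 16S -> 8U for saving
    cv::imwrite("disparity.png", disp8);
    return 0;
}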
Related
I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area using Canny (or a faster edge-detection algorithm), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale- and rotation-invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time-consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following, but wasn't able to get them to work (they're either using old versions of OpenCV, or just not working with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this? Or a code snippet for the same, if possible?
This is my sample input image (the parts to detect are marked in red):
These are some software packages that do this, and also how I want mine to behave:
This is the topic I have been dealing with for a year on a project, so I will try to explain my approach and how I do it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.). And be sure to clean the noise from the image.
Note: In my approach, I collect data from contours on a reference image that contains my desired object. Then I compare that data with the other contours in the big image.
1. Use Canny edge detection and find the contours on the reference image. You need to make sure it doesn't miss parts of the contours; if it does, the preprocessing probably has problems. The other important point is to choose an appropriate mode for findContours, because each mode has different properties, so you need to find the one appropriate for your case. At the end, eliminate contours until only the ones that are acceptable for you remain.
2. After getting the contours from the reference, you can compute the length of every contour (from the output of findContours(), e.g. with arcLength). You can compare these values on your big image and eliminate the contours that differ too much.
3. minAreaRect fits a minimal enclosing rotated rectangle to each contour. In my case, this function is very good to use. I get two parameters from it:
a) Calculate the short and long edges of the fitted rectangle and compare the values with the other contours in the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get a percentage of how many pixels are close to white or black) and compare at the end.
4. matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes works very well on my side (a consolidated sketch follows below).
I don't think matchTemplate is good to use directly. I draw every contour onto a separate zero-initialized Mat (a blank black image) as a template image and then compare it with the others. Using the reference template image directly doesn't give good results.
OpenCV has some good algorithms for finding circles, convexity, etc. If your case is related to those, you can also use them as a step.
At the end, you just collect all the data and values, and you can make a table in your mind. The rest is a kind of statistical analysis.
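A consolidated sketch of steps 1-4 (the Canny values, the 0.3 tolerances, and the 0.1 matchShapes score are illustrative, not tuned, and the reference image is assumed to yield the object as its first contour):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<std::vector<cv::Point>> findCandidates(const cv::Mat& refGray,
                                                   const cv::Mat& sceneGray)
{
    auto contoursOf = [](const cv::Mat& gray) {
        cv::Mat edges;
        cv::Canny(gray, edges, 50, 150);                 // placeholder thresholds
        std::vector<std::vector<cv::Point>> cs;
        // RETR_EXTERNAL is one possible mode; pick whatever fits your case.
        cv::findContours(edges, cs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return cs;
    };

    std::vector<std::vector<cv::Point>> ref = contoursOf(refGray);
    const std::vector<cv::Point>& refC = ref.at(0);      // assumed: the object
    double refLen = cv::arcLength(refC, true);
    cv::RotatedRect refRect = cv::minAreaRect(refC);
    double refLong = std::max(refRect.size.width, refRect.size.height);

    std::vector<std::vector<cv::Point>> hits;
    for (const auto& c : contoursOf(sceneGray)) {
        // Step 2: eliminate contours whose length differs too much.
        if (std::abs(cv::arcLength(c, true) - refLen) > 0.3 * refLen) continue;
        // Step 3a: compare the long edge of the fitted rectangles.
        cv::RotatedRect r = cv::minAreaRect(c);
        double cLong = std::max(r.size.width, r.size.height);
        if (std::abs(cLong - refLong) > 0.3 * refLong) continue;
        // Step 4: shape comparison on the survivors.
        if (cv::matchShapes(refC, c, cv::CONTOURS_MATCH_I1, 0) < 0.1)
            hits.push_back(c);
    }
    return hits;
}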
Note: I think the most important part is the preprocessing. So make sure you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to do something for an industrial application, this is totally the wrong way. I tried YOLO and Haar cascade training algorithms several times and also trained some objects with them. My experience is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be totally correct even if your calibration is correct. On the other hand, training time and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality. You will get a lot of wrong contours, and it is hard to filter them.
2. Haar cascades -> cut out the bounding box -> check the shape inside.
All "special points/edge matching " algorithms will not work in such bad conditions.
I'm working on a face-detection app using segmentation of skin pixels with a predefined skin model in YCrCb space.
I'm loosely basing my algorithms on this report: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=767122&tag=1, by Douglas Chai and King N. Ngan.
I first segment out all skin pixels (see left).
After that I perform some calculations to reduce the noise (see steps below). This results in a filtered bitmap 1/8th the size of the original. Ideally this would be noise-free in both the face and background areas, but it isn't. I have already tried to reduce the noise by using my density map, checking the pixels in each neighbouring 3x3 area and eroding/dilating pixel values depending on their neighbours. Then I resize this bitmap and apply the result as a mask on the original image (see the right image for the result; ignore my censorship).
My question is, what methods do you recommend for getting rid of the noise?
Also, are there any good methods for getting smoother contours? Ideally I would not like to use "find the biggest contour and flood-fill"; preferably something more sophisticated.
There also seems to be a bit of displacement in the resized mask (it cuts off a bit too much on the right side of my face, and shows a bit too much on the left side). What could be causing this?
The easiest way to get smoother contours is to interpolate your data to a higher resolution using an interpolation scheme. You can look in OpenCV; that will result in smoother transitions between points.
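For example, a minimal sketch of upscaling the 1/8-size mask with bicubic interpolation (the 127 re-binarization threshold is arbitrary):

#include <opencv2/opencv.hpp>

// Upscale the small mask with bicubic interpolation, then re-binarize.
cv::Mat upscaleMask(const cv::Mat& smallMask, const cv::Size& fullSize)
{
    cv::Mat big;
    cv::resize(smallMask, big, fullSize, 0, 0, cv::INTER_CUBIC);
    cv::threshold(big, big, 127, 255, cv::THRESH_BINARY); // 127 is arbitrary
    return big;
}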
I hope it will help a little bit. Good luck.
Problem
Is there a built-in function for interpolating single pixels?
Given a normal image as a Mat and a Point, e.g. an anomaly of the sensor or an outlier, is there some function to repair this point?
Furthermore, if I have more than one point connected (let's say a blob with an area smaller than 10x10), is there a possibility to fix them too?
Tries, but not really solutions
It seems that interpolation is implemented in the geometric transformations, including resizing images, and pixels outside the image can be extrapolated with borderInterpolate, but I haven't found a possibility for single pixels or small clusters of pixels.
A solution with medianBlur, as suggested here, does not seem appropriate, as it changes the whole image.
Alternative
If there isn't a built-in function, my idea would be to look at all 8-connected surrounding pixels that are not part of the blob and calculate their mean or a weighted mean. If this is done iteratively, all missing or erroneous pixels should be filled. But this method would depend on the order in which the pixels are corrected (a sketch of the idea follows below). Are there other suggestions?
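Roughly, the iterative neighbour-mean idea, sketched for a grayscale image (border pixels are skipped for brevity; the pass order is exactly the dependence mentioned above):

#include <opencv2/opencv.hpp>

// Each masked pixel (mask = 255) takes the mean of its unmasked
// 8-neighbours; repeat until the blob is filled or no progress is made.
void fillByNeighbourMean(cv::Mat& gray, cv::Mat mask)
{
    while (cv::countNonZero(mask) > 0) {
        cv::Mat next = mask.clone();
        for (int y = 1; y < gray.rows - 1; ++y)
            for (int x = 1; x < gray.cols - 1; ++x) {
                if (!mask.at<uchar>(y, x)) continue;
                int sum = 0, n = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        if (!mask.at<uchar>(y + dy, x + dx)) {
                            sum += gray.at<uchar>(y + dy, x + dx);
                            ++n;
                        }
                if (n > 0) {
                    gray.at<uchar>(y, x) = static_cast<uchar>(sum / n);
                    next.at<uchar>(y, x) = 0; // repaired in this pass
                }
            }
        if (cv::countNonZero(next) == cv::countNonZero(mask)) break; // stuck
        mask = next;
    }
}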
Update
Here is an image to illustrate the problem. Left: the original image, with a contour marking the pixels to fix. Right: the fixed pixels. I hope to find some sophisticated algorithm to fix the pixels.
The built-in function inpaint of OpenCV does the desired interpolation of chosen pixels. Simply create a mask with all the pixels to be repaired.
See the documentation here: OpenCV 3.2, inpaint (description and function reference).
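A minimal sketch of that call (the file names and the 3-pixel radius are illustrative):

#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>

int main()
{
    cv::Mat image = cv::imread("image.png");                            // hypothetical input
    cv::Mat mask = cv::imread("defect_mask.png", cv::IMREAD_GRAYSCALE); // non-zero = repair
    cv::Mat repaired;
    cv::inpaint(image, mask, repaired, 3, cv::INPAINT_TELEA);
    cv::imwrite("repaired.png", repaired);
    return 0;
}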
I'm doing binary thresholding on an image using OpenCV. While moving or animating, for example, a circle on a binary image, some noise appears around the movable object. An image illustrating what I mean is attached. How can I get rid of those artifacts?
You could try to apply several cycles of the erosion algorithm (until there is only one object left), followed by the same number of cycles of the dilation algorithm (the erosion/dilation pair is called opening).
See here: http://en.wikipedia.org/wiki/Mathematical_morphology
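A minimal sketch of that, assuming an 8-bit binary mask (n = 2 cycles and the 3x3 ellipse kernel are arbitrary starting points):

#include <opencv2/opencv.hpp>

// n erosion passes followed by n dilation passes (morphological opening).
cv::Mat openMask(const cv::Mat& binary, int n = 2)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::Mat out;
    cv::erode(binary, out, kernel, cv::Point(-1, -1), n);
    cv::dilate(out, out, kernel, cv::Point(-1, -1), n);
    return out;
}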
If you want to get rid of objects that are not circles, you can filter contours according to several metrics; this seems to be a good starting link.
In your case, you could find all contours and keep only the ones with high circularity and a small aspect ratio.
You can go further and calculate metrics such as area / area_of_the_convex_hull. This ratio should be close to one for your circle.
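For instance, a sketch of both metrics (circularity = 4*pi*area/perimeter^2; the 0.8 and 0.9 thresholds are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Circularity and solidity (area / hull area) are both close to 1 for a circle.
bool looksLikeCircle(const std::vector<cv::Point>& contour)
{
    double area = cv::contourArea(contour);
    double perim = cv::arcLength(contour, true);
    if (perim <= 0)
        return false;
    double circularity = 4.0 * CV_PI * area / (perim * perim);

    std::vector<cv::Point> hull;
    cv::convexHull(contour, hull);
    double solidity = area / cv::contourArea(hull);

    return circularity > 0.8 && solidity > 0.9; // illustrative thresholds
}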
Good luck
ps: this pdf seems more exhaustive.
I am playing around with the Kinect, and I'm trying to get as accurate a human contour as possible.
So far I've tried changing threshold values, blurring, etc., but I was wondering if there is an existing, effective method of doing it.
I believe there are two main problems in getting a good shape. One is that it keeps flickering all the time. The other is that the shape itself is not very good (hair not reflecting the IR lighting, etc.).
Any recommendations on how to proceed? At the moment I'm trying to average the values of the most recent frames to stabilize against the first problem, and I might try to convert the shape to a polygon and simplify it (however that's done).
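For the frame averaging, a minimal sketch of what I mean, using cv::accumulateWeighted (assuming an 8-bit mask; alpha = 0.2 is an arbitrary decay rate):

#include <opencv2/opencv.hpp>

// Running average of recent masks; alpha controls how fast old frames fade.
void stabilize(const cv::Mat& mask8u, cv::Mat& accum32f, double alpha = 0.2)
{
    if (accum32f.empty())
        mask8u.convertTo(accum32f, CV_32F);
    cv::accumulateWeighted(mask8u, accum32f, alpha);
    // Threshold accum32f afterwards to get a stable binary mask.
}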
One approach would be to improve the depth mask with a foreground/background mask calculated from the RGB image and a classic background removal algorithm.
You could also work with the morphology functions of OpenCV to remove unwanted small parts of the mask or close holes.