What happens in the GrabCut algorithm - C++

I want to know what actually happens in the following code.
cv::Rect rectangle(x, y, width, height);
cv::Mat result;                 // segmentation result (one label per pixel)
cv::Mat bgModel, fgModel;       // internal models used by the algorithm
cv::grabCut(image,              // input image
            result,             // segmentation result
            rectangle,          // rectangle containing the foreground
            bgModel, fgModel,   // background/foreground models
            1,                  // number of iterations
            cv::GC_INIT_WITH_RECT); // initialise from the rectangle
cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
image.copyTo(foreground, result);
As far as I understand, when we define the rectangle, everything outside it is treated as known background and everything inside it as unknown (possible) foreground.
Then bgModel and fgModel are Gaussian mixture models that hold the background and foreground pixel distributions separately.
The parameter 1 is the number of iterations, i.e. we ask the segmentation process to run only once.
What I can't understand is
cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
What actually happens in that call?
If anyone could explain, that would be a great help.
Thanks.

I've found a book which explains that this call compares the value of each pixel in result to GC_PR_FGD: pixels that are equal become 255, all the other pixels become 0. Here's a citation from there:
The input/output segmentation image can have one of the four values:
cv::GC_BGD, for pixels certainly belonging to the background (for example, pixels outside the rectangle in our example)
cv::GC_FGD, for pixels certainly belonging to the foreground (none in our example)
cv::GC_PR_BGD, for pixels probably belonging to the background
cv::GC_PR_FGD for pixels probably belonging to the foreground (that is the initial value for the pixels inside the rectangle in our example).
We get a binary image of the segmentation by extracting the pixels
having a value equal to cv::GC_PR_FGD:
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
image.copyTo(foreground, result); // bg pixels are not copied
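So the cv::compare call turns the label image into a binary mask. A minimal sketch of the equivalent per-pixel logic (my own illustration, not from the book), assuming result holds the GrabCut labels GC_BGD = 0, GC_FGD = 1, GC_PR_BGD = 2, GC_PR_FGD = 3:
// Equivalent of cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ):
// pixels labelled "probably foreground" become 255, everything else becomes 0,
// which gives the binary mask that copyTo then uses.
cv::Mat mask(result.size(), CV_8UC1);
for (int r = 0; r < result.rows; ++r)
    for (int c = 0; c < result.cols; ++c)
        mask.at<uchar>(r, c) =
            (result.at<uchar>(r, c) == cv::GC_PR_FGD) ? 255 : 0;
Note that this keeps only the "probably foreground" pixels; if you also wanted the pixels marked as definite foreground (GC_FGD), you would test for both labels.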

Related

OpenCV color histogram calcHist considering only specific pixels (and not full image)

I want to calculate the color histogram of an image but only taking into account specific pixels (whose 2D coordinates I know).
Is it possible to use calcHist specifying that only these concrete pixels should be taken into consideration (instead of the whole cv::Mat and all the pixels in it)? If not, is it possible to create a new Mat including only those specific pixels at known positions, and how? (Considering that for a histogram the pixel coordinates do not matter, could they be added to a (1 x number_of_specific_pixels)-dim Mat keeping the original type of the Mat?)
Thanks a lot in advance!
calcHist takes a mask parameter.
So, create a new single-channel 8-bit cv::Mat with the same size as your input image. It should contain 255 at the pixels where you want the histogram to be computed and 0 everywhere else. Then pass it as the mask.
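A minimal sketch of that idea, assuming image is a BGR cv::Mat and points holds the known 2D coordinates (the helper name and the histogram parameters below are only illustrative):
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat histogramOfPixels(const cv::Mat& image, const std::vector<cv::Point>& points)
{
    // Build the mask: 255 at the pixels of interest, 0 elsewhere.
    cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
    for (const cv::Point& p : points)
        mask.at<uchar>(p) = 255;

    // Histogram of the blue channel, restricted to the masked pixels.
    int channels[] = {0};
    int histSize[] = {256};
    float range[] = {0, 256};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&image, 1, channels, mask, hist, 1, histSize, ranges);
    return hist;
}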

Filling Borders after undistort results in unwanted artefacts

I am using the OpenCV function undistort to deal with a kind of barrel distortion. Of course I now have black borders, and I wish to fill those borders with the value 65535, which is required by my subsequent processing pipeline. To achieve this behaviour, I execute the following code:
cv::undistort(src,dst,K,dis);
dst.setTo(65535,dst == 0);
where src is the original image, dst the result image, K the camera matrix, and dis the distortion coefficients. The setTo call sets all pixels that are 0 to 65535.
This results in the following example image:
It can be seen that it is white almost everywhere (the large black bar is wanted). However, there is still an outline left around the image.
These are values that were not caught by setTo since they are not 0. On closer inspection, they show a kind of linear trend.
So my question is: is it possible to "forbid" OpenCV from producing these smooth edges when using undistort? Or is there any way to get rid of these values without destroying the ground truth? The image shown is only an example; the values that appear at the edges may also occur in the wanted ground truth.
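A minimal sketch of one possible workaround (an assumption on my part, not taken from the post): undistort an all-white mask with the same parameters, so that every pixel touched by border interpolation can be flagged, including the smooth edge values.
// Assumes src is a CV_16UC1 image and K, dis are as in the question.
cv::Mat dst;
cv::undistort(src, dst, K, dis);

// Undistort an all-white mask with the same parameters; any pixel that is not
// exactly 255 afterwards received some contribution from the black border.
cv::Mat mask(src.size(), CV_8UC1, cv::Scalar(255));
cv::Mat warpedMask;
cv::undistort(mask, warpedMask, K, dis);

// Flag everything outside the fully valid area, including the interpolated edge.
dst.setTo(65535, warpedMask < 255);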

Create mask to select the black area

I have a black area around my image and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried converting the image to grayscale and then thresholding it to binary, but that affects my image, since the resulting mask also contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this:
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 become white, all others black)
use cv::findContours to find the white segments
remove the segments that don't touch the image borders
use cv::drawContours to draw the remaining segments into a mask (see the sketch below).
There is probably a solution that is more efficient at runtime, but you should be able to prototype this one quite quickly.
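A minimal sketch of those steps, assuming a grayscale input gray (the helper name makeBlackAreaMask is mine):
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat makeBlackAreaMask(const cv::Mat& gray)
{
    // 1. Inverse-binarize: pixels with the value 0 become white (255), all others black.
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV);

    // 2. Find the white segments.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // 3. + 4. Keep only segments that touch the image border and draw them into the mask.
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Rect box = cv::boundingRect(contours[i]);
        bool touchesBorder = box.x == 0 || box.y == 0 ||
                             box.x + box.width  == gray.cols ||
                             box.y + box.height == gray.rows;
        if (touchesBorder)
            cv::drawContours(mask, contours, static_cast<int>(i), cv::Scalar(255), cv::FILLED);
    }
    return mask;
}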

Scanning and Detecting Object Color in Image

I'm developing software that detects a boxer's punching motion. At the moment I use color-based segmentation with the inRange function, set to a minimum and maximum blue value. The problem is that the range is quite wide, and my cam at times picks up noise and segments objects of no interest. To improve the software I thought of scanning an image of a boxing glove and establishing the exact blue color value before further processing.
It would make sense to me to store that value in a vector and use it in the inRange function.
// My current function which takes the Minimum and Maximum values of Blue Color
Mat range_out;
inRange(blur_out, Scalar(100, 100, 100), Scalar(120, 255, 255), range_out);
So I would imagine the vector goes somewhere here:
Scan the above image and compute the blue value
Store this value in an array
Use the array in the inRange function
Could someone suggest a solution to this problem or direct me to a source of information where I can look for answers?
Since you are detecting the boxing gloves in motion, first use motion to separate them from the other elements in the scene: use frame differencing or optical flow to separate the glove and other moving areas from non-moving areas, then try your colour detection only inside those moving areas.
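A minimal sketch of the frame-differencing part of that suggestion, assuming prevFrame and currFrame are consecutive grayscale frames (the threshold value is a guess):
#include <opencv2/opencv.hpp>

cv::Mat movingAreas(const cv::Mat& prevFrame, const cv::Mat& currFrame)
{
    cv::Mat diff, motionMask;
    cv::absdiff(prevFrame, currFrame, diff);                      // pixel-wise difference
    cv::threshold(diff, motionMask, 25, 255, cv::THRESH_BINARY);  // keep strong changes only
    // Restrict the colour detection (inRange / back-projection) to motionMask.
    return motionMask;
}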
Separate luminosity and chromaticity - your fixed range will not work very well under different lighting conditions. Your range is probably wide because you are trying to see "blue" in the dark and in the light at the same time. Convert your image to HSV (or La*b*) and discard V (or L), keeping H and S (or a* and b*).
Learn a color distribution instead of a simple range - take some samples and compute a 2D color histogram over H and S (or a* and b*) for pixels on the glove. This histogram will be a model of your object's color distribution. Then use cv::calcBackProject to detect the pixels of interest in your scene.
Clean the result using a morphological close operation.
Important: in step 2, play a little with different quantization values (i.e., different numbers of bins).
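A rough sketch of that pipeline, assuming gloveSample is a cropped image of the glove and frame is the scene to search (the function name and bin counts are only a starting point, as noted above):
#include <opencv2/opencv.hpp>

cv::Mat detectGlove(const cv::Mat& gloveSample, const cv::Mat& frame)
{
    cv::Mat sampleHsv, frameHsv;
    cv::cvtColor(gloveSample, sampleHsv, cv::COLOR_BGR2HSV);
    cv::cvtColor(frame, frameHsv, cv::COLOR_BGR2HSV);

    // 2D histogram over H and S only (V is discarded to reduce lighting sensitivity).
    int channels[] = {0, 1};
    int histSize[] = {30, 32};
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float* ranges[] = {hRange, sRange};
    cv::Mat hist;
    cv::calcHist(&sampleHsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

    // Back-project the color model onto the scene, then clean up with a morphological close.
    cv::Mat backProj;
    cv::calcBackProject(&frameHsv, 1, channels, hist, backProj, ranges);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(backProj, backProj, cv::MORPH_CLOSE, kernel);
    return backProj;
}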

cimg display rendering black

Using CImg; I'll keep this quick and simple.
CImg<float> i = *spectralImages->at(currentImage);
disp.display(i);
float* f = i.data();
disp is displaying a black image despite the fact that stepping through *(f), *(f+1), *(f+2), etc. is retrieving the correct numbers (255.0, 245.0, etc.)
I've been working on this all day. Is there a quirk with CImg that I'm missing?
EDIT:
Saving the image as a BMP produces the correct result, so the issue is just with drawing it.
If your CImg image contains only a single value (or all-equal values), the default display will show it as a black image, because of the normalization applied to the pixel values for display.
As CImg is able to manage any type of image (including float-valued), it always normalizes the pixel values to [0,255] for display (it does not change the pixel values in your object, of course; it just normalizes them internally for its display).
So if your image has a single pixel value, the normalization always results in 0, hence the black image.
That means you probably didn't construct your CImgDisplay disp with the right pixel normalization argument (by default, it is enabled).
disp should be constructed like this:
CImgDisplay disp(100,100,"my display",0);
to disable the default normalization of pixel values.
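For completeness, a minimal usage sketch with normalization disabled, assuming the pixel values are already in a displayable 0-255 range (spectralImages and currentImage are taken from the question):
#include "CImg.h"
using namespace cimg_library;

CImg<float> img = *spectralImages->at(currentImage);
CImgDisplay disp(img.width(), img.height(), "my display", 0);  // 0 = no normalization
disp.display(img);
while (!disp.is_closed())
    disp.wait();  // keep the window open until it is closed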