I am using the OpenCV grabCut function for image segmentation. I have looked at the grabCut usage sample shipped with OpenCV; it simply returns an image where all the "background" parts are colored black (0,0,0). I could flood-fill from every black point and get the contour, but I would rather use existing functions if they exist.
Grabcut returns a mask. You can use this code to get the contours:
std::vector<std::vector<cv::Point> > contours;
cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
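Note that if the mask comes straight from cv::grabCut it holds the four GC_* labels (values 0 to 3), not a clean binary image, so you may want to keep only the definite and probable foreground first. A minimal sketch, assuming mask is the CV_8UC1 mask filled in by grabCut:
// Keep definite (GC_FGD) and probable (GC_PR_FGD) foreground as 255, everything else as 0.
cv::Mat binMask = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);

std::vector<std::vector<cv::Point> > contours;
cv::findContours(binMask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);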
I am working with binary images from the CASIA database and OpenCV in a C++ project. I am looking for a way to extract only the silhouette (the bounding box containing the silhouette). The original images are 240x320, and my goal is to get only the silhouette in a new image (say, 100x50).
My first idea was to get the minimum and maximum positions of "white" pixels along rows and columns and copy the pixels inside this rectangle into a new image, but I consider this not efficient at all. If you have any suggestion, I would be more than happy to hear it. On the left is the input and on the right is the output.
You can use the built-in OpenCV functionalities to find contours from your binary image:
e.g.
using namespace cv;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( your_binary_mat, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
Note this will look for external contours only (ignoring inner contours, which don't apply to the image above anyway) and retrieve a simplified approximation of the contour points.
Once you have the contours, you can use either boundingRect() or minAreaRect(), depending on whether you need the bounding box axis-aligned or rotated.
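For example, here is a minimal sketch (reusing your_binary_mat from above, and assuming the 100x50 target size from the question) that crops the silhouette with the axis-aligned bounding box of the largest contour:
using namespace cv;

vector<vector<Point> > contours;
// clone() because findContours modifies its input in these OpenCV versions
findContours(your_binary_mat.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

if (!contours.empty())
{
    // Pick the largest contour in case small noise blobs survived thresholding
    size_t largest = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (contourArea(contours[i]) > contourArea(contours[largest]))
            largest = i;

    // Crop the bounding box and scale it to the target size
    Rect box = boundingRect(contours[largest]);
    Mat silhouette;
    resize(your_binary_mat(box), silhouette, Size(100, 50));
}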
I am using the exact same steps to find the contours of an image, but I am getting two different results in OpenCV 2.4.8 and OpenCV 3.2! Does anybody know why?
Here is the procedure:
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::imwrite("binImageInB.jpg", binImageIn);
// find the contours of the binary image
cv::findContours( binImageIn, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );
// save the input again to see whether findContours modified it
cv::imwrite("binImageIn.jpg", binImageIn);
The input image is:
The output when using OpenCV 2.4.8:
And the output when using OpenCV 3.2:
The documentation for 2.4.x mentions:
Note: Source image is modified by this function.
The documentation for 3.3.1 mentions:
Since opencv 3.2 source image is not modified by this function.
In general, you should use the contours and hierarchy output parameters. Since later versions no longer modify the input image, I'd consider the modification a side effect that was never intended to be useful.
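If you need identical behaviour on both versions, a simple hedge is to never rely on whether the input gets modified and pass a copy, e.g.:
// Pass a clone so binImageIn stays intact on 2.4.x as well;
// on 3.2+ the clone is redundant but harmless.
cv::findContours( binImageIn.clone(), contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );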
I am trying to find and separate all edges in an edge-detected image using Python OpenCV. The edges can be in the form of contours, but they don't have to be. I just want all connected edge pixels to be grouped together. So technically the algorithm may procedurally sound like this:
1. For each edge pixel, find a neighbouring (connected) edge pixel and add it to the current subdivision of the image, until you can't find one anymore.
2. Then move on to the next unchecked edge pixel, start a new subdivision, and do 1. again.
I have looked through cv2.findContours, but the results weren't satisfying, maybe because it was intended for contours (enclosed edges) rather than free-ended ones. Here are the results:
Original Edge Detected:
After Contour Processing:
I expected the five edges to each be grouped into its own subdivision of the image, but apparently the cv2.findContours function breaks 2 of the edges further into subdivisions, which I don't want.
Here is the code I used to save these 2 images:
def contourForming(imgData):
    cv2.imshow('Edge', imgData)
    cv2.imwrite('EdgeOriginal.png', imgData)

    contours = cv2.findContours(imgData, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    cv2.imshow('Contours', imgData)
    cv2.imwrite('AfterFindContour.png', imgData)
    cv2.waitKey(0)
    pass
There are restrictions to my implementation, however. I have to use Python 2.7 and OpenCV 2; I cannot use any other version or language. I say this because I know OpenCV 3 has a connectedComponents function in C++. I could have used that, but I cannot due to certain limitations.
So, any idea how I should approach the problem?
Using findContours is the correct approach; you're simply using it wrong.
Take a closer look at the documentation:
Note: Source image is modified by this function.
Your "After Contour Processing" image is in fact the garbage result from findContours. Because of this, if you want the original image to be intact after the call to findContours, it's common practice to pass a cloned image to the function.
The meaningful result of findContours is in contours. You need to draw them using drawContours, usually on a new image.
This is the result I get:
with the following C++ code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    // Load the grayscale image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Prepare the result image: 3 channels, same size as img, all black
    Mat3b res(img.rows, img.cols, Vec3b(0, 0, 0));

    // Call findContours on a clone, since findContours modifies the input
    vector<vector<Point>> contours;
    findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

    // Draw each contour with a random color
    for (size_t i = 0; i < contours.size(); ++i)
    {
        drawContours(res, contours, (int)i, Scalar(rand() & 255, rand() & 255, rand() & 255));
    }

    // Show results
    imshow("Result", res);
    waitKey();

    return 0;
}
It should be fairly easy to port to Python (I'm sorry, but I can't give you Python code since I cannot test it). You can also have a look at the OpenCV-Python tutorials to check how to correctly use findContours and drawContours.
My MATLAB code is:
h = fspecial('average', filterSize);
imageData = imfilter(imageData, h, 'replicate');
bwImg = im2bw(imageData, grayThresh);
cDist=regionprops(bwImg, 'Area');
cDist=[cDist.Area];
The OpenCV code is:
cv::blur(dst, dst,cv::Size(filterSize,filterSize));
dst = im2bw(dst, grayThresh);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(dst,contours,hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
Here is my image-to-black-and-white (im2bw) function:
cv::Mat AutomaticMacbethDetection::im2bw(cv::Mat src, double grayThresh)
{
    cv::Mat dst;
    cv::threshold(src, dst, grayThresh, 1, CV_THRESH_BINARY);
    return dst;
}
I'm getting an exception in findContours(): C++ exception: cv::Exception at memory location 0x0000003F6E09E0A0.
Can you please explain what I am doing wrong? dst is a cv::Mat that I have used all along, and it holds my original values.
Update: here is my matrix written into a *.txt file:
http://www.filedropper.com/gili
UPDATE 2: I have added dst.convertTo(dst, CV_8U); as Micka suggested, and I no longer get an exception. However, the values are nothing like expected.
Take a look at this question which has a similar problem to what you're encountering: Matlab and OpenCV calculate different image moment m00 for the same image.
Basically, the OP in the linked post is trying to find the zeroth image moment of all closed contours (which is actually just the area) by using findContours in OpenCV and regionprops in MATLAB. In MATLAB, that quantity is the Area property from regionprops, and judging from your MATLAB code, you wish to find the same quantity.
From the post, there is most certainly a difference between how OpenCV and MATLAB find contours in an image. This boils down to the way the two platforms decide what counts as a "connected" pixel: OpenCV uses a four-pixel neighbourhood, while MATLAB uses an eight-pixel neighbourhood.
As such, there is nothing wrong with your implementation, and converting to 8UC1 is the right fix for the exception. However, the areas (and ultimately the total number of connected components and the contours themselves) found by MATLAB and OpenCV will not be the same. The only way for you to get exactly the same result is to manually draw the contours found by findContours onto a black image and use the cv::moments function directly on this image.
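As a rough sketch of that last idea (bw here is a placeholder for your converted CV_8UC1 binary image, not a variable from your code):
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(bw.clone(), contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

std::vector<double> areas;
for (size_t i = 0; i < contours.size(); ++i)
{
    // Render this contour filled onto a black canvas...
    cv::Mat filled = cv::Mat::zeros(bw.size(), CV_8UC1);
    cv::drawContours(filled, contours, (int)i, cv::Scalar(255), CV_FILLED);

    // ...and take the zeroth moment (the pixel count) of the rendered region
    areas.push_back(cv::moments(filled, true).m00);
}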
However, because cv::blur() and fspecial with an even-sized averaging mask handle image borders differently, you still may not get the same results along the borders of the image. If there are no important contours around the borders of your image, then hopefully this will give you the right result.
Good luck!
According to the OpenCV documentation, the source image in cv::findContours is acquired as const, but something strange is going on in my application. I'm using the cv::inRange function to get a thresholded image of a specific color, and after that, using cv::moments, I can get the center of the white pixels in the thresholded image; this is working OK.
In addition, I would like to find the biggest contour and locate its central moment. After adding just cv::findContours to the code, I spotted strange behavior in the output, so I wanted to check what happens to the source image using this code:
cv::Mat contourImage;
threshedImage.copyTo(contourImage); // threshedImage is the output from inRange

std::vector<std::vector<cv::Point> > contours;
cv::findContours(threshedImage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cv::Point(0,0));

cv::Mat temp;
cv::absdiff(threshedImage, contourImage, temp);
cv::namedWindow("absdiff");
cv::imshow("absdiff", temp);
After this, the output shows that there is a difference between threshedImage and contourImage. How is this possible? Has anyone seen similar results with cv::findContours?
Wrong! The docs clearly state:
Source image is modified by this function.
So if you need the original image intact, make a copy of this image and pass the copy to cv::findContours().
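A minimal sketch with the variables from the question:
std::vector<std::vector<cv::Point> > contours;
// The clone is consumed by findContours; threshedImage itself stays untouched
cv::findContours(threshedImage.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cv::Point(0,0));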