I am using the exact same steps to find the contours of an image, but I am getting two different results with OpenCV 2.4.8 and OpenCV 3.2! Does anybody know why?
Here is the procedure:
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::imwrite("binImageInB.jpg", binImageIn);
// find contour of the binary image
cv::findContours( binImageIn, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) ); // Find the contours in the image // save
cv::imwrite("binImageIn.jpg", binImageIn);
The input image is:
The output when using OpenCV 2.4.8:
And the output when using OpenCV 3.2:
The documentation for 2.4.x mentions:
Note: Source image is modified by this function.
The documentation for 3.3.1 mentions:
Since opencv 3.2 source image is not modified by this function.
In general, you use the contours and hierarchy output parameters. Since the later versions no longer modify the input image, I'd consider the modification a side effect that was never intended to be useful.
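If you want to compare what the two versions actually found, draw the contours they return instead of saving the side-effected input. A minimal sketch, assuming binImageIn is the 8-bit binary mask from the question (the names work and vis are just illustrative):

std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;

cv::Mat work = binImageIn.clone(); // 2.4.x may scribble on this copy; 3.2+ leaves it alone anyway
cv::findContours(work, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));

// visualise the real output of findContours
cv::Mat vis = cv::Mat::zeros(binImageIn.size(), CV_8UC3);
cv::drawContours(vis, contours, -1, cv::Scalar(0, 255, 0), 1, 8, hierarchy);
cv::imwrite("contours.jpg", vis);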
Related
I am working with binary images from the CASIA database and OpenCV in a C++ project. I am looking for a way of extracting only the silhouette (the bounding box containing the silhouette). The original images are 240x320 and my goal is to get only the silhouette in a new image (let’s say 100x50 in size).
My first idea would be to get the minimum and maximum positions of “white” pixels along the rows and columns and copy the pixels inside this rectangle into a new image, but I don't consider this efficient at all. If you have any suggestion, I would be more than happy to hear it. On the left is the input and on the right is the output.
You can use the built-in OpenCV functionalities to find contours from your binary image:
e.g.
// using namespace cv;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( your_binary_mat, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
Note this will look only for external contours (inner contours are ignored, which doesn't matter for the image above anyway) and retrieve a simplified approximation of the contour points.
Once you have the contour you can use either boundingRect() or minAreaRect(), depending on whether you need the bounding box axis-aligned or rotated.
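Putting it together, a rough sketch of the whole crop (assuming your_binary_mat is the 8-bit silhouette mask; the local names are just illustrative): pick the largest external contour, take its bounding box, and copy that ROI into a new Mat:

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Mat work = your_binary_mat.clone(); // pre-3.2 findContours may modify its input
findContours( work, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );

// keep the contour with the largest area (the silhouette)
int biggest = -1;
double maxArea = 0.0;
for (size_t i = 0; i < contours.size(); ++i)
{
    double area = contourArea(contours[i]);
    if (area > maxArea) { maxArea = area; biggest = (int)i; }
}

if (biggest >= 0)
{
    Rect box = boundingRect(contours[biggest]);
    Mat silhouette = your_binary_mat(box).clone(); // the cropped silhouette
    // resize(silhouette, silhouette, Size(100, 50)); // optional, if a fixed output size is needed
}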
I'm using VS2015, EmguCV 3 and VB, and am trying to translate some C++ code.
C++
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(bw, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for (size_t i = 0; i < contours.size(); ++i)
{...}
I'm trying to use some object-orientation code given in full here. Basically the code will tell me the angle at which an object is oriented in an image. Unfortunately it is C++ code, and VB developers' brains can explode at the sight of some C++ syntax. Any help avoiding the need to clean my screen again would be welcome. The explosive material in this particular case was vector<vector<Point> > contours; and my question is how to translate it.
I got this far:
VB
Imports Emgu.CV
Imports Emgu.CV.Structure
...
contours = New Mat
hierarchy = New Mat
CvInvoke.FindContours(m, contours, hierarchy, CvEnum.RetrType.List, CvEnum.ChainApproxMethod.ChainApproxNone)
I'm using EmguCV 3. This claims that FindContours takes image As IInputOutputArray, contours As IOutputArray, hierarchy As IOutputArray. So I figured I could provide three Mats. m is defined earlier and has been successfully processed (e.g. with Threshold), so I'm happy with m. contours and hierarchy, on the other hand, may be problematic. When I run the code, I get an unhandled exception:
Emgu.CV.Util.CvException: OpenCV: (_contours.kind() == _InputArray::STD_VECTOR_VECTOR || _contours.kind() == _InputArray::STD_VECTOR_MAT || _contours.kind() == _InputArray::STD_VECTOR_UMAT)
This seems to suggest I've passed the wrong types to OpenCV although I would have expected Emgu to handle that. But I have no clue. Any help?
Based on the documentation, under the VB section:
"contours Type: Emgu.CV.IOutputArray -> Detected contours. Each contour is stored as a vector of points."
Therefore, instead of sending a single Mat as your contours, you should be sending a container of vectors of points.
See here: The Equivalent of C++ Vectors for VB.Net.
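For example, a sketch of what that can look like in VB with Emgu CV 3, using Emgu.CV.Util.VectorOfVectorOfPoint for the contours (the hierarchy can stay a Mat); everything except the Emgu types is illustrative:

Imports Emgu.CV
Imports Emgu.CV.Util
...
Dim contours As New VectorOfVectorOfPoint()
Dim hierarchy As New Mat()
CvInvoke.FindContours(m, contours, hierarchy, CvEnum.RetrType.List, CvEnum.ChainApproxMethod.ChainApproxNone)

For i As Integer = 0 To contours.Size - 1
    Dim contour As VectorOfPoint = contours(i)
    ' work with each contour here, e.g. CvInvoke.ContourArea(contour)
Next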
I am using the OpenCV grabCut function for image segmentation. I have looked at the grabCut usage sample that ships with OpenCV; it simply returns an image where all the "background" parts are colored black (0,0,0). I could flood-fill from every black point and get the contour, but I would rather use built-in functions if they exist.
Grabcut returns a mask. You can use this code to get the contours:
std::vector<std::vector<cv::Point> > contours;
cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
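One thing to keep in mind: the mask filled in by cv::grabCut holds the labels GC_BGD, GC_FGD, GC_PR_BGD and GC_PR_FGD (values 0 to 3), and findContours treats every non-zero pixel as foreground, so it is safer to turn the mask into a real binary image first. A sketch, assuming mask is the grabCut output mask:

// keep definite and probable foreground, everything else becomes 0
cv::Mat fgMask = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);

std::vector<std::vector<cv::Point> > contours;
cv::findContours(fgMask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);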
Recently I moved a project from Visual Studio 2010 to 2012. As soon as I did, I started experiencing memory exceptions resulting from the use of findContours(). I've played around with other solutions, but none of them seem to be working. However, based on those solutions I believe the problem is with the project properties. Any suggestions?
To be more specific, the code crashes with a memory heap error. Here is the code:
//cameraFeed is of type Mat and is declared earlier on.
Mat temp;
Mat HSV;
Mat threshold;
Vid_mtx.lock();
//convert frame from BGR to HSV colorspace
cvtColor(cameraFeed,HSV,COLOR_BGR2HSV);
Vid_mtx.unlock();
//track objects based on the HSV slider values.
inRange(HSV,Scalar(H_MIN,S_MIN,V_MIN),Scalar(H_MAX,S_MAX,V_MAX),threshold);
morphOps(threshold);
//if(calibrationMode==true)
imshow(windowName2,threshold);
threshold.copyTo(temp);
//cvtColor(temp, temp_grey,COLOR_BGR2GRAY);
if(temp.empty()) printf("Whatcha doin?");
//these two vectors needed for output of findContours
vector< vector<Point> > contours;
vector<Vec4i> hierarchy;
//find contours of filtered image using openCV findContours function
findContours(temp,contours,hierarchy, CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
From the OpenCV documentation, the source image in cv::findContours is acquired as const, but something strange is going on in my application. I'm using the cv::inRange function to get a thresholded image over a specific color, and after that, using cv::moments, I can get the center of the white pixels in the thresholded image, and this works OK.
In addition, I would like to implement code for finding the biggest contour and locating the central moment of that contour. After adding just cv::findContours to the code, I spotted strange behavior in the output, so I wanted to check what is going on with the source image using this code:
std::vector<std::vector<cv::Point> > contours;
cv::Mat contourImage;
threshedImage.copyTo(contourImage); // contourImage is a backup copy; threshedImage is the output from inRange
cv::findContours(threshedImage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cv::Point(0, 0));

cv::Mat temp;
cv::absdiff(threshedImage, contourImage, temp); // compare the source after findContours with the backup
cv::namedWindow("absdiff");
cv::imshow("absdiff", temp);
After this, the output shows that there is a difference between threshedImage and contourImage. How is this possible? Does anyone get similar results with cv::findContours?
Wrong! The docs clearly state that:
Source image is modified by this function.
So if you need the original image intact, make a copy of this image and pass the copy to cv::findContours().
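For instance, a minimal sketch based on the snippet above (the name workCopy is just illustrative):

std::vector<std::vector<cv::Point> > contours;
cv::Mat workCopy = threshedImage.clone(); // give findContours its own buffer to scribble on
cv::findContours(workCopy, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cv::Point(0, 0));
// threshedImage is still exactly what cv::inRange produced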