Convexity defects C++ OpenCV

I would be grateful to you if you could help me with this issue :)
Related to the question cvConvexityDefects in OpenCV 2.X / C++?, I have the same problem.
The OpenCV C++ wrapper does not have the cvConvexityDefects function that appears in the C version, so I tried to write my own version.
Part of the code is below (please note that both contour and hull are vector<Point>, calculated separately):
CvSeq* contourPoints;
CvSeq* hullPoints;
CvSeq* defects;
CvMemStorage* storage;
CvMemStorage* strDefects;
CvMemStorage* contourStr;
CvMemStorage* hullStr;
CvConvexityDefect *defectArray = 0;

strDefects = cvCreateMemStorage();
defects = cvCreateSeq( CV_SEQ_KIND_GENERIC|CV_32SC2, sizeof(CvSeq), sizeof(CvPoint), strDefects );

//We start converting the vector<Point> resulting from findContours
contourStr = cvCreateMemStorage();
contourPoints = cvCreateSeq(CV_SEQ_KIND_GENERIC|CV_32SC2, sizeof(CvSeq), sizeof(CvPoint), contourStr);
printf("Pushing values\n");
for(int i=0; i<(int)contour.size(); i++) {
    CvPoint cp = {contour[i].x, contour[i].y};
    cvSeqPush(contourPoints, &cp);
}

//Now, the hull points obtained from the C++ convexHull
hullStr = cvCreateMemStorage(0);
hullPoints = cvCreateSeq(CV_SEQ_KIND_GENERIC|CV_32SC2, sizeof(CvSeq), sizeof(CvPoint), hullStr);
for(int i=0; i<(int)hull.size(); i++) {
    CvPoint cp = {hull[i].x, hull[i].y};
    cvSeqPush(hullPoints, &cp);
}

//And we compute convexity defects
storage = cvCreateMemStorage(0);
defects = cvConvexityDefects(contourPoints, hullPoints, storage);
The output is "Convex hull must represented as a sequence of indices or sequence of pointers" in function cvConvexityDefects. I really don't know how to do the conversion in the right way; I've been searching the web and tried to adapt/copy/understand some pieces of code, but it is always with the C syntax.
I hope I was clear. Thank you in advance!

I asked this question because I wasn't able to figure out a solution (it's not just today that I've been dealing with this, hehe), but in the end I managed to solve the problem!
I had to change the way I calculated the convex hull, using the index array form, so now we have a vector<int> instead of a vector<Point>.
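A minimal sketch of getting the hull in index form (assuming contour is the vector<Point> returned by findContours):
std::vector<int> hull;
cv::convexHull(cv::Mat(contour), hull); // with a vector<int> output, convexHull fills in indices into contour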
This is the code I used (it works; I verified it by painting the points over an image):
void HandDetection::findConvexityDefects(vector<Point>& contour, vector<int>& hull, vector<Point>& convexDefects){
    if(hull.size() > 0 && contour.size() > 0){
        CvSeq* contourPoints;
        CvSeq* defects;
        CvMemStorage* storage;
        CvMemStorage* strDefects;
        CvMemStorage* contourStr;
        CvConvexityDefect *defectArray = 0;

        strDefects = cvCreateMemStorage();
        defects = cvCreateSeq( CV_SEQ_KIND_GENERIC|CV_32SC2, sizeof(CvSeq), sizeof(CvPoint), strDefects );

        //We transform our vector<Point> into a CvSeq* object of CvPoint.
        contourStr = cvCreateMemStorage();
        contourPoints = cvCreateSeq(CV_SEQ_KIND_GENERIC|CV_32SC2, sizeof(CvSeq), sizeof(CvPoint), contourStr);
        for(int i=0; i<(int)contour.size(); i++) {
            CvPoint cp = {contour[i].x, contour[i].y};
            cvSeqPush(contourPoints, &cp);
        }

        //Now, we do the same thing with the hull indices, packed into a CvMat of int.
        int count = (int)hull.size();
        int* hullK = (int*)malloc(count*sizeof(int));
        for(int i=0; i<count; i++){ hullK[i] = hull.at(i); }
        CvMat hullMat = cvMat(1, count, CV_32SC1, hullK);

        //We calculate the convexity defects
        storage = cvCreateMemStorage(0);
        defects = cvConvexityDefects(contourPoints, &hullMat, storage);
        defectArray = (CvConvexityDefect*)malloc(sizeof(CvConvexityDefect)*defects->total);
        cvCvtSeqToArray(defects, defectArray, CV_WHOLE_SEQ);
        //printf("DefectArray %i %i\n",defectArray->end->x, defectArray->end->y);

        //We store the defect points in the convexDefects parameter.
        for(int i = 0; i<defects->total; i++){
            CvPoint ptf;
            ptf.x = defectArray[i].depth_point->x;
            ptf.y = defectArray[i].depth_point->y;
            convexDefects.push_back(ptf);
        }

        //We release memory
        free(hullK);
        free(defectArray);
        cvReleaseMemStorage(&contourStr);
        cvReleaseMemStorage(&strDefects);
        cvReleaseMemStorage(&storage);
    }
}
This worked for me. If you see something wrong, or know a better way to handle it, please tell me!

I found a more direct approach using the C++ convexityDefects.
The type handling is done by the convexHull function: it fills its output according to the output type, so a vector<int> gets indices and a vector<Point> gets coordinates.
void WorkFrame( Mat img, double minArea )
{
    //assumption:
    // img already preprocessed: threshold, gray, smooth, morphology, whatever..

    //get some contours
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours( img, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );

    for( int i=0; i<(int)contours.size(); i++ )
    {
        vector<Point>& c=contours[i];
        double area = contourArea( c );
        if( area<minArea ){ continue; } //filter remaining noise

        //convexHull works type-dependent.
        //std::vector<Point> ptHull1; //uncomment and compare to ptHull2
        //convexHull( c, ptHull1 ); //convexHull is smart and fills in coordinates directly
        std::vector<int> ihull;
        convexHull( c, ihull ); //convexHull is smart and fills in contour indices

        std::vector<Vec4i> defects;
        convexityDefects( c, ihull, defects ); //expects indexed hull (internal assertion mat.channels()==1)

        std::vector< Point > ptHull2;
        std::vector<int>::iterator ii=ihull.begin();
        while( ii!=ihull.end() )
        {
            int idx=(*ii);
            ptHull2.push_back( c[idx] );
            ii++;
        }
        cv::polylines( img, c, true, Scalar( 0xCC,0xCC,0xCC ), 1 );
        cv::polylines( img, ptHull2, true, Scalar( 0xFF, 0x20, 0x20 ), 1 );

        std::vector<Vec4i>::iterator d=defects.begin();
        while( d!=defects.end() )
        {
            Vec4i& v=(*d); d++;
            int startidx=v[0]; Point ptStart( c[startidx] );
            int endidx=v[1];   Point ptEnd( c[endidx] );
            int faridx=v[2];   Point ptFar( c[faridx] );

            cv::circle( img, ptStart, 4, Scalar( 0x02,0x60,0xFF ), 2 );
            cv::circle( img, ptEnd,   4, Scalar( 0xFF,0x60,0x02 ), 2 );
            cv::circle( img, ptFar,   4, Scalar( 0x60,0xFF,0x02 ), 2 );
        }
    }
}

Related

OpenCV get coordinates of red-only rectangle area

I have the following output from red-only filtration done by the following algorithm:
cv::Mat findColor(const cv::Mat & inputBGRimage, int rng=20)
{
    // Make sure that your input image uses the channel order B, G, R (check not implemented).
    cv::Mat mt1, mt2;
    cv::Mat input = inputBGRimage.clone();
    cv::Mat imageHSV;   //(input.rows, input.cols, CV_8UC3);
    cv::Mat imgThreshold, imgThreshold0, imgThreshold1;   //(input.rows, input.cols, CV_8UC1);
    assert( ! input.empty() );

    // blur image
    cv::blur( input, input, Size(11, 11) );

    // convert input image to HSV image
    cv::cvtColor( input, imageHSV, cv::COLOR_BGR2HSV );

    // In the HSV color space the color 'red' is located around the H-value 0 and also around the
    // H-value 180. That is why you need to threshold your image twice and then combine the results.
    cv::inRange( imageHSV, cv::Scalar( H_MIN, S_MIN, V_MIN ), cv::Scalar( H_MAX, S_MAX, V_MAX ), imgThreshold0 );
    if ( rng > 0 )
    {
        // cv::inRange( imageHSV, cv::Scalar( 180-rng, 53, 185, 0 ), cv::Scalar( 180, 255, 255, 0 ), imgThreshold1 );
        // cv::bitwise_or( imgThreshold0, imgThreshold1, imgThreshold );
    }
    else
    {
        imgThreshold = imgThreshold0;
    }
    // cv::dilate( imgThreshold0, mt1, Mat() );
    // cv::erode( mt1, mt2, Mat() );
    return imgThreshold0;
}
And here is the output:
And I want to detect the four coordinates of the rectangle. As you can see, the output is not perfect; I used cv::findContours in conjunction with cv::approxPolyDP before, but it's not working well anymore.
Is there any filter that I can apply to the input image (besides blur, dilate, erode) to make the image better for processing?
Any suggestions?
Updated:
When I am using findContours like this:
findContours( src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );

double largest_area = 0;
for( int i = 0; i < (int)contours.size(); i++) { // get the largest contour
    double area = fabs( contourArea( contours[i] ) );
    if( area >= largest_area ) {
        largest_area = area;
        largestContours.clear();
        largestContours.push_back( contours[i] );
    }
}
if( largest_area > 5000 ) {
    cv::approxPolyDP( cv::Mat(largestContours[0]), approx, 100, true );
    cout << approx.size() << endl; /* ALWAYS RETURNS 2 ?!? */
}
The approxPolyDP is not working as expected.
I think your result is quite good. Maybe you could select the contour with the greatest area (using image moments, for instance) and then find the minimal rotated rectangle of that contour.
vector<cv::RotatedRect> minRect( contours.size() );
for( size_t i = 0; i < contours.size(); i++ )
{
    minRect[i] = minAreaRect( cv::Mat(contours[i]) );
}
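If you only care about the biggest contour, a minimal sketch of picking it first could look like this (using contourArea; the variable names biggest, maxArea and box are mine):
int biggest = 0;
double maxArea = 0;
for( size_t i = 0; i < contours.size(); i++ )
{
    double area = contourArea( contours[i] );        // area of this contour
    if( area > maxArea ) { maxArea = area; biggest = (int)i; }
}
cv::RotatedRect box = minAreaRect( cv::Mat(contours[biggest]) );  // rotated rect of the largest contour only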
The RotatedRect class can already give you its four corners as Point2f values:
RotatedRect rRect = RotatedRect(Point2f(100,100), Size2f(100,50), 30);
Point2f vertices[4];
rRect.points(vertices);
for(int i = 0; i < 4; i++){
    std::cout << vertices[i] << " ";
}
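To check the corners visually, you could also connect them with lines (a small sketch, assuming a hypothetical output image called drawing):
for(int i = 0; i < 4; i++)
    cv::line( drawing, vertices[i], vertices[(i+1)%4], cv::Scalar(0,255,0), 2 ); // draw the rotated box edge by edge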

Remove bounding rect with an area < n OpenCV

I am eroding an image with text blocks on it, then using findContours() to find all of the text blocks, then drawing their bounding rects. However, sometimes very small rects are created by noise in the image, either inside a bigger rect or in a place where there is no text.
I am using this code to find the contours and draw them.
double element_size = 20;
RNG rng(12345);
Mat element = getStructuringElement( cv::MORPH_ELLIPSE, cv::Size( 2*element_size + 1, 2*element_size + 1 ), cv::Point( element_size, element_size ) );
erode(quad, quad, element);

vector<vector<cv::Point> > contours;
vector<Vec4i> hierarchy;
quad.convertTo(quad, CV_8UC1);
findContours( quad, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );

vector<vector<cv::Point> > contours_poly( contours.size() );
vector<cv::Rect> boundRect( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( Mat(contours_poly[i]) );
}

Mat drawing = Mat::zeros( quad.size(), CV_8UC3 );
for( int i = 0; i < contours.size(); i++ )
{
    Scalar color = Scalar(0, 255, 0);
    rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
}
After a sample run here is what I get back:
How can I modify my code so that I can remove any rects whose area is not greater than n, so I keep only complete text blocks? I also need to remove the largest contour, which surrounds the entire card.
You could use contourArea to eliminate contours. Use the code below before finding the bounding rects; it will remove all the contours with an area less than a threshold.
double min_area = 100; // area threshold
for( int i = 0; i < contours.size(); i++ ) // iterate through each contour
{
    double area = contourArea( contours[i], false ); // find the area of the contour
    if( area < min_area )
        contours.erase( contours.begin() + i );
}
Edit:
For anyone who is going to use the above code, please see the comment below.
For a safer method of removing cv::Rect elements from your vector, you could use the erase-remove idiom to remove the elements which are below a certain area threshold. This method is much safer than deleting elements one-by-one by their index, as in Haris' answer, since you do not risk running off the end of the vector.
boundRect.erase(std::remove_if(boundRect.begin(), boundRect.end(),
    [] (cv::Rect r)
    {
        const int min_area = 100;
        return r.area() < min_area;
    }), boundRect.end());
Here I use a C++11 lambda for the comparison. If you don't have C++11, it's easy enough to create a functor class instead.
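For illustration, a pre-C++11 functor version of the same predicate might look like this (a sketch; the struct name is mine):
struct SmallArea
{
    bool operator()(const cv::Rect& r) const
    {
        const int min_area = 100;          // same area threshold as above
        return r.area() < min_area;
    }
};

boundRect.erase(std::remove_if(boundRect.begin(), boundRect.end(), SmallArea()),
                boundRect.end());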
As for removing the largest contour, you can use another function from the standard library, max_element (Again using a lambda for comparison):
boundRect.erase(std::max_element(boundRect.begin(), boundRect.end(),
    [] (cv::Rect left, cv::Rect right)
    {
        return left.area() < right.area();
    }));

Get List of Black Pixels of a cv::Mat

I am currently working with a cv::Mat of black-and-white pixels.
I am searching for a way to get a list of the black points in this Mat.
Does someone know how to do such a thing?
I want to do that because I want to detect the bounding rect of these points.
(Ideally, I would get them back in a vector.)
Something like:
cv::Mat blackAndWhite;
std::vector<cv::Point> blackPixels = MAGIC_FUNCTION(blackAndWhite);
Thanks for your help.
Edit: To be precise, I am looking for best practices, as OpenCV-compliant as possible.
You can traverse the cv::Mat to check the pixels that are 0, and get the x and y coordinates from the linear index if the matrix is continuous in memory:
// assuming your matrix is CV_8U, and black is 0
std::vector<cv::Point> blackPixels;
unsigned char const *p = blackAndWhite.ptr<unsigned char>();
for(int i = 0; i < blackAndWhite.rows * blackAndWhite.cols; ++i, ++p)
{
    if(*p == 0)
    {
        int x = i % blackAndWhite.cols;
        int y = i / blackAndWhite.cols;
        blackPixels.push_back(cv::Point(x, y));
    }
}
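Since you ultimately want the bounding rect of those points, you can then pass the vector straight to cv::boundingRect (a one-line follow-up, assuming blackPixels is not empty):
cv::Rect bbox = cv::boundingRect(blackPixels); // axis-aligned bounding box of all black pixels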
This example from OpenCV shows how to do exactly what you want: Creating Bounding boxes and circles for contours. Basically, it does this:
// ...
/// Find contours
findContours( threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Approximate contours to polygons + get bounding rects and circles
vector<vector<Point> > contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
vector<Point2f> center( contours.size() );
vector<float> radius( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( Mat(contours_poly[i]) );
    minEnclosingCircle( (Mat)contours_poly[i], center[i], radius[i] );
}

C++ OpenCV Eliminate smaller Contours

I am developing an OpenCV project.
I am currently working on detecting the contours of a particular ROI (Region Of Interest). What I want to achieve is to eliminate all the smaller contours; in other words, I do not want these smaller contours to be drawn at all.
So far I have coded this algorithm to do the job:
CODE:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(mBlur, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
//----------------------------------------------------------------------------->
//Contours Vectors
vector<vector<Point> > contours_poly(contours.size());
vector<Rect> boundRect (contours.size());
vector<Point2f> ContArea(contours.size());
Mat drawing = Mat::zeros( range_out.size(), CV_8UC3 );
//----------------------------------------------------------------------------->
//Detecting Contours
for( int i = 0; i < contours.size(); i++ )
{
    ContArea.clear();
    ContArea.push_back(Point2f(boundRect[i].x, boundRect[i].y));
    ContArea.push_back(Point2f(boundRect[i].x + boundRect[i].width, boundRect[i].y));
    ContArea.push_back(Point2f(boundRect[i].x + boundRect[i].width, boundRect[i].y + boundRect[i].height));
    ContArea.push_back(Point2f(boundRect[i].x, boundRect[i].y + boundRect[i].height));
    double area = contourArea(ContArea);
    if(area > 2000)
    {
        approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
        boundRect[i] = boundingRect( Mat(contours_poly[i]));
        cout<<"The area of Contour: "<<i<< " is: " <<area<<endl;
    }
}
/// Draw polygonal contour + bounding rects
//////////////////////////////////////////////////////////////////////////////////
for( int i = 0; i < contours.size(); i++ )
{
    Scalar color = Scalar(255,255,255);
    drawContours( drawing, contours_poly, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
    fillPoly(drawing, contours, Scalar(255,0,0));
}
The problem here is that it looks like the if(area > 2000) statement is never executed, even though I know some of the areas present in the image are way bigger than this.
I have been trying a lot of different solutions but this looks to be the most appropriate one to me.
THE KEY QUESTIONS:
Is it possible to achieve what I want with the given code?
If so, can anyone see where I am going wrong with this?
If not, could someone suggest some kind of solution or a good online source?
If you want to remove contours based on the size of their ROIs, you can do as follows (based on the OpenCV bounding box example):
vector< vector< Point > > contours_poly( contours.size() );
vector< Rect > boundRect( contours.size() );
vector< Point2f > centers( contours.size() );
vector< float > radiuses( contours.size() );

// finding the approximate rectangle and circle of each contour
for( int i = 0; i < contours.size(); i++ )
{
    approxPolyDP( Mat( contours[i] ), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( Mat( contours_poly[i] ) );
    minEnclosingCircle( (Mat) contours_poly[i], centers[i], radiuses[i] );
}
The above code snippet gives you the approximate rectangles of the contours (boundRect) and the approximate circles (centers, radiuses). After this you should be able to call contourArea(), and if the result is smaller than a certain threshold, you can eliminate that contour.
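For example, the area check could look roughly like this (a sketch; the threshold value is arbitrary, and contours/drawing are assumed to be the ones from your code):
vector<vector<Point> > bigContours;
for( size_t i = 0; i < contours.size(); i++ )
{
    if( contourArea( contours[i] ) > 2000 )   // keep only sufficiently large contours
        bigContours.push_back( contours[i] );
}
// draw only the contours that survived the filter
for( size_t i = 0; i < bigContours.size(); i++ )
    drawContours( drawing, bigContours, (int)i, Scalar(255,255,255), 1 );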
If you just want to remove contours that are shorter than a certain length, you can calculate their length; looking at the answer to a similar question, it seems you can use the arcLength() function, something like this: double perimeter = arcLength( Mat( contours[i] ), true );.

OpenCV draw contours of 2 largest objects

I am writing OpenCV software that detects boxing gloves, therefore I want to detect and draw only the 2 largest contours (one for each boxing glove).
My software draws contours for everything, and some of them are only noise, which of course I don't want.
My code for drawing contours:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(mBlur, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
//----------------------------------------------------------------------------->
vector<vector<Point> > contours_poly(contours.size());
vector<Rect> boundRect (contours.size());
vector<Point2f> boundingBoxArea(boundRect.size());
//----------------------------------------------------------------------------->
for( int i = 0; i < contours.size(); i++ )
{
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( Mat(contours_poly[i]) );
}

/// Draw polygonal contour + bounding rects
Mat drawing = Mat::zeros( range_out.size(), CV_8UC3 );
for( int i = 0; i < contours.size(); i++ )
{
    Scalar color = Scalar(0,0,255);
    drawContours( drawing, contours_poly, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
    fillPoly(drawing, contours, Scalar(255,0,0));
}
Here is an Image Example:
My program already segments the gloves by color. The problem is that at times small contours are drawn in random locations due to noise. Of course the glove contours are by far the most dominant, and that is why I want to keep only those. I hope this makes my question clearer.
Could someone suggest a solution, please?
I am coding in a C++ environment.
Regards
The easiest way to find the two largest contours is to simply look at each contour's size. Something like this should do the trick:
int largestIndex = 0;
int largestContour = 0;
int secondLargestIndex = 0;
int secondLargestContour = 0;
for( int i = 0; i < contours.size(); i++ )
{
    if(contours[i].size() > largestContour){
        secondLargestContour = largestContour;
        secondLargestIndex = largestIndex;
        largestContour = contours[i].size();
        largestIndex = i;
    }else if(contours[i].size() > secondLargestContour){
        secondLargestContour = contours[i].size();
        secondLargestIndex = i;
    }
}

Scalar color = Scalar(0,0,255);
drawContours( drawing, contours, largestIndex, color, CV_FILLED, 8);
drawContours( drawing, contours, secondLargestIndex, color, CV_FILLED, 8);
It seems that vector<vector<Point> > contours stores all your contours. What you need to do is iterate over this vector and do a little arithmetic with its elements to detect the 2 largest contours in the vector.
In this answer I shared code that detects the largest contour in a vector<vector<Point> >, so you are halfway there.
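A possible sketch extending that idea to the two largest contours by area (rather than by point count; contours and drawing are assumed to be as in the code above):
int firstIdx = -1, secondIdx = -1;
double firstArea = 0, secondArea = 0;
for( size_t i = 0; i < contours.size(); i++ )
{
    double area = contourArea( contours[i] );
    if( area > firstArea ){
        // current contour becomes the largest; previous largest becomes second largest
        secondArea = firstArea; secondIdx = firstIdx;
        firstArea = area;       firstIdx = (int)i;
    }else if( area > secondArea ){
        secondArea = area; secondIdx = (int)i;
    }
}
Scalar color = Scalar(0,0,255);
if( firstIdx >= 0 )  drawContours( drawing, contours, firstIdx, color, CV_FILLED, 8 );
if( secondIdx >= 0 ) drawContours( drawing, contours, secondIdx, color, CV_FILLED, 8 );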