A few days back I found some code that sorts contours inside a vector by their areas, but I could not understand the code very well, especially the comparator function. Why are contour1 and contour2 converted to cv::Mat in the contourArea call? Can anyone explain the whole code to me step by step? Your explanation will be useful for my learning process.
// comparison function object
bool compareContourAreas ( std::vector<cv::Point> contour1, std::vector<cv::Point> contour2 )
{
double i = fabs( contourArea(cv::Mat(contour1)) );
double j = fabs( contourArea(cv::Mat(contour2)) );
return ( i < j );
}
[...]
// find contours
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours( binary_image, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE,
cv::Point(0, 0) );
// sort contours
std::sort(contours.begin(), contours.end(), compareContourAreas);
// grab contours
std::vector<cv::Point> biggestContour = contours[contours.size()-1];
std::vector<cv::Point> smallestContour = contours[0];
Your input image is binary, so it contains only 0s and 1s. When you use cv::findContours it searches for points with the value 1 that touch other 1s; each such connected group makes a contour. It then puts all contours into std::vector<std::vector<cv::Point>> contours.
If you were to take a grid and draw the points of one contour onto it, you would get a 2D array of points, which is basically a single-channel cv::Mat.
In 'compareContourAreas' you take two contours out of your vector 'contours' and compare their absolute areas. contourArea computes the area enclosed by a contour; in this example the contour (a vector of points) is first wrapped in a cv::Mat because that is what older examples passed in, although recent OpenCV versions accept the point vector directly.
Then with the sort function you sort all the contours from the smallest to the largest area.
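For reference, here is a minimal sketch of the same sort without the cv::Mat wrapper, assuming a reasonably recent OpenCV where contourArea accepts the point vector directly (the lambda comparator is just a more compact alternative to the named function above):

#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Sort contours by area, smallest first. contourArea accepts the point
// vector directly, so no cv::Mat conversion is needed here.
void sortContoursByArea(std::vector<std::vector<cv::Point>>& contours)
{
    std::sort(contours.begin(), contours.end(),
              [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
                  return std::fabs(cv::contourArea(a)) < std::fabs(cv::contourArea(b));
              });
}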
I have written a program to:
Combine edge information and colour information to form the image from which I need to detect straight lines
Use findContours and drawContours to redraw the evidence map
Apply a thinning algorithm to collapse each line into a single line
Use HoughLinesP to compute line segments. Currently I only get traces of the lines and, most importantly, the horizontal line is missed
From this set of line segments, calculate the intersection points (not included in the code)
From each intersection, draw a quadrilateral using the intersection point, the vertices of the horizontal/vertical lines (call them v1 and v2, the ones furthest from the intersection point), and a vertex calculated as the reflection of the intersection point across the line between v1 and v2
However, it is not working as I think it should. I believe the problem is that the interior of the rectangle's border is not filled in. Should I use morphology, e.g. dilation followed by erosion (a closing)? I have run out of ideas for preprocessing the two "cues" images before trying to detect intersections with the Hough transform; a sketch of the closing idea is shown after the code below.
I need everyone's help!
Thanks in advance
Below is the code snippet I use to generate the results described above.
int FindBoxes(cv::Mat& colorMap,cv::Mat& edgeMap)
{
cv::Mat frame;
// colorMap is a colour-filtered map and edgeMap is an edge-filtered map; frame will be their combination (the colour evidence I want)
cv::bitwise_and(colorMap, edgeMap, frame);
// A trial method by using findContour to get the interior line filled up
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(frame, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
cv::Mat Map = cv::Mat::zeros(frame.size(), CV_8U);
cv::drawContours(Map, contours, -1, cv::Scalar(255, 255, 255), CV_FILLED);
// thin the lines so each collapses into a single line for Hough detection
thinning(Map, Map);
cv::GaussianBlur(Map, Map, cv::Size(5,5),1.0,1.0);
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(Map, lines,1, CV_PI/90, 2, 30, 0);
return 0;
}
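Regarding the unfilled interior border mentioned above, here is a minimal sketch of the morphological closing (dilate then erode) I have in mind; the helper name and the 5x5 kernel size are assumptions and would need tuning:

#include <opencv2/imgproc.hpp>

// Close small gaps in a binary map (dilate then erode) so the interior
// border of the rectangle becomes a continuous, fillable outline.
// The 5x5 rectangular kernel is an assumed starting value.
cv::Mat closeGaps(const cv::Mat& binaryMap)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat closed;
    cv::morphologyEx(binaryMap, closed, cv::MORPH_CLOSE, kernel);
    return closed;
}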
Currently I am using the following sequence
vector<vector<Point>> contours
1. findContours(srcMat, contours, ...)
2. convert contours to Point2f
3. findHomography(src, dst, RANSAC)
4. warpPerspective(srcMat, destMat, homo)
5. findContours
I would like to avoid step #4, i.e. instead of warping the whole Mat I want to transform the contours themselves, since I only use some ROIs relative to the contours in the transformed Mat.
The answer to running warpPerspective on contours instead of the whole image is to use cv::perspectiveTransform with the transformation matrix.
The limitation is that it can transform only one contour at a time. Sample below.
vector<vector<Point2f>> contours; // from findContours, converted to Point2f
Mat trnsmat = getPerspectiveTransform(srcPoints, destPoints);
for (size_t i = 0; i < contours.size(); i++)
    cv::perspectiveTransform(contours[i], contours[i], trnsmat);
I assume your goal is to project the contour coordinates into the transformed space without warping the entire image?
Load the contour coordinates into your RoiMat structure and multiply it by the homography matrix computed with your findHomography call.
There is no need to warp the entire original image for this.
If you want to view the transformed ROI on the image, you could pick a few interest points (for reference) from the original image and add them to the RoiMat structure.
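For illustration, here is a minimal sketch of what "multiplying by the homography matrix" means for a single point, assuming a 3x3 CV_64F homography H such as the one returned by findHomography (cv::perspectiveTransform does exactly this for whole point vectors):

#include <opencv2/core.hpp>

// Apply a 3x3 homography H (CV_64F) to one point in homogeneous coordinates
// and divide by the resulting w to get back to image coordinates.
cv::Point2f applyHomography(const cv::Mat& H, const cv::Point2f& p)
{
    cv::Mat src = (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);
    cv::Mat dst = H * src;                       // [x', y', w']^T
    double w = dst.at<double>(2, 0);
    return cv::Point2f(static_cast<float>(dst.at<double>(0, 0) / w),
                       static_cast<float>(dst.at<double>(1, 0) / w));
}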
In Python you can isolate each contour with the code below and afterwards do whatever processing you have to perform on each contour.
import cv2

# RETR_CCOMP retrieves all contours into a two-level hierarchy
contours, hierarchy = cv2.findContours(image, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    contor_img = image[y:y+h, x:x+w]
Now you can process each contour individually (contor_img).
I have stored the areas of all contours of the first frame in a vector, and now I want to compare the area of each contour in the second frame with those from the first frame. If the areas are the same they should go into one vector, and if they are not the same they should go into another vector. How can I do this? Please help me.
Thanks in advance
// one frame at a time
void draw_ellipse(Mat &a,int count)
{
findContours( th1, contours, hierarchy, CV_RETR_TREE,
CV_CHAIN_APPROX_SIMPLE, Point(0,0) );
for(i=0;i<contours.size();i++)
{
area.push_back(contourArea(contours[i]));// area is a vector
}
for(j=1;j<=count;j++)
{
for(i=0;i+1<contours.size();i++) // stop one early so area[i+1] stays in bounds
{
if(area[i]>area[i+1]) // v1,v2,v3 are vectors
v1.push_back(contours[i]);
else if(area[i]<area[i+1])
v2.push_back(contours[i]);
else if(area[i]==area[i+1])
v3.push_back(contours[i]);
}
}
}
Now, this function compares contours within the same frame, but I want each contour of the second frame to be compared with each contour of the first frame and then placed into different vectors accordingly.
There are predefined functions in OpenCV to match contours: OpenCV Contour Matching C++.
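As one possible illustration (a sketch only, assuming OpenCV 3+ constant names), cv::matchShapes compares two contours via their Hu moments; the function name, the frame-to-frame pairing logic, and the 0.1 similarity threshold below are assumptions to adapt:

#include <opencv2/imgproc.hpp>
#include <vector>

// Compare every contour of the second frame against every contour of the
// first frame. matchShapes returns 0 for identical shapes and larger values
// for more dissimilar ones; the 0.1 threshold is an assumed value to tune.
void matchAcrossFrames(const std::vector<std::vector<cv::Point>>& firstFrame,
                       const std::vector<std::vector<cv::Point>>& secondFrame,
                       std::vector<std::vector<cv::Point>>& matched,
                       std::vector<std::vector<cv::Point>>& unmatched)
{
    for (const auto& c2 : secondFrame)
    {
        bool found = false;
        for (const auto& c1 : firstFrame)
        {
            if (cv::matchShapes(c1, c2, cv::CONTOURS_MATCH_I1, 0) < 0.1)
            {
                found = true;
                break;
            }
        }
        (found ? matched : unmatched).push_back(c2);
    }
}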
I am trying to implement automatic perspective correction in my iOS program, and when I use the test image I found in the tutorial everything works as expected. But when I take a picture myself, I get back a weird result.
I am using code found in this tutorial
When I give it an image that looks like this:
I get this as the result:
Here is what dst gives me that might help.
I am using this to call the method which contains the code.
quadSegmentation(Img, bw, dst, quad);
Can anyone tell me why I am getting so many green lines compared to the tutorial? And how I might be able to fix this and properly crop the image to contain only the card?
For the perspective transform you need:
source points -> coordinates of the quadrangle vertices in the source image.
destination points -> coordinates of the corresponding quadrangle vertices in the destination image.
Here we will calculate these points through contour processing.
Calculate Coordinates of quadrangle vertices in the source image
You will get your card as a contour just by blurring, thresholding, finding contours, taking the largest contour, etc.
After finding the largest contour, approximate it with a polygonal curve; here you should get 4 points which represent the corners of your card. You can adjust the parameter epsilon to end up with exactly 4 coordinates, as in the sketch below.
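For instance, a common way to choose epsilon (an assumption, not part of the original code below) is as a fraction of the contour perimeter, which adapts to the card's size in the image:

#include <opencv2/imgproc.hpp>
#include <vector>

// Reduce a card outline to (ideally) its 4 corners. The 2% factor is an
// assumed starting value for epsilon and may need adjustment.
std::vector<cv::Point> approxCorners(const std::vector<cv::Point>& card)
{
    std::vector<cv::Point> corners;
    double epsilon = 0.02 * cv::arcLength(card, true);
    cv::approxPolyDP(card, corners, epsilon, true);
    return corners;
}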
Calculate Coordinates of the corresponding quadrangle vertices in the destination image
These can easily be found by calculating the bounding rectangle of the largest contour.
In the image below the red rectangle represents the source points and the green one the destination points.
Adjust the coordinate order and apply the perspective transform
Here I adjust the coordinate order manually; you could also use a sorting algorithm, such as the sketch below.
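One possible sorting approach (a sketch, not the method used in the code further down): order the four corners by their coordinate sums and differences, where the top-left corner has the smallest x+y and the bottom-right the largest; the function name is an assumption for illustration:

#include <opencv2/core.hpp>
#include <cfloat>
#include <vector>

// Order 4 corners as top-left, top-right, bottom-right, bottom-left.
// Top-left has the smallest x+y, bottom-right the largest x+y,
// top-right the smallest y-x, bottom-left the largest y-x.
std::vector<cv::Point2f> orderCorners(const std::vector<cv::Point2f>& pts)
{
    CV_Assert(pts.size() == 4);
    std::vector<cv::Point2f> out(4);
    float minSum = FLT_MAX, maxSum = -FLT_MAX, minDiff = FLT_MAX, maxDiff = -FLT_MAX;
    for (const auto& p : pts)
    {
        float s = p.x + p.y, d = p.y - p.x;
        if (s < minSum) { minSum = s; out[0] = p; }   // top-left
        if (s > maxSum) { maxSum = s; out[2] = p; }   // bottom-right
        if (d < minDiff) { minDiff = d; out[1] = p; } // top-right
        if (d > maxDiff) { maxDiff = d; out[3] = p; } // bottom-left
    }
    return out;
}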
Then calculate the transformation matrix and apply warpPerspective.
See the final result
Code
Mat src=imread("card.jpg");
Mat thr;
cvtColor(src,thr,CV_BGR2GRAY);
threshold( thr, thr, 70, 255,CV_THRESH_BINARY );
vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
int largest_contour_index=0;
int largest_area=0;
Mat dst(src.rows,src.cols,CV_8UC1,Scalar::all(0)); //create destination image
findContours( thr.clone(), contours, hierarchy,CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image
for( int i = 0; i < contours.size(); i++ ){
    double a = contourArea( contours[i], false ); // Find the area of contour
    if( a > largest_area ){
        largest_area = a;
        largest_contour_index = i;                // Store the index of largest contour
    }
}
drawContours( dst,contours, largest_contour_index, Scalar(255,255,255),CV_FILLED, 8, hierarchy );
vector<vector<Point> > contours_poly(1);
approxPolyDP( Mat(contours[largest_contour_index]), contours_poly[0],5, true );
Rect boundRect=boundingRect(contours[largest_contour_index]);
if(contours_poly[0].size()==4){
    std::vector<Point2f> quad_pts;
    std::vector<Point2f> squre_pts;
    quad_pts.push_back(Point2f(contours_poly[0][0].x,contours_poly[0][0].y));
    quad_pts.push_back(Point2f(contours_poly[0][1].x,contours_poly[0][1].y));
    quad_pts.push_back(Point2f(contours_poly[0][3].x,contours_poly[0][3].y));
    quad_pts.push_back(Point2f(contours_poly[0][2].x,contours_poly[0][2].y));
    squre_pts.push_back(Point2f(boundRect.x,boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x,boundRect.y+boundRect.height));
    squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y+boundRect.height));
    Mat transmtx = getPerspectiveTransform(quad_pts,squre_pts);
    Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
    warpPerspective(src, transformed, transmtx, src.size());
    Point P1=contours_poly[0][0];
    Point P2=contours_poly[0][1];
    Point P3=contours_poly[0][2];
    Point P4=contours_poly[0][3];
    line(src,P1,P2, Scalar(0,0,255),1,CV_AA,0);
    line(src,P2,P3, Scalar(0,0,255),1,CV_AA,0);
    line(src,P3,P4, Scalar(0,0,255),1,CV_AA,0);
    line(src,P4,P1, Scalar(0,0,255),1,CV_AA,0);
    rectangle(src,boundRect,Scalar(0,255,0),1,8,0);
    rectangle(transformed,boundRect,Scalar(0,255,0),1,8,0);
    imshow("quadrilateral", transformed);
    imshow("thr",thr);
    imshow("dst",dst);
    imshow("src",src);
    imwrite("result1.jpg",dst);
    imwrite("result2.jpg",src);
    imwrite("result3.jpg",transformed);
    waitKey();
}
else
    cout<<"Make sure that you are getting 4 corners using approxPolyDP..."<<endl;
This typically happens when you rely on somebody else's code to solve your particular problem instead of adapting the code. Look at the processing stages and also at the differences between their image and yours (it is a good idea, by the way, to start with their image and make sure the code works):
Get the edge map. - will probably work since your edges are fine
Detect lines with Hough transform. - fail since you have lines not only on the contour but also inside of your card. So expect a lot of false alarm lines
Get the corners by finding intersections between lines. - fail for the above mentioned reason
Check if the approximate polygonal curve has 4 vertices. - fail
Determine top-left, bottom-left, top-right, and bottom-right corner. - fail
Apply the perspective transformation. - fail completely
To fix your problem you have to ensure that only lines on the periphery are extracted. If you always have a dark background you can use this fact to discard the lines with other contrasts/polarities. Alternatively you can extract all the lines and then select the ones that are closest to the image boundary (if your background doesn't have lines).
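As an illustration of the second option, here is a minimal sketch that keeps only Hough segments whose endpoints both lie within a margin of the image border; the function name and the 20-pixel margin are assumptions to adjust:

#include <opencv2/core.hpp>
#include <vector>

// Keep only line segments whose endpoints both lie near the image border.
// 'margin' (in pixels) is an assumed value that would need tuning.
std::vector<cv::Vec4i> keepPeripheralLines(const std::vector<cv::Vec4i>& lines,
                                           const cv::Size& imageSize, int margin = 20)
{
    auto nearBorder = [&](int x, int y) {
        return x < margin || y < margin ||
               x > imageSize.width - margin || y > imageSize.height - margin;
    };
    std::vector<cv::Vec4i> kept;
    for (const cv::Vec4i& l : lines)
        if (nearBorder(l[0], l[1]) && nearBorder(l[2], l[3]))
            kept.push_back(l);
    return kept;
}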