Apply homography on contours as well as Mat - C++

Currently I am using the following sequence:
vector<vector<Point>> contours;
1. findContours(srcMat, contours, ...)
2. convert contours to Point2f
3. findHomography(src, dst, RANSAC)
4. warpPerspective(srcMat, destMat, homo)
5. findContours
I would like to avoid step 5 (running findContours again on the warped image) by transforming the contours from step 1 directly. I still need step 4 to transform the Mat itself, since I use some ROIs relative to the contours on the transformed Mat.

The answer to running warpPerspective on contours is to use cv::perspectiveTransform with the same transformation matrix.
The limitation is that it can transform only one contour at a time. Sample below.
vector<vector<Point2f>> contours; // contours from findContours, converted to Point2f
Mat trnsmat = getPerspectiveTransform(srcPoints, destPoints);
for (size_t i = 0; i < contours.size(); i++)
    cv::perspectiveTransform(contours[i], contours[i], trnsmat);

I assume your goal is to project the contour coordinates into the transformed space without warping the entire image?
Load the contour coordinates into a Mat (call it RoiMat) and multiply it by the homography matrix computed by your findHomography call. There is no need to warp the entire original image for that.
If you want to view the transformed ROI on the image, pick a few interest points from the original image for reference and add them to the RoiMat structure.
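For example, a minimal sketch of that idea with cv::perspectiveTransform, assuming contours from findContours and a 3x3 homography H returned by findHomography (both names are placeholders):
std::vector<cv::Point2f> pts(contours[0].begin(), contours[0].end()); // perspectiveTransform needs float points
std::vector<cv::Point2f> projected;
cv::perspectiveTransform(pts, projected, H); // apply the homography to the coordinates only
// projected now holds the contour coordinates in the transformed space,
// without warping the full image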

In Python you can isolate each contour with the code below and afterwards do whatever processing you have to perform on each contour.
contours, hierarchy = cv2.findContours(image, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    contour_img = image[y:y+h, x:x+w]
Now you can process each contour crop (contour_img) individually.

Related

OPENCV C++: Sorting contours by their areas

A few days back I found code that sorts the contours inside a vector by their areas, but I could not understand it very well, especially the comparator function. Why are contour1 and contour2 converted into a Mat in the contourArea parameter? Can anyone explain the whole code step by step? Your explanation will be useful for my learning process.
// comparison function object
bool compareContourAreas(std::vector<cv::Point> contour1, std::vector<cv::Point> contour2)
{
    double i = fabs(contourArea(cv::Mat(contour1)));
    double j = fabs(contourArea(cv::Mat(contour2)));
    return (i < j);
}
[...]
// find contours
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(binary_image, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
// sort contours
std::sort(contours.begin(), contours.end(), compareContourAreas);
// grab contours
std::vector<cv::Point> biggestContour = contours[contours.size() - 1];
std::vector<cv::Point> smallestContour = contours[0];
Your input image is binary, so it only consists of 0s and 1s. cv::findContours searches for points with the value 1 that touch other 1s; each such connected outline makes a contour. It then puts all contours in std::vector<std::vector<cv::Point>> contours.
If you took a grid and drew the points of one contour in it, you would get a 2D array of points, which is basically a one-channel cv::Mat.
In compareContourAreas you take two contours out of your vector contours and compare the absolute area enclosed by each one. The area is computed by contourArea, which here is given a cv::Mat as input, so the contour's vector of points is first converted to a cv::Mat.
Then the sort function sorts all the contours from small to big.
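As an aside, current OpenCV versions let contourArea take the point vector directly, so the same sort can be written inline with a lambda; a sketch with identical behavior:
std::sort(contours.begin(), contours.end(),
          [](const std::vector<cv::Point>& c1, const std::vector<cv::Point>& c2) {
              return std::fabs(cv::contourArea(c1)) < std::fabs(cv::contourArea(c2));
          });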

OpenCV findContours of points vector

I've got a vector of 2D points and I need to find all the contours formed by these points. Unfortunately, cv::findContours can't handle an array of points; it requires a binary image.
So the question is: is there any workaround to get contours of points? Maybe it is possible to form a binary image using the points and then use this image in the cv::findContours function?
Please advise.
If you know the size of the image, you can create a binary image of zeros and set all the 2D points to the value 255. Then use cv::findContours to find all the contours in the binary image.
The following code snippet may help you:
// If pts is your array of float points and r, c are the number of rows and columns of the image:
// calculate the total number of points in the array
int n = sizeof(pts) / sizeof(*pts);
// if the points are stored in a vector, use n = pts.size() instead

// create a binary image
cv::Mat image = cv::Mat::zeros(cv::Size(c, r), CV_8UC1);

// set all the points to 255 (round the float coordinates to the nearest pixel)
for (int i = 0; i < n; i++) {
    image.at<uchar>(cv::Point(cvRound(pts[i].x), cvRound(pts[i].y))) = 255;
}

// find all contours in the binary image and save them in the contours variable
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(image, contours, hierarchy, RETR_LIST, CHAIN_APPROX_NONE);

Automatic perspective correction OpenCV

I am trying to implement automatic perspective correction in my iOS program, and when I use the test image I found in the tutorial everything works as expected. But when I take a picture I get back a weird result.
I am using code found in this tutorial.
When I give it an image that looks like this:
I get this as the result:
Here is what dst gives me that might help.
I am using this to call the method which contains the code:
quadSegmentation(Img, bw, dst, quad);
Can anyone tell me why I am getting so many green lines compared to the tutorial? And how I might be able to fix this and properly crop the image to only contain the card?
For a perspective transform you need:
source points -> coordinates of the quadrangle vertices in the source image.
destination points -> coordinates of the corresponding quadrangle vertices in the destination image.
Here we will calculate these points by contour processing.
Calculate the coordinates of the quadrangle vertices in the source image
You will get your card as a contour just by blurring, thresholding, finding contours, and picking the largest contour.
After finding the largest contour, approximate a polygonal curve around it; you should get 4 points which represent the corners of your card. You can adjust the parameter epsilon until you get 4 coordinates, as in the sketch below.
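One hedged way to automate that tuning, assuming the largest contour has already been selected into largestContour (a placeholder name; the full code below simply uses a fixed epsilon of 5):
std::vector<cv::Point> corners;
for (double eps = 1.0; eps < 50.0; eps += 1.0) {
    cv::approxPolyDP(largestContour, corners, eps, true);
    if (corners.size() == 4) break; // stop once the curve collapses to a quadrilateral
}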
Calculate the coordinates of the corresponding quadrangle vertices in the destination image
These can easily be found by calculating the bounding rectangle of the largest contour.
In the image below, the red rectangle represents the source points and the green one the destination points.
Adjust the coordinate order and apply the perspective transform
Here I adjusted the coordinate order manually; you could use some sorting algorithm instead.
Then calculate the transformation matrix and apply warpPerspective.
See the final result.
Code
Mat src = imread("card.jpg");
Mat thr;
cvtColor(src, thr, CV_BGR2GRAY);
threshold(thr, thr, 70, 255, CV_THRESH_BINARY);

vector< vector<Point> > contours; // vector for storing contours
vector< Vec4i > hierarchy;
int largest_contour_index = 0;
double largest_area = 0;
Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0)); // create destination image

findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // find the contours in the image
for (int i = 0; i < contours.size(); i++) {
    double a = contourArea(contours[i], false); // find the area of the contour
    if (a > largest_area) {
        largest_area = a;
        largest_contour_index = i; // store the index of the largest contour
    }
}
drawContours(dst, contours, largest_contour_index, Scalar(255, 255, 255), CV_FILLED, 8, hierarchy);

vector<vector<Point> > contours_poly(1);
approxPolyDP(Mat(contours[largest_contour_index]), contours_poly[0], 5, true);
Rect boundRect = boundingRect(contours[largest_contour_index]);

if (contours_poly[0].size() == 4) {
    std::vector<Point2f> quad_pts;
    std::vector<Point2f> squre_pts;
    quad_pts.push_back(Point2f(contours_poly[0][0].x, contours_poly[0][0].y));
    quad_pts.push_back(Point2f(contours_poly[0][1].x, contours_poly[0][1].y));
    quad_pts.push_back(Point2f(contours_poly[0][3].x, contours_poly[0][3].y));
    quad_pts.push_back(Point2f(contours_poly[0][2].x, contours_poly[0][2].y));
    squre_pts.push_back(Point2f(boundRect.x, boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x, boundRect.y + boundRect.height));
    squre_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y + boundRect.height));

    Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
    Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
    warpPerspective(src, transformed, transmtx, src.size());

    // draw the detected quadrilateral (red) and the bounding rectangle (green)
    Point P1 = contours_poly[0][0];
    Point P2 = contours_poly[0][1];
    Point P3 = contours_poly[0][2];
    Point P4 = contours_poly[0][3];
    line(src, P1, P2, Scalar(0, 0, 255), 1, CV_AA, 0);
    line(src, P2, P3, Scalar(0, 0, 255), 1, CV_AA, 0);
    line(src, P3, P4, Scalar(0, 0, 255), 1, CV_AA, 0);
    line(src, P4, P1, Scalar(0, 0, 255), 1, CV_AA, 0);
    rectangle(src, boundRect, Scalar(0, 255, 0), 1, 8, 0);
    rectangle(transformed, boundRect, Scalar(0, 255, 0), 1, 8, 0);

    imshow("quadrilateral", transformed);
    imshow("thr", thr);
    imshow("dst", dst);
    imshow("src", src);
    imwrite("result1.jpg", dst);
    imwrite("result2.jpg", src);
    imwrite("result3.jpg", transformed);
    waitKey();
}
else
    cout << "Make sure that you are getting 4 corners using approxPolyDP..." << endl;
This typically happens when you rely on somebody else's code to solve your particular problem instead of adapting the code. Look at the processing stages and also at the difference between their image and yours (it is a good idea, by the way, to start with their image and make sure the code works):
Get the edge map. - will probably work, since your edges are fine
Detect lines with the Hough transform. - fails, since you have lines not only on the contour but also inside your card, so expect a lot of false-alarm lines
Get the corners by finding intersections between lines. - fails for the above-mentioned reason
Check if the approximate polygonal curve has 4 vertices. - fails
Determine the top-left, bottom-left, top-right, and bottom-right corners. - fails
Apply the perspective transformation. - fails completely
To fix your problem you have to ensure that only lines on the periphery of the card are extracted. If you always have a dark background, you can use this fact to discard lines with other contrasts/polarities. Alternatively, you can extract all the lines and then select the ones that are closest to the image boundary (if your background doesn't have lines), as in the sketch below.
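A minimal sketch of that second idea, keeping only the Hough segments whose endpoints both fall within a band near the image border (edges is your edge map; the margin and Hough parameters are assumptions to tune):
std::vector<cv::Vec4i> lines, borderLines;
cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 50, 10); // threshold/length/gap are guesses
const int margin = 30; // hypothetical width of the border band
auto nearBorder = [&](int x, int y) {
    return x < margin || x > edges.cols - margin ||
           y < margin || y > edges.rows - margin;
};
for (const cv::Vec4i& l : lines) {
    if (nearBorder(l[0], l[1]) && nearBorder(l[2], l[3]))
        borderLines.push_back(l); // both endpoints lie near the image boundary
}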

With OpenCV, try to extract a region of a picture described by ArrayOfArrays

I am developing some image processing tools in iOS. Currently, I have a contour of features computed, which is of type InputArrayOfArrays.
Declared as:
std::vector<std::vector<cv::Point> > contours_final( temp_contours.size() );
Now, I would like to extract the areas of the original RGB picture circled by the contours, and possibly store each sub-image in cv::Mat format. How can I do that?
Thanks in advance!
I'm guessing what you want to do is just extract the regions within the detected contours. Here is a possible solution:
using namespace cv;
using namespace std;

int main(void)
{
    vector<Mat> subregions;
    // contours_final is as given above in your code
    for (int i = 0; i < contours_final.size(); i++)
    {
        // Get the bounding box for the contour
        Rect roi = boundingRect(contours_final[i]); // This is an OpenCV function
        // Create a mask for each contour to mask out that region from the image.
        Mat mask = Mat::zeros(image.size(), CV_8UC1);
        drawContours(mask, contours_final, i, Scalar(255), CV_FILLED); // This is an OpenCV function
        // At this point, mask has a value of 255 for pixels within the contour and 0 for those outside it.
        // Extract the region using the mask
        Mat contourRegion;
        Mat imageROI;
        image.copyTo(imageROI, mask); // 'image' is the image you used to compute the contours.
        contourRegion = imageROI(roi);
        // Mat maskROI = mask(roi); // Save this if you want a mask for the pixels within the contour in contourRegion.
        // Store contourRegion. contourRegion is a rectangular image the size of the bounding rect for the contour,
        // BUT only pixels within the contour are visible. All other pixels are set to (0,0,0).
        subregions.push_back(contourRegion);
    }
    return 0;
}
You might also want to consider saving the individual masks, optionally to use as an alpha channel in case you want to save the subregions in a format that supports transparency (e.g. PNG); see the sketch after this answer.
NOTE: I'm NOT extracting ALL the pixels in the bounding box for each contour, just those within the contour. Pixels that are in the bounding box but not within the contour are set to 0. The reason is that your Mat object is an array, and that makes it rectangular.
Lastly, I don't see any reason for you to save just the pixels in the contour in a specially created data structure, because you would then need to store the position of each pixel in order to recreate the image. If your concern is saving space, that would not save you much space, if any. Saving the tightest bounding box would suffice. If instead you wish to just analyze the pixels in the contour region, save a copy of the mask for each contour so that you can use it to check which pixels are within the contour.
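For the alpha-channel idea, a small sketch, assuming contourRegion and the commented-out maskROI from the loop above:
std::vector<cv::Mat> channels;
cv::split(contourRegion, channels); // B, G, R planes
channels.push_back(maskROI);        // use the contour mask as the alpha channel
cv::Mat bgra;
cv::merge(channels, bgra);
cv::imwrite("subregion.png", bgra); // PNG preserves the transparency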
You are looking for the cv::approxPolyDP() function to connect the points.
I shared a similar use of the overall procedure in this post. Check the for loop after the findContours() call.
I think what you're looking for is cv::boundingRect().
Something like this:
using namespace cv;
Mat img = ...;
...
vector<Mat> roiVector;
for (vector<vector<Point> >::iterator it = contours.begin(); it != contours.end(); ++it) {
    if (boundingRect(*it).area() > minArea) {
        roiVector.push_back(img(boundingRect(*it)));
    }
}
cv::boundingRect() takes a vector of Points and returns a cv::Rect. Initializing a Mat myRoi = img(myRect) gives you a header that shares pixel data with that part of the image (so modifying myRoi will ALSO modify img); see the sketch below.
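If you need a copy that does not share data with img, clone the ROI; a small sketch:
cv::Rect myRect = cv::boundingRect(contours[0]);
cv::Mat myRoi = img(myRect);    // header sharing pixel data with img
cv::Mat myCopy = myRoi.clone(); // independent deep copy, safe to modify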
See more here.

Dealing with pixels in contours (OpenCV)?

I have retrieved a contour from an image and want to work specifically on the pixels in the contour. I need to find the sum (not the area) of the pixel values inside the contour. OpenCV only supports rectangular ROIs, so I have no idea how to do this. cvSum also only accepts complete images and doesn't have a mask option, so I am a bit lost on how to proceed. Does anyone have any suggestions on how to find the sum of the values of the pixels inside a specific contour?
First get all of your contours. Use this information to create a binary image where the white parts are the contour's outline and interior. Perform an AND operation on the two images. The result will be the contour and its interior on a black background. Then just sum all of the pixels in this image.
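A minimal modern C++ sketch of that approach, assuming a grayscale image gray and a contour index i into a contours vector (the next answer describes the same idea with the old C API):
cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
cv::drawContours(mask, contours, i, cv::Scalar(255), cv::FILLED); // filled contour = white mask
cv::Mat masked;
gray.copyTo(masked, mask);            // pixels outside the contour stay 0
double pixelSum = cv::sum(masked)[0]; // sum of intensities inside the contour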
If I understand correctly, you want to sum all the pixel intensities from a gray image that are inside a contour. If so, the method I think of is to draw that contour on a blank image and fill it, thereby making yourself a mask. After that, to optimize the process, you can also compute the bounding rect of the contour with:
CvRect cvBoundingRect(CvArr* points, int update=0);
After this you can make an intermediate image with:
void cvAddS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);
using the value 0, the mask obtained from the contour, and the bounding rect set as ROI beforehand.
After this, a sum over the resulting image will be a little faster.
To access the contour's points individually, follow this code:
vector<vector<Point> > contours;
...
printf("\n Contour pixels \n");
for (int a = 0; a < contours.size(); a++)
{
    printf("\nContour NO = %d size = %d \n", a, (int)contours[a].size());
    for (int b = 0; b < contours[a].size(); b++)
    {
        printf(" [%d, %d] ", contours[a][b].x, contours[a][b].y);
    }
}