OpenCV rotate, distort and translate ROI in new image - c++

I have an image from which I want to get a vertical ROI, apply some transformations and add to another image.
I read a lot of questions and answers on Stack Overflow and other forums, but I'm still stuck with this problem. For the moment I'm using the C interface of OpenCV, but I could use the C++ one if needed (I would have to write a conversion function, since I'm working with CGImageRef in Cocoa).
To get from the top image (see below) to the bottom image, I guess I have to :
Get the ROI on the first image ;
Scale it down ;
Get the intersection points on the lines between the center and the 2 circles for my "width" angle (the angle is fixed) ;
Distort the image so the corners stick to my intersection points ;
Rotate around the center point and put it in the output image.
For the moment, I manage to do this:
Getting the ROI ;
Scaling it with cvResize ;
Getting the intersection points shouldn't be too complicated, as it is pure geometry and I have already implemented it for another purpose.
But I have no idea how to distort the resulting image of my ROI, and I don't know if it is even possible in OpenCV. Would I have to use a kind of perspective correction?
And I've been trying the solutions from the few good posts I found here for rotating with the rotated bounding box, but with no good results so far.
EDIT :
Well, I managed to do the first part of the work :
Getting a ROI in a basis image ;
Rotating and placing it at a fixed distance from the center.
I used the method explained and coded in this post : https://stackoverflow.com/a/16285286/1060921
I only added a variable to set the rotation point and get my inner circle.
NB: I set the ROI BEFORE calling the method, so the ROI in the post's method is... the whole image size. Then I place it at the center of my final image with cvAdd.
Here I get one-pixel slices of my camera input. What I want to do now is to distort bigger slices, for example from 2 pixels wide on the inner circle to 5 pixels wide on the outer one.
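For reference, here is roughly what that pipeline looks like as a C++ sketch (I am actually using the C interface; the helper placeSlice and its parameters are purely illustrative, and it assumes the pasted rectangle stays inside the output image and that input and output have the same type):
#include <opencv2/opencv.hpp>
using namespace cv;

Mat placeSlice(const Mat& input, Rect sliceRoi, const Mat& output,
               Point2f center, int radiusInner, int radiusOuter,
               int sliceWidth, double angleDeg)
{
    // 1. Get the ROI on the input image.
    Mat slice = input(sliceRoi);

    // 2. Scale it down so it fits between the two circles.
    Mat scaled;
    resize(slice, scaled, Size(sliceWidth, radiusOuter - radiusInner), 0, 0, INTER_AREA);

    // 3. Paste the scaled slice onto an empty canvas, standing vertically
    //    between the inner and outer radius above the rotation center.
    Mat canvas = Mat::zeros(output.size(), output.type());
    scaled.copyTo(canvas(Rect(cvRound(center.x) - sliceWidth / 2,
                              cvRound(center.y) - radiusOuter,
                              scaled.cols, scaled.rows)));

    // 4. Rotate the canvas around the center point and add it into the output.
    Mat rot = getRotationMatrix2D(center, angleDeg, 1.0);
    Mat rotated;
    warpAffine(canvas, rotated, rot, canvas.size());

    Mat result;
    add(output, rotated, result);   // the cvAdd step from above
    return result;
}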

See this tutorial which uses warpPerspective to correct perspective distortion.
EDIT: In your case, warpAffine should be a better and simpler solution.
So you could do something like the following; if you need all four corners of the slice to move independently, use getPerspectiveTransform and warpPerspective with four points instead of the three points used here:
Point2f srcTri[3];
Point2f dstTri[3];
Mat rot_mat( 2, 3, CV_32FC1 );
Mat warp_mat( 2, 3, CV_32FC1 );
Mat src, warp_dst, warp_rotate_dst;
/// Load the image
src = imread( ... );
/// Set the dst image the same type and size as src
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
/// Set your 3 points to calculate the Affine Transform
srcTri[0] = Point2f( 0,0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
/// Get the Affine Transform
warp_mat = getAffineTransform( srcTri, dstTri );
/// Apply the Affine Transform just found to the src image
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
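A minimal sketch of the four-point variant, reusing the same src and warp_dst as above (the destination corner positions below are only example values):
Point2f srcQuad[4];
Point2f dstQuad[4];
/// Four corners of the source ROI
srcQuad[0] = Point2f( 0, 0 );
srcQuad[1] = Point2f( src.cols - 1, 0 );
srcQuad[2] = Point2f( src.cols - 1, src.rows - 1 );
srcQuad[3] = Point2f( 0, src.rows - 1 );
/// Where those corners should end up (example trapezoid)
dstQuad[0] = Point2f( src.cols*0.20, 0 );
dstQuad[1] = Point2f( src.cols*0.80, 0 );
dstQuad[2] = Point2f( src.cols - 1, src.rows - 1 );
dstQuad[3] = Point2f( 0, src.rows - 1 );
/// 3x3 homography mapping the source quad onto the destination quad
Mat persp_mat = getPerspectiveTransform( srcQuad, dstQuad );
warpPerspective( src, warp_dst, persp_mat, warp_dst.size() );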

Related

Extract pixels within given coordinates

I'm new to image processing and development. I have pixel coordinates in an image; connecting the coordinates gives a triangle. I want to extract the pixels that lie inside the given coordinates (the pixels within the triangle).
The coordinates are as follows.
1(x,y) -> (146 , 548)
2(x,y) -> (155, 548)
3(x,y) -> (149.6 , 558.1)
How do I take the pixels that are inside the above coordinates? Any help is appreciated. Thank you.
You should apply a mask to your image.
Example code:
First you should load your image:
//load default image
Mat image = cv::imread("/home/fabio/code/lena.jpg", cv::IMREAD_GRAYSCALE);
Then create a mask for your image and apply the triangle points to the mask.
//mask image definition
cv::Mat mask = cv::Mat::zeros(image.size(), image.type());
//triangle definition (example points)
vector<Point> points;
points.push_back( Point(100,70));
points.push_back( Point(60,150));
points.push_back( Point(190,120));
//apply triangle to mask
fillConvexPoly( mask, points, Scalar( 255 ));
After that your mask will look like this:
Finally create the final image applying the mask to the original image:
//final image definition
cv::Mat finalImage = cv::Mat::zeros(image.size(), image.type());
image.copyTo(finalImage, mask);
Result:
Is that what you're looking for?
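As a follow-up, if you only need the pixels inside the triangle from the question rather than a full-size masked image, you can additionally crop to the triangle's bounding box. A small sketch, assuming the same image and includes as above (fillConvexPoly takes integer points, so the third coordinate is rounded):
//triangle from the question, rounded to integer pixel coordinates
vector<Point> tri;
tri.push_back( Point(146, 548) );
tri.push_back( Point(155, 548) );
tri.push_back( Point(150, 558) );
//mask and masked image
cv::Mat mask = cv::Mat::zeros(image.size(), image.type());
fillConvexPoly( mask, tri, Scalar( 255 ));
cv::Mat masked;
image.copyTo(masked, mask);
//keep only the region around the triangle
cv::Rect box = boundingRect(tri);
cv::Mat cropped = masked(box).clone();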
There is a thread about this; the solution given there is easily applicable to your case, just replace the shape used in the example provided.
Here is the link : opencv-binary-image-mask-for-image-analysis-in-c

Matching small grayscale images

I want to test whether two images match. Partial matches also interest me.
The problem is that the images suffer from strong noise. Another problem is that the images might be rotated with an unknown angle. The objects shown in the images will roughly always have the same scale!
The images show area scans from a top-shot perspective. "Lines" are mostly walls and other objects are mostly trees and different kinds of plants.
Another problem was that the left image was very blurry and the right one's lines were very thin.
To compensate for this difference I used dilation. The resulting images are the ones I uploaded.
Although it can easily be seen that these images match almost perfectly, I cannot convince my algorithm of this fact.
My first idea was feature-based matching, but the matches were horrible. It only worked for rotation angles of -90°, 0° and 90°. Although most descriptors are rotation invariant (in past projects they really were), the rotation invariance seems to fail for this example.
My second idea was to split the images into several smaller segments and to use template matching. So I segmented the images and, again, for the human eye they are pretty easy to match. The goal of this step was to segment the different walls and trees/plants.
The upper row shows parts of the left image, and the lower row parts of the right image. After the segmentation the segments were dilated again.
As already mentioned: Template matching failed, as did contour based template matching and contour matching.
I think the dilation of the images was very important, because it was nearly impossible for the human eye to match the segments without dilation before the segmentation. Another dilation after the segmentation made this even less difficult.
Your first job should be to fix the orientation. I am not sure what the best algorithm for that is, but here is an approach I would use: fix one of the images and start rotating the other. For each rotation compute a histogram of the color intensity on each of the rows/columns. Compute some distance between the resulting vectors (e.g. use the cross product). Choose the rotation that results in the smallest cross product. It may be a good idea to combine this approach with hill climbing.
Once you have the images aligned in approximately the same direction, I believe matching should be easier. As the two images are supposed to be at the same scale, compute something analogous to the geometrical center for both images: compute a weighted sum of all pixel positions, where a completely white pixel has a weight of 1 and a completely black one a weight of 0; the sum should be a vector of size 2 (x and y coordinates). After that divide those values by the dimensions of the image and call this the "geometrical center of the image". Overlay the two images so that the two centers coincide and then once more compute the cross product for the difference between the images. I would say this should be their difference.
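A rough sketch of that rotation search (the step size and function names are assumptions, and an L2 distance between the row/column profiles is used here in place of the cross product mentioned above):
#include <opencv2/opencv.hpp>
#include <cfloat>
#include <cmath>
#include <vector>
using namespace cv;

// Row and column intensity sums of a grayscale image, concatenated into one profile.
static std::vector<double> intensityProfile(const Mat& img)
{
    std::vector<double> profile;
    for (int r = 0; r < img.rows; ++r)
        profile.push_back(sum(img.row(r))[0]);
    for (int c = 0; c < img.cols; ++c)
        profile.push_back(sum(img.col(c))[0]);
    return profile;
}

// L2 distance between two profiles; assumes both images have the same dimensions.
static double profileDistance(const std::vector<double>& a, const std::vector<double>& b)
{
    double d = 0.0;
    for (size_t i = 0; i < a.size(); ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(d);
}

// Brute-force search over rotations of 'moving' to best match 'fixed'.
double findBestRotation(const Mat& fixed, const Mat& moving, double stepDeg)
{
    std::vector<double> ref = intensityProfile(fixed);
    Point2f center(moving.cols / 2.0f, moving.rows / 2.0f);
    double bestAngle = 0.0, bestDist = DBL_MAX;

    for (double a = 0.0; a < 360.0; a += stepDeg) {
        Mat rot = getRotationMatrix2D(center, a, 1.0);
        Mat rotated;
        warpAffine(moving, rotated, rot, moving.size());

        double d = profileDistance(ref, intensityProfile(rotated));
        if (d < bestDist) { bestDist = d; bestAngle = a; }
    }
    return bestAngle;   // refine around this angle, e.g. with hill climbing
}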
You can also try the following methods to find the rotation and similarity.
Use image moments to get the rotation as shown here.
Once you rotate the image, use cross-correlation to evaluate the similarity.
EDIT
I tried this with OpenCV and C++ for the two sample images. I'm posting the code and results below as it seems to work well at least for the given samples.
Here's the function to calculate the orientation vector using image moments:
Mat orientVec(Mat& im)
{
    Moments m = moments(im);
    double cov[4] = {m.mu20/m.m00, m.mu11/m.m00, m.mu11/m.m00, m.mu02/m.m00};
    Mat covMat(2, 2, CV_64F, cov);
    Mat evals, evecs;
    eigen(covMat, evals, evecs);
    return evecs.row(0);
}
Rotate and match sample images:
Mat im1 = imread(INPUT_FOLDER_PATH + string("WojUi.png"), 0);
Mat im2 = imread(INPUT_FOLDER_PATH + string("XbrsV.png"), 0);
// get the orientation vector
Mat v1 = orientVec(im1);
Mat v2 = orientVec(im2);
double angle = acos(v1.dot(v2))*180/CV_PI;
// rotate im2. try rotating with -angle and +angle. here using -angle
Mat rot = getRotationMatrix2D(Point(im2.cols/2, im2.rows/2), -angle, 1.0);
Mat im2Rot;
warpAffine(im2, im2Rot, rot, Size(im2.rows, im2.cols));
// add a border to rotated image
int borderSize = im1.rows > im1.cols ? im1.rows/2 + 1 : im1.cols/2 + 1;
Mat im2RotBorder;
copyMakeBorder(im2Rot, im2RotBorder, borderSize, borderSize, borderSize, borderSize,
BORDER_CONSTANT, Scalar(0, 0, 0));
// normalized cross-correlation
Mat& image = im2RotBorder;
Mat& templ = im1;
Mat nxcor;
matchTemplate(image, templ, nxcor, CV_TM_CCOEFF_NORMED);
// take the max
double max;
Point maxPt;
minMaxLoc(nxcor, NULL, &max, NULL, &maxPt);
// draw the match
Mat rgb;
cvtColor(image, rgb, CV_GRAY2BGR);
rectangle(rgb, maxPt, Point(maxPt.x+templ.cols-1, maxPt.y+templ.rows-1), Scalar(0, 255, 255), 2);
cout << "max: " << max << endl;
With the -angle rotation in the code, I get max = 0.758. Below is the rotated image in this case with the matching region.
Otherwise (with +angle), max = 0.293.

Using transform matrix from larger area on ROI

I have a problem that I don't really know how to solve. The situation is that I have a picture, and inside this picture I have one ROI (a large rect). Inside this rect I then have X number of smaller ROIs.
I then use the getPerspectiveTransform() method on the larger of the ROIs. What I then want to do is to apply this matrix to the smaller ROIs in order to warp them separately, but using the matrix from the larger ROI. The reason I want to do this is basically that I want to keep the bounding rects I've got before the warp. If there is any way to warp the larger ROI and keep the bounding rects inside of it (i.e. keep the smaller ROIs), that would be very helpful!
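To make the idea clearer, this is roughly what I'm after (just a sketch with a hypothetical helper, assuming the usual OpenCV includes; it is not the code I'm actually running): warp the whole image once with the larger ROI's matrix, then map each smaller bounding rect through the same matrix with perspectiveTransform.
vector<Rect> warpedRois( const Mat& image, const Mat& largerRoiTransform,
                         Size warpedSize, const vector<Rect>& smallerRoi ) {
    // Warp the whole image with the matrix of the larger ROI
    Mat warped;
    warpPerspective( image, warped, largerRoiTransform, warpedSize );

    vector<Rect> result;
    for( size_t i = 0; i < smallerRoi.size(); ++i ) {
        // Map the four corners of the smaller ROI through the same matrix
        vector<Point2f> corners, mapped;
        corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].tl().y ) ); // Top left
        corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].tl().y ) ); // Top right
        corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].br().y ) ); // Bottom right
        corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].br().y ) ); // Bottom left
        perspectiveTransform( corners, mapped, largerRoiTransform );

        // Bounding rect of the mapped corners, clipped to the warped image
        result.push_back( boundingRect( mapped ) & Rect( Point( 0, 0 ), warpedSize ) );
    }
    return result;
}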
Anyway, this is what I've tried and the results it produces:
vector<Mat> warpSmallerRoi( vector<Rect> smallerRoi ) {
    vector<Mat> quad;

    for( int i = 0; i < smallerRoi.size(); ++ i ) {
        Mat wholeQuad;
        wholeQuad = Mat::zeros( originalImage(smallerRoi[i]).rows, originalImage(smallerRoi[i]).cols, CV_8UC1 );

        vector<Point2f> largerRoiCorners;
        vector<Point2f> quadPoints;

        // Takes the corners from the first smaller ROI and the last smaller ROI, represents the bigger ROI
        largerRoiCorners.push_back( Point2f( smallerRoi[0].tl().x, smallerRoi[0].tl().y ) ); // Top left
        largerRoiCorners.push_back( Point2f( smallerRoi[ smallerRoi.size() - 1 ].br().x, smallerRoi[ smallerRoi.size() - 1 ].tl().y ) ); // Top right
        largerRoiCorners.push_back( Point2f( smallerRoi[ smallerRoi.size() - 1 ].br().x, smallerRoi[ smallerRoi.size() - 1 ].br().y ) ); // Bottom right
        largerRoiCorners.push_back( Point2f( smallerRoi[0].tl().x, smallerRoi[0].br().y ) ); // Bottom left

        quadPoints.push_back( Point2f(0, 0) );
        quadPoints.push_back( Point2f(wholeQuad.cols, 0) );
        quadPoints.push_back( Point2f( wholeQuad.cols, wholeQuad.rows ) );
        quadPoints.push_back( Point2f(0, wholeQuad.rows) );

        // Transform matrix for the larger ROI, this warps into a perfect result.
        Mat largerRoiTransformMatrix = getPerspectiveTransform( largerRoiCorners, quadPoints );

        // Corners of the smaller ROI
        vector<Point2f> corners;
        corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].tl().y ) ); // Top left
        corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].tl().y ) ); // Top right
        corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].br().y ) ); // Bottom right
        corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].br().y ) ); // Bottom left

        Mat transformMatrix = getPerspectiveTransform( corners, quadPoints );

        /* This part is just experimental and does not work all the time, it works well sometimes though.
           Uses the parameters from the larger ROI transform matrix, applies them to the smaller ROI
           and then warps it. */
        transformMatrix.at<double>(1,0) = largerRoiTransformMatrix.at<double>(1,0);
        transformMatrix.at<double>(1,1) = largerRoiTransformMatrix.at<double>(1,1);
        transformMatrix.at<double>(1,2) = largerRoiTransformMatrix.at<double>(1,2);
        transformMatrix.at<double>(2,0) = largerRoiTransformMatrix.at<double>(2,0);
        transformMatrix.at<double>(2,1) = largerRoiTransformMatrix.at<double>(2,1);
        transformMatrix.at<double>(2,2) = largerRoiTransformMatrix.at<double>(2,2);

        // Warps the smaller ROI
        warpPerspective( plateRgb, wholeQuad, transformMatrix, wholeQuad.size() );

        quad.push_back(wholeQuad);
    }
    return quad;
}
This function works sometimes, especially when the larger ROI is already pretty straight (I guess this is because my replacements then don't differ much from the original values).
Eg:
From:
To:
But then when the larger ROI has much skew the result is not so good:
From:
To:
As you can see, the right part of the "H" is here a bit outside of the image. How should I go forward to transform my ROIs so that the "H" (and all the others) fit into the image and are warped with the correct transformation matrix?
Sorry if I missed out on any information, ask in that case! Thanks :)

Automatic perspective correction OpenCV

I am trying to implement automatic perspective correction in my iOS program, and when I use the test image I found in the tutorial everything works as expected. But when I take a picture I get back a weird result.
I am using code found in this tutorial
When I give it an image that looks like this:
I get this as the result:
Here is what dst gives me that might help.
I am using this to call the method which contains the code.
quadSegmentation(Img, bw, dst, quad);
Can anyone tell me why I am getting so many green lines compared to the tutorial? And how I might be able to fix this and properly crop the image to only contain the card?
For a perspective transform you need:
source points -> coordinates of the quadrangle vertices in the source image;
destination points -> coordinates of the corresponding quadrangle vertices in the destination image.
Here we will calculate these points by contour processing.
Calculate Coordinates of quadrangle vertices in the source image
You will get your card as a contour just by blurring, thresholding, finding contours, taking the largest contour, etc.
After finding the largest contour, approximate it with a polygonal curve; here you should get 4 points which represent the corners of your card. You can adjust the parameter epsilon until you get exactly 4 coordinates.
Calculate Coordinates of the corresponding quadrangle vertices in the destination image
This can easily be found by calculating the bounding rectangle of the largest contour.
In the image below, the red rectangle represents the source points and the green one the destination points.
Adjust the coordinate order and apply the perspective transform
Here I manually adjusted the coordinate order; you could use some sorting algorithm instead.
Then calculate the transformation matrix and apply warpPerspective.
See the final result
Code
Mat src=imread("card.jpg");
Mat thr;
cvtColor(src,thr,CV_BGR2GRAY);
threshold( thr, thr, 70, 255,CV_THRESH_BINARY );

vector< vector <Point> > contours; // Vector for storing contours
vector< Vec4i > hierarchy;
int largest_contour_index=0;
int largest_area=0;

Mat dst(src.rows,src.cols,CV_8UC1,Scalar::all(0)); // create destination image
findContours( thr.clone(), contours, hierarchy,CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image

for( int i = 0; i< contours.size(); i++ ){
    double a=contourArea( contours[i],false); // Find the area of the contour
    if(a>largest_area){
        largest_area=a;
        largest_contour_index=i; // Store the index of the largest contour
    }
}

drawContours( dst,contours, largest_contour_index, Scalar(255,255,255),CV_FILLED, 8, hierarchy );

vector<vector<Point> > contours_poly(1);
approxPolyDP( Mat(contours[largest_contour_index]), contours_poly[0],5, true );
Rect boundRect=boundingRect(contours[largest_contour_index]);

if(contours_poly[0].size()==4){
    std::vector<Point2f> quad_pts;
    std::vector<Point2f> squre_pts;
    quad_pts.push_back(Point2f(contours_poly[0][0].x,contours_poly[0][0].y));
    quad_pts.push_back(Point2f(contours_poly[0][1].x,contours_poly[0][1].y));
    quad_pts.push_back(Point2f(contours_poly[0][3].x,contours_poly[0][3].y));
    quad_pts.push_back(Point2f(contours_poly[0][2].x,contours_poly[0][2].y));
    squre_pts.push_back(Point2f(boundRect.x,boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x,boundRect.y+boundRect.height));
    squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y+boundRect.height));

    Mat transmtx = getPerspectiveTransform(quad_pts,squre_pts);
    Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
    warpPerspective(src, transformed, transmtx, src.size());

    Point P1=contours_poly[0][0];
    Point P2=contours_poly[0][1];
    Point P3=contours_poly[0][2];
    Point P4=contours_poly[0][3];
    line(src,P1,P2, Scalar(0,0,255),1,CV_AA,0);
    line(src,P2,P3, Scalar(0,0,255),1,CV_AA,0);
    line(src,P3,P4, Scalar(0,0,255),1,CV_AA,0);
    line(src,P4,P1, Scalar(0,0,255),1,CV_AA,0);
    rectangle(src,boundRect,Scalar(0,255,0),1,8,0);
    rectangle(transformed,boundRect,Scalar(0,255,0),1,8,0);

    imshow("quadrilateral", transformed);
    imshow("thr",thr);
    imshow("dst",dst);
    imshow("src",src);
    imwrite("result1.jpg",dst);
    imwrite("result2.jpg",src);
    imwrite("result3.jpg",transformed);
    waitKey();
}
else
    cout<<"Make sure that you are getting 4 corners using approxPolyDP..."<<endl;
This typically happens when you rely on somebody else's code to solve your particular problem instead of adapting the code. Look at the processing stages and also at the difference between their image and yours (it is a good idea, by the way, to start with their image and make sure the code works):
Get the edge map. - will probably work since your edges are fine
Detect lines with Hough transform. - fail since you have lines not only on the contour but also inside of your card. So expect a lot of false alarm lines
Get the corners by finding intersections between lines. - fail for the above mentioned reason
Check if the approximate polygonal curve has 4 vertices. - fail
Determine top-left, bottom-left, top-right, and bottom-right corner. - fail
Apply the perspective transformation. - fail completely
To fix your problem you have to ensure that only lines on the periphery are extracted. If you always have a dark background you can use this fact to discard the lines with other contrasts/polarities. Alternatively you can extract all the lines and then select the ones that are closest to the image boundary (if your background doesn't have lines).
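A minimal sketch of that last idea, keeping only probabilistic Hough line segments whose endpoints lie close to the image border (the Hough thresholds and the margin value are assumptions to tune for the actual images):
#include <opencv2/opencv.hpp>
#include <algorithm>
using namespace cv;
using namespace std;

// Keep only the detected line segments that run close to the image boundary.
vector<Vec4i> peripheralLines(const Mat& edges, int margin)
{
    vector<Vec4i> lines, kept;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 60, 50, 10);

    for (size_t i = 0; i < lines.size(); ++i) {
        Point p1(lines[i][0], lines[i][1]);
        Point p2(lines[i][2], lines[i][3]);
        // Distance from each endpoint to the nearest image border
        int d1 = std::min(std::min(p1.x, p1.y), std::min(edges.cols - 1 - p1.x, edges.rows - 1 - p1.y));
        int d2 = std::min(std::min(p2.x, p2.y), std::min(edges.cols - 1 - p2.x, edges.rows - 1 - p2.y));
        if (d1 < margin && d2 < margin)
            kept.push_back(lines[i]);
    }
    return kept;
}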

Filter out only one contour in OpenCV C/C++

I'm trying to make a program to detect an object in any shape using a video camera/webcam based on Canny filter and contour finding function. Here is my program:
int main( int argc, char** argv )
{
    CvCapture *cam;
    CvMoments moments;
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = NULL;
    CvSeq* contours2 = NULL;
    CvPoint2D32f center;
    int i;

    cam=cvCaptureFromCAM(0);
    if(cam==NULL){
        fprintf(stderr,"Cannot find any camera. \n");
        return -1;
    }

    while(1){
        IplImage *img=cvQueryFrame(cam);
        if(img==NULL){return -1;}

        IplImage *src_gray= cvCreateImage( cvSize(img->width,img->height), 8, 1);
        cvCvtColor( img, src_gray, CV_BGR2GRAY );
        cvSmooth( src_gray, src_gray, CV_GAUSSIAN, 5, 11);
        cvCanny(src_gray, src_gray, 70, 200, 3);
        cvFindContours( src_gray, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0));
        if(contours==NULL){ contours=contours2;}
        contours2=contours;

        cvMoments(contours, &moments, 1);
        double m_00 = cvGetSpatialMoment( &moments, 0, 0 );
        double m_10 = cvGetSpatialMoment( &moments, 1, 0 );
        double m_01 = cvGetSpatialMoment( &moments, 0, 1 );
        float gravityX = (m_10 / m_00)-150;
        float gravityY = (m_01 / m_00)-150;
        if(gravityY>=0&&gravityX>=0){
            printf("center point=(%.f, %.f) \n",gravityX,gravityY); }

        for (; contours != 0; contours = contours->h_next){
            CvScalar color = CV_RGB(250,0,0);
            cvDrawContours(img,contours,color,color,-1,-1, 8, cvPoint(0,0));
        }

        cvShowImage( "Input", img );
        cvShowImage( "Contours", src_gray );
        cvClearMemStorage(storage);
        if(cvWaitKey(33)>=0) break;
    }

    cvDestroyWindow("Contours");
    cvDestroyWindow("Source");
    cvReleaseCapture(&cam);
}
This program will detect all contours captured by the camera, and the average coordinate of the contours will be printed. My question is: how do I filter out only one object/contour so I can get a more precise (x, y) position of the object? If possible, can anyone show me how to mark the center of the object using its (x, y) coordinates?
Thanks in advance. Cheers
P/S: Sorry, I couldn't upload a screenshot yet, but if it helps, here's the link.
Edit: To make my question clearer:
For example, if I want to filter out only the square from my screenshot above, what should I do?
The object I want to filter out has the biggest contour area and, most importantly, has a shape (any shape), not a straight or curved line.
I'm still experimenting with the smooth and Canny values, so if anybody has problems detecting the contours using my program, please alter the values.
I think it can be solved fairly easily. I would suggest some morphological operations before contour detection. Also, I would suggest filtering out smaller elements and keeping the biggest element as the only one still in the image.
I suggest:
for filtering out lines (straight or curved): you have to decide what you yourself consider the boundary between a "line" and a "shape". Let's say you consider all the objects of a thickness of 5 pixels or more to be objects, while the ones that are less than 5 pixels across are lines. A morphological opening that uses a 5x5 square or a 3-pixel-sized diamond shape as a structuring element would take care of this.
for filtering out small objects in general: if objects are of arbitrary shapes, purely morphological opening won't do: you have to do an algebraic opening. A special type of algebraic openings is an area opening: an operation that removes all the connected components in the image that have (pixel) area smaller than a given threshold. If you have an upper bound on the size of uninteresting objects, or a lower bound on the size of interesting ones, that value should be used as a threshold. You can probably get a similar effect with a larger morphological opening, but it will not be so flexible.
for filtering out all the objects except the largest: it sounds like removing connected components from the smallest one to the largest should work. Try labeling the connected components. On a binary (black & white) image, this transformation works by creating a greyscale image, labeling the background as 0 (black) and each component with a different, increasing grey value. In the end, the pixels of each object are marked by a different value. You can now simply look at the grey level histogram and find the grey value with the most pixels. Set all the other grey levels to 0 (black), and the only object left in the image is the biggest one (see the sketch at the end of this answer).
The suggestions are written from the simplest to the most complex ones. Still, I think OpenCV can be of help with any of these. Morphological erosion, dilation, opening and closing are implemented in OpenCV. I think you might need to construct an algebraic opening operator on your own (or play with combining OpenCV basic morphology), but I'm sure OpenCV can help you with both labeling the connected components and examining the histogram of the resulting greyscale image.
In the end, when only pixels from one object are left, you do the Canny contour detection.
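For the last suggestion, here is a minimal C++ sketch of keeping only the biggest object and marking its center. Rather than full component labeling, it just compares contour areas, which has the same effect for this purpose; the question uses the C interface, and minArea is an assumed threshold for dropping thin lines and tiny blobs, so treat this as a sketch rather than a drop-in:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

// Returns the index of the contour with the largest area, or -1 if none exceeds minArea.
int largestContour(const vector<vector<Point> >& contours, double minArea)
{
    int best = -1;
    double bestArea = minArea;
    for (size_t i = 0; i < contours.size(); ++i) {
        double a = contourArea(contours[i]);
        if (a > bestArea) { bestArea = a; best = (int)i; }
    }
    return best;
}

// Usage after finding contours on the Canny output (C++ API):
//   vector<vector<Point> > contours;
//   findContours(edges, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
//   int idx = largestContour(contours, 100.0);
//   if (idx >= 0) {
//       drawContours(frame, contours, idx, Scalar(0, 0, 255), 2);
//       Moments m = moments(contours[idx]);
//       Point center(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
//       circle(frame, center, 3, Scalar(0, 255, 0), -1);   // mark the center point
//   }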
This is a blob processing problem that cannot be solved (easily) by OpenCV itself. Have a look at cvBlobsLib. This library extends OpenCV with functions/classes for connected component labeling.
http://opencv.willowgarage.com/wiki/cvBlobsLib