Matching small grayscale images - c++

I want to test whether two images match. Partial matches also interest me.
The problem is that the images suffer from strong noise. Another problem is that the images might be rotated with an unknown angle. The objects shown in the images will roughly always have the same scale!
The images show area scans from a top-shot perspective. "Lines" are mostly walls and other objects are mostly trees and different kinds of plants.
Another problem was that the left image was very blurry and the right one's lines were very thin.
To compensate for this difference I used dilation. The resulting images are the ones I uploaded.
Although it can easily be seen that these images match almost perfectly, I cannot convince my algorithm of this fact.
My first idea was a feature based matching, but the matches are horrible. It only worked for a rotation angle of -90°, 0° and 90°. Although most descriptors are rotation invariant (in past projects they really were), the rotation invariance seems to fail for this example.
My second idea was to split the images into several smaller segments and to use template matching. So I segmented the images and, again, for the human eye they are pretty easy to match. The goal of this step was to segment the different walls and trees/plants.
The upper row shows parts of the left image, and the lower row parts of the right image. After the segmentation, the segments were dilated again.
As already mentioned: Template matching failed, as did contour based template matching and contour matching.
I think the dilation of the images was very important, because before the segmentation it was nearly impossible for the human eye to match the segments without dilation. Another dilation after the segmentation made this even easier.

Your first job should be to fix the orientation. I am not sure what the best algorithm for that is, but here is an approach I would use: fix one of the images and start rotating the other. For each rotation, compute a histogram of the color intensity over each of the rows/columns. Compute some distance between the resulting vectors (e.g. use the cross product). Choose the rotation that results in the smallest cross product. It may be a good idea to combine this approach with hill climbing.
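For illustration, here is a rough C++ sketch of that brute-force rotation search. It assumes both images are grayscale and the same size; the one-degree step and the L2 distance between the row/column intensity profiles are arbitrary choices for the sketch, not a prescription:
#include <opencv2/opencv.hpp>
#include <limits>
using namespace cv;

// Sum the pixel intensities of each row and each column into one profile vector.
static Mat intensityProfile(const Mat& im)
{
    Mat rows, cols;
    reduce(im, rows, 1, REDUCE_SUM, CV_64F);   // one sum per row    -> N x 1
    reduce(im, cols, 0, REDUCE_SUM, CV_64F);   // one sum per column -> 1 x M
    Mat rowsT = rows.t();
    Mat profile;
    hconcat(rowsT, cols, profile);             // 1 x (N + M)
    return profile;
}

// Brute-force search for the rotation of `moving` that best matches `fixed`.
double findBestRotation(const Mat& fixed, const Mat& moving, double stepDeg = 1.0)
{
    Mat fixedProfile = intensityProfile(fixed);
    double bestAngle = 0.0, bestDist = std::numeric_limits<double>::max();
    for (double a = 0.0; a < 360.0; a += stepDeg) {
        Mat rot = getRotationMatrix2D(Point2f(moving.cols / 2.0f, moving.rows / 2.0f), a, 1.0);
        Mat rotated;
        warpAffine(moving, rotated, rot, moving.size());
        double d = norm(fixedProfile, intensityProfile(rotated), NORM_L2);
        if (d < bestDist) { bestDist = d; bestAngle = a; }
    }
    return bestAngle;   // rotate `moving` by this angle to align it with `fixed`
}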
Once you have the images aligned in approximately the same direction, I believe matching should be easier. As the two images are supposed to be at the same scale, compute something analogous to the geometrical center for both images: compute a weighted sum of all pixel positions, where a completely white pixel has a weight of 1 and a completely black one a weight of 0; the sum should be a vector of size 2 (x and y coordinates). After that, divide those values by the dimensions of the image and call this the "geometrical center of the image". Overlay the two images so that the two centers coincide and then once more compute the cross product of the difference between the images. I would say this should be their difference.
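A quick sketch of that weighted-center step, here using cv::moments to accumulate the intensity-weighted sums rather than an explicit loop (the normalisation by the image dimensions follows the suggestion above):
#include <opencv2/opencv.hpp>
using namespace cv;

// Intensity-weighted "geometrical center" of a grayscale image, normalised by
// the image dimensions. A white pixel weighs 1, a black pixel 0.
Point2d weightedCenter(const Mat& im)   // im: single-channel, e.g. CV_8UC1
{
    Moments m = moments(im, /*binaryImage=*/false);
    double cx = m.m10 / m.m00;          // weighted mean x in pixels
    double cy = m.m01 / m.m00;          // weighted mean y in pixels
    return Point2d(cx / im.cols, cy / im.rows);
}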

You can also try the following methods to find the rotation and similarity.
Use image moments to get the rotation as shown here.
Once you rotate the image, use cross-correlation to evaluate the similarity.
EDIT
I tried this with OpenCV and C++ for the two sample images. I'm posting the code and results below as it seems to work well at least for the given samples.
Here's the function to calculate the orientation vector using image moments:
Mat orientVec(Mat& im)
{
    Moments m = moments(im);
    double cov[4] = {m.mu20/m.m00, m.mu11/m.m00, m.mu11/m.m00, m.mu02/m.m00};
    Mat covMat(2, 2, CV_64F, cov);
    Mat evals, evecs;
    eigen(covMat, evals, evecs);
    return evecs.row(0);
}
Rotate and match sample images:
Mat im1 = imread(INPUT_FOLDER_PATH + string("WojUi.png"), 0);
Mat im2 = imread(INPUT_FOLDER_PATH + string("XbrsV.png"), 0);
// get the orientation vector
Mat v1 = orientVec(im1);
Mat v2 = orientVec(im2);
double angle = acos(v1.dot(v2))*180/CV_PI;
// rotate im2. try rotating with -angle and +angle. here using -angle
Mat rot = getRotationMatrix2D(Point(im2.cols/2, im2.rows/2), -angle, 1.0);
Mat im2Rot;
warpAffine(im2, im2Rot, rot, Size(im2.rows, im2.cols));
// add a border to rotated image
int borderSize = im1.rows > im1.cols ? im1.rows/2 + 1 : im1.cols/2 + 1;
Mat im2RotBorder;
copyMakeBorder(im2Rot, im2RotBorder, borderSize, borderSize, borderSize, borderSize,
               BORDER_CONSTANT, Scalar(0, 0, 0));
// normalized cross-correlation
Mat& image = im2RotBorder;
Mat& templ = im1;
Mat nxcor;
matchTemplate(image, templ, nxcor, CV_TM_CCOEFF_NORMED);
// take the max
double max;
Point maxPt;
minMaxLoc(nxcor, NULL, &max, NULL, &maxPt);
// draw the match
Mat rgb;
cvtColor(image, rgb, CV_GRAY2BGR);
rectangle(rgb, maxPt, Point(maxPt.x+templ.cols-1, maxPt.y+templ.rows-1), Scalar(0, 255, 255), 2);
cout << "max: " << max << endl;
With -angle rotation in code, I get max = 0.758. Below is the rotated image in this case with the matching region.
Otherwise (rotating with +angle), max = 0.293.

Related

How to calculate the distance between two circles in an image with OpenCV

image with two circles
I have an image that includes two fibers (appearing as two circles in the image). How can I calculate the distance between the two fibers?
I find it hard to detect the position of the fibers. I have tried to use the HoughCircles function, but the parameters are hard to optimize and it cannot locate the circles precisely most of the time. Should I subtract the background first, or are there any other methods? Many thanks!
Unfortunately, you haven't shown your preprocessing steps. In my approach, I'll do the following:
Convert input image to grayscale (see cvtColor).
Median blurring, maintains the "edges" (see medianBlur).
Adaptive thresholding (see adaptiveThreshold).
Morphological opening to get rid of small noise (see morphologyEx).
Find circles by HoughCircles.
Not done here: possible refinement of the found circles. Exclude circles that are too small or too large. Use all the prior information you have! For example, how large can the circles be at all?
Here's my whole code:
// Read image.
cv::Mat img = cv::imread("images/i7aJJ.jpg", cv::IMREAD_COLOR);
// Convert to grayscale for processing.
cv::Mat blk;
cv::cvtColor(img, blk, cv::COLOR_BGR2GRAY);
// Median blurring to improve following thresholding.
cv::medianBlur(blk, blk, 11);
// Adaptive thresholding.
cv::adaptiveThreshold(blk, blk, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 51, -2);
// Morphological opening to get rid of small noise.
cv::morphologyEx(blk, blk, cv::MORPH_OPEN, cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)));
// Find circles using Hough transform.
std::vector<cv::Vec4f> circles;
cv::HoughCircles(blk, circles, cv::HOUGH_GRADIENT, 1.0, 300, 50, 25, 100);
// TODO: Refinement of found circles, if there are more than two.
// For example, calculate areas: Neglect too small or too large areas.
// Compare all areas, and keep the two with nearly matching areas and
// suitable areas.
// Draw circles in input image.
for (cv::Vec4f& circle : circles) {
    cv::circle(img, cv::Point(circle[0], circle[1]), circle[2], cv::Scalar(0, 0, 255), 4);
    cv::circle(img, cv::Point(circle[0], circle[1]), 5, cv::Scalar(0, 255, 0), cv::FILLED);
}
// --- Assuming there are only the two right circles left from here. --- //
// Draw some debug output in input image.
const cv::Point c1 = cv::Point(circles[0][0], circles[0][1]);
const cv::Point c2 = cv::Point(circles[1][0], circles[1][1]);
cv::line(img, c1, c2, cv::Scalar(255, 0, 0), 2);
// Calculate distance, and put in input image.
double dist = cv::norm(c1 - c2);
cv::putText(img, std::to_string(dist), cv::Point((c1.x + c2.x) / 2 + 20, (c1.y + c2.y) / 2 + 20), cv::FONT_HERSHEY_COMPLEX, 1.0, cv::Scalar(255, 0, 0));
The final output looks like this:
The intermediate image right before the HoughCircles operation looks like this:
In general, I'm not that skeptical about HoughCircles. You "just" have to pay attention to your preprocessing.
Hope that helps!
It's possible using Hough circle detection, but you should provide more images if you want a more stable detection. I just do denoising and go straight to circle detection. Non-local means denoising is pretty good at preserving edges, which in turn is good for the Canny edge step included in the Hough circle algorithm.
My code is written in Python but can easily be translated into C++.
import cv2
from matplotlib import pyplot as plt

IM_PATH = 'your image path'
DS = 2  # downsample the image

orig = cv2.imread(IM_PATH, cv2.IMREAD_GRAYSCALE)
orig = cv2.resize(orig, (orig.shape[1] // DS, orig.shape[0] // DS))
img = cv2.fastNlMeansDenoising(orig, h=3, templateWindowSize=20 // DS + 1, searchWindowSize=40 // DS + 1)

plt.imshow(orig, cmap='gray')

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=200 // DS, param1=40 // DS,
                           param2=40 // DS, minRadius=210 // DS, maxRadius=270 // DS)
if circles is not None:
    for x, y, r in circles[0]:
        c = plt.Circle((x, y), r, fill=False, lw=1, ec='C1')
        plt.gca().add_patch(c)

plt.gcf().set_size_inches((12, 8))
plt.show()
Important
Doing a bit of image processing is only the first step in a good (and stable!) object detection. You have to leverage every detail and property that you can get your hands on and apply some statistics to improve your results. For example:
Use Yves' approach as an addition and filter all detected circles that do not intersect the joints.
Is one circle always underneath the other? Filter out horizontally aligned pairs.
Can you reduce the ROI (are the circles always in a specific area in your image or can they be everywhere)?
Are both circles always the same size? Filter out pairs with different sizes.
...
If you can use multiple metrics you can apply a statistical model (e.g. majority voting or k-NN) to find the best pair of circles; a rough sketch of such a pairwise filter is shown below.
Again: always think of what you know about your object, the environment and its behavior and take advantage of that knowledge.
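As a C++ sketch of that pairwise filtering, assuming the circles are available as cv::Vec3f (x, y, radius) as returned by HoughCircles; the 0.8 radius-ratio threshold and the vertical-alignment test are made-up values that would need tuning on real data:
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

// Pick the most plausible pair of circles using two of the heuristics above:
// similar radii and one circle roughly underneath the other.
std::pair<cv::Vec3f, cv::Vec3f> pickCirclePair(const std::vector<cv::Vec3f>& circles)
{
    std::pair<cv::Vec3f, cv::Vec3f> best;   // stays zero-initialised if no pair qualifies
    double bestScore = -1.0;
    for (size_t i = 0; i < circles.size(); ++i) {
        for (size_t j = i + 1; j < circles.size(); ++j) {
            const cv::Vec3f &a = circles[i], &b = circles[j];
            double radiusRatio = std::min(a[2], b[2]) / std::max(a[2], b[2]);
            bool verticallyAligned = std::abs(a[0] - b[0]) < 0.5f * std::abs(a[1] - b[1]);
            if (radiusRatio < 0.8 || !verticallyAligned)
                continue;                   // discard implausible pairs
            if (radiusRatio > bestScore) {  // keep the pair with the most similar radii
                bestScore = radiusRatio;
                best = std::make_pair(a, b);
            }
        }
    }
    return best;
}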

OpenCV Center Homography

I am trying to create a stitching algorithm. I have been successful in creating it, with a few tweaks still needed. The photos below are examples of my stitching program so far. I am able to provide it with an unordered list of images (so long as the images are in the flight path or side by side, it will work regardless of their orientation to one another).
The issue is if the images are reversed some of the image doesn't make it into the final product. Here is the code for the actual stitching. Assume that finding keypoints, matching, and homography is done correctly.
By altering this code, is there a way to centre the first image on the destination blank image and still stitch to it? Also, I got this code on Stack Overflow (Opencv Image Stitching or Panorama) and am not fully sure how it works, so I would love it if someone could explain it.
Thanks for any help in advance!
Mat stitchMatches(Mat image1, Mat image2, Mat homography){
    Mat result;
    vector<Point2f> fourPoint;
    //-Get the four corners of the first image (master)
    fourPoint.push_back(Point2f(0, 0));
    fourPoint.push_back(Point2f(image1.size().width, 0));
    fourPoint.push_back(Point2f(0, image1.size().height));
    fourPoint.push_back(Point2f(image1.size().width, image1.size().height));
    Mat destination;
    perspectiveTransform(Mat(fourPoint), destination, homography);

    double min_x, min_y, tam_x, tam_y;
    float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
    min_x1 = min(fourPoint.at(0).x, fourPoint.at(1).x);
    min_x2 = min(fourPoint.at(2).x, fourPoint.at(3).x);
    min_y1 = min(fourPoint.at(0).y, fourPoint.at(1).y);
    min_y2 = min(fourPoint.at(2).y, fourPoint.at(3).y);
    max_x1 = max(fourPoint.at(0).x, fourPoint.at(1).x);
    max_x2 = max(fourPoint.at(2).x, fourPoint.at(3).x);
    max_y1 = max(fourPoint.at(0).y, fourPoint.at(1).y);
    max_y2 = max(fourPoint.at(2).y, fourPoint.at(3).y);
    min_x = min(min_x1, min_x2);
    min_y = min(min_y1, min_y2);
    tam_x = max(max_x1, max_x2);
    tam_y = max(max_y1, max_y2);

    Mat Htr = Mat::eye(3, 3, CV_64F);
    if (min_x < 0){
        tam_x = image2.size().width - min_x;
        Htr.at<double>(0, 2) = -min_x;
    }
    if (min_y < 0){
        tam_y = image2.size().height - min_y;
        Htr.at<double>(1, 2) = -min_y;
    }

    result = Mat(Size(tam_x*2, tam_y*2), CV_32F);
    warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_CONSTANT, 0);
    warpPerspective(image1, result, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    return result;
}
It's normally easy to center an image; you simply create a bigger matrix padded with zeros (or whatever color you want), and define an ROI in the center with the same size of your image, and place it in there. However, you cannot in general do this with your two images. The problem is that if an image is shifted, or rotated, so that parts of it are outside your destination image bounds, then your returned warped image from warpPerspective is cut off at those bounds. What you need to do is create the padded image, insert the image that is not being warped wherever you like, and modify the transformation (homography, in this case) by adding in the translation to those pixels.
For example, if your centered image has its top-left point at (400, 500) in the padded image, then you need to add a translation of (400, 500) to your homography so the pixels get mapped to the correct space, and as long as your padded image is large enough, none of it will be cut off.
You will need to create a translational homography and compose it with your original homography to add the translation in. For example, suppose your anchor point for the non-warped image inside the padded image is at (x, y). Translation in a homography is given by the last column; if your homography is a 3x3 matrix H then (using normal mathematical indexing) H(1,3) is your translation in x and H(2,3) is the translation in y. So we need to create a new identity homography H_t and add those translations in:
      [ 1 0 x ]
H_t = [ 0 1 y ]
      [ 0 0 1 ]
Then you can compose this with your original homography H (using matrix multiplication): H_n = H_t * H. Using the new homography H_n we can warp the image into this padded space with that added translation to move it to the correct spot using warpPerspective as usual.
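Putting that together, a minimal sketch (the anchor point (x, y) and the canvas size are parameters you would pick from your own padding calculation, and H is assumed to be CV_64F as returned by findHomography):
#include <opencv2/opencv.hpp>
using namespace cv;

// Warp `src` with homography H into a padded canvas, adding a translation of (x, y),
// i.e. the anchor of the non-warped image inside the padded canvas.
Mat warpIntoPadded(const Mat& src, const Mat& H, int x, int y, Size canvasSize)
{
    // Translation-only homography H_t as described above
    Mat H_t = (Mat_<double>(3, 3) << 1, 0, x,
                                     0, 1, y,
                                     0, 0, 1);
    Mat H_n = H_t * H;                  // compose: apply H, then translate
    Mat canvas;
    warpPerspective(src, canvas, H_n, canvasSize);
    return canvas;
}

// The non-warped image is then copied into the same canvas at its anchor, e.g.:
//   centered.copyTo(canvas(Rect(x, y, centered.cols, centered.rows)));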
You can also automate this to pad the image precisely as much as it needs, so that you don't have excess padding and the padding will stretch only as needed. See my answer here for a detailed explanation of how to calculate that and warp your images into the padded space.

Compare image edges with margin in OpenCV

I have two almost similar images, with the difference that the shapes in the second image are a little different - most of the time smaller, but they can also be larger. The shape count in one image can range from ~10 to >100, and the shapes can get relatively close to each other.
It would look something like this (note: both images would not be transparent):
The black triangle is image 1, the grey triangle is image 2.
Now I want to add a predefined margin (3px here - to both sides of the contour) to the edges of image 1 and test if the edges of the second image are in "the same" range as the first image. If not, display that visually:
Top left: Small difference between the two images (visualized by red outline)
Bottom right: "Same" edge -> No difference
How can I best accomplish this?
I'm using OpenCV with C++
In case the shapes are at the same positions in both images and you just need the markers on an image without additional information, this simple trick could do it.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img1 = imread("D:/1.png");
    Mat img2 = imread("D:/2.png");

    Mat diff;
    absdiff(img1, img2, diff);
    cv::threshold(diff, diff, 128, 255, THRESH_BINARY);

    Mat markers;
    int minRadiusDiff = 2;
    erode(diff, markers, Mat(), cv::Point(-1, -1), minRadiusDiff / 2);

    imwrite("D:/out.png", markers);
}
Here are some example images:
The triangle becomes much bigger, the wobbly thing becomes much smaller and the quad only shrinks slightly.
So we would like to have the triangle and the wobble marked, but not the quad.
And that is exactly our result.

Cumulative Homography Wrongly Scaling

I'm trying to build a panorama image of the ground covered by a downward-facing camera (at a fixed height, around 1 metre above the ground). This could potentially run to thousands of frames, so the Stitcher class's built-in panorama method isn't really suitable - it's far too slow and memory hungry.
Instead I'm assuming the floor and motion is planar (not unreasonable here) and trying to build up a cumulative homography as I see each frame. That is, for each frame, I calculate the homography from the previous one to the new one. I then get the cumulative homography by multiplying that with the product of all previous homographies.
Let's say I get H01 between frames 0 and 1, then H12 between frames 1 and 2. To get the transformation to place frame 2 onto the mosaic, I need to get H01*H12. This continues as the frame count increases, such that I get H01*H12*H23*H34*H45*....
In code, this is something akin to:
cv::Mat previous, current;
// Init cumulative homography
cv::Mat cumulative_homography = cv::Mat::eye(3, 3, CV_64F);
video_stream >> previous;

for (;;) {
    video_stream >> current;
    // Here I do some checking of the frame, etc

    // Get the homography using my DenseMosaic class (using Farneback to get OF)
    cv::Mat tmp_H = DenseMosaic::get_homography(previous, current);

    // Now normalise the homography by its bottom right corner
    tmp_H /= tmp_H.at<double>(2, 2);

    cumulative_homography *= tmp_H;

    previous = current.clone();
}
It works pretty well, except that as the camera moves "up" in the viewpoint, the homography scale decreases. As it moves down, the scale increases again. This gives my panoramas a perspective type effect that I really don't want.
For example, this is taken on a few seconds of video moving forward then backward. The first frame looks ok:
The problem comes as we move forward a few frames:
Then when we come back again, you can see the frame gets bigger again:
I'm at a loss as to where this is coming from.
I'm using Farneback dense optical flow to calculate pixel-to-pixel correspondences as below (sparse feature matching doesn't work well on this data), and I've checked my flow vectors - they're generally very good, so it's not a tracking problem. I also tried switching the order of the inputs to findHomography (in case I'd mixed up the frame numbers), but it's still no better.
cv::calcOpticalFlowFarneback(grey_1, grey_2, flow_mat, 0.5, 6, 50, 5, 7, 1.5, flags);
// Using the flow_mat optical flow map, populate grid point correspondences between images
std::vector<cv::Point2f> points_1, points_2;
median_motion = DenseMosaic::dense_flow_to_corresp(flow_mat, points_1, points_2);
cv::Mat H = cv::findHomography(cv::Mat(points_2), cv::Mat(points_1), CV_RANSAC, 1);
Another thing I thought it could be was the translation I include in the transformation to ensure my panorama is centred within the scene:
cv::warpPerspective(init.clone(), warped, translation*homography, init.size());
But having checked the values in the homography before the translation is applied, the scaling issue I mention is still present.
Any hints are gratefully received. There's a lot of code I could put in, but it seems irrelevant; please do let me know if there's something missing.
UPDATE
I've tried switching out the *= operator for the full multiplication and tried reversing the order the homographies are multiplied in, but no luck. Below is my code for calculating the homography:
/**
\brief Calculates the homography between the current and previous frames
*/
cv::Mat DenseMosaic::get_homography()
{
    cv::Mat grey_1, grey_2; // Grayscale versions of frames
    cv::cvtColor(prev, grey_1, CV_BGR2GRAY);
    cv::cvtColor(cur, grey_2, CV_BGR2GRAY);

    // Calculate the dense flow
    int flags = cv::OPTFLOW_FARNEBACK_GAUSSIAN;
    if (frame_number > 2) {
        flags = flags | cv::OPTFLOW_USE_INITIAL_FLOW;
    }
    cv::calcOpticalFlowFarneback(grey_1, grey_2, flow_mat, 0.5, 6, 50, 5, 7, 1.5, flags);

    // Convert the flow map to point correspondences
    std::vector<cv::Point2f> points_1, points_2;
    median_motion = DenseMosaic::dense_flow_to_corresp(flow_mat, points_1, points_2);

    // Use the correspondences to get the homography
    cv::Mat H = cv::findHomography(cv::Mat(points_2), cv::Mat(points_1), CV_RANSAC, 1);
    return H;
}
And this is the function I use to find the correspondences from the flow map:
/**
\brief Calculate pixel->pixel correspondences given a map of the optical flow across the image
\param[in] flow_mat Map of the optical flow across the image
\param[out] points_1 The set of points from #cur
\param[out] points_2 The set of points from #prev
\param[in] step_size The size of spaces between the grid lines
\return The median motion as a point

Uses a dense flow map (such as that created by cv::calcOpticalFlowFarneback) to obtain a set of point correspondences across a grid.
*/
cv::Point2f DenseMosaic::dense_flow_to_corresp(const cv::Mat &flow_mat, std::vector<cv::Point2f> &points_1, std::vector<cv::Point2f> &points_2, int step_size)
{
    std::vector<double> tx, ty;
    for (int y = 0; y < flow_mat.rows; y += step_size) {
        for (int x = 0; x < flow_mat.cols; x += step_size) {
            /* Flow is basically the delta between left and right points */
            cv::Point2f flow = flow_mat.at<cv::Point2f>(y, x);
            tx.push_back(flow.x);
            ty.push_back(flow.y);

            /* There's no need to calculate for every single point,
               if there's not much change, just ignore it
            */
            if (fabs(flow.x) < 0.1 && fabs(flow.y) < 0.1)
                continue;

            points_1.push_back(cv::Point2f(x, y));
            points_2.push_back(cv::Point2f(x + flow.x, y + flow.y));
        }
    }

    // I know this should be median, not mean, but it's only used for plotting the
    // general motion direction so it's unimportant.
    cv::Point2f t_median;
    cv::Scalar mtx = cv::mean(tx);
    t_median.x = mtx[0];
    cv::Scalar mty = cv::mean(ty);
    t_median.y = mty[0];

    return t_median;
}
It turns out this was because my viewpoint was close to the features, meaning that the non-planarity of the tracked features was causing skew in the homography. I managed to prevent this (it's more of a hack than a method...) by using estimateRigidTransform instead of findHomography, as this does not estimate perspective variations.
In this particular case, it makes sense to do so, as the view does only ever undergo rigid transformations.
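For reference, a minimal sketch of that swap inside the homography-estimation step, padding the 2x3 affine estimate back to a 3x3 matrix so the cumulative multiplication above keeps working (estimateRigidTransform exists in OpenCV 2.x/3.x; estimateAffinePartial2D is the newer equivalent):
// Inside get_homography(), replacing the findHomography call:
cv::Mat A = cv::estimateRigidTransform(points_2, points_1, false);  // false = rotation, translation, uniform scale only
// cv::Mat A = cv::estimateAffinePartial2D(points_2, points_1);     // OpenCV >= 3.2 alternative
if (A.empty()) {
    return cv::Mat::eye(3, 3, CV_64F);   // estimation failed; fall back to identity
}
cv::Mat H = cv::Mat::eye(3, 3, CV_64F);
A.copyTo(H(cv::Rect(0, 0, 3, 2)));       // top two rows come from the affine estimate
return H;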

Automatic perspective correction OpenCV

I am trying to implement Automatic perspective correction in my iOS program and when I use the test image I found on the tutorial everything works as expected. But when I take a picture I get back a weird result.
I am using code found in this tutorial
When I give it an image that looks like this:
I get this as the result:
Here is what dst gives me that might help.
I am using this to call the method which contains the code.
quadSegmentation(Img, bw, dst, quad);
Can anyone tell me why I am getting so many green lines compared to the tutorial? And how I might be able to fix this and properly crop the image to only contain the card?
For the perspective transform you need:
source points -> coordinates of the quadrangle vertices in the source image.
destination points -> coordinates of the corresponding quadrangle vertices in the destination image.
Here we will calculate these points by contour processing.
Calculate Coordinates of quadrangle vertices in the source image
You will get your card as a contour just by blurring, thresholding, finding contours and taking the largest contour, etc.
After finding the largest contour, approximate it with a polygonal curve; here you should get 4 points which represent the corners of your card. You can adjust the parameter epsilon to end up with exactly 4 co-ordinates.
Calculate Coordinates of the corresponding quadrangle vertices in the destination image
This can easily be found by calculating the bounding rectangle of the largest contour.
In below image the red rectangle represent source points and green for destination points.
Adjust the co-ordinate order and apply the perspective transform
Here I manually adjust the co-ordinate order; you can use some sorting algorithm.
Then calculate the transformation matrix and apply warpPerspective
See the final result
Code
Mat src = imread("card.jpg");
Mat thr;
cvtColor(src, thr, CV_BGR2GRAY);
threshold(thr, thr, 70, 255, CV_THRESH_BINARY);

vector< vector<Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
int largest_contour_index = 0;
int largest_area = 0;

Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0)); // create destination image
findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image
for (int i = 0; i < contours.size(); i++) {
    double a = contourArea(contours[i], false); // Find the area of contour
    if (a > largest_area) {
        largest_area = a;
        largest_contour_index = i; // Store the index of largest contour
    }
}

drawContours(dst, contours, largest_contour_index, Scalar(255, 255, 255), CV_FILLED, 8, hierarchy);

vector<vector<Point> > contours_poly(1);
approxPolyDP(Mat(contours[largest_contour_index]), contours_poly[0], 5, true);
Rect boundRect = boundingRect(contours[largest_contour_index]);

if (contours_poly[0].size() == 4) {
    std::vector<Point2f> quad_pts;
    std::vector<Point2f> squre_pts;
    quad_pts.push_back(Point2f(contours_poly[0][0].x, contours_poly[0][0].y));
    quad_pts.push_back(Point2f(contours_poly[0][1].x, contours_poly[0][1].y));
    quad_pts.push_back(Point2f(contours_poly[0][3].x, contours_poly[0][3].y));
    quad_pts.push_back(Point2f(contours_poly[0][2].x, contours_poly[0][2].y));
    squre_pts.push_back(Point2f(boundRect.x, boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x, boundRect.y + boundRect.height));
    squre_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y + boundRect.height));

    Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
    Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
    warpPerspective(src, transformed, transmtx, src.size());

    Point P1 = contours_poly[0][0];
    Point P2 = contours_poly[0][1];
    Point P3 = contours_poly[0][2];
    Point P4 = contours_poly[0][3];
    line(src, P1, P2, Scalar(0, 0, 255), 1, CV_AA, 0);
    line(src, P2, P3, Scalar(0, 0, 255), 1, CV_AA, 0);
    line(src, P3, P4, Scalar(0, 0, 255), 1, CV_AA, 0);
    line(src, P4, P1, Scalar(0, 0, 255), 1, CV_AA, 0);
    rectangle(src, boundRect, Scalar(0, 255, 0), 1, 8, 0);
    rectangle(transformed, boundRect, Scalar(0, 255, 0), 1, 8, 0);

    imshow("quadrilateral", transformed);
    imshow("thr", thr);
    imshow("dst", dst);
    imshow("src", src);
    imwrite("result1.jpg", dst);
    imwrite("result2.jpg", src);
    imwrite("result3.jpg", transformed);
    waitKey();
}
else
    cout << "Make sure that you are getting 4 corners using approxPolyDP..." << endl;
This typically happens when you rely on somebody else's code to solve your particular problem instead of adapting the code. Look at the processing stages and also the difference between their image and yours (it is, by the way, a good idea to start with their image and make sure the code works):
Get the edge map. - will probably work since your edges are fine
Detect lines with Hough transform. - fail since you have lines not only on the contour but also inside of your card. So expect a lot of false alarm lines
Get the corners by finding intersections between lines. - fail for the above mentioned reason
Check if the approximate polygonal curve has 4 vertices. - fail
Determine top-left, bottom-left, top-right, and bottom-right corner. - fail
Apply the perspective transformation. - fail completely
To fix your problem you have to ensure that only lines on the periphery are extracted. If you always have a dark background you can use this fact to discard the lines with other contrasts/polarities. Alternatively you can extract all the lines and then select the ones that are closest to the image boundary (if your background doesn't have lines).
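For instance, a rough sketch of that second idea, keeping only the Hough line segments that are not entirely inside an inner rectangle of the image; the 15% margin and the HoughLinesP parameters here are placeholder values:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Keep only the line segments near the image periphery.
std::vector<Vec4i> peripheralLines(const Mat& edges, double margin = 0.15)
{
    std::vector<Vec4i> lines, kept;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 50, 10);

    int mx = cvRound(edges.cols * margin);
    int my = cvRound(edges.rows * margin);
    Rect inner(mx, my, edges.cols - 2 * mx, edges.rows - 2 * my);

    for (const Vec4i& l : lines) {
        Point p1(l[0], l[1]), p2(l[2], l[3]);
        // A segment lying entirely inside the inner rectangle is probably
        // card artwork rather than the card border - drop it.
        if (!(inner.contains(p1) && inner.contains(p2)))
            kept.push_back(l);
    }
    return kept;
}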