Line Detection with HoughLinesP - C++

With OpenCV and C++ I'm trying to detect the lines of a street from an input video. I'm using HoughLinesP and I'd like to detect ONLY the lines that delimit the street, so not, for example, horizontal or vertical ones.
Using
HoughLinesP(dst, lines, 1, CV_PI/180, 8, 80, 3),
I detect all the lines, so I changed the double theta parameter (CV_PI/180) to this:
HoughLinesP(dst, lines, 10*CV_PI/180<=theta<=80*CV_PI/180 & 110*CV_PI/180<=theta<=170*CV_PI/180, 8, 80, 3);
But it doesn't work: the output shows only the video, without any lines.

The fourth argument to HoughLinesP is not an angle that tells OpenCV to detect only lines with a given orientation relative to the OX axis (i.e. in polar coordinates). Rather, the value passed in is the angular resolution: the algorithm iterates from 0 to PI (or 2*PI, depending on how it is implemented) using this angle as the iteration step, e.g. iterating from 0 to PI by PI/180 takes 180 iterations while HoughLinesP tries to find a line for a given (r, alpha).
A solution for finding lines whose polar angle lies in a given range (not the most robust one) is to detect all lines with HoughLinesP, then iterate over them, calculate each line's angle coordinate, and keep only those whose polar angle falls within the given range.
EDIT (a draft of the algorithm using C++11):
vector<Vec4i> detectedLines;
HoughLinesP(dst, detectedLines, 1, CV_PI/180, 50, 50, 10);
const float downAngleRange = 30*CV_PI/180;
const float upAngleRange = 60*CV_PI/180;
vector<Vec4i> filteredLines(detectedLines.size());
auto it = copy_if(detectedLines.begin(), detectedLines.end(),
                  filteredLines.begin(),
                  [=](const Vec4i &v) {
                      float angle = calculateAnglePolarCord(v);
                      return angle >= downAngleRange && angle <= upAngleRange;
                  });
filteredLines.resize(std::distance(filteredLines.begin(), it));
Here calculateAnglePolarCord is a helper which, for a given line, calculates its second (angle) polar coordinate.
Remember to use a sound floating-point comparison technique.
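A sketch of what calculateAnglePolarCord could look like (this helper is not an OpenCV function; the use of atan2 on the segment endpoints is my assumption, and note that a segment's direction angle differs from the Hough normal angle by 90 degrees, so adjust the range accordingly):
#include <cmath>
#include <opencv2/opencv.hpp>

// Sketch only: angle of a HoughLinesP segment, folded into [0, PI).
float calculateAnglePolarCord(const cv::Vec4i &v)
{
    float dx = static_cast<float>(v[2] - v[0]);
    float dy = static_cast<float>(v[3] - v[1]);
    float angle = std::atan2(dy, dx);            // (-PI, PI]
    if (angle < 0.0f)
        angle += static_cast<float>(CV_PI);      // fold into [0, PI)
    return angle;
}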

Related

Draw all lines obtained with HoughLines in OpenCV

I'm using OpenCV 3.2.
I'd like to extract and draw all lines in this image.
For this, I first obtain the contours of the image using the Canny algorithm, with a double threshold of 100 (low) and 200 (high).
Mat image = cv::imread(<image_path>, cv::IMREAD_GRAYSCALE);
cv::Mat contours;
cv::Canny(image, contours, 100, 200);
Then, I call the HoughLines function with a resolution of 1 pixel and π/45 radians. I only want lines that receive at least 60 votes in the accumulator (roughly, lines supported by at least 60 contour pixels).
std::vector<cv::Vec2f> lines;
cv::HoughLines(contours, lines, 1, CV_PI/45, 60);
This returns a vector lines with the p (rho) and θ (theta) parameters of the desired lines in Hough space. As we know, the line going through a contour pixel (x_i, y_i) is:
p = x_i cos(θ) + y_i sin(θ)
We know p and θ, so we know all the pixels in this line. Two easy points to calculate are A with x_i = 0 and B with y_i = 0.
A = (0, p / sin(θ))
B = (p / cos(θ), 0)
Let's draw them with the line function in blue color.
cv::cvtColor(image, image, cv::COLOR_GRAY2BGR);
for (unsigned int i = 0; i < lines.size(); ++i) {
    float p = lines[i][0];
    float theta = lines[i][1];
    cv::Point a(0, static_cast<int>(p / std::sin(theta)));
    cv::Point b(static_cast<int>(p / std::cos(theta)), 0);
    cv::line(image, a, b, cv::Scalar(255, 0, 0));
}
The result is that it draws only 6 lines out of the 14 obtained. As you can see, only the lines that intersect row 0 and column 0 of the image are drawn, in other words, the lines whose A and B points lie on the image boundary. For the rest of the lines, these points fall outside the image.
How can I draw all the lines in an easy way? I could compute all the pixels of each obtained line and draw them (we know them), but I'd like to do it with minimal code, using the OpenCV API.
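For reference, a common approach (this is the standard technique from the OpenCV tutorials, not something stated in the question): instead of intersecting with row 0 and column 0, take the point on the line closest to the origin, (p cos(θ), p sin(θ)), and extend it in both directions along the line's direction vector (-sin(θ), cos(θ)) by a length larger than the image diagonal; cv::line clips the segment to the image automatically. A sketch, where 1000 px is an assumed bound on the image size:
// Sketch: draw every (rho, theta) line by extending from its closest
// point to the origin.
for (const cv::Vec2f &l : lines) {
    float p = l[0], theta = l[1];
    double ct = std::cos(theta), st = std::sin(theta);
    cv::Point2d p0(p * ct, p * st);  // closest point of the line to the origin
    cv::Point a(cvRound(p0.x - 1000 * st), cvRound(p0.y + 1000 * ct));
    cv::Point b(cvRound(p0.x + 1000 * st), cvRound(p0.y - 1000 * ct));
    cv::line(image, a, b, cv::Scalar(255, 0, 0));
}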

CGAL Intersection Circle and Vertical Lines (not segments)

In CGAL I need to compute the exact intersection points between a set of lines and a set of circles. Starting from the circles (which can have an irrational radius but a rational squared_radius), I should compute the vertical lines passing through the x_extremal_points of each circle (not segments but lines) and calculate the intersection points of each circle with each line.
I’m using CircularKernel and Circle_2 for the circles and Line_2 for the lines.
Here’s an example of how I compute the circles and the lines and how I check if they intersect.
#include <iostream>
#include <CGAL/Exact_circular_kernel_2.h>
#include <CGAL/Circular_kernel_intersections.h>

// Assumed setup (the original post does not show its includes/typedefs):
typedef CGAL::Exact_circular_kernel_2          Circular_k;
typedef CGAL::Point_2<Circular_k>              Point_2;
typedef CGAL::Circle_2<Circular_k>             Circle_2;
typedef CGAL::Line_2<Circular_k>               Line_2;
typedef CGAL::Circular_arc_point_2<Circular_k> Circular_arc_point_2;

int main()
{
    Point_2 a = Point_2(250.5, 98.5);
    Point_2 b = Point_2(156, 139);
    // Radius is half the distance ab, so the squared radius is dist^2/4
    Circular_k::FT aRad = CGAL::squared_distance(a, b);
    Circle_2 circle_a = Circle_2(a, aRad/4);
    Circular_arc_point_2 a_left_point = CGAL::x_extremal_point(circle_a, false);
    Circular_arc_point_2 a_right_point = CGAL::x_extremal_point(circle_a, true);
    // For example, use only the left extremal point of circle a
    CGAL::Bbox_2 a_left_point_bb = a_left_point.bbox();
    Line_2 a_left_line = Line_2(Point_2(a_left_point_bb.xmin(), a_left_point_bb.ymin()),
                                Point_2(a_left_point_bb.xmin(), a_left_point_bb.ymax()));
    if ( do_intersect(a_left_line, circle_a) ) {
        std::cout << "intersect";
    }
    else {
        std::cout << " do not intersect ";
    }
    return 0;
}
This flow raises the following exception:
CGAL error: precondition violation!
Expression : y != 0
File : c:\dev\cgal-4.7\include\cgal\gmp\gmpq_type.h
Line : 371
Explanation:
Refer to the bug-reporting instructions at http://www.cgal.org/bug_report.html
I can't figure out how to calculate the intersection points.
Also, is there a better way to compute the lines? I know about the x_extremal_point function, but it returns a Circular_arc_point_2, and I'm not able to construct a vertical line passing through it directly without using a bounding box.
In your code, you seem to compute the intersection of a circle with the vertical line that passes through the extremal point of that same circle (leaving the bounding box aside). Well, then the (double) intersection is the extremal point itself...
More globally, you say in your introduction that you want to compute exact intersections. Then you should certainly not use bounding boxes, which by definition introduce an approximation.
If I understand your text correctly,
* for testing the intersection of your vertical lines with the other circles, you don't need to construct the lines, you only need to compare the abscissae of the extremal points of two circles, which you can do with the CGAL circular kernel.
* for computing the intersection of a vertical line with non-rational coefficients (since its equation is of the form x = ±sqrt(r)) with another circle, the CGAL circular kernel will not give you a pre-cooked solution. That kernel will help, but you must still compute a few things by hand.
If you don't want to bother, then you can also just take a standard CGAL kernel with CORE::Expr as the underlying number type. It can do "anything", but it will be slower.
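As a sketch of the first bullet (comparing abscissae without constructing any line), assuming the typedefs from the question's code above and a hypothetical second circle circle_b:
// Sketch: does the vertical line through the left x-extremal point of
// circle_a cross the x-range of circle_b? Exact, no bounding boxes.
Circular_arc_point_2 a_left  = CGAL::x_extremal_point(circle_a, false);
Circular_arc_point_2 b_left  = CGAL::x_extremal_point(circle_b, false);
Circular_arc_point_2 b_right = CGAL::x_extremal_point(circle_b, true);

bool crosses_b = CGAL::compare_x(a_left, b_left)  != CGAL::SMALLER
              && CGAL::compare_x(a_left, b_right) != CGAL::LARGER;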
For efficiency, you should look at the underlying 1D problem: projecting the lines and the circles onto the X axis, you have a set of points and a set of intervals [Xc-R, Xc+R].
If the L points are sorted increasingly, you can locate the left bound of an interval in O(log L) time by dichotomy, then scan the list of points up to the right bound. This gives O(C log L + I) behavior overall (for C circle intervals), where I is the number of intersections reported.
I guess that with a merge-like process using an active list, if the interval bounds are also sorted, you can lower this to O(L + C + I).
The extension to 2D is elementary.
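A minimal sketch of the 1D step under those assumptions (the container layout and names are mine, not from the answer):
#include <algorithm>
#include <utility>
#include <vector>

// For each interval [xc - r, xc + r], binary-search its left bound in the
// sorted line abscissae, then scan right: O(C log L + I) overall.
void report_stabbed(const std::vector<double> &xs,                       // sorted
                    const std::vector<std::pair<double, double>> &circles, // (xc, r)
                    std::vector<std::pair<size_t, size_t>> &hits)        // (line, circle)
{
    for (size_t c = 0; c < circles.size(); ++c) {
        double lo = circles[c].first - circles[c].second;
        double hi = circles[c].first + circles[c].second;
        auto it = std::lower_bound(xs.begin(), xs.end(), lo);
        for (; it != xs.end() && *it <= hi; ++it)
            hits.emplace_back(it - xs.begin(), c);
    }
}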

Hough Line Transform - artifacts at 45 degree angle

I implemented the Hough Line Transform in OpenCV (C++) and I get strange artifacts in the Hough space. The following picture shows the Hough space. The distance rho is depicted in the rows, while the 180 columns represent the angle from 0 to 179 degrees. If you zoom in on columns 45 and 135, you see a vertical line with alternating dark and bright pixels.
http://imgur.com/NDtMn6S
For higher thresholds the lines of the fence are detected fine, but when I lower the threshold the artifacts show up as 45° or 135° rotated lines in the final picture:
Detected lines for medium threshold
At first I thought it was a mistake in my implementation of the Hough Lines method, but I get similar lines for medium thresholds using OpenCV's Hough Lines function. I also encounter the same problem when using Canny instead of Sobel.
So the question is: why do I get these artifacts and how can I get rid of them? I wasn't able to find anything about this and any help would be appreciated.
This is the code I used with the OpenCV Hough Lines method:
// read in input image and convert to grayscale
Mat frame = imread("fence.png", CV_LOAD_IMAGE_COLOR);
Mat final_out;
frame.copyTo(final_out);
Mat img, gx, gy, mag, angle;
cvtColor(frame, img, CV_BGR2GRAY);
// get the thresholded magnitude image
Sobel(img, gx, CV_64F, 1, 0);
Sobel(img, gy, CV_64F, 0, 1);
cartToPolar(gx, gy, mag, angle);
normalize(mag, mag, 0, 255, NORM_MINMAX);
mag.convertTo(mag, CV_8U);
threshold(mag, mag, 55, 255.0, THRESH_BINARY);
// apply the hough lines transform and draw the lines
vector<Vec2f> lines;
HoughLines(mag, lines, 1, CV_PI / 180, 240);
for (size_t i = 0; i < lines.size(); i++)
{
    float rho = lines[i][0], theta = lines[i][1];
    Point pt1, pt2;
    pt1.x = 0;
    pt1.y = cvRound((rho - pt1.x * cos(theta)) / sin(theta));
    pt2.x = mag.cols;
    pt2.y = cvRound((rho - pt2.x * cos(theta)) / sin(theta));
    line(final_out, pt1, pt2, Scalar(0, 0, 255), 1, CV_AA);
}
// show the image
imshow("final_image", final_out);
waitKey();
Answering the question: you can't get rid of such artifacts - they are mathematical in nature, due to the discrete nature of the image and the orthogonality of the pixel grid. The only way is to exclude the exact 45-degree angles from the analysis.
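A sketch of that exclusion, applied to the output of HoughLines from the code above (the one-degree tolerance is an assumption; assumes <cmath>):
// Sketch: drop lines whose theta is within ~1 degree of 45 or 135 degrees.
const float tol = (float)(CV_PI / 180.0);               // assumed tolerance
vector<Vec2f> kept;
for (size_t i = 0; i < lines.size(); i++) {
    float theta = lines[i][1];
    bool near45  = fabs(theta - (float)(CV_PI / 4))     < tol;
    bool near135 = fabs(theta - (float)(3 * CV_PI / 4)) < tol;
    if (!near45 && !near135)
        kept.push_back(lines[i]);
}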
I found the source - the bright pixels of the anomaly are produced by the following issue:
Red dots - the exactly-45-degree bright anomaly - you can see they are doubled, making a stairs pattern, which doubles the number of pixels involved in the accumulation.
Blue dots - the exactly-45-degree dim anomaly - making a chess-board pattern.
Green dots - a 44-degree line - you can see it alternates between the doubling and chess-board patterns, which moderates the anomaly.
If you look at the whole Hough transform matrix, you will see the brightness slowly shifting across the picture, reflecting how this alternation ratio slowly changes with angle. However, due to the nature of the pixel grid, at exactly 45 degrees the anomaly becomes very acute. I don't know how to deal with it yet...
Stumbled across this; maybe it will be useful to future readers.
The image is inverted: the algorithm accumulates the white pixels, of which there are obviously more along the diagonals of the image. The lines you are looking for are black, which means they are zero-valued and not considered.
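If your input is in that state, the usual fix (a sketch, reusing the variable names from the code above) is to invert the binary image before the transform, so the zero-valued lines become the accumulated foreground:
bitwise_not(mag, mag);                        // black lines -> white foreground
HoughLines(mag, lines, 1, CV_PI / 180, 240);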

Pass vector<Point2f> to getAffineTransform

I'm trying to calculate the affine transformation between two consecutive frames of a video. So I have found the features and got the matched points in the two frames.
FastFeatureDetector detector;
vector<KeyPoint> frame1_features;
vector<KeyPoint> frame2_features;
detector.detect(frame1, frame1_features, Mat());
detector.detect(frame2, frame2_features, Mat());
vector<Point2f> features1; // matched points in 1st image
vector<Point2f> features2; // matched points in 2nd image
for (int i = 0; i < frame2_features.size() && i < frame1_features.size(); ++i)
{
    double diff;
    diff = pow((frame1.at<uchar>(frame1_features[i].pt) - frame2.at<uchar>(frame2_features[i].pt)), 2);
    if (diff < SSD) // SSD is sum of squared differences between two image regions
    {
        features1.push_back(frame1_features[i].pt);
        features2.push_back(frame2_features[i].pt);
    }
}
Mat affine = getAffineTransform(features1, features2);
The last line gives the following error:
OpenCV Error: Assertion failed (src.checkVector(2, CV_32F) == 3 && dst.checkVector(2, CV_32F) == 3) in getAffineTransform
Can someone please tell me how to calculate the affine transformation with a set of matched points between the two frames?
Your problem is that you need exactly 3 point correspondences between the images.
If you have more than 3 correspondences, you should optimize the transformation to fit all of them (except for outliers).
Therefore, I recommend taking a look at the findHomography() function (http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findhomography).
It calculates a perspective transformation between the correspondences and needs at least 4 point pairs.
Because you have more than 3 correspondences, and affine transformations are a subset of perspective transformations, this should be appropriate for you.
Another advantage of the function is that it can detect outliers (correspondences that do not fit the transformation supported by the other points), and these are not considered in the transformation calculation.
To sum up, use findHomography(features1, features2, CV_RANSAC) instead of getAffineTransform(features1, features2).
I hope I could help you.
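A minimal usage sketch of that suggestion (the reprojection threshold of 3.0 and the inlier mask are optional; names follow the question's code):
// Sketch: robust fit with RANSAC; the mask marks inlier correspondences.
vector<uchar> inlierMask;
Mat H = findHomography(features1, features2, CV_RANSAC, 3.0, inlierMask);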
As I read from your code and the assertion, there is something wrong with your vectors.
int checkVector(int elemChannels, int depth)
This function returns N if the matrix is 1-channel (N x ptdim) or ptdim-channel (1 x N) or (N x 1); a negative number otherwise.
And according to the documentation (http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#getaffinetransform), getAffineTransform "calculates an affine transform from three pairs of the corresponding points."
You seem to have more or fewer than three points in one or both of your vectors.
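For completeness, a sketch of a call that satisfies the assertion, using exactly three correspondences (the coordinates are made up for illustration):
// getAffineTransform requires exactly three CV_32F point pairs.
vector<Point2f> src, dst;
src.push_back(Point2f(0, 0));     dst.push_back(Point2f(5, 3));
src.push_back(Point2f(100, 0));   dst.push_back(Point2f(104, 2));
src.push_back(Point2f(0, 100));   dst.push_back(Point2f(6, 102));
Mat affine = getAffineTransform(src, dst);   // 2x3 CV_64F matrix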

How to detect image gradient or normal using OpenCV

I wanted to detect ellipses in an image. Since I was learning Mathematica at that time, I asked a question here and got a satisfactory result from the answer below, which used the RANSAC algorithm to detect ellipses.
However, recently I needed to port it to OpenCV, but some functions only exist in Mathematica. One of the key functions is "GradientOrientationFilter".
Since a general ellipse has five parameters, I need to sample five points to determine one. However, more sampling points means a lower chance of a good guess, which leads to a lower success rate in ellipse detection. Therefore, the Mathematica answer adds another condition: the gradient of the image must be parallel to the gradient of the ellipse equation. With this approach, we only need three points to determine an ellipse, using least squares. The result is quite good.
However, when I try to find the image gradient using the Sobel or Scharr operator in OpenCV, it is not accurate enough, which always leads to bad results.
How can I calculate the gradient or the tangent of an image accurately? Thanks!
Result with gradient, three points
Result without gradient, five points
----------updated----------
I did some edge detection and median blurring beforehand and drew the result on the edge image. My original test image is like this:
In general, my final goal is to detect the ellipse in a scene or on an object. Something like this:
That's why I chose to use RANSAC to fit the ellipse from edge points.
As for your final goal, you may try
findContours and fitEllipse in OpenCV.
The pseudo code will be:
1) some image processing
2) find all contours
3) fit each contour with fitEllipse
Here is part of the code I used before:
// ... image processing ... you get a bwimage
vector<vector<Point> > contours;
findContours(bwimage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for (size_t i = 0; i < contours.size(); i++)
{
    size_t count = contours[i].size();
    if (count < 6)
        continue;   // fitEllipse needs at least 5 points
    Mat pointsf;
    Mat(contours[i]).convertTo(pointsf, CV_32F);
    RotatedRect box = fitEllipse(pointsf);
    /* You can put some limitation about size and aspect ratio here */
    if (box.size.width > 20 &&
        box.size.height > 20 &&
        box.size.width < 80 &&
        box.size.height < 80)
    {
        if (MAX(box.size.width, box.size.height) > MIN(box.size.width, box.size.height) * 30)
            continue;
        //drawContours(SrcImage, contours, (int)i, Scalar::all(255), 1, 8);
        ellipse(SrcImage, box, Scalar(0, 0, 255), 1, CV_AA);
        ellipse(SrcImage, box.center, box.size * 0.5f, box.angle, 0, 360, Scalar(200, 255, 255), 1, CV_AA);
    }
}
imshow("result", SrcImage);
If you focus on ellipses (and no other shapes), you can treat the values of the pixels of the ellipse as the masses of points.
Then you can calculate the moments of inertia Ixx, Iyy, Ixy to find the angle theta which rotates a general ellipse back to the canonical form (X-Xc)^2/a^2 + (Y-Yc)^2/b^2 = 1.
Then you can find Xc and Yc from the center of mass.
Then you can find a and b from min X and min Y.
--------------- update -----------
This method applies to filled ellipses too.
More than one ellipse in a single image will fail unless you segment them first.
Let me explain more,
I will use C to represent cos(theta) and S to represent sin(theta)
After rotation to the canonical form, the new coordinates are [eq0] X = xC - yS and Y = xS + yC, where x and y are the original positions.
The rotation will give you min IYY.
[eq1]
IYY = Sum(m*Y*Y) = Sum{m*(xS+yC)^2} = Sum{m*(xxSS + yyCC + 2xySC)} = Ixx*S^2 + Iyy*C^2 + 2*Ixy*S*C
where Ixx = Sum(m*x*x), Iyy = Sum(m*y*y), Ixy = Sum(m*x*y).
For min IYY, d(IYY)/d(theta) = 0, that is
2*Ixx*S*C - 2*Iyy*S*C + 2*Ixy*(CC - SS) = 0
(Ixx - Iyy)/Ixy = (SS - CC)/(SC) = S/C - C/S = Z - 1/Z, with Z = tan(theta)
While programming, the LHS is just a number, let's call it N:
Z^2 - N*Z - 1 = 0
So there are two roots Z1 and Z2, hence two thetas; one minimizes IYY and the other maximizes it. (Note that Z1*Z2 = -1, so the two directions are perpendicular.)
----------- pseudo code --------
Compute Ixx, Iyy, Ixy for the hollow or filled ellipse.
Solve Z^2 - N*Z - 1 = 0, then compute theta1 = atan(Z1) and theta2 = atan(Z2).
Put these two thetas into eq1 and find which gives the smaller IYY. That is your theta.
Go back to the non-zero pixels and transform them to new X and Y using the theta you found.
Find the center of mass Xc, Yc and min X and min Y by sort().
-------------- by hand -----------
If you need the original equation of the ellipse,
just substitute [eq0] into the canonical form.
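In OpenCV, cv::moments gives these sums directly (the central second moments mu20, mu02, mu11 play the roles of Ixx, Iyy, Ixy above), so a sketch of the angle step could be as follows. This assumes the single-ellipse image bwimage from the earlier snippet; the closed-form atan2 solves the same quadratic, and the perpendicular root is theta + 90 degrees:
// Sketch: orientation from central moments of a single-ellipse image.
Moments m = moments(bwimage, false);        // false: use gray values as mass
double xc = m.m10 / m.m00;                  // center of mass
double yc = m.m01 / m.m00;
double theta = 0.5 * atan2(2.0 * m.mu11, m.mu20 - m.mu02);  // major-axis angle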
You're using terms in an unusual way.
Normally for images, the term "gradient" is interpreted as if the image were a mathematical function f(x, y). This gives us a (df/dx, df/dy) vector at each point.
Yet you're looking at the image as if it were a function y = f(x), whose gradient would be df(x)/dx.
Now, if you look at your image, you'll see that the two interpretations are definitely related. Your ellipse is drawn as a set of contrasting pixels, and as a result there are two sharp gradients in the image - the inner and the outer. These of course correspond to the two normal vectors, and therefore point in opposite directions.
Also note that your image has pixels, so the gradient is pixelated too. The way your ellipse is drawn, with a single-pixel width, means that the local gradient takes on only values that are multiples of 45 degrees:
▄▄ ▄▀ ▌ ▀▄