Draw vertical HoughLines up to certain intersection points - C++

My idea is to draw each vertical line (detected after running Canny) only from its intersection point with one diagonal line to its intersection point with another diagonal line (each such point is the intersection of a vertical and a diagonal line). As a reference, here is an image in which the red vertical (Hough) lines should be drawn:
So far I only detect all vertical lines, with this implementation:
int main(int argc, char *argv[]) {
    // src: the input image (loaded elsewhere)
    std::vector<cv::Point> diagonalLine = DiagonalLines::diagonalLines(src);
    Mat wdst, cwdst, contRegion;
    vector<Vec4i> verticalLines;
    double maxLineGap = 200.0;
    int threshold = 100;
    std::vector<cv::Vec4i> elemLinesCur;
    cv::Scalar mu, sigma;
    meanStdDev(src, mu, sigma);
    Canny(src, wdst, mu.val[0] - sigma.val[0], mu.val[0] + sigma.val[0], 3, false);
    cvtColor(wdst, cwdst, CV_GRAY2BGR);
    // an angle resolution of CV_PI / 2 restricts detections to vertical/horizontal lines
    HoughLinesP(wdst, verticalLines, 1, CV_PI / 2, threshold, 50, maxLineGap);
    cv::Vec4i current, previous;
    cv::Point pt1, pt2, ppt1, ppt2;
    for (size_t i = 1; i < verticalLines.size(); i++) {
        current = verticalLines[i];
        pt1 = cv::Point(current[0], current[1]);
        pt2 = cv::Point(current[2], current[3]);
        previous = verticalLines[i - 1];
        ppt1 = cv::Point(previous[0], previous[1]);
        ppt2 = cv::Point(previous[2], previous[3]);
        if (diagonalLine[i - 1].y > pt2.y && diagonalLine[i].y < pt1.y) {
            std::cout << "Intersection: " << pt2.x << "\n";
        }
        double distanceBetweenPointsX = abs(pt1.x - ppt1.x) * sqrt(2);
        if (distanceBetweenPointsX >= 12) {
            elemLinesCur.push_back(current);
            // draw only vertical lines (90 degrees); compare with a tolerance,
            // since the computed angle is rarely exactly +/-90
            double angle = atan2(ppt2.y - ppt1.y, ppt2.x - ppt1.x) * 180.0 / CV_PI;
            if (fabs(fabs(angle) - 90.0) < 1e-6) {
                line(cwdst, pt1, pt2, cv::Scalar(0, 0, 255), 2, CV_AA);
            }
            //do some stuff
        }
    }
}
...and here is a method that detects only diagonal lines (it looks similar to the one above):
std::vector<cv::Point> diagonalLines(cv::Mat src) {
    std::vector<cv::Point> hitPoint;
    cv::Mat ddst, cddst;
    std::vector<cv::Vec4i> allLines;
    Scalar mu, sigma;
    meanStdDev(src, mu, sigma);
    Canny(src, ddst, mu.val[0] - sigma.val[0], mu.val[0] + sigma.val[0], 3, false);
    cvtColor(ddst, cddst, CV_GRAY2BGR);
    HoughLinesP(ddst, allLines, 1, CV_PI / 180, 100, 50, 10);
    cv::Point pt1, pt2;
    for (size_t i = 0; i < allLines.size(); i++) {
        cv::Vec4i current = allLines[i];
        pt1 = cv::Point(current[0], current[1]);
        pt2 = cv::Point(current[2], current[3]);
        double angle = atan2(pt2.y - pt1.y, pt2.x - pt1.x) * 180.0 / CV_PI;
        // keep only non-vertical (diagonal) segments, with a tolerance on the angle
        if (fabs(fabs(angle) - 90.0) > 1e-6) {
            //line(cddst, pt1, pt2, Scalar(0, 0, 255), 2, CV_AA);
            hitPoint.push_back(pt1);
            hitPoint.push_back(pt2);
        }
    }
    return hitPoint;
}
What I know:
I should calculate all those intersection points. I tried it in if (diagonalLine[i - 1].y > pt2.y && diagonalLine[i].y < pt1.y), but I don't know the further steps. Could someone help me? Thank you in advance!

The OpenCV function line() accepts endpoints as arguments, so all you need to do is calculate the intersections and use those intersection points as the endpoints of the vertical lines. You can calculate the intersections directly from the endpoints you get from HoughLinesP() using determinants.
In Python, a function to compute the intersection points might look like
def find_intersection(line1, line2):
    # extract points
    x1, y1 = line1[0]
    x2, y2 = line1[1]
    x3, y3 = line2[0]
    x4, y4 = line2[1]
    # compute determinant (assumes the lines are not parallel,
    # otherwise denom is zero and there is no unique intersection)
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    Px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    Py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (int(Px), int(Py))
Let's show how you might use this. Suppose your image looked like this:
import cv2
import numpy as np

# draw image and lines
img = np.ones((500, 500, 3)) * 255
diag1 = [(0, 0), (499, 100)]
diag2 = [(0, 499), (499, 399)]
vert1 = [(100, 0), (100, 499)]
vert2 = [(400, 0), (400, 499)]
cv2.line(img, diag1[0], diag1[1], color=[0, 0, 255])
cv2.line(img, diag2[0], diag2[1], color=[0, 0, 255])
cv2.line(img, vert1[0], vert1[1], color=[0, 255, 0])
cv2.line(img, vert2[0], vert2[1], color=[0, 255, 0])
To cut them off at the intersections, simply use the function to find those points and draw each vertical line only between its intersection points with the two diagonal lines.
# get intersection points
vert1_intersect = [find_intersection(diag1, vert1), find_intersection(diag2, vert1)]
vert2_intersect = [find_intersection(diag1, vert2), find_intersection(diag2, vert2)]
# draw vertical lines from intersection points
img = np.ones((500, 500, 3)) * 255
diag1 = [(0, 0), (499, 100)]
diag2 = [(0, 499), (499, 399)]
vert1 = [(100, 0), (100, 499)]
vert2 = [(400, 0), (400, 499)]
cv2.line(img, diag1[0], diag1[1], color=[0, 0, 255])
cv2.line(img, diag2[0], diag2[1], color=[0, 0, 255])
cv2.line(img, vert1_intersect[0], vert1_intersect[1], color=[0, 255, 0])
cv2.line(img, vert2_intersect[0], vert2_intersect[1], color=[0, 255, 0])
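Since the question itself is in C++, here is a rough equivalent of the same determinant formula (a sketch; the cv::Vec4i layout (x1, y1, x2, y2) is what HoughLinesP returns, and the helper name findIntersection is my own):
#include <opencv2/opencv.hpp>

// Sketch of the determinant-based intersection in C++.
// l1/l2 hold segment endpoints as (x1, y1, x2, y2), as returned by HoughLinesP.
// Returns false if the lines are (nearly) parallel.
static bool findIntersection(const cv::Vec4i& l1, const cv::Vec4i& l2, cv::Point2d& out)
{
    double x1 = l1[0], y1 = l1[1], x2 = l1[2], y2 = l1[3];
    double x3 = l2[0], y3 = l2[1], x4 = l2[2], y4 = l2[3];
    double denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
    if (std::abs(denom) < 1e-9)
        return false; // parallel: no unique intersection
    out.x = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom;
    out.y = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom;
    return true;
}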

Related

How to imRotate with OpenCV, C++, and get the exact same result?

I tried everything available here related to this problem, but none of it gave me exactly the same results, so I'd like to know whether a solution exists, and if so, how I can achieve it.
Let's talk about a simple example and hopefully find a way to get the same result in Matlab as well as in OpenCV:
Matlab:
test = [ 10 20 10 ; 20 10 10 ; 30 30 30]
im_rot = imrotate(double(test), -45, 'bilinear', 'crop');
Result:
im_rot =
11.7157 14.1421 11.7157
26.2132 10.0000 12.0711
17.5736 24.1421 5.8579
OpenCV:
double data[9]{ 10, 20, 10, 20, 10, 10, 30, 30, 30 };
Mat test = Mat(Size(3, 3), CV_64F, data);
What lines of code can get the exact same result as the one above?
Edit:
Tried the following:
void matlabImrotate(Mat& src, double angle, int interpolationMethod, Mat& dst)
{
    // src: https://stackoverflow.com/questions/38715363/how-to-implement-imrotate-of-matlab-in-opencv
    // Special Cases
    if (fmod(angle, 360.0) == 0.0)
        dst = src;
    else {
        Point2f center(src.cols / 2.0F, src.rows / 2.0F);
        Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
        cout << "center: " << center << endl;
        // determine bounding rectangle
        Rect bbox = RotatedRect(center, src.size(), angle).boundingRect();
        // adjust transformation matrix
        // cout << "bbox.size: " << bbox.size() << endl;
        rot.at<double>(0, 2) += bbox.width / 2.0 - center.x;
        rot.at<double>(1, 2) += bbox.height / 2.0 - center.y;
        warpAffine(src, dst, rot, bbox.size(), interpolationMethod);
    }
}
Mat res;
matlabImrotate(test, -45, INTER_LINEAR, res);
Result:
[0, 0, 0, 1.40625, 0, 0;
0, 0, 6.6796875, 11.69921875, 6.6796875, 0;
0, 8.7890625, 24.53125, 13.41796875, 14.53125, 2.9296875;
0, 2.8125, 23.4375, 20, 7.8125, 0.9375;
0, 0, 2.8125, 18.310546875, 1.875, 0;
0, 0, 0, 0.263671875, 0, 0]
Also tried:
void matlabImrotate2(Mat& src, double angle, int interpolationMethod, Mat& dst, int border = 0)
{
    // src: https://stackoverflow.com/questions/14870089/how-to-imrotate-with-opencv-2-4-3
    Mat bordered_source;
    int top, bottom, left, right;
    top = bottom = left = right = border;
    copyMakeBorder(src, bordered_source, top, bottom, left, right, BORDER_CONSTANT, cv::Scalar());
    Point2f src_center(bordered_source.cols / 2.0F, bordered_source.rows / 2.0F);
    Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0);
    warpAffine(bordered_source, dst, rot_mat, bordered_source.size());
}
matlabImrotate2(test, -45, INTER_LINEAR, res);
Result:
[9.375, 17.28515625, 17.28515625;
23.4375, 21.09375, 11.09375;
2.8125, 23.4375, 15.625]
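For the 'crop' behaviour specifically, here is a minimal sketch. My assumption is that part of the remaining discrepancy comes from the rotation-centre convention (MATLAB's 1-based centre (cols+1)/2 corresponds to (cols-1)/2 in OpenCV's 0-based coordinates); this is not verified to be bit-exact with imrotate:
#include <opencv2/opencv.hpp>

// Sketch mimicking imrotate(A, angle, 'bilinear', 'crop'):
// rotate about the image centre and keep the original size.
// Assumptions (not verified against MATLAB bit for bit):
//  - MATLAB's 1-based centre (cols+1)/2 maps to (cols-1)/2 in 0-based coords
//  - imrotate pads uncovered pixels with 0
cv::Mat matlabImrotateCrop(const cv::Mat& src, double angle)
{
    cv::Point2f center((src.cols - 1) / 2.0F, (src.rows - 1) / 2.0F);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat dst;
    cv::warpAffine(src, dst, rot, src.size(), cv::INTER_LINEAR,
                   cv::BORDER_CONSTANT, cv::Scalar(0));
    return dst;
}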

Edge Extraction Suggestions OpenCV

I'm looking for suggestions to improve my algorithm for finding parts in the following image.
So far I have the following:
GaussianBlur(canny, canny, Size(5, 5), 2, 2);
Canny(canny, canny, 100, 200, 5);
HoughCircles(canny, Part_Centroids, CV_HOUGH_GRADIENT, 2, 30, 100, 50, 50, 60);
My edge detection output looks like this,
and I'm using HoughCircles to try to find the parts. I haven't had great success, though, because HoughCircles seems very fussy and often returns a circle that isn't really the best match for a part.
Any suggestions on improving this search algorithm?
EDIT:
I have tried the suggestions in the comments below. The normalization made some improvements, but removing the Canny step before HoughCircles changed the required settings without improving stability.
I now think I need to run the Hough circles with very open thresholds and then find a way to score the results. Are there any good methods to score the results of HoughCircles, or to correlate the results with the Canny output for a percentage match?
I thought I would post my solution as someone may find my lessons learned valuable.
I started by taking several frames and averaging them out. This solved some of the noise issues I was having while preserving the strong edges. Next I applied a basic filter and Canny edge detection to extract a decent edge map.
Scalar cannyThreshold = mean(filter);
// Canny edge detection; note the floating-point ratios
// (the original 2/3 and 1/3 were integer divisions evaluating to 0)
Canny(filter, canny, cannyThreshold[0] * (2.0 / 3.0), cannyThreshold[0] * (4.0 / 3.0), 3);
Next I use cross-correlation with templates of increasing diameter and store matches that score over a threshold:
// Iterate through diameter ranges
for (int r = 40; r < 70; r++)
{
Mat _mask, _template(Size((r * 2) + 4, (r * 2) + 4), CV_8U);
_template = Scalar(0, 0, 0);
_mask = _template.clone();
_mask = Scalar(0, 0, 0);
circle(_template, Point(r + 4, r + 4), r, Scalar(255, 255, 255), 2, CV_AA);
circle(_template, Point(r + 4, r + 4), r / 3.592, Scalar(255, 255, 255), 2, CV_AA);
circle(_mask, Point(r + 4, r + 4), r + 4, Scalar(255, 255, 255), -1);
Mat res_32f(canny.rows, canny.cols, CV_32FC1);
matchTemplate(canny, _template, res_32f, CV_TM_CCORR_NORMED, _mask);
Mat resize(canny.rows, canny.cols, CV_32FC1);
resize = Scalar(0, 0, 0);
res_32f.copyTo(resize(Rect((resize.cols - res_32f.cols) / 2, (resize.rows - res_32f.rows) / 2, res_32f.cols, res_32f.rows)));
// Store well-scoring results
double minVal, maxVal;
double threshold = .25;
do
{
Point minLoc, maxLoc;
minMaxLoc(resize, &minVal, &maxVal, &minLoc, &maxLoc);
if (maxVal > threshold)
{
matches.push_back(CircleScore(maxLoc.x, maxLoc.y, r, maxVal,1));
circle(resize, maxLoc, 30, Scalar(0, 0, 0), -1);
}
} while (maxVal > threshold);
}
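The CircleScore type used above isn't defined in the post; a minimal definition consistent with how it is used (fields X, Y, Radius, Score, Layer) might look like this:
// Hypothetical minimal definition of CircleScore, inferred from its usage above.
struct CircleScore {
    float X, Y, Radius;   // circle centre and radius in pixels
    double Score;         // normalized cross-correlation score
    int Layer;            // stacking depth; incremented when covered by another part
    CircleScore(float x, float y, float r, double s, int layer)
        : X(x), Y(y), Radius(r), Score(s), Layer(layer) {}
};
std::vector<CircleScore> matches;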
I filter out circles for the best match in each zone
// Sort Matches For Best Match
for (size_t i = 0; i < matches.size(); i++)
{
size_t j = i + 1;
while (j < matches.size())
{
if (norm(Point2f(matches[i].X, matches[i].Y) - Point2f(matches[j].X, matches[j].Y)) - abs(matches[i].Radius - matches[j].Radius) < 15)
{
if (matches[j].Score > matches[i].Score)
{
matches[i] = matches[j];
}
matches[j] = matches[matches.size() - 1];
matches.pop_back();
j = i + 1;
}
else j++;
}
}
Next was the tricky part: I wanted to determine which part was likely to be on top. I did this by examining every pair of parts whose centers are closer than the sum of their radii, then checking whether the edges in the overlap zone are a stronger match for one circle or the other. Any covered circle should have few strong edges in the overlap zone.
// Layer Sort On Intersection
for (size_t i = 0; i < matches.size(); i++)
{
size_t j = i + 1;
while (j < matches.size())
{
double distance = norm(Point2f(matches[i].X, matches[i].Y) - Point2f(matches[j].X, matches[j].Y));
// Potential Overlapping Part
if (distance < ((matches[i].Radius+matches[j].Radius) - 10))
{
int score_i = 0, score_j = 0;
Mat intersect_a(canny.rows, canny.cols, CV_8UC1);
Mat intersect_b(canny.rows, canny.cols, CV_8UC1);
intersect_a = Scalar(0, 0, 0);
intersect_b = Scalar(0, 0, 0);
circle(intersect_a, Point(cvRound(matches[i].X), cvRound(matches[i].Y)), cvRound(matches[i].Radius) +4, Scalar(255, 255, 255), -1);
circle(intersect_a, Point(cvRound(matches[i].X), cvRound(matches[i].Y)), cvRound(matches[i].Radius / 3.592-4), Scalar(0, 0, 0), -1);
circle(intersect_b, Point(cvRound(matches[j].X), cvRound(matches[j].Y)), cvRound(matches[j].Radius) + 4, Scalar(255, 255, 255), -1);
circle(intersect_b, Point(cvRound(matches[j].X), cvRound(matches[j].Y)), cvRound(matches[j].Radius / 3.592-4), Scalar(0, 0, 0), -1);
bitwise_and(intersect_a, intersect_b, intersect_a);
double a, h;
a = (matches[i].Radius*matches[i].Radius - matches[j].Radius*matches[j].Radius + distance*distance) / (2 * distance);
h = sqrt(matches[i].Radius*matches[i].Radius - a*a);
Point2f p0((matches[j].X - matches[i].X)*(a / distance) + matches[i].X, (matches[j].Y - matches[i].Y)*(a / distance) + matches[i].Y);
circle(intersect_a, Point2f(p0.x + h*(matches[j].Y - matches[i].Y) / distance, p0.y - h*(matches[j].X - matches[i].X) / distance), 6, Scalar(0, 0, 0), -1);
circle(intersect_a, Point2f(p0.x - h*(matches[j].Y - matches[i].Y) / distance, p0.y + h*(matches[j].X - matches[i].X) / distance), 6, Scalar(0, 0, 0), -1);
bitwise_and(intersect_a, canny, intersect_a);
intersect_b = Scalar(0, 0, 0);
circle(intersect_b, Point(cvRound(matches[i].X), cvRound(matches[i].Y)), cvRound(matches[i].Radius), Scalar(255, 255, 255), 6);
bitwise_and(intersect_a, intersect_b, intersect_b);
score_i = countNonZero(intersect_b);
intersect_b = Scalar(0, 0, 0);
circle(intersect_b, Point(cvRound(matches[j].X), cvRound(matches[j].Y)), cvRound(matches[j].Radius), Scalar(255, 255, 255), 6);
bitwise_and(intersect_a, intersect_b, intersect_b);
score_j = countNonZero(intersect_b);
if (score_i < score_j)matches[i].Layer = matches[j].Layer + 1;
if (score_j < score_i)matches[j].Layer = matches[i].Layer + 1;
}
j++;
}
}
After that it was easy to extract the best part to pick (I'm correlating with depth data as well).
The blue circles are parts, the green circle is the tallest stack, and the red circles are parts that are under other parts.
I hope this may help someone else working on similar problems.

OpenCV Drawn Lines on Contour (C++)

I want to draw lines on the following picture so that I can calculate the length of each line. My problem is that when I try it with the following code, my image gets completely white.
std::vector<cv::Vec2f> lines;
cv::HoughLines(drawing_small, lines, 1, CV_PI/180, 50, 0, 0 );
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
cv::Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
cv::line( drawing_small, pt1, pt2, cv::Scalar(0,100,0), 3, CV_AA);
}
Something like this:
I would be very happy if anyone could tell me what I can do.
Update
This is what I do before:
std::vector<std::vector<cv::Point> > contours_small;
std::vector<cv::Vec4i> hierarchy_small;
cv::findContours(dst, contours_small, hierarchy_small, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
// Detecting contours
std::vector<cv::Moments> ContArea_small(contours_small.size());
std::vector<cv::Point2f> ContCenter_small(contours_small.size());
cv::Mat drawing_small = cv::Mat::zeros( dst.size(), CV_8UC3 );
for( int i = 0; i < contours_small.size(); i++ )
{
ContArea_small[i] = moments(contours_small[i], false);
ContCenter_small[i] = cv::Point2f(ContArea_small[i].m10/ContArea_small[i].m00, ContArea_small[i].m01/ContArea_small[i].m00);
cv::Scalar color_small = cv::Scalar(0,255,0);
if(ContArea_small[i].m00 > 2000)
{
drawContours( drawing_small, contours_small, i, color_small, CV_FILLED , 8, hierarchy_small, 1, cv::Point() );
}
}
cv::imwrite("contour.jpg",drawing_small);
cv::dilate(drawing_small, drawing_small, C, cv::Point(-1, -1), 1, 1, 20); // C: structuring element defined elsewhere
cv::threshold(drawing_small,drawing_small,100,255,cv::THRESH_BINARY_INV);
cv::GaussianBlur(drawing_small,drawing_small,cv::Size(9,9),11);
This probably means that the Hough transform didn't manage to find any lines in your picture. In this case you should pre-filter your image first. For example, you can try Otsu's thresholding and a Gaussian blur. And if I were you, I would first try passing different parameters to cv::HoughLines (especially threshold -- the minimum number of intersections to “detect” a line).
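A rough sketch of that pre-filtering, assuming drawing_small is the 3-channel image built in the update above (parameter values are illustrative):
// Sketch: Gaussian blur + Otsu's threshold before running HoughLines.
// Otsu picks the binarization threshold automatically from the histogram.
cv::Mat gray, blurred, binary;
cv::cvtColor(drawing_small, gray, CV_BGR2GRAY);
cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);
cv::threshold(blurred, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
// then run cv::HoughLines(binary, lines, 1, CV_PI/180, threshold, 0, 0) and tune threshold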
Make sure you are drawing the lines on, and outputting, the source image rather than some processed one. Can you show us more code about what you did, exactly?
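A sketch of that idea, assuming src is the original grayscale image and lines is the output of cv::HoughLines from the question (names are illustrative):
// Draw the detected rho/theta lines on a colour copy of the ORIGINAL image,
// not on the thresholded/blurred one.
cv::Mat drawLinesOnSource(const cv::Mat& src, const std::vector<cv::Vec2f>& lines)
{
    cv::Mat display;
    cv::cvtColor(src, display, CV_GRAY2BGR);
    for (size_t i = 0; i < lines.size(); i++)
    {
        float rho = lines[i][0], theta = lines[i][1];
        double a = cos(theta), b = sin(theta);
        double x0 = a * rho, y0 = b * rho;
        cv::Point pt1(cvRound(x0 + 1000 * (-b)), cvRound(y0 + 1000 * (a)));
        cv::Point pt2(cvRound(x0 - 1000 * (-b)), cvRound(y0 - 1000 * (a)));
        cv::line(display, pt1, pt2, cv::Scalar(0, 0, 255), 2, CV_AA);
    }
    return display;
}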

detecting 2 lines opencv

I have an image on which I run a dilation, and it works fine. Now I want to detect two thick lines on it:
And when I run this part of the code on it:
cv::Canny(dilationResult,canny,50,200,3);
cv::cvtColor(dilationResult,dilationResult,CV_BGR2GRAY);
cv::HoughLines(canny,lines,30,CV_PI/180,500,0);
cv::cvtColor(mask,mask,CV_GRAY2BGR);
if(lines.size()!=0){
std::cout << " line Size " << lines.size()<< std::endl;
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1]; // theta is element [1]; index [2] would read past the end of the Vec2f
cv::Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
angle = atan2f((pt2.y-pt1.y),(pt2.x-pt1.x))*180.0/CV_PI;
std::cout << "angle " << angle<< std::endl;
line( mask, pt1, pt2, cv::Scalar(0,0,255), 3, CV_AA);
}
}
cv::imshow("mask " ,mask);
here's the result:
What I would like to get is something like this:
getting only 2 lines that have the same width. By the way, I don't want to use the findContours function.
Any idea how I can do this?
I didn't get it to work with the standard Hough transform, but I did with the probabilistic version, cv::HoughLinesP.
With lineDetection_Input.jpg being your linked image:
#include <opencv2/opencv.hpp>

int main()
{
cv::Mat color = cv::imread("../lineDetection_Input.jpg");
cv::Mat gray;
cv::cvtColor(color, gray, CV_RGB2GRAY);
std::vector<cv::Vec4i> lines;
cv::HoughLinesP( gray, lines, 1, 2*CV_PI/180, 100, 100, 50 );
for( size_t i = 0; i < lines.size(); i++ )
{
cv::line( color, cv::Point(lines[i][0], lines[i][1]),
cv::Point(lines[i][2], lines[i][3]), cv::Scalar(0,0,255), 1);
}
cv::imwrite("lineDetection_Output.jpg", color);
cv::namedWindow("output"); cv::imshow("output", color); cv::waitKey(-1);
return 0;
}
lineDetection_Output.jpg:
for rotated image:
and for some different intersection angle:
There you can see some lines detected with a slightly wrong angle that start at the top right and end near the intersection (just past it), but these could easily be filtered out by length or a similar criterion.
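A sketch of such a length filter (the minimum length of 100 px is illustrative, not from the original post):
// Keep only the segments longer than minLen; short spurious
// detections near the intersection are discarded.
std::vector<cv::Vec4i> filterByLength(const std::vector<cv::Vec4i>& lines, double minLen)
{
    std::vector<cv::Vec4i> kept;
    for (size_t i = 0; i < lines.size(); i++)
    {
        double dx = lines[i][2] - lines[i][0];
        double dy = lines[i][3] - lines[i][1];
        if (sqrt(dx * dx + dy * dy) > minLen)
            kept.push_back(lines[i]);
    }
    return kept;
}
// usage: std::vector<cv::Vec4i> longLines = filterByLength(lines, 100.0);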

logic behind the code

This is from the OpenCV Hough lines sample. Can anyone explain to me why, after converting to Cartesian coordinates, they add and subtract 1000*(-b) and 1000*(a)?
#include <cv.h>
#include <highgui.h>
#include <math.h>

int main(int argc, char** argv)
{
    IplImage* src;
    if( argc == 2 && (src = cvLoadImage(argv[1], 0)) != 0 )
    {
        IplImage* dst = cvCreateImage( cvGetSize(src), 8, 1 );
        IplImage* color_dst = cvCreateImage( cvGetSize(src), 8, 3 );
        CvMemStorage* storage = cvCreateMemStorage(0);
        CvSeq* lines = 0;
        int i;
        cvCanny( src, dst, 50, 200, 3 );
        cvCvtColor( dst, color_dst, CV_GRAY2BGR );
#if 1
        lines = cvHoughLines2( dst,
                               storage,
                               CV_HOUGH_STANDARD,
                               1,
                               CV_PI/180,
                               100,
                               0,
                               0 );
        for( i = 0; i < MIN(lines->total, 100); i++ )
        {
            float* line = (float*)cvGetSeqElem(lines, i);
            float rho = line[0];
            float theta = line[1];
            CvPoint pt1, pt2;
            double a = cos(theta), b = sin(theta);
            double x0 = a*rho, y0 = b*rho;
            pt1.x = cvRound(x0 + 1000*(-b));
            pt1.y = cvRound(y0 + 1000*(a));
            pt2.x = cvRound(x0 - 1000*(-b));
            pt2.y = cvRound(y0 - 1000*(a));
            cvLine( color_dst, pt1, pt2, CV_RGB(255,0,0), 3, 8 );
        }
#endif
    }
    return 0;
}
Cos and sin range from -1 to +1, so the origin of the Hough accumulator space is at (0, 0).
Assuming your display has positive size, it's convenient to have the centre of the plot in the middle of the screen.
Perhaps they wanted to get the corners of a bounding rectangle around a given center?
It is a hack.
Try this: run the example as is. Then remove the 4 instances of 1000; you will get points instead of lines. Put in 750 instead of 1000; you get the same result as if you had put in 1000.
The 1000 is there to make sure the lines get drawn across the whole image. You could also do the following, which is a little better:
Right after HoughLines(...) is called, add the following:
int h = src.rows;
int w = src.cols;
int factor = (int) (sqrt(h * h + w * w)); // diagonal length of the image, maximum line length
Then, instead of 1000, multiply by factor. If your image is larger than 1000x1000, the original code won't work.
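For concreteness, a sketch of the drawing step with that change, written against the C++ API rather than the sample's old C API:
// Draw one (rho, theta) Hough line across the whole image, scaling by
// the image diagonal (factor) instead of the hard-coded 1000.
void drawFullHoughLine(cv::Mat& img, float rho, float theta)
{
    int factor = (int)sqrt((double)(img.rows * img.rows + img.cols * img.cols));
    double a = cos(theta), b = sin(theta);
    double x0 = a * rho, y0 = b * rho;
    cv::Point pt1(cvRound(x0 + factor * (-b)), cvRound(y0 + factor * (a)));
    cv::Point pt2(cvRound(x0 - factor * (-b)), cvRound(y0 - factor * (a)));
    cv::line(img, pt1, pt2, cv::Scalar(0, 0, 255), 3, 8);
}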
Roy