Draw all lines obtained with HoughLines in OpenCV - c++

I'm using OpenCV 3.2.
I'd like to extract and draw all lines in this image.
To do this, I first obtain the contours of the image using the Canny algorithm, with a double threshold of 100 (low) and 200 (high).
Mat image = cv::imread(<image_path>, cv::IMREAD_GRAYSCALE);
cv::Mat contours;
cv::Canny(image, contours, 100, 200);
Then, I call the HoughLines function with a resolution of 1 pixel and π / 45 radians. I just want those lines that are supported by at least 60 edge pixels (the accumulator threshold).
std::vector<cv::Vec2f> lines;
cv::HoughLines(contours, lines, 1, CV_PI/45, 60);
This returns a vector lines with the rho (p) and theta (θ) parameters, in Hough space, of the detected lines. As we know, the line going through a contour pixel (x_i, y_i) satisfies:
p = x_i cos(θ) + y_i sin(θ)
We know p and θ, so we know all the pixels in this line. Two easy points to calculate are A with x_i = 0 and B with y_i = 0.
A = (0, p / sin(θ))
B = (p / cos(θ), 0)
Let's draw them with the line function in blue color.
cv::cvtColor(image, image, CV_GRAY2BGR);
for (unsigned int i = 0; i < lines.size(); ++i) {
float p = lines[i][0];
float theta = lines[i][1];
cv::Point a(0, static_cast<int>(p / std::sin(theta)));
cv::Point b(static_cast<int>(p / std::cos(theta)), 0);
cv::line(image, a, b, cv::Scalar(255, 0, 0));
}
The result is that only 6 of the 14 obtained lines are drawn. As you can see, only those lines that intersect row 0 and column 0 of the image are drawn; in other words, only the lines whose A and B points lie on the image boundary. For the rest of the lines, these points fall outside the image.
How can I draw all the lines in a simple way? I could compute every pixel of each obtained line and draw them (we know them all), but I'd like to do it with as few lines of code as possible, using the OpenCV API.
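For reference, a minimal sketch of one common way to draw such lines: instead of intersecting with row 0 and column 0, take the point on the line closest to the origin, (p·cos θ, p·sin θ), and extend it in both directions along the line's unit direction vector (−sin θ, cos θ) by more than the image diagonal; cv::line clips the segment to the image. This reuses the image and lines variables from the snippets above.
#include <opencv2/opencv.hpp>
#include <cmath>
// Draw every (rho, theta) line returned by HoughLines.
// Assumes "image" has already been converted to BGR as above.
void drawHoughLines(cv::Mat& image, const std::vector<cv::Vec2f>& lines)
{
    double len = std::hypot(image.cols, image.rows);  // longer than any visible segment
    for (size_t i = 0; i < lines.size(); ++i) {
        float p = lines[i][0], theta = lines[i][1];
        double c = std::cos(theta), s = std::sin(theta);
        cv::Point2d p0(p * c, p * s);  // point on the line closest to the origin
        cv::Point2d dir(-s, c);        // unit vector along the line
        cv::Point a(cvRound(p0.x - len * dir.x), cvRound(p0.y - len * dir.y));
        cv::Point b(cvRound(p0.x + len * dir.x), cvRound(p0.y + len * dir.y));
        cv::line(image, a, b, cv::Scalar(255, 0, 0));
    }
}
This also avoids dividing by sin(θ) and cos(θ), which fails for exactly horizontal or vertical lines.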

Related

OpenCV findContours, how to check colors on both sides

I have a Mat object derived using the Canny edge detector, and I extracted contours from that image using the findContours function. Now, for each of those contours, I'd like to somehow check the colour on both sides.
For the "colour" part I've discretized the HSI colour space; however, I'm very confused about how to "pick the colours" on both sides of a given contour.
Is there a way to easily do this?
You can use the image that you applied the Canny edge detector to. Take the gradient of that image; the gradient is a vector. As shown in the wiki page image (below), the gradient points in the direction of the greatest rate of increase, so the negative gradient points in the direction of the greatest rate of decrease. Therefore, if you sample the gradient of the image at contour points, the positive and negative gradient directions at those points point into the regions on either side of the contour. You can then sample points along these directions to get an idea of the colours you want.
Image gradient:
The sample Python code below shows how this is done for a simple image; it uses Sobel to calculate the gradient.
Input image:
Canny edges and sampled points:
Green: point on contour
Red: point in the positive gradient direction
Blue: point in the negative gradient direction
import cv2
import numpy as np
from matplotlib import pyplot as plt
im = cv2.imread('grad.png', 0)
dx = cv2.Sobel(im, cv2.CV_32F, 1, 0)
dy = cv2.Sobel(im, cv2.CV_32F, 0, 1)
edge = cv2.Canny(im, 64, 192)
mag = np.sqrt(dx*dx + dy*dy + 0.01)  # gradient magnitude (0.01 avoids division by zero)
dx = dx / mag  # normalize the gradient to unit length
dy = dy / mag
r = 20
y, x = np.nonzero(edge)
pos1 = (np.int32(x[128]+r*dx[y[128], x[128]]), np.int32(y[128]+r*dy[y[128], x[128]]))
pos2 = (np.int32(x[128]-r*dx[y[128], x[128]]), np.int32(y[128]-r*dy[y[128], x[128]]))
im2 = cv2.cvtColor(edge, cv2.COLOR_GRAY2BGR)
cv2.circle(im2, pos1, 10, (255, 0, 0), 1)
cv2.circle(im2, pos2, 10, (0, 0, 255), 1)
cv2.circle(im2, (x[128], y[128]), 10, (0, 255, 0), 1)
plt.imshow(im2)

How to detect the intensity gradient direction

I have a Mat that is a square area of grayscale pixels. How can I create a straight line whose direction is perpendicular to the direction in which the pixel values change the most (the average gradient, averaged over the whole Mat)? The result would be just one direction, which can then be drawn as a line.
For example, given this input:
it would look like this:
How can one do such a thing in OpenCV (in Python or C++)?
An OpenCV implementation would look something like the following. It solves the problem in a similar fashion to the one explained in the answer by Mark Setchell, except that the normalisation of the image is skipped, since it does not have any effect on the resulting direction.
Mat img = imread("img.png", IMREAD_GRAYSCALE);
// compute the image derivatives for both the x and y direction
Mat dx, dy;
Sobel(img, dx, CV_32F, 1, 0);
Sobel(img, dy, CV_32F, 0, 1);
Scalar average_dx = mean(dx);
Scalar average_dy = mean(dy);
double average_gradient = atan2(-average_dy[0], average_dx[0]);
cout << "average_gradient = " << average_gradient << endl;
And to display the resulting direction
Point center = Point(img.cols/2, img.rows/2);
Point direction = Point(cos(average_gradient) * 100, -sin(average_gradient) * 100);
Mat img_rgb = imread("img.png"); // read the image in colour
line(img_rgb, center, center + direction, Scalar(0,0,255));
imshow("image", img_rgb);
waitKey();
I can't easily tell you how to do it with OpenCV, but I can tell you the method and demonstrate using ImageMagick just at the command-line.
First, I think you need to convert the image to grayscale and normalise it to the full range of black to white - like this:
convert gradient.png -colorspace gray -normalize stage1.png
Then you need to calculate the X-gradient and the Y-gradient of the image using a Sobel filter and then take the inverse tan of the Y-gradient over the X-gradient:
convert stage1.png -define convolve:scale='50%!' -bias 50% \
\( -clone 0 -morphology Convolve Sobel:0 \) \
\( -clone 0 -morphology Convolve Sobel:90 \) \
-fx '0.5+atan2(v-0.5,0.5-u)/pi/2' result.jpg
Then the mean value of the pixels in result.jpg is the direction of your line.
You can see the coefficients used in the convolution for X- and Y-gradient like this:
convert xc: -define morphology:showkernel=1 -morphology Convolve Sobel:0 null:
Kernel "Sobel" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 0 -1
1: 2 0 -2
2: 1 0 -1
convert xc: -define morphology:showkernel=1 -morphology Convolve Sobel:90 null:
Kernel "Sobel#90" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 2 1
1: 0 0 0
2: -1 -2 -1
See Wikipedia here - specifically this line:
Convert the image to grayscale and classify its pixels based on the gray level. For classification, you can use something like the Otsu method or k-means with 2 clusters. Then take the morphological gradient to detect the boundary.
Here are the classified pixels and the boundary obtained using the Otsu method.
Now find the non-zero pixels of the boundary image and fit a 2D line to those pixels, either using the fitLine function, which finds a weighted least-squares line, or using this RANSAC implementation. fitLine gives a normalized vector collinear with the line. Using this vector, you can find an orthogonal vector to it.
I get [0.983035, -0.183421] for the collinear vector using the code below. So, [0.183421, 0.983035] is orthogonal to this vector.
Here, in the left image, the red line is the least squares line and the blue line is a perpendicular line to the red one. In the right image, red line is the least squares line and the green one is the line fitted using the RANSAC library mentioned above.
Mat im = imread("LP24W.png", 0);
Mat bw, gr;
threshold(im, bw, 0, 255, CV_THRESH_BINARY|CV_THRESH_OTSU);
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(bw, gr, CV_MOP_GRADIENT, kernel);
vector<vector<Point>> contours;
findContours(gr, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<Point> points;
for (vector<Point>& cont: contours)
{
points.insert(points.end(), cont.begin(), cont.end());
}
Vec4f line;
fitLine(points, line, CV_DIST_L2, 0, 0.01, 0.01);
cout << line << endl;
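To actually draw the fitted line and its perpendicular (as in the images above), here is a hedged sketch continuing from the code above. The half-length of 200 pixels is an arbitrary choice, and cv::line is written with the namespace prefix because the local variable line shadows the function:
// Draw the fitted least-squares line (red) and a perpendicular to it (blue).
// "line" is the Vec4f returned by fitLine: (vx, vy, x0, y0).
Mat im2;
cvtColor(im, im2, CV_GRAY2BGR);
Point2f d(line[0], line[1]);   // unit vector along the fitted line
Point2f n(-line[1], line[0]);  // orthogonal unit vector
Point2f p0(line[2], line[3]);  // a point on the line
float len = 200;               // arbitrary half-length, just for display
cv::line(im2, Point(cvRound(p0.x - len*d.x), cvRound(p0.y - len*d.y)),
         Point(cvRound(p0.x + len*d.x), cvRound(p0.y + len*d.y)), Scalar(0, 0, 255));
cv::line(im2, Point(cvRound(p0.x - len*n.x), cvRound(p0.y - len*n.y)),
         Point(cvRound(p0.x + len*n.x), cvRound(p0.y + len*n.y)), Scalar(255, 0, 0));
imshow("lines", im2);
waitKey();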

Hough Line Transform - artifacts at 45 degree angle

I implemented the Hough Lines Transform in OpenCV (C++) and I get strange artifacts in the Hough space. The following picture shows the Hough space. The distance rho is depicted along the rows, while the 180 columns represent the angle from 0 to 179 degrees. If you zoom in on columns 45 and 135 you see a vertical line with alternating dark and bright pixels.
http://imgur.com/NDtMn6S
For higher thresholds the lines of the fence are detected fine but when I lower the threshold the artifacts can be seen as 45° or 135° rotated lines in the final picture:
Detected lines for medium threshold
At first I thought it was a mistake in my implementation of the Hough Lines method, but I get similar lines for medium thresholds using OpenCV's HoughLines function. I also encounter the same problem when using Canny instead of Sobel.
So the question is: why do I get these artifacts and how can I get rid of them? I wasn't able to find anything about this and any help would be appreciated.
This is the code I used with the OpenCV Hough Lines method:
// read in input image and convert to grayscale
Mat frame = imread("fence.png", CV_LOAD_IMAGE_COLOR);
Mat final_out;
frame.copyTo(final_out);
Mat img, gx, gy, mag, angle;
cvtColor(frame, img, CV_BGR2GRAY);
// get the thresholded magnitude image
Sobel(img, gx, CV_64F, 1, 0);
Sobel(img, gy, CV_64F, 0, 1);
cartToPolar(gx, gy, mag, angle);
normalize(mag, mag, 0, 255, NORM_MINMAX);
mag.convertTo(mag, CV_8U);
threshold(mag, mag, 55, 255.0, THRESH_BINARY);
// apply the hough lines transform and draw the lines
vector<Vec2f> lines;
HoughLines(mag, lines, 1, CV_PI / 180, 240);
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
pt1.x = 0;
pt1.y = (rho - pt1.x * cos(theta))/sin(theta);
pt2.x = mag.cols;
pt2.y = (rho - pt2.x * cos(theta))/sin(theta);
line(final_out, pt1, pt2, Scalar(0,0,255), 1, CV_AA);
}
// show the image
imshow("final_image", final_out);
cvWaitKey();
Answering the question: you can't get rid of such artifacts - they are mathematical in nature, due to the discrete nature of the image and the orthogonality of the pixel grid. The only way is to exclude exactly 45 degrees from the analysis.
I found the source - the bright pixels of the anomaly are produced by the following issue:
Red dots - the exactly-45-degree bright anomaly - you can see they are doubled, making a staircase pattern, which doubles the number of pixels involved in the accumulation.
Blue dots - the exactly-45-degree dim anomaly - making a chessboard pattern.
Green dots - a 44-degree line - you can see it alternates between the doubling and chessboard patterns, which moderates the anomaly.
If you look at the whole picture of the Hough transform matrix, you will see how the brightness slowly shifts across the whole picture, reflecting how this alternation ratio slowly changes with the angle. However, due to the nature of the pixel grid, at exactly 45 degrees the anomaly becomes very acute. I don't know how to deal with it yet...
Stumbled across this; it may be useful to future readers.
The image is inverted: the algorithm is accumulating the white pixels, of which there are obviously more along the diagonals of the image. The lines you are looking for are black, which means they are zero-valued and not considered.

How to detect image gradient or normal using OpenCV

I wanted to detect ellipse in an image. Since I was learning Mathematica at that time, I asked a question here and got a satisfactory result from the answer below, which used the RANSAC algorithm to detect ellipse.
However, recently I have needed to port it to OpenCV, but some functions only exist in Mathematica. One of the key functions is "GradientOrientationFilter".
Since a general ellipse has five parameters, I need to sample five points to determine one. However, the more points that must be sampled, the lower the chance of a good guess, which lowers the success rate of the ellipse detection. Therefore, the Mathematica answer adds another condition: the gradient of the image must be parallel to the gradient of the ellipse equation. With this approach, only three points are needed to determine an ellipse using least squares, and the result is quite good.
However, when I try to find the image gradient using the Sobel or Scharr operator in OpenCV, it is not accurate enough, and it always leads to a bad result.
How can I calculate the gradient or the tangent of an image accurately? Thanks!
Result with gradient, three points
Result without gradient, five points
----------updated----------
I did some edge detection and median blurring beforehand and drew the result on the edge image. My original test image is like this:
In general, my final goal is to detect the ellipse in a scene or on an object. Something like this:
That's why I choose to use RANSAC to fit the ellipse from edge points.
As for your final goal, you may try findContours and fitEllipse in OpenCV.
The pseudo code would be
1) some image processing
2) find all contours
3) fit each contour with fitEllipse
Here is part of the code I have used before:
// ... image processing ... you get a binary image, bwimage
vector<vector<Point> > contours;
findContours(bwimage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for(size_t i = 0; i < contours.size(); i++)
{
size_t count = contours[i].size();
Mat pointsf;
Mat(contours[i]).convertTo(pointsf, CV_32F);
RotatedRect box = fitEllipse(pointsf);
/* You can put some limitation about size and aspect ratio here */
if( box.size.width > 20 &&
box.size.height > 20 &&
box.size.width < 80 &&
box.size.height < 80 )
{
if( MAX(box.size.width, box.size.height) > MIN(box.size.width, box.size.height)*30 )
continue;
//drawContours(SrcImage, contours, (int)i, Scalar::all(255), 1, 8);
ellipse(SrcImage, box, Scalar(0,0,255), 1, CV_AA);
ellipse(SrcImage, box.center, box.size*0.5f, box.angle, 0, 360, Scalar(200,255,255), 1, CV_AA);
}
}
imshow("result", SrcImage);
If you focus on ellipses (no other shapes), you can treat the values of the ellipse's pixels as the masses of points.
Then you can calculate the moments of inertia Ixx, Iyy, Ixy to find the angle theta that rotates a general ellipse back to the canonical form (X-Xc)^2/a^2 + (Y-Yc)^2/b^2 = 1.
Then you can find Xc and Yc from the center of mass.
Then you can find a and b from the min X and min Y.
--------------- update -----------
This method applies to filled ellipses too.
It will fail with more than one ellipse in a single image unless you segment them first.
Let me explain more,
I will use C to represent cos(theta) and S to represent sin(theta)
After rotation to canonical form, the new X is [eq0] X=xC-yS and Y is Y=xS+yC where x and y are original positions.
The rotation will give you min IYY.
[eq1]
IYY = Sum(m*Y*Y) = Sum{m*(xS+yC)*(xS+yC)} = Sum{m*(x*x*S*S + 2*x*y*S*C + y*y*C*C)} = Ixx*S^2 + Iyy*C^2 + 2*Ixy*S*C
where Ixx = Sum(m*x*x), Iyy = Sum(m*y*y) and Ixy = Sum(m*x*y).
For min IYY, d(IYY)/d(theta) = 0, that is
2*Ixx*S*C - 2*Iyy*S*C + 2*Ixy*(C*C - S*S) = 0
(Ixx-Iyy)/Ixy = (S*S-C*C)/(S*C) = S/C - C/S = Z - 1/Z, where Z = tan(theta)
While programming, the LHS is just a number; let's call it N. Then
Z^2 - N*Z - 1 = 0
So there are two roots Z1 and Z2, and hence two values of theta; one will minimise IYY and the other will maximise it (note that Z1*Z2 = -1, so the two directions are perpendicular).
----------- pseudo code --------
Compute Ixx, Iyy, Ixy for a hollow or filled ellipse.
Compute theta1=atan(Z1) and theta2=atan(Z2)
Put these two thetas into eq1 and see which gives the smaller IYY; that one is theta.
Go back to the non-zero pixels and transform them to the new X and Y using the theta you found.
Find the center of mass Xc, Yc, and the min X and min Y, e.g. by sort(). (See the C++ sketch below.)
-------------- by hand -----------
If you need the original equation of the ellipse, just put [eq0] into the canonical form.
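A hedged C++ sketch of the moment and theta computation described above, using cv::moments for the intensity-weighted sums (the central moments mu20, mu02, mu11 are taken about the centre of mass, so they serve as Ixx, Iyy, Ixy), and the quadratic Z^2 - N*Z - 1 = 0 from eq1. The input file name is made up, and finding a and b from the rotated extents is left out:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main()
{
    // Hypothetical input: a single (hollow or filled) ellipse on a black background.
    cv::Mat im = cv::imread("ellipse.png", cv::IMREAD_GRAYSCALE);

    // Intensity-weighted moments; central second moments play the role of Ixx, Iyy, Ixy.
    cv::Moments mo = cv::moments(im, false);
    double Xc = mo.m10 / mo.m00, Yc = mo.m01 / mo.m00;  // centre of mass
    double Ixx = mo.mu20, Iyy = mo.mu02, Ixy = mo.mu11;

    // Solve Z^2 - N*Z - 1 = 0 with Z = tan(theta), N = (Ixx - Iyy) / Ixy.
    // (If Ixy is ~0 the axes are already aligned, and theta is 0 or 90 degrees.)
    double N = (Ixx - Iyy) / Ixy;
    double Z1 = 0.5 * (N + std::sqrt(N * N + 4.0));
    double Z2 = 0.5 * (N - std::sqrt(N * N + 4.0));

    // Keep the root that minimises IYY (eq1).
    auto IYY = [&](double t) {
        double S = std::sin(t), C = std::cos(t);
        return Ixx * S * S + Iyy * C * C + 2.0 * Ixy * S * C;
    };
    double t1 = std::atan(Z1), t2 = std::atan(Z2);
    double theta = (IYY(t1) < IYY(t2)) ? t1 : t2;

    std::cout << "centre = (" << Xc << ", " << Yc << "), theta = "
              << theta * 180.0 / CV_PI << " degrees" << std::endl;
    return 0;
}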
You're using terms in an unusual way.
Normally for images, the term "gradient" is interpreted as if the image were a mathematical function f(x,y). This gives us a (df/dx, df/dy) vector at each point.
Yet you're looking at the image as if it were a function y = f(x), whose gradient would be df(x)/dx.
Now, if you look at your image, you'll see that the two interpretations are definitely related. Your ellipse is drawn as a set of contrasting pixels, and as a result there are two sharp gradients in the image - the inner and outer. These of course correspond to the two normal vectors, and therefore are in opposite directions.
Also note that your image consists of pixels, so the gradient is pixelated too. The way your ellipse is drawn, with a single-pixel width, means that the local gradient only takes values that are multiples of 45 degrees:
▄▄ ▄▀ ▌ ▀▄

Selecting the pixels with highest intensity in OpenCV

Can anyone help me find the top 1% (or, say, the top 100) brightest pixels and their locations in a grayscale image in OpenCV? cvMinMaxLoc() gives only the location of the single brightest pixel.
Any help is greatly appreciated.
This is a simple yet inefficient/naive way to do it:
for i=1:100
get brightest pixel using cvMinMaxLoc
store location
set it to a value of zero
end
If you don't mind the inefficiency, this should work.
You should also check cvInRangeS, which finds pixels within a range of values defined by low and high thresholds.
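For illustration, a hedged C++ sketch of that loop using the C++ API (minMaxLoc rather than cvMinMaxLoc), assuming an 8-bit single-channel img:
// Repeatedly find the brightest remaining pixel and zero it out (simple but slow).
std::vector<cv::Point> brightest;
cv::Mat work = img.clone();          // work on a copy so img is not destroyed
for (int i = 0; i < 100; ++i)
{
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(work, &minVal, &maxVal, &minLoc, &maxLoc);
    brightest.push_back(maxLoc);     // store the location
    work.at<uchar>(maxLoc) = 0;      // set it to zero so the next pass finds the next brightest
}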
You need to calculate the brightness threshold from the histogram. Then you iterate through the pixels to get those positions that are bright enough to satisfy the threshold. The program below instead applies the threshold to the image and displays the result for demonstration purposes:
#!/usr/bin/env python3
import sys
import cv2
import matplotlib.pyplot as plt
if __name__ == '__main__':
    if len(sys.argv) != 2 or any(s in sys.argv for s in ['-h', '--help', '-?']):
        print('usage: {} <img>'.format(sys.argv[0]))
        exit()
    img = cv2.imread(sys.argv[1], cv2.IMREAD_GRAYSCALE)
    hi_percentage = 0.01  # we want the hi_percentage brightest pixels
    # * histogram
    hist = cv2.calcHist([img], [0], None, [256], [0, 256]).flatten()
    # * find brightness threshold
    # here: highest thresh for including at least hi_percentage image pixels,
    # maybe you want to modify it for the lowest threshold for including
    # at most hi_percentage pixels
    total_count = img.shape[0] * img.shape[1]  # height * width
    target_count = hi_percentage * total_count  # bright pixels we look for
    summed = 0
    for i in range(255, 0, -1):
        summed += int(hist[i])
        if target_count <= summed:
            hi_thresh = i
            break
    else:
        hi_thresh = 0
    # * apply threshold & display result for demonstration purposes:
    filtered_img = cv2.threshold(img, hi_thresh, 0, cv2.THRESH_TOZERO)[1]
    plt.subplot(121)
    plt.imshow(img, cmap='gray')
    plt.subplot(122)
    plt.imshow(filtered_img, cmap='gray')
    plt.axis('off')
    plt.tight_layout()
    plt.show()
C++ version based upon some of the other ideas posted:
// filter the brightest n pixels from a grayscale img, return a new mat
cv::Mat filter_brightest( const cv::Mat& src, int n ) {
CV_Assert( src.channels() == 1 );
CV_Assert( src.type() == CV_8UC1 );
cv::Mat result={};
// simple histogram
std::vector<int> histogram(256,0);
for(int i=0; i< int(src.rows*src.cols); ++i)
histogram[src.at<uchar>(i)]++;
// find max threshold value (pixels from [0-max_threshold] will be removed)
int max_threshold = (int)histogram.size() - 1;
for ( ; max_threshold >= 0 && n > 0; --max_threshold ) {
n -= histogram[max_threshold];
}
if ( max_threshold < 0 ) // nothing to do
src.copyTo(result);
else
cv::threshold(src, result, max_threshold, 0., cv::THRESH_TOZERO);
return result;
}
Usage example: get top 1%
auto top1 = filter_brightest( img, int((img.rows*img.cols) * .01) );
Try using cvThreshold instead.
Well, the most logical way is to iterate over the whole picture and get the max and min values of the pixels.
Then choose a threshold that will give you the desired percentage (1% in your case).
After that, iterate again and save the i and j coordinates of each pixel above the given threshold.
This way you'll iterate over the matrix only twice, instead of 100 times (or 1% of the number of pixels) as with repeatedly choosing the brightest pixel and deleting it.
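A hedged sketch of that two-pass idea in C++; note that taking the top 1% of the intensity range (as below) is an assumption for illustration and is not the same as the top 1% of the pixel count:
// Pass 1: find the min/max pixel values; pick a threshold from them.
double minVal, maxVal;
cv::minMaxLoc(img, &minVal, &maxVal);
double thresh = maxVal - 0.01 * (maxVal - minVal);  // top 1% of the value range (assumption)

// Pass 2: collect the (i, j) coordinates of every pixel above the threshold.
std::vector<cv::Point> bright;
for (int i = 0; i < img.rows; ++i)
    for (int j = 0; j < img.cols; ++j)
        if (img.at<uchar>(i, j) > thresh)
            bright.push_back(cv::Point(j, i));  // Point is (x, y) = (col, row)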
OpenCV Mats are multidimensional arrays. A gray image is a two-dimensional array with values from 0 to 255.
You can iterate through the matrix like this:
for (int i = 0; i < mat.rows; i++)
    for (int j = 0; j < mat.cols; j++)
        mat.at<uchar>(i, j);