I've been trying to fit 4 lines around the square so that I can obtain its vertices. I'm going with this approach rather than finding the corners directly with Harris or contour methods, for accuracy. Using OpenCV's built-in HoughLines function, I'm unable to get full-length lines from which to compute intersection points, and I'm also getting too many irrelevant lines. Can the parameters be fine-tuned to get what I need? If yes, how do I go about it? My question is exactly the same as this one here; however, I'm not getting those lines even after changing those parameters. I've attached the original image along with the code and output:
Original Image:
Code:
#include <Windows.h>
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/videoio/videoio.hpp"

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{
    Mat image, src;
    image = imread("c:/pics/output2_1.bmp");
    src = image.clone();
    cvtColor(image, image, CV_BGR2GRAY);
    threshold(image, image, 0, 255, CV_THRESH_OTSU + CV_THRESH_BINARY_INV);

    namedWindow("thresh", WINDOW_NORMAL);
    resizeWindow("thresh", 600, 400);
    imshow("thresh", image);

    cv::Mat edges;
    cv::Canny(image, edges, 0, 255);

    vector<Vec2f> lines;
    HoughLines(edges, lines, 1, CV_PI / 180, 100, 0, 0);
    for (size_t i = 0; i < lines.size(); i++)
    {
        float rho = lines[i][0], theta = lines[i][1];
        Point pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a * rho, y0 = b * rho;
        pt1.x = cvRound(x0 + 1000 * (-b));
        pt1.y = cvRound(y0 + 1000 * (a));
        pt2.x = cvRound(x0 - 1000 * (-b));
        pt2.y = cvRound(y0 - 1000 * (a));
        line(src, pt1, pt2, Scalar(0, 0, 255), 3, CV_AA);
    }

    namedWindow("Edges Structure", WINDOW_NORMAL);
    resizeWindow("Edges Structure", 600, 400);
    imshow("Edges Structure", src);
    waitKey(0);

    return 0;
}
Output Image:
Update: There is a frame around this image, so I was able to reduce the irrelevant lines at the image border by removing that frame. However, I'm still not getting complete lines covering the square.
There are many ways to do this; I'll give an example of just one. I'm quickest in Python, so my code example will be in that language. It shouldn't be hard to translate, though (please feel free to edit your post with your C++ solution after you've finished it, for others).
For preprocessing, I highly suggest dilating your edge image with dilate(). This makes the lines thicker, which helps the Hough lines fit better. In the abstract, the Hough lines function sweeps a grid of lines through a ton of angles and distances, and whenever a candidate line passes over white pixels from Canny, it gives that line a score for each point it goes through. However, the lines from Canny won't be perfectly straight, so several slightly different candidates end up scoring. Making those Canny lines thicker means each candidate that is really close to a good fit has a better chance of scoring highly.
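As a minimal sketch of that preprocessing step (assuming gray is your grayscale image; the Canny thresholds here are just the ones used later in this answer):

import cv2
import numpy as np

edges = cv2.Canny(gray, 50, 150)
# a 3x3 all-ones kernel thickens every edge by roughly one pixel per side
dilated = cv2.dilate(edges, np.ones((3, 3), dtype=np.uint8))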
If you're going to use HoughLinesP, your output will be line segments, where all you have is the two endpoints of each segment.
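Concretely, HoughLinesP returns an array of shape (N, 1, 4), one [x1, y1, x2, y2] row per segment (continuing the sketch above; the parameter values are simply the ones used in the full code further down):

lines = cv2.HoughLinesP(dilated, rho=1, theta=np.pi/180, threshold=100,
                        maxLineGap=20, minLineLength=50)
print(lines.shape)  # (N, 1, 4): each entry packs [x1, y1, x2, y2]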
Since the lines are mostly vertical and horizontal, you can easily split the lines based on their position. If the two y-coordinates of one line are near each other, then the line is mostly horizontal. If the two x-coordinates are near each other, then the line is mostly vertical. So you can segment your lines into vertical lines and horizontal lines that way.
def segment_lines(lines, delta):
    h_lines = []
    v_lines = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if abs(x2 - x1) < delta:    # x-values are near; line is vertical
                v_lines.append(line)
            elif abs(y2 - y1) < delta:  # y-values are near; line is horizontal
                h_lines.append(line)
    return h_lines, v_lines
Then, you can obtain the intersection point of two line segments from their endpoints using determinants.
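For reference, treating each segment as the infinite line through its two endpoints, the intersection works out to:

Px = ((x1*y2 - y1*x2)*(x3-x4) - (x1-x2)*(x3*y4 - y3*x4)) / D
Py = ((x1*y2 - y1*x2)*(y3-y4) - (y1-y2)*(x3*y4 - y3*x4)) / D

where D = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4) is the determinant in the denominator; D = 0 means the two lines are parallel and have no unique intersection. The function below computes exactly this.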
def find_intersection(line1, line2):
    # extract points
    x1, y1, x2, y2 = line1[0]
    x3, y3, x4, y4 = line2[0]
    # compute determinant
    Px = ((x1*y2 - y1*x2)*(x3-x4) - (x1-x2)*(x3*y4 - y3*x4)) / \
         ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    Py = ((x1*y2 - y1*x2)*(y3-y4) - (y1-y2)*(x3*y4 - y3*x4)) / \
         ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    return Px, Py
So now if you loop through all your lines, you'll have intersection points from every horizontal/vertical line pair; but since you have many lines, you'll also have many intersection points for the same corner of the box.
However, these are all in one vector, so not only do you need to average the points in each corner, you need to actually group them together, too. You can achieve this with k-means clustering, which is implemented in OpenCV as kmeans().
def cluster_points(points, nclusters):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(points, nclusters, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return centers
Finally, we can simply plot those centers (rounding first, since so far everything is a float) onto the image with circle() to make sure we've done it right.
And we have it; four points, at the corners of the box.
Here's my full code in python, including the code to generate the figures above:
import cv2
import numpy as np

def find_intersection(line1, line2):
    # extract points
    x1, y1, x2, y2 = line1[0]
    x3, y3, x4, y4 = line2[0]
    # compute determinant
    Px = ((x1*y2 - y1*x2)*(x3-x4) - (x1-x2)*(x3*y4 - y3*x4)) / \
         ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    Py = ((x1*y2 - y1*x2)*(y3-y4) - (y1-y2)*(x3*y4 - y3*x4)) / \
         ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    return Px, Py

def segment_lines(lines, delta):
    h_lines = []
    v_lines = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if abs(x2 - x1) < delta:    # x-values are near; line is vertical
                v_lines.append(line)
            elif abs(y2 - y1) < delta:  # y-values are near; line is horizontal
                h_lines.append(line)
    return h_lines, v_lines

def cluster_points(points, nclusters):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(points, nclusters, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return centers

img = cv2.imread('image.png')

# preprocessing
img = cv2.resize(img, None, fx=.5, fy=.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
dilated = cv2.dilate(edges, np.ones((3, 3), dtype=np.uint8))

cv2.imshow("Dilated", dilated)
cv2.waitKey(0)
cv2.imwrite('dilated.png', dilated)

# run the Hough transform
lines = cv2.HoughLinesP(dilated, rho=1, theta=np.pi/180, threshold=100, maxLineGap=20, minLineLength=50)

# segment the lines
delta = 10
h_lines, v_lines = segment_lines(lines, delta)

# draw the segmented lines
houghimg = img.copy()
for line in h_lines:
    for x1, y1, x2, y2 in line:
        color = [0, 0, 255]  # color horizontal lines red
        cv2.line(houghimg, (x1, y1), (x2, y2), color=color, thickness=1)
for line in v_lines:
    for x1, y1, x2, y2 in line:
        color = [255, 0, 0]  # color vertical lines blue
        cv2.line(houghimg, (x1, y1), (x2, y2), color=color, thickness=1)

cv2.imshow("Segmented Hough Lines", houghimg)
cv2.waitKey(0)
cv2.imwrite('hough.png', houghimg)

# find the line intersection points
Px = []
Py = []
for h_line in h_lines:
    for v_line in v_lines:
        px, py = find_intersection(h_line, v_line)
        Px.append(px)
        Py.append(py)

# draw the intersection points
intersectsimg = img.copy()
for cx, cy in zip(Px, Py):
    cx = np.round(cx).astype(int)
    cy = np.round(cy).astype(int)
    color = np.random.randint(0, 255, 3).tolist()  # random colors
    cv2.circle(intersectsimg, (cx, cy), radius=2, color=color, thickness=-1)  # -1: filled circle

cv2.imshow("Intersections", intersectsimg)
cv2.waitKey(0)
cv2.imwrite('intersections.png', intersectsimg)

# use clustering to find the centers of the data clusters
P = np.float32(np.column_stack((Px, Py)))
nclusters = 4
centers = cluster_points(P, nclusters)
print(centers)

# draw the center of the clusters
for cx, cy in centers:
    cx = np.round(cx).astype(int)
    cy = np.round(cy).astype(int)
    cv2.circle(img, (cx, cy), radius=4, color=[0, 0, 255], thickness=-1)  # -1: filled circle

cv2.imshow("Center of intersection clusters", img)
cv2.waitKey(0)
cv2.imwrite('corners.png', img)
Finally, just one question... why not use the Harris corner detector, implemented in OpenCV as cornerHarris()? It works really well with very minimal code. I thresholded the grayscale image, then gave it a little blur to remove spurious corners, and, well...
This was produced with the following code:
import cv2
import numpy as np
img = cv2.imread('image.png')
# preprocessing
img = cv2.resize(img, None, fx=.5, fy=.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
r, gray = cv2.threshold(gray, 120, 255, type=cv2.THRESH_BINARY)
gray = cv2.GaussianBlur(gray, (3,3), 3)
# run Harris
gray = np.float32(gray)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)

# dilate the corner points for marking
dst = cv2.dilate(dst, None)
dst = cv2.dilate(dst, None)

# threshold
img[dst > 0.01 * dst.max()] = [0, 0, 255]

cv2.imshow('dst', img)
cv2.waitKey(0)
cv2.imwrite('harris.png', img)
I think with some minor adjustments the Harris corner detector can probably be much more accurate than extrapolating Hough line intersections.
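If you do need more accuracy, one possible minor adjustment (a sketch only, not something I've tuned on this image) is to refine the strongest Harris responses to sub-pixel precision with cornerSubPix():

# a sketch: refine the strongest Harris responses to sub-pixel accuracy
# (assumes `gray` is the float32 grayscale image from the snippet above;
# in practice you'd first reduce each response blob to a single point,
# e.g. via the centroids from cv2.connectedComponentsWithStats)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
ys, xs = np.nonzero(dst > 0.01 * dst.max())
corners = np.float32(np.column_stack((xs, ys)))
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
refined = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)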
Related
I am very new to Python OpenCV. I want to find 7 fixed-colored, randomly curved lines in an image. The result should be a boolean that tells me whether the image contains the 7 fixed-colored, randomly curved lines or not. A sample input image is below:
I also want to find the non-continuous, faint green trapezium in the image.
I have written the code below to filter specific colors out of the image, but I am unable to detect the lines and unable to conclude whether the image contains the 7 different lines and the trapezium. Here is my sample code:
import cv2
import numpy as np

boundaries = [
    (32, 230, 32),    # 2 green lines
    (10, 230, 230),   # 1 yellow line
    (230, 72, 32),    # 1 blue line
    (255, 255, 255),  # 2 white lines
    (32, 72, 230)     # 1 red line
]
box = [(0, 100, 0), (100, 255, 100)]

image = cv2.imread('testImage5.png')
image = removeBlackBands(image)  # helper defined elsewhere in my code
# cv2.imshow('Cropped Image', image)
# cv2.waitKey(0)

for row in boundaries:
    # create NumPy arrays from the boundaries
    row = np.array(row, dtype="uint8")
    mask = cv2.inRange(image, row, row)
    mask = cv2.GaussianBlur(mask, (5, 5), 0)
    cv2.imshow('Filtered', mask)
    cv2.waitKey(0)
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=50, minLineLength=50, maxLineGap=100)
    if lines is not None:
        for x in range(0, len(lines)):
            print("line ", x)
            for x1, y1, x2, y2 in lines[x]:
                print("x1 = {}, y1 = {}, x2 = {}, y2 = {}".format(x1, y1, x2, y2))
                cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 2, cv2.LINE_AA)
                pts = np.array([[x1, y1], [x2, y2]], np.int32)
                cv2.polylines(image, [pts], True, (0, 255, 0))
    cv2.imshow('Processed.jpg', image)
    cv2.waitKey(0)

# create NumPy arrays from the boundaries
lower = np.array(box[0], dtype="uint8")
upper = np.array(box[1], dtype="uint8")

# find the colors within the specified boundaries and apply the mask
mask = cv2.inRange(image, lower, upper)
output = cv2.bitwise_and(image, image, mask=mask)
output = cv2.cvtColor(output, cv2.COLOR_BGR2GRAY)
output = cv2.Canny(output, 100, 150)

# show the images
# cv2.imshow("output", output)
# cv2.waitKey(0)
cv2.destroyAllWindows()
Can somebody help me? Thanks in advance!
This is the function I wrote for this.
def detectNodes(self, image, tolerance=0):
    """
    Detect the nodes in the image

    Algorithm used:
        1. Pre-process the image to filter out the required color
        2. Convert the pre-processed image to a binary image

    Args:
        image (np.ndarray): Numpy ndarray of the image
        tolerance (int): Margin of consideration while extracting color from the image. Default: 0

    Returns:
        True upon success, False otherwise
    """
    noOfNodesDetected = 0
    curveWidth = 2
    noOfNodeDetectThreshold = 5
    cropH = self.testData["nodalROI"].get("height")
    cropW = self.testData["nodalROI"].get("width")
    roiImage = ppu.crop(image, cropH, cropW)  # crop node ROI
    for color in self.nodalColorBoundaries.keys():
        filtered = ImageProc.colorFilter(roiImage, colors=self.nodalColorBoundaries[color], tolerance=tolerance)
        bgrImage = ppu.convertColorSpace(filtered, "bgr_to_gray")
        thresh = ppu.threshold(bgrImage, 1, "thresh_binary")
        logging.info("The shape of the image is [{}]".format(thresh.shape))
        height, width = thresh.shape
        pointFraction = self.testData.get("pointsToFormEquationOfCurve", None)
        points = [int(fraction * height) for fraction in pointFraction]
        logging.info("Points used for formulating the equation are [{}]".format(points))
        pointFractionEvaluation = self.testData.get("pointsForEvaluationOfCurve", None)
        pointsForEvaluation_h = [int(fraction * height) for fraction in pointFractionEvaluation]
        logging.info("Points used for evaluating the equation are [{}]".format(pointsForEvaluation_h))
        curve1 = []
        curve2 = []
        for point in points:
            prevX = 0
            flag = 0
            for w in range(0, width):
                if thresh[point][w] == 255:
                    if abs(prevX - w) > curveWidth:
                        if flag == 0:
                            curve1.append((point, w))
                            flag = 1
                        else:
                            curve2.append((point, w))
                    prevX = w
        fitter = CurveFitter1D()
        if curve2:
            logging.error("Second curve detected with color {} having points {}".format(color, curve2))
        if curve1:
            x1 = [point[0] for point in curve1]
            y1 = [point[1] for point in curve1]
            logging.qvsdebug("Points used to find the polynomial with color {} are {} ".format(color, curve1))
            fitter._fit_polyfit(x1, y1, 4)
            logging.qvsdebug("Coefficients of the polynomial with color {} are {} ".format(
                color, fitter._fitterNamespace.coefs))
        else:
            logging.error("Points not found with {}".format(color))
            return False
        pointsForEvaluation_w = [int(round(fitter._predY_polyfit(point))) for point in pointsForEvaluation_h]
        logging.qvsdebug(
            "Points used for the verification of the polynomial representing the curve with color {} are {} ".format(
                color, zip(pointsForEvaluation_h, pointsForEvaluation_w)))
        counter = 0
        for i in range(len(pointsForEvaluation_h)):
            if pointsForEvaluation_w[i] + 2 >= width:
                continue  # skip if the point is within 2 pixels of the image width
            if any(thresh[pointsForEvaluation_h[i]][pointsForEvaluation_w[i] - 2:pointsForEvaluation_w[i] + 3]):
                counter += 1
        logging.info(
            "Out of {} points, {} points are detected on the curve for color {}".format(
                len(pointsForEvaluation_h), counter, color))
        nodeDetectThreshold = int(len(pointsForEvaluation_h) * 0.6)
        if counter >= nodeDetectThreshold:
            noOfNodesDetected += 1
    if noOfNodesDetected >= noOfNodeDetectThreshold:
        logging.info("Nodes found in this frame are [%d], minimum expected [%d]" % (
            noOfNodesDetected, noOfNodeDetectThreshold))
        return True
    else:
        logging.error("Nodes found in this frame are [%d], minimum expected is [%d]" % (
            noOfNodesDetected, noOfNodeDetectThreshold))
        return False
Context:
Page 8 of this lecture says that the OpenCV HoughLines function returns an N x 2 array of the line parameters rho and theta, which is stored in an array called lines.
Then, in order to actually create the lines from these angles, we have some formulae, and later we use the line function. The formulae are explained in the code below.
Code:

//Assuming we start our program with the input image shown below.

//This vector will be used for storing rho and theta as an N x 2 array
vector<Vec2f> lines;

//The input bw_roi is a Canny image with detected edges
HoughLines(bw_roi, lines, 1, CV_PI/180, 70, 0, 0);

//The formulae below do the line estimation based on rho and theta
for (size_t i = 0; i < lines.size(); i++)
{
    float rho = lines[i][0], theta = lines[i][1];
    Point2d pt1, pt2;
    double a = cos(theta), b = sin(theta);
    double x0 = a*rho, y0 = b*rho;
    //When we use 1000 below we get the Observation 1 output.
    //But if we use 200, we get the Observation 2 output.
    pt1.x = cvRound(x0 + 1000*(-b));
    pt1.y = cvRound(y0 + 1000*(a));
    pt2.x = cvRound(x0 - 1000*(-b));
    pt2.y = cvRound(y0 - 1000*(a));
    //This line function is independent of HoughLines
    //and is used for drawing any type of line in OpenCV
    line(frame, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
}
Input Image:
Observation 1:
Observation 2:
Problem:
In the code shown above, if we play around with the number we multiply a, -a, b and -b by, we get lines of different lengths. Observation 2 was obtained when I multiplied by 200 instead of 1000 (which led to Observation 1).
For more information, please refer to the comments above the pt1 and pt2 assignments in the code shown above.
Question:
When we draw lines from the HoughLines output, how can we control where our line begins and ends?
For instance, I want the right lane (the red line pointing towards the bottom right from the top left corner) in Observation 2 to begin at the bottom right of the screen and point towards the top left (like a mirror image of the left lane).
Given
a = cos(theta)
b = sin(theta)
x0 = a * rho
y0 = b * rho
you can write the formula for all points lying on the line defined by (rho, theta) as
x = x0 - c * b
y = y0 + c * a
where c is the distance from the reference point (the intersection with the perpendicular line through the origin).
In your case, you've evaluated it with c = 1000 and c = -1000 to get two points to draw a line between.
You can rewrite those as
c = (x0 - x) / b
c = (y - y0) / a
And then use substitution to calculate horizontal and vertical intercepts:
x = x0 - ((y - y0) / a) * b
or
y = y0 + ((x0 - x) / b) * a
NB: Take care to correctly handle the cases when a or b is 0.
Let's say you have an 800x600 image (to keep numbers simple). We can define the bottom edge of the image as the line y = 599. Calculate the value of x where your line intercepts it using the above formula.
If the intercept point is in the image (0 <= x < 800), there's your starting point.
If it's to the left (x < 0), find the intercept with line x = 0 to use as starting point.
If it's to the right (x >= 800), find the intercept with line x = 799 to use as starting point.
Then use similar technique to find the second point to be able to draw a line.
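To make this concrete, here's a minimal Python sketch of the clipping logic (the question's code is C++, but this translates directly; line_endpoints is a hypothetical helper name, and the epsilon checks guard the a = 0 and b = 0 cases from the NB above):

import math

def line_endpoints(rho, theta, width, height):
    # clip the infinite line (rho, theta) to the image rectangle
    a, b = math.cos(theta), math.sin(theta)
    x0, y0 = a * rho, b * rho
    pts = []
    if abs(b) > 1e-9:  # not vertical: intersect the borders x = 0 and x = width - 1
        for X in (0, width - 1):
            y = y0 + ((x0 - X) / b) * a
            if 0 <= y <= height - 1:
                pts.append((X, int(round(y))))
    if abs(a) > 1e-9:  # not horizontal: intersect the borders y = 0 and y = height - 1
        for Y in (0, height - 1):
            x = x0 - ((Y - y0) / a) * b
            if 0 <= x <= width - 1:
                pts.append((int(round(x)), Y))
    return list(dict.fromkeys(pts))[:2]  # drop duplicate corner hits, keep two points

Passing the two returned points to line() then draws the segment exactly from border to border; a line that misses the image entirely yields fewer than two points.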
I have a Mat object derived using the Canny edge detector, and I extracted contours from that image using the findContours function. Now, for each of those contours, I'd like to somehow check the colour on both sides.
For the "colour" bit I've discretized the HSI color space; however, I'm very confused about how I could "pick the colours" on both sides of a given contour.
Is there a way to easily do this?
You can use the image that you applied the Canny edge detector to for this. Take the gradient of that image. The gradient is a vector. As shown in the wiki page image below, the gradient points in the direction of the greatest rate of increase; if you take the negative gradient, it points in the direction of the greatest rate of decrease. Therefore, if you sample the image gradient at contour points, the positive and negative gradients at those points point into the regions on either side of the contour. So, you can sample points along these directions to get an idea of the colors you want.
Image gradient:
The sample Python code below shows how this is done for the simple image shown below. It uses Sobel to calculate the gradient.
Input image:
Canny edges and sampled points:
Green: point on contour
Red: point in the positive gradient direction
Blue: point in the negative gradient direction
import cv2
import numpy as np
from matplotlib import pyplot as plt

im = cv2.imread('grad.png', 0)
dx = cv2.Sobel(im, cv2.CV_32F, 1, 0)
dy = cv2.Sobel(im, cv2.CV_32F, 0, 1)
edge = cv2.Canny(im, 64, 192)

# normalize the gradient to unit length; compute the magnitude once
# so dy is not divided by an already-normalized dx
mag = np.sqrt(dx*dx + dy*dy + 0.01)
dx = dx / mag
dy = dy / mag

r = 20  # how far from the contour point to sample
y, x = np.nonzero(edge)
# take one contour point as an example and step r pixels along
# the positive and negative gradient directions
pos1 = (np.int32(x[128] + r*dx[y[128], x[128]]), np.int32(y[128] + r*dy[y[128], x[128]]))
pos2 = (np.int32(x[128] - r*dx[y[128], x[128]]), np.int32(y[128] - r*dy[y[128], x[128]]))

im2 = cv2.cvtColor(edge, cv2.COLOR_GRAY2BGR)
cv2.circle(im2, pos1, 10, (255, 0, 0), 1)
cv2.circle(im2, pos2, 10, (0, 0, 255), 1)
cv2.circle(im2, (x[128], y[128]), 10, (0, 255, 0), 1)
plt.imshow(im2)
plt.show()
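To actually read the colours on either side, you'd then sample the original colour image at those two points; a quick sketch (im_bgr is a hypothetical name for the colour version of the input):

# sample the colour image at the two offset points
# (note: NumPy indexing is [row, col], i.e. [y, x])
im_bgr = cv2.imread('grad.png')
side_pos = im_bgr[pos1[1], pos1[0]]  # colour on the positive-gradient side
side_neg = im_bgr[pos2[1], pos2[0]]  # colour on the negative-gradient side
print(side_pos, side_neg)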
I'm using OpenCV 3.2.
I'd like to extract and draw all lines in this image.
For this, I first obtain the contours of the image. For example, I'm using the Canny algorithm with a double threshold of 100 (low) and 200 (high).
Mat image = cv::imread(<image_path>, cv::IMREAD_GRAYSCALE);
cv::Mat contours;
cv::Canny(image, contours, 100, 200);
Then, I call the HoughLines function with a resolution of 1 pixel and π/45 radians, keeping only those lines that receive at least 60 votes in the accumulator (i.e., are supported by at least 60 contour pixels).
std::vector<cv::Vec2f> lines;
cv::HoughLines(contours, lines, 1, CV_PI/45, 60);
This returns a vector lines with the rho (p) and theta (θ) parameters, in Hough space, of the detected lines. As we know, the line going through a contour pixel (x_i, y_i) satisfies:
p = x_i cos(θ) + y_i sin(θ)
We know p and θ, so we know all the pixels in this line. Two easy points to calculate are A with x_i = 0 and B with y_i = 0.
A = (0, p / sin(θ))
B = (p / cos(θ), 0)
Let's draw them with the line function in blue color.
cv::cvtColor(image, image, CV_GRAY2BGR);
for (unsigned int i = 0; i < lines.size(); ++i) {
    float p = lines[i][0];
    float theta = lines[i][1];
    cv::Point a(0, static_cast<int>(p / std::sin(theta)));
    cv::Point b(static_cast<int>(p / std::cos(theta)), 0);
    cv::line(image, a, b, cv::Scalar(255, 0, 0));
}
The result is that only 6 lines are drawn, out of the 14 obtained. As you can see, only those lines that intersect row 0 and column 0 of the image are drawn; that is, those lines whose A and B points lie within the image boundary. The rest of the lines have these points outside the image.
How can I draw all the lines in an easy way? I could calculate all the pixels of the obtained lines and draw them (we know them all), but I'd like to minimize the lines of code and use the OpenCV API.
I implemented the Hough lines transform in OpenCV (C++) and I get strange artifacts in the Hough space. The following picture shows the Hough space: the distance rho is along the rows, while the 180 columns represent the angle from 0 to 179 degrees. If you zoom in on columns 45 and 135, you see a vertical line of alternating dark and bright pixels.
http://imgur.com/NDtMn6S
For higher thresholds, the lines of the fence are detected fine, but when I lower the threshold the artifacts show up as lines rotated 45° or 135° in the final picture:
Detected lines for medium threshold
At first I thought it was a mistake in my implementation of the Hough lines method, but I get similar lines for medium thresholds using OpenCV's HoughLines method. I also encounter the same problem when using Canny instead of Sobel.
So the question is: why do I get these artifacts and how can I get rid of them? I wasn't able to find anything about this and any help would be appreciated.
This is the code I used with the OpenCV Hough Lines method:
// read in input image and convert to grayscale
Mat frame = imread("fence.png", CV_LOAD_IMAGE_COLOR);
Mat final_out;
frame.copyTo(final_out);
Mat img, gx, gy, mag, angle;
cvtColor(frame, img, CV_BGR2GRAY);
// get the thresholded maggnitude image
Sobel(img, gx, CV_64F, 1, 0);
Sobel(img, gy, CV_64F, 0, 1);
cartToPolar(gx, gy, mag, angle);
normalize(mag, mag, 0, 255, NORM_MINMAX);
mag.convertTo(mag, CV_8U);
threshold(mag, mag, 55, 255.0, THRESH_BINARY);
// apply the hough lines transform and draw the lines
vector<Vec2f> lines;
HoughLines(mag, lines, 1, CV_PI / 180, 240);
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
pt1.x = 0;
pt1.y = (rho - pt1.x * cos(theta))/sin(theta);
pt2.x = mag.cols;
pt2.y = (rho - pt2.x * cos(theta))/sin(theta);
line(final_out, pt1, pt2, Scalar(0,0,255), 1, CV_AA);
}
// show the image
imshow("final_image", final_out);
cvWaitKey();
Answering the question: you can't get rid of such artifacts; they are mathematical in nature, due to the discrete nature of the image and the orthogonality of the pixel grid. The only way is to exclude exactly 45 degrees from the analysis.
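For instance, a rough sketch of that exclusion in Python (assuming lines came from cv2.HoughLines, so each entry is [[rho, theta]]; the 0.5 degree tolerance is an arbitrary choice):

import numpy as np

# drop accumulator peaks whose angle is (almost) exactly 45 or 135 degrees
kept = [l for l in lines
        if min(abs(np.degrees(l[0][1]) - 45),
               abs(np.degrees(l[0][1]) - 135)) > 0.5]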
I found the source: the bright pixels of the anomaly are produced by the following issue.
Red dots: the exactly-45-degree bright anomaly. You can see the pixels are doubled, making a stairs pattern, which doubles the number of pixels involved in accumulation.
Blue dots: the exactly-45-degree dim anomaly, making a chessboard pattern.
Green dots: a 44-degree line. You can see it alternates between the doubling and chessboard patterns, which moderates the anomaly.
If you look at the whole picture of the Hough transform matrix, you will see the brightness slowly shifting across the whole picture, reflecting how this alternation ratio slowly changes with angle. However, due to the nature of the pixel grid, at exactly 45 degrees the anomaly becomes very acute. I don't know how to deal with it yet...
Stumbled across this; maybe it will be useful to future people.
The image is inverted: the algorithm accumulates the white pixels, and there are obviously more of those along the diagonals of the image. The lines you are looking for are black, which means they are zero-valued and never accumulated.
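So the fix is simply to invert the binary image before running the transform. A minimal sketch in Python (assuming mag is the thresholded magnitude image from the code above):

import cv2
import numpy as np

# invert so the black fence lines become the non-zero pixels
# that HoughLines actually accumulates
inverted = cv2.bitwise_not(mag)
lines = cv2.HoughLines(inverted, 1, np.pi / 180, 240)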