Are there any metrics that can measure the vignetting effect in images? - c++

I would like to detect whether a picture suffers from vignetting or not, but I cannot find a way to measure it. I searched with keywords like "vignetting metrics", "vignetting detection" and "vignetting classification", but they all lead me to topics like "create vignetting filters" or "vignetting correction". Is there any metric that could do that? For example a score from 0 to 1, where the lower the score, the less likely the image suffers from the vignetting effect. One naive solution I came up with is to measure the luminance channel of the image.
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
using namespace cv;
using namespace std;
int main()
{
    auto img = imread("my_pic.jpg");
    cvtColor(img, img, cv::COLOR_BGR2LAB);
    vector<Mat> lab_img;
    split(img, lab_img);
    // mean value of the L (lightness) channel
    auto const sum_val = sum(lab_img[0])[0] / lab_img[0].total();
    // use sum_val as threshold
}
Another solution is to train a classifier (e.g. a CNN); I could use a vignetting filter to generate images with and without the vignetting effect, as sketched below. Please give me some suggestions, thanks.
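For the classifier route, here is a minimal sketch of how the synthetic training pairs could be generated, assuming a simple quadratic radial falloff as the vignetting model (the strength value and the helper name add_vignette are placeholders, not anything from the question):
import cv2
import numpy as np

def add_vignette(img, strength=0.6):
    # darken the image toward the corners with a radial falloff mask
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r = np.hypot(x - cx, y - cy) / np.hypot(cx, cy)   # 0 at center, ~1 at corners
    mask = 1.0 - strength * r ** 2                    # quadratic falloff
    return np.clip(img.astype(np.float32) * mask[..., None], 0, 255).astype(np.uint8)

clean = cv2.imread("my_pic.jpg")        # label 0: no vignetting
vignetted = add_vignette(clean)         # label 1: vignetting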

Use a polar warp and some simple statistics on the picture. You'll get a plot of the radial intensities. You'll see the characteristic attenuation of a vignette, but also picture content. This 1D signal is easier to analyze than the entire picture.
This is not guaranteed to always work. I'm not saying it should. It's an approach.
Variations are conceivable that use medians, averages, ... but then you'd have to introduce a mask too, so you know which pixels come from the image and which ones are just out-of-bounds black (to be ignored). You can extend the source image to 4 channels, with the fourth channel being all-255. The warp will treat it like any other color channel, so you'll get a "valid" mask out of it that you can use.
I am confronting you with Python because it's about the idea and the APIs, and I categorically refuse to do prototyping/research in C++.
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

im = cv.imread("my_pic.jpg")  # the picture to analyze (path is a placeholder)
(h, w) = im.shape[:2]
im = np.dstack([im, np.full((h,w), 255, dtype=np.uint8)]) # 4th channel will be "valid mask"
rmax = np.hypot(h, w) / 2
(cx, cy) = (w-1) / 2, (h-1) / 2
# dsize
dh = 360 * 2
dw = 1000
# need to explicitly initialize that because the warp does NOT initialize out-of-bounds pixels
warped = np.zeros((dh, dw, 4), dtype=np.uint8)
cv.warpPolar(dst=warped, src=im, dsize=(dw, dh), center=(cx,cy), maxRadius=int(rmax), flags=cv.INTER_LANCZOS4)
values = warped[..., 0:3]
mask = warped[..., 3]
values = cv.cvtColor(values, cv.COLOR_BGR2GRAY)
mvalues = np.ma.masked_array(values, mask=(mask == 0))
# numpy only has min/max/median for masked arrays
# need this for quantile/percentile
# this selects the valid pixels for every column
cols = (col.compressed() for col in mvalues.T)
cols = [col for col in cols if len(col) > 0]
plt.figure(figsize=(16, 6), dpi=150)
plt.xlim(0, dw)
for p in [0, 10, 25, 50, 75, 90, 100]:
    plt.plot([np.percentile(col, p) for col in cols if len(col) > 0], 'k', linewidth=0.5, label=f'{p}%')
plt.plot(mvalues.mean(axis=0), 'red', linewidth=2, label='mean')
plt.legend()
plt.show()
Plot for first picture:
Plot for second picture:
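To collapse that radial profile into the single 0-to-1 score the question asks for, here is one hedged possibility, reusing the cols list from the code above: compare how much darker the outer radii are than the inner ones. Picture content still leaks into this number, so treat it as a rough indicator rather than ground truth.
medians = np.array([np.median(col) for col in cols])
k = len(medians) // 3
inner = medians[:k].mean()          # brightness near the center
outer = medians[-k:].mean()         # brightness near the border
score = float(np.clip((inner - outer) / max(inner, 1e-6), 0.0, 1.0))
print(f"vignetting score: {score:.3f}")   # ~0: flat profile, ~1: strong falloff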

Related

How to calculate the distance between two circles in an image with OpenCV

image with two circles
I have an image that includes two fibers (appearing as two circles in the image). How can I calculate the distance between the two fibers?
I find it hard to detect the positions of the fibers. I have tried the HoughCircles function, but the parameters are hard to optimize and it cannot locate the circles precisely most of the time. Should I subtract the background first, or are there any other methods? Many thanks!
Unfortunately, you haven't shown your preprocessing steps. In my approach, I'll do the following:
Convert input image to grayscale (see cvtColor).
Median blurring, maintains the "edges" (see medianBlur).
Adaptive thresholding (see adaptiveThreshold).
Morphological opening to get rid of small noise (see morphologyEx).
Find circles by HoughCircles.
Not done here: Possible refinements of the found circles. Exclude too small or too large circles. Use all prior information you have on that! For example, how large can the circles be at all?
Here's my whole code:
// Read image.
cv::Mat img = cv::imread("images/i7aJJ.jpg", cv::IMREAD_COLOR);
// Convert to grayscale for processing.
cv::Mat blk;
cv::cvtColor(img, blk, cv::COLOR_BGR2GRAY);
// Median blurring to improve following thresholding.
cv::medianBlur(blk, blk, 11);
// Adaptive thresholding.
cv::adaptiveThreshold(blk, blk, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 51, -2);
// Morphological opening to get rid of small noise.
cv::morphologyEx(blk, blk, cv::MORPH_OPEN, cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)));
// Find circles using Hough transform.
std::vector<cv::Vec4f> circles;
cv::HoughCircles(blk, circles, cv::HOUGH_GRADIENT, 1.0, 300, 50, 25, 100);
// TODO: Refinement of found circles, if there are more than two.
// For example, calculate areas: Neglect too small or too large areas.
// Compare all areas, and keep the two with nearly matching areas and
// suitable areas.
// Draw circles in input image.
for (cv::Vec4f& circle : circles) {
cv::circle(img, cv::Point(circle[0], circle[1]), circle[2], cv::Scalar(0, 0, 255), 4);
cv::circle(img, cv::Point(circle[0], circle[1]), 5, cv::Scalar(0, 255, 0), cv::FILLED);
}
// --- Assuming there are only the two right circles left from here. --- //
// Draw some debug output in input image.
const cv::Point c1 = cv::Point(circles[0][0], circles[0][1]);
const cv::Point c2 = cv::Point(circles[1][0], circles[1][1]);
cv::line(img, c1, c2, cv::Scalar(255, 0, 0), 2);
// Calculate distance, and put in input image.
double dist = cv::norm(c1 - c2);
cv::putText(img, std::to_string(dist), cv::Point((c1.x + c2.x) / 2 + 20, (c1.y + c2.y) / 2 + 20), cv::FONT_HERSHEY_COMPLEX, 1.0, cv::Scalar(255, 0, 0));
The final output looks like this:
The intermediate image right before the HoughCircles operation looks like this:
In general, I'm not that skeptical about HoughCircles. You "just" have to pay attention to your preprocessing.
Hope that helps!
It's possible using Hough circle detection, but you should provide more images if you want a more stable detection. I just do denoising and go straight to circle detection. Non-local means denoising is pretty good at preserving edges, which in turn helps the Canny edge detection used inside the Hough circle algorithm.
My code is written in Python but can easily be translated into C++.
import cv2
from matplotlib import pyplot as plt
IM_PATH = 'your image path'
DS = 2 # downsample the image
orig = cv2.imread(IM_PATH, cv2.IMREAD_GRAYSCALE)
orig = cv2.resize(orig, (orig.shape[1] // DS, orig.shape[0] // DS))
img = cv2.fastNlMeansDenoising(orig, h=3, templateWindowSize=20 // DS + 1, searchWindowSize=40 // DS + 1)
plt.imshow(orig, cmap='gray')
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=200 // DS, param1=40 // DS, param2=40 // DS, minRadius=210 // DS, maxRadius=270 // DS)
if circles is not None:
    for x, y, r in circles[0]:
        c = plt.Circle((x, y), r, fill=False, lw=1, ec='C1')
        plt.gca().add_patch(c)
plt.gcf().set_size_inches((12, 8))
plt.show()
Important
Doing a bit of image processing is only the first step in a good (and stable!) object detection. You have to leverage every detail and property that you can get your hands on and apply some statistics to improve your results. For example:
Use Yves' approach as an addition and filter all detected circles that do not intersect the joints.
Is one circle always underneath the other? Filter out horizontally aligned pairs.
Can you reduce the ROI (are the circles always in a specific area in your image or can they be everywhere)?
Are both circles always the same size? Filter out pairs with different sizes.
...
If you can use multiple metrics, you can apply a statistical model (e.g. majority voting or kNN) to find the best pair of circles (see the sketch below).
Again: always think of what you know about your object, the environment and its behavior and take advantage of that knowledge.
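A rough Python sketch of that pair filtering and voting, assuming circles[0] is the N×3 array of (x, y, r) values that cv2.HoughCircles returns; all thresholds are made-up example values you would tune:
import itertools
import numpy as np

def best_pair(circles, max_r_diff=10, max_x_offset=30):
    # score every candidate pair with simple checks and keep the best-voted one
    best, best_votes = None, -1
    for (xa, ya, ra), (xb, yb, rb) in itertools.combinations(circles, 2):
        votes = 0
        votes += abs(ra - rb) < max_r_diff        # roughly the same size?
        votes += abs(xa - xb) < max_x_offset      # roughly vertically aligned?
        votes += abs(ya - yb) > (ra + rb) / 2     # one clearly above the other?
        if votes > best_votes:
            best, best_votes = ((xa, ya, ra), (xb, yb, rb)), votes
    return best

# pair = best_pair(circles[0])   # circles as returned by cv2.HoughCircles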

Fingerprint enhancement

A fingerprint sensor (Persona) is being used to get a fingerprint image. I am trying to enhance this image. I am using OpenCV for this purpose. Here is my original image:
I have applied Otsu thresholding to it and got this image:
Now I have applied Gabor filters from OpenCV at orientations of 0, 45, 90 and 135 degrees. I got this result:
Here is my Python OpenCV code for applying the Gabor filters:
import numpy as np
import cv2
from matplotlib import pyplot as plt
# cv2.getGaborKernel(ksize, sigma, theta, lambda, gamma, psi, ktype)
# ksize  - size of gabor filter (n, n)
# sigma  - standard deviation of the gaussian function
# theta  - orientation of the normal to the parallel stripes
# lambda - wavelength of the sinusoidal factor
# gamma  - spatial aspect ratio
# psi    - phase offset
# ktype  - type and range of values that each pixel in the gabor kernel can hold
g_kernel = cv2.getGaborKernel((25, 25), 6.0, np.pi/4, 8.0, 0.5, 0, ktype=cv2.CV_32F)
g_kernel1 = cv2.getGaborKernel((30, 30), 6.0, (3*np.pi)/4, 8.0, 0.5, 0, ktype=cv2.CV_32F)
g_kernel2 = cv2.getGaborKernel((30, 30),4 , 0, 8, 0.5, 0, ktype=cv2.CV_32F)
g_kernel3 = cv2.getGaborKernel((30, 30),4 , np.pi, 8, 0.5, 0, ktype=cv2.CV_32F)
print(np.pi / 4)
img = cv2.imread('C:/Users/admin123/Desktop/p.png')
img1 = cv2.imread('C:/Users/admin123/Desktop/p.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
# Otsu thresholding
ret2,img1 = cv2.threshold(img1,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imshow('otsu', img1)
filtered_img = cv2.filter2D(img, cv2.CV_8UC3, g_kernel)
filtered_img1 = cv2.filter2D(img, cv2.CV_8UC3, g_kernel1)
filtered_img2 = cv2.filter2D(img, cv2.CV_8UC3, g_kernel2)
filtered_img3 = cv2.filter2D(img, cv2.CV_8UC3, g_kernel3)
cv2.imshow('0', filtered_img)
cv2.imshow('1', filtered_img1)
cv2.imshow('2', filtered_img2)
cv2.imshow('image', img)
cv2.addWeighted(filtered_img2,0.4,filtered_img1,0.8,0,img) #0 degree and 90
cv2.addWeighted(img,0.4,filtered_img,0.6,0,img) #0 degree and 90
cv2.addWeighted(img,0.4,filtered_img3,0.6,0,img)
cv2.addWeighted(img,0.4,img1,0.6,0.3,img)
cv2.imshow('per',img)
# threshold will convert it to a plain black-and-white image
ret,thresh1 = cv2.threshold(img,150,255,cv2.THRESH_BINARY)#127 instead of 200
cv2.imshow('per1',thresh1)
h, w = g_kernel.shape[:2]
g_kernel = cv2.resize(g_kernel, (3*w, 3*h), interpolation=cv2.INTER_CUBIC)
g_kernel1 = cv2.resize(g_kernel1, (3*w, 3*h), interpolation=cv2.INTER_CUBIC)
cv2.imshow('gabor kernel (resized)', g_kernel)
cv2.imshow('gabor kernel1 (resized)', g_kernel1)
cv2.waitKey(0)
cv2.destroyAllWindows()
I want robust fingerprint recognition. For this, I want an image of this quality so that I can extract accurate minutiae points:
How can I get such a result from enhancement? What changes are required in the code to get the enhanced result?
Well, I don't have a Python/OpenCV answer, but I can point you to a resource where you can fiddle around with MATLAB code. You can find the code here: Simple Fingerprint Matcher
The repository holds all the code for enhancement/minutiae extraction/matching. It is not very robust at matching, but the enhancements are pretty good.
I ran the code on the sample you uploaded, which came out as follows.
Note that the code uses two different approaches combined for fingerprint enhancement: one is based on Gabor filters and the other is called STFT (Short Time Fourier Transform). It is likely that you will only need the Gabor filter part; it really depends on the image quality.
If you need the Gabor filter code in MATLAB, you can find it here: http://www.peterkovesi.com/matlabfns/#fingerprints
I did modify the code to display the images and process only a single finger.
The following is the main file that calls, step by step, the extraction of the enhanced fingerprint. The MATLAB function doing that is f_enhance.m
function [ binim, mask, cimg1, cimg2, oimg1, oimg2 ] = f_enhance( img )
enhimg = fft_enhance_cubs(img,6); % Enhance with Blocks 6x6
enhimg = fft_enhance_cubs(enhimg,12); % Enhance with Blocks 12x12
[enhimg,cimg2] = fft_enhance_cubs(enhimg,24); % Enhance with Blocks 24x24
blksze = 5; thresh = 0.085;
normim = ridgesegment(enhimg, blksze, thresh);
oimg1 = ridgeorient(normim, 1, 3, 3);
[enhimg,cimg1] = fft_enhance_cubs(img, -1);
[normim, mask] = ridgesegment(enhimg, blksze, thresh);
oimg2 = ridgeorient(normim, 1, 3, 3);
[freq, medfreq] = ridgefreq(normim, mask, oimg2, 32, 5, 5, 15);
binim = ridgefilter(normim, oimg2, medfreq.*mask, 0.5, 0.5, 1);
figure,imshow(binim,[]); % Normalize to grayscale
binim = ridgefilter(normim, oimg2, medfreq.*mask, 0.5, 0.5, 1) > 0;
figure, imshow(binim);
end
Either you revert to Matlab or you can always translate the code :)
Good luck
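If you would rather stay in Python/OpenCV, one common variation of the Gabor approach from the question is to run a whole bank of oriented kernels and keep the strongest response per pixel, instead of blending a few fixed orientations with addWeighted. This is a hedged sketch, not the method used above: kernel size, sigma and wavelength are untuned guesses, and 'p.png' stands in for your image path.
import cv2
import numpy as np

img = cv2.imread('p.png', cv2.IMREAD_GRAYSCALE)

responses = []
for theta in np.arange(0, np.pi, np.pi / 8):          # 8 orientations, 0..157.5 degrees
    kernel = cv2.getGaborKernel((25, 25), 6.0, theta, 8.0, 0.5, 0, ktype=cv2.CV_32F)
    responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))

enhanced = np.max(responses, axis=0)                  # strongest orientation wins per pixel
enhanced = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imshow('gabor bank, max response', enhanced)
cv2.waitKey(0)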
I may be late here. But this might be useful for other people later on.
Take a look at this repository:
https://github.com/Utkarsh-Deshmukh/Fingerprint-Enhancement-Python
It performs Fingerprint Enhancement using oriented Gabor filters in python.
installation:
pip install fingerprint_enhancer
usage:
import fingerprint_enhancer # Load the library
img = cv2.imread('image_path', 0) # read input image
out = fingerprint_enhancer.enhance_Fingerprint(img) # enhance the fingerprint image
cv2.imshow('enhanced_image', out)  # display the result
cv2.waitKey(0)

Difference between two photos with a tolerance variable

I have two photos: an empty background, and the same scene with a man in it.
I am getting the differences between these photos, but the differences include changes of light, shaking of the camera, etc. I want to see only the man in the difference photo. I hard-coded a threshold value and it worked for this pair, but the same threshold does not work for other photos. I can't post failing examples because of my reputation on Stack Overflow, but you can run my code on other photos and see the problems. My code is given below. How else can I choose this threshold?
#include <Windows.h>
#include <opencv\highgui.h>
#include <iostream>
#include <opencv2\opencv.hpp>
using namespace cv;
using namespace std;
int main() {
Mat siyah;
Mat resim = imread("C:/Users/toshiba/Desktop/z.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat resim2 = imread("C:/Users/toshiba/Desktop/t.jpg", CV_LOAD_IMAGE_GRAYSCALE);
if (resim.empty() || resim2.empty())
{
cout << "Could not open file" << "\n";
return 0;
}
for (int i = 0; i < resim.rows; i++)
{
    for (int j = 0; j < resim.cols; j++)
    {
        if (resim.data[resim.channels() * (resim.cols * i + j)] -
            resim2.data[resim2.channels() * (resim2.cols * i + j)] > 30)
        {
            resim.data[resim.channels() * (resim.cols * i + j)] = 255;
        }
        else
        {
            resim.data[resim.channels() * (resim.cols * i + j)] = 0;
        }
        //inRange(resim, 150, 255, siyah);
    }
}
//inRange(resim, 150, 255, siyah);
namedWindow("Resim", CV_WINDOW_NORMAL);
imshow("Resim", resim);
waitKey();
system("PAUSE");
waitKey();
return 0;
}
If your background is always the same and the pictures that include an object occur rarely enough, then you can update your reference image very often, so that the changes in lighting between your reference image and the image being analyzed are always small. You could then measure/compute a threshold that will work most of the time when computing the image difference. I am not sure why your camera is moving - is it not fixed?
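A hedged sketch of that idea, assuming you actually have a stream of frames to work with; cv2.VideoCapture(0), the difference threshold of 30 and the 0.05 learning rate are placeholders. The reference image is kept as a running average and only updated where no foreground was detected:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                   # placeholder frame source
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    _, fg = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    # update the reference only where no foreground was found,
    # so the person is not slowly absorbed into the background
    cv2.accumulateWeighted(gray, background, 0.05, mask=cv2.bitwise_not(fg))
    cv2.imshow('foreground', fg)
    if cv2.waitKey(30) & 0xFF == 27:        # Esc to quit
        break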
I made the following code with Otsu thresholding and the GrabCut algorithm. It doesn't use any preset threshold values, but I am still not sure how well it will perform on other images (maybe if you provide several more pictures to test with). The code is in Python, but it mostly consists of calling OpenCV functions and filling matrices, so it should be easy to convert to C++ or whatever. The result for your image:
Using just Otsu alone on the difference gives the following mask:
The legs are fine but the rest is messed up. But there seem to be no false positives, so I took the mask as definite foreground, everything else as probable background and fed it to GrabCut.
import cv2
import numpy as np
#read the empty background image and the image with the guy in it,
#convert them into float32, so we don't get integers overflow
img_empty = cv2.imread("000_empty.png", 0).astype(np.float32)
img_guy = cv2.imread("001_guy.jpg", 0).astype(np.float32)
#absolute difference -> back to uint8 for thresholding etc.
diff = np.abs(img_empty - img_guy).astype(np.uint8)
#otsu thresholding
ret2, th = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
#read our image again for GrabCut
img = cv2.imread("001_guy.jpg")
#fill GrabCut mask
mask = np.zeros(th.shape, np.uint8)
mask[th == 255] = cv2.GC_FGD #the value is GC_FGD (foreground) when our thresholded value is 255
mask[th == 0] = cv2.GC_PR_BGD #GC_PR_BGD (probable background) otherwise
#some internal stuff for GrabCut...
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
#run GrabCut
cv2.grabCut(img, mask, (0, 0, 1365, 767), bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)
#convert the `mask` we got from GrabCut into a binary mask,
#then apply it to the original image
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
img = img*mask2[:,:,np.newaxis]
#save the results
cv2.imwrite("003_grabcut.jpg", img)

Cleaning a scanned image in opencv

I'm trying to denoise an image and then extract the skeleton of the handwritten line it contains. I want the line to be continuous and solid, but the method I use fails to do that and is relatively slow. Here's the original image:
Original Image
Morphed
On the morphed image, you can see a small island at the lower right.
The thinned image from above shows the line is broken near the end.
Is there any other method to achieve the desired result?
My code looks like this:
int morph_elem = 2;
int morph_size = 10;
int morph_operator = 0;
Mat origImage = imread(origImgPath, CV_LOAD_IMAGE_COLOR);
medianBlur(origImage, origImage, 3);
cvtColor(origImage, origImage, COLOR_RGB2GRAY);
threshold(origImage, origImage, 0, 255, THRESH_OTSU);
Mat element = getStructuringElement(morph_elem, Size(2 * morph_size + 1, 2 * morph_size + 1), cv::Point(morph_size, morph_size));
morphologyEx(origImage, origImage, MORPH_OPEN, element);
thin(origImage, true, true, true);
To reduce the line breaking, try using adaptiveThreshold; play around with the methods and sizes and see what works best.
To remove the little island, simply do findContours and then mask out the unwanted specks using drawContours with color=(255,255,255) and thickness=-1, after you've filtered them with something like wantedContours = [x for x in contours if contourArea(x) < 50].
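A hedged Python version of that speck removal: the file name is a placeholder, the area threshold of 50 is taken from the comment above, and the two-value findContours unpacking assumes OpenCV 4.
import cv2

binary = cv2.imread('morphed.png', cv2.IMREAD_GRAYSCALE)      # the thresholded/morphed image

# findContours looks for white blobs on black, so invert if the specks are dark on white
inverted = cv2.bitwise_not(binary)
contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# paint every tiny blob white, i.e. merge it into the background
specks = [c for c in contours if cv2.contourArea(c) < 50]
cv2.drawContours(binary, specks, -1, color=(255, 255, 255), thickness=-1)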
You want to perform a series of closing and opening operations on the image. It is best you understand how they work to make the appropriate choice.
See this post: Remove spurious small islands of noise in an image - Python OpenCV

OpenCV: how to detect lines of a specific colour?

I am working on a small OpenCV project to detect lines of a certain colour from a mobile phone camera.
In short would like to:
Transform the input image into an image of a certain colour (e.g. Red from a specific upper and lower range)
Apply Hough line transformation to the resulting image so that it detects only lines of that specific colour
Superimpose on the original image the lines detected
These are the functions I'd like to use, but I'm not quite sure how to fill in the missing bits.
This is the processImage function, called from a smartphone app when processing images from an instance of CvVideoCamera:
- (void)processImage:(Mat&)image;
{
cv::Mat orig_image = image.clone();
cv::Mat red_image = ??
// Apply Hough transformation to detect lines between a minimum and a maximum length (I was thinking of using the CV_HOUGH_PROBABILISTIC method..)
// Comment.. see below..
I am unable to understand the documentation here as the C++
method signature does not have a method field
vector<Vec2f> lines;
From the official documentation:
C++: void HoughLines(InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0 )
HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
Taken from sample code; I haven't properly understood how it works (e.g. what is theta used for? How does giving a different angle affect line detection?)
for( size_t i = 0; i < lines.size(); i++ )
{
Here I should only consider lines above a certain size.. (no idea how)
}
Here I should then add the resulting lines to original image (no idea how) so that they can be shown on the screen..
Any help would be greatly appreciated.
You can use HSV color space to extract color tone information.
Here's some code with comments, if there are any questions feel free to ask:
int main(int argc, char* argv[])
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/coloredLines.png");
// convert to HSV color space
cv::Mat hsvImage;
cv::cvtColor(input, hsvImage, CV_BGR2HSV);
// split the channels
std::vector<cv::Mat> hsvChannels;
cv::split(hsvImage, hsvChannels);
// hue channels tells you the color tone, if saturation and value aren't too low.
// red color is a special case, because the hue space is circular and red is exactly at the beginning/end of the circle.
// in literature, hue space goes from 0 to 360 degrees, but OpenCV rescales the range to 0 up to 180, because 360 does not fit in a single byte. Alternatively there is another mode where 0..360 is rescaled to 0..255 but this isn't as common.
int hueValue = 0; // red color
int hueRange = 15; // how much difference from the desired color we want to include in the result. If you increase this value, a red color would also pick up some orange values.
int minSaturation = 50; // I'm not sure which value is good here...
int minValue = 50; // not sure whether 50 is a good min value here...
cv::Mat hueImage = hsvChannels[0]; // [hue, saturation, value]
// is the color within the lower hue range?
cv::Mat hueMask;
cv::inRange(hueImage, hueValue - hueRange, hueValue + hueRange, hueMask);
// if the desired color is near the border of the hue space, check the other side too:
// TODO: this won't work if "hueValue + hueRange > 180" - maybe use two different if-cases instead... with int lowerHueValue = hueValue - 180
if (hueValue - hueRange < 0 || hueValue + hueRange > 180)
{
cv::Mat hueMaskUpper;
int upperHueValue = hueValue + 180; // in reality this would be + 360 instead
cv::inRange(hueImage, upperHueValue - hueRange, upperHueValue + hueRange, hueMaskUpper);
// add this mask to the other one
hueMask = hueMask | hueMaskUpper;
}
// now we have to filter out all the pixels where saturation and value do not fit the limits:
cv::Mat saturationMask = hsvChannels[1] > minSaturation;
cv::Mat valueMask = hsvChannels[2] > minValue;
hueMask = (hueMask & saturationMask) & valueMask;
cv::imshow("desired color", hueMask);
// now perform the line detection
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(hueMask, lines, 1, CV_PI / 360, 50, 50, 10);
// draw the result as big green lines:
for (unsigned int i = 0; i < lines.size(); ++i)
{
cv::line(input, cv::Point(lines[i][0], lines[i][1]), cv::Point(lines[i][2], lines[i][3]), cv::Scalar(0, 255, 0), 5);
}
cv::imwrite("C:/StackOverflow/Output/coloredLines_mask.png", hueMask);
cv::imwrite("C:/StackOverflow/Output/coloredLines_detection.png", input);
cv::imshow("input", input);
cv::waitKey(0);
return 0;
}
using this input image:
Will extract this "red" color (adjust hueValue and hueRange to detect different colors):
and HoughLinesP detects those lines from the mask (should work with HoughLines similarly):
Here's another set of images with non-lines too...
About your different questions:
There are two functions HoughLines and HoughLinesP. HoughLines does not extract a line length, but you can compute it in a post-processing by checking again, which pixels of the edge-mask (HoughLines input) correspond to the extracted line.
parameters:
image - edge image (should be clear?)
lines - the output: lines given by angle and position, without any length; they are interpreted as infinitely long
rho - the accumulator resolution. The bigger, the more robust it should be against slightly distorted lines, but the less accurate the extracted lines' position/angle
threshold - the bigger, the fewer false positives, but you might miss some lines
theta - angle resolution: the smaller, the more different line orientations can be detected. If your line's orientation does not fit the angle steps, the line might not be detected. For example, CV_PI/180 detects with 1° resolution; if your line's orientation is off by 0.5° (e.g. 33.5°), it might be missed.
I'm not so extremely sure about all the parameters, maybe you'll have to look at the literature about hough line detection, or someone else can add some hints here.
If you instead use cv::HoughLinesP, line segments with start and end points will be detected, which is easier to interpret, and you can compute the line length from cv::norm(cv::Point(lines[i][0], lines[i][1]) - cv::Point(lines[i][2], lines[i][3]))
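For completeness, a hedged Python version of that length check on HoughLinesP output; 'coloredLines_mask.png' refers to the hue mask saved by the code above, and the 80-pixel minimum is just an example value:
import cv2
import numpy as np

mask = cv2.imread('coloredLines_mask.png', cv2.IMREAD_GRAYSCALE)   # the hue mask from above
lines = cv2.HoughLinesP(mask, 1, np.pi / 360, 50, minLineLength=50, maxLineGap=10)

min_length = 80                                 # keep only segments at least this long
kept = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        if np.hypot(x2 - x1, y2 - y1) >= min_length:
            kept.append((x1, y1, x2, y2))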
I will not show the code but the steps with some tricks.
Assume that you want to detect road lanes (which are lines with white or light yellow color and have some specific properties).
Original image (I added some extra lines to create noise)
Step 1: Remove parts of the image that are not needed, to save CPU time (simple but useful)
Step 2: Convert to a grayscale image
Step 3: Threshold
Using a threshold chosen according to the color of your line, the line will become white and everything else will become black
Step 4: Use contours to find the boundaries of the objects
Step 5: Use fitLine with the contours from the previous step as input to get the equations of the lines
fitLine returns a point (x0, y0) and a direction vector v = (a, b)
Step 6: With the equations of the lines you can draw any line you want
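A rough Python sketch of those six steps. The file name, the ROI (lower half of the frame), the brightness threshold of 180 and the minimum contour area are all assumptions to adapt to your own footage; the two-value findContours unpacking assumes OpenCV 4.
import cv2
import numpy as np

img = cv2.imread('road.jpg')                          # placeholder file name
h, w = img.shape[:2]

# Step 1: keep only the lower half of the frame where the lanes usually are
roi = img[h // 2:, :]

# Steps 2-3: grayscale, then threshold the bright (white-ish) markings
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

# Step 4: contours of the remaining white blobs
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Steps 5-6: fit a line to each large enough contour and draw it on the original
for c in contours:
    if cv2.contourArea(c) < 100:
        continue
    vx, vy, x0, y0 = cv2.fitLine(c, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    p1 = (int(x0 - 1000 * vx), int(y0 - 1000 * vy) + h // 2)   # shift back out of the ROI
    p2 = (int(x0 + 1000 * vx), int(y0 + 1000 * vy) + h // 2)
    cv2.line(img, p1, p2, (0, 0, 255), 2)

cv2.imshow('lanes', img)
cv2.waitKey(0)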