opencv how to get middle line of a contour - c++

I have an image from which I get the contour using findContours. This produces something that looks like the following (showing the "inner and outer contour").
Is there a way for me to get the "midpoint" of these two contours? That is, some kind of polyline that would fit exactly in between the two lines seen in the image, such that the distance from any point on the resulting line to the top contour is the same as the distance from it to the bottom contour?
A more complicated example would be something as follows:
Please note that it doesn't matter too much what happens at intersections, as long as nothing traces back on itself, so the result of the more complicated example would need multiple lines.

There is a way to get the "midpoint" of the two contours, but I don't think there is an existing OpenCV solution.
You may use the following stages:
Convert image to Grayscale, and apply binary threshold.
You may use the cvtColor(..., COLOR_BGR2GRAY) and threshold(...) OpenCV functions.
Fill the pixels outside the area between the lines with white color.
You may use the floodFill OpenCV function.
Apply "distance transform" to the binary image.
You may use the distanceTransform OpenCV function.
Use CV_DIST_L2 for Euclidean distance.
Apply Dijkstra's algorithm to find the shortest path between the leftmost and rightmost nodes.
Representing the "distance transform" result (image) as a weighted graph and applying Dijkstra's algorithm is the most challenging stage.
I implemented the solution in MATLAB as a proof of concept.
I know you were expecting a C++ implementation, but it requires a lot of work.
The MATLAB implementation uses the im2graph function, which I downloaded from here.
Here is the MATLAB implementation:
origI = imread('two_contours.png'); % Read input image
I = rgb2gray(origI); % Convert RGB to Grayscale.
BW = imbinarize(I); % Convert from Grayscale to binary image.
% Fill pixels outside the area between lines.
BW2 = imfill(BW, ([1, size(I,2)/2; size(I,1), size(I,2)/2]));
% Apply "distance transform" (find compute euclidean distance from closest white pixel)
D = bwdist(BW2);
% Mark all pixels outside the area between lines with zero.
D(BW2 == 1) = 0;
figure;imshow(D, []);impixelinfo % Display D matrix as image
[M, N] = size(D);
% Find starting point and end point - assume we need to find a path from left side to right side.
x0 = 1;
[~, y0] = max(D(:, x0));
x1 = N;
[~, y1] = max(D(:, x1));
% https://www.mathworks.com/matlabcentral/fileexchange/46088-dijkstra-lowest-cost-for-images
StartNode = y0;
EndNode = M*N - (M-y1-1);
conn = 8; % 4 or 8 - connected neighborhood for linking pixels
% Use 100 - D, because graphshortestpath searches for the minimum-weight path (and we are looking for the maximum-weight path).
CostMat = 100 - D;
G = im2graph(CostMat, conn);
%Find "shortest" path from StartNode to EndNode
[dist, path, pred] = graphshortestpath(G, StartNode, EndNode);
% Mark the path in white in image J
J = origI;
R = J(:,:,1); G = J(:,:,2); B = J(:,:,3);
R(path) = 255; G(path) = 255; B(path) = 255;
J = cat(3, R, G, B);
figure; imshow(J); impixelinfo % Display J image
Result:
D - Result of distance transform:
J - Original image with "path" marked with white color:
Update:
For the new example you can define three paths.
The solution becomes more complicated.
The example is not generalized to solve all cases.
There must be a simpler solution; I just can't think of one.
tmpI = imread('three_contours.png'); % Read input image
origI = permute(tmpI, [2, 1, 3]); % Transpose image
I = rgb2gray(origI); % Convert RGB to Grayscale.
BW = imbinarize(I); % Convert from Grayscale to binary image.
% Fill pixels outside the area between lines.
%BW2 = imfill(BW, ([1, size(I,2)/2; size(I,1), size(I,2)/2]));
BW2 = imfill(BW, ([1, 1; size(I,1), size(I,2); size(I,2)/2, 1]));
% Apply "distance transform" (find compute euclidean distance from closest white pixel)
D = bwdist(BW2);
% Mark all pixels outside the area between lines with zero.
D(BW2 == 1) = 0;
figure;imshow(D, []);impixelinfo % Display D matrix as image
[M, N] = size(D);
% Find starting point and end point - assume we need to find a path from left side to right side.
x0 = 1;
[~, y0a] = max(D(1:M/2, x0));
% Y coordinate of second point
[~, y0b] = max(D(M/2:M, x0));
y0b = y0b + M/2;
x1 = N;
[~, y1] = max(D(:, x1));
% https://www.mathworks.com/matlabcentral/fileexchange/46088-dijkstra-lowest-cost-for-images
StartNodeA = y0a;
StartNodeB = y0b;
EndNode = M*N - (M-y1-1);
conn = 8; % 4 or 8 - connected neighborhood for linking pixels
% Use 1000 - D, because graphshortestpath searches for the minimum-weight path (and we are looking for the maximum-weight path).
D(D==0) = -10000; % Increase the "cost" where D is zero
CostMat = 1000 - D;
G = im2graph(CostMat, conn);
%Find "shortest" path from StartNode to EndNode
[dist, pathA, pred] = graphshortestpath(G, StartNodeA, EndNode);
[dist, pathB, pred] = graphshortestpath(G, StartNodeB, EndNode);
[dist, pathC, pred] = graphshortestpath(G, StartNodeA, StartNodeB);
% Mark the three paths in image J (one per color channel)
J = origI;
R = J(:,:,1); G = J(:,:,2); B = J(:,:,3);
R(pathA) = 255;
G(pathB) = 255;
B(pathC) = 255;
J = cat(3, R, G, B);
J = permute(J, [2, 1, 3]); % Transpose image
figure;imshow(J);impixelinfo % Display J image
Three lines:

Related

Transform images with bezier curves

I'm using this article, nonlingr, as a source to understand non-linear transformations. In the section GLYPHS ALONG A PATH it explains how to use a parametric curve to transform an image. I'm trying to apply a cubic Bezier to an image, but I have been unsuccessful. This is my code:
OUT.aloc(IN.width(), IN.height());
//get the control points...
wVector p0(values[vindex], values[vindex+1], 1);
wVector p1(values[vindex+2], values[vindex+3], 1);
wVector p2(values[vindex+4], values[vindex+5], 1);
wVector p3(values[vindex+6], values[vindex+7], 1);
//this is to calculate t based on x
double trange = 1 / (OUT.width()-1);
//curve coefficients
double A = (-p0[0] + 3*p1[0] - 3*p2[0] + p3[0]);
double B = (3*p0[0] - 6*p1[0] + 3*p2[0]);
double C = (-3*p0[0] + 3*p1[0]);
double D = p0[0];
double E = (-p0[1] + 3*p1[1] - 3*p2[1] + p3[1]);
double F = (3*p0[1] - 6*p1[1] + 3*p2[1]);
double G = (-3*p0[1] + 3*p1[1]);
double H = p0[1];
//apply the transformation
for(long i = 0; i < OUT.height(); i++){
    for(long j = 0; j < OUT.width(); j++){
        //t = x / width
        double t = trange * j;
        //apply the article given formulas
        double x_path_d = 3*t*t*A + 2*t*B + C;
        double y_path_d = 3*t*t*E + 2*t*F + G;
        double angle = 3.14159265/2.0 + std::atan(y_path_d / x_path_d);
        mapped_point.Set((t*t*t)*A + (t*t)*B + t*C + D + i*std::cos(angle),
                         (t*t*t)*E + (t*t)*F + t*G + H + i*std::sin(angle),
                         1);
        //test if the point is inside the image
        if(mapped_point[0] < 0 ||
           mapped_point[0] >= OUT.width() ||
           mapped_point[1] < 0 ||
           mapped_point[1] >= IN.height())
            continue;
        OUT.setPixel(
            long(mapped_point[0]),
            long(mapped_point[1]),
            IN.getPixel(j, i));
    }
}
Applying this code to a 300x196 RGB image, all I get is a black screen no matter what control points I use. It is hard to find information about this kind of transformation; searching for parametric curves, all I find is how to draw them, not how to apply them to images. Can someone help me with transforming an image with a Bezier curve?
IMHO applying a curve to an image sounds like using a LUT. You will need to check the value of the curve for different image values and then replace the image value with the one on the curve. So, create a look-up table for each possible value in the image (e.g. 0, 1, ..., 255 for an 8-bit gray value image), that is, a 2x256 matrix: the first column holds the values from 0 to 255 and the second one holds the value of the curve.
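A rough sketch of that idea, assuming an 8-bit grayscale image and using a 1-D cubic Bezier evaluated per intensity value as the curve (the control values here are made up, and for simplicity the curve is parameterized directly by t = v/255 rather than solving for x(t) = v):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE); // assumed file

    // Hypothetical tone curve: a cubic Bezier with arbitrary control values.
    const double p0 = 0, p1 = 80, p2 = 180, p3 = 255;

    // Build the 256-entry look-up table described above.
    cv::Mat lut(1, 256, CV_8U);
    for (int v = 0; v < 256; ++v)
    {
        double t = v / 255.0, u = 1.0 - t;
        double y = u*u*u*p0 + 3*u*u*t*p1 + 3*u*t*t*p2 + t*t*t*p3;
        lut.at<uchar>(0, v) = cv::saturate_cast<uchar>(y);
    }

    // Swap every pixel value for its curve value in one call.
    cv::Mat out;
    cv::LUT(img, lut, out);
    cv::imwrite("output.png", out);
    return 0;
}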

ordfilt2: Find requires variable sizing

I want to generate C++ code from the following MATLAB function (Harris corner detection) that detects corners in an image. My constraint is that I have to generate a static C++ library without variable-sizing support.
So, I have disabled variable-size support in the settings and also selected the target platform as an unspecified 32-bit processor.
This way I'll be able to use it in Vivado HLS for an FPGA project.
However, when I generate the code, the line containing the ordfilt2 function throws an error that FIND requires variable sizing.
Please help me if there is a workaround to this problem. I have seen a similar question posted here: Matlab error "Find requires variable sizing". But I am not sure how it applies to my case. Thanks.
Here's the code:
function [cim] = harris(im , thresh)
dx = [-1 0 1; -1 0 1; -1 0 1]; % Derivative masks
dy = dx';
Ix = conv2(im, dx, 'same'); % Image derivatives
Iy = conv2(im, dy, 'same');
% Generate Gaussian filter of size 6*sigma (+/- 3sigma) and of
% minimum size 1x1.
sigma = 1.5;
g = fspecial('gaussian',max(1,fix(6*sigma)), sigma);
Ix2 = conv2(Ix.^2, g, 'same'); % Smoothed squared image derivatives
Iy2 = conv2(Iy.^2, g, 'same');
Ixy = conv2(Ix.*Iy, g, 'same');
cim = (Ix2.*Iy2 - Ixy.^2)./(Ix2 + Iy2 + eps); % Harris corner measure
% Extract local maxima by performing a grey scale morphological
% dilation and then finding points in the corner strength image that
% match the dilated image and are also greater than the threshold.
radius = 1.5;
sze = 2*radius+1; % Size of mask.
mx = ordfilt2(cim,sze^2,ones(sze)); % Grey-scale dilate.
cim = (cim==mx)&(cim>thresh); % Find maxima.
end
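For what it's worth, ordfilt2(cim, sze^2, ones(sze)) with the highest order statistic is simply a sliding-window maximum (grey-scale dilation), which needs no variable sizing when the window is a compile-time constant. A C++ sketch of the local-maxima step under that assumption (this illustrates the operation, not the MATLAB Coder workaround itself):
#include <opencv2/opencv.hpp>

// Grey-scale dilation = sliding-window maximum, the fixed-size equivalent of
// mx = ordfilt2(cim, sze^2, ones(sze)) in the MATLAB code above.
cv::Mat localMaxima(const cv::Mat& cim, float thresh)
{
    const int sze = 4; // 2*radius+1 with radius = 1.5, as above

    cv::Mat mx;
    cv::dilate(cim, mx, cv::Mat::ones(sze, sze, CV_8U));

    // cim = (cim == mx) & (cim > thresh); keep points that equal the dilated
    // image (local maxima) and also exceed the threshold.
    return (cim == mx) & (cim > thresh);
}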

How to filter lines of a given width in an image?

I need to filter lines of a given width in an image.
I am writing a program which will detect lines in a road image, and I found something like that, but I can't understand its logic. My function has to do the following:
I will send an image and a line width in terms of pixel size (e.g. 30 pixels wide), and the function will filter just those lines in the image.
I found that code:
void filterWidth(Mat image, int tau) // tau = width of line I want to filter
{
    // quad is the source image and quadDst the destination in the original snippet
    int aux = 0;
    for (int j = 0; j < quad.rows; ++j)
    {
        unsigned char *ptRowSrc = quad.ptr<uchar>(j);
        unsigned char *ptRowDst = quadDst.ptr<uchar>(j);
        for (int i = tau; i < quad.cols - tau; ++i)
        {
            if (ptRowSrc[i] != 0)
            {
                aux = 2 * ptRowSrc[i];
                aux += -ptRowSrc[i - tau];
                aux += -ptRowSrc[i + tau];
                aux += -abs((int)(ptRowSrc[i - tau] - ptRowSrc[i + tau]));
                aux = (aux < 0) ? (0) : (aux);
                aux = (aux > 255) ? (255) : (aux);
                ptRowDst[i] = (unsigned char)aux;
            }
        }
    }
}
What is the mathematical explanation of that code? And how does that work?
Read up about convolution filters. This code is a particular case of a 1 dimensional convolution filter (it only convolves with other pixels on the currently processed line).
The value of aux starts at 2 * the current pixel value; then the pixels on either side of it at distance tau are subtracted from that value. Next, the absolute difference of those two pixels is also subtracted from it. Finally the result is clamped to the range 0...255 before being stored in the output image.
If you have an image:
0011100
This convolution will cause the centre 1 to gain the value:
2 * 1
- 0
- 0
- abs(0 - 0)
= 2
The first '1' will become:
2 * 1
- 0
- 1
- abs(0 - 1)
= 0
And so will the third '1' (it's a mirror image).
And of course the 0 values will always stay zero or become negative, which will be capped back to 0.
This is a rather weird filter. It takes the pixel values three by three on the same line, with a spacing of tau. Let these values be Vl, V and Vr.
The filter computes -Vl + 2V - Vr, which can be seen as a second derivative, and deducts |Vl - Vr|, which can be seen as a first derivative (also called the gradient). The second derivative gives a maximum response for a peak configuration (Vl < V > Vr); the first derivative gives a minimum response for a symmetric configuration (Vl = Vr).
So the global filter will give a maximum response for a symmetric maximum (like a light road on a dark background, vertical, with a width less than 2*tau).
By rearranging the terms, you can see that the filter yields twice the smaller of the left and right gradients, V - Vl and V - Vr (clamped to zero): with a = V - Vl and b = V - Vr, the response 2V - Vl - Vr - |Vl - Vr| equals a + b - |a - b| = 2*min(a, b).
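That equivalence suggests a slightly clearer way to write the same filter. A sketch with the min() formulation (same response as the code above; assumes an 8-bit single-channel source):
#include <opencv2/opencv.hpp>
#include <algorithm>

// Same response as the original filter: twice the smaller of the two
// gradients V - Vl and V - Vr, clamped to [0, 255].
void filterWidthMin(const cv::Mat& src, cv::Mat& dst, int tau)
{
    dst = cv::Mat::zeros(src.size(), CV_8UC1);
    for (int j = 0; j < src.rows; ++j)
    {
        const uchar* s = src.ptr<uchar>(j);
        uchar* d = dst.ptr<uchar>(j);
        for (int i = tau; i < src.cols - tau; ++i)
        {
            if (s[i] == 0) continue;   // the original code skips zero pixels
            int a = s[i] - s[i - tau]; // left gradient  V - Vl
            int b = s[i] - s[i + tau]; // right gradient V - Vr
            int v = 2 * std::min(a, b);
            d[i] = (uchar)std::max(0, std::min(255, v));
        }
    }
}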

Automatic separation of two images that have been multiplied together

I am searching for an algorithm or C++/Matlab library that can be used to separate two images multiplied together. A visual example of this problem is given below.
Image 1 can be anything (such as a relatively complicated scene). Image 2 is very simple, and can be mathematically generated. Image 2 always has similar morphology (i.e. downward trend). By multiplying Image 1 by Image 2 (using point-by-point multiplication), we get a transformed image.
Given only the transformed image, I would like to estimate Image 1 or Image 2. Is there an algorithm that can do this?
Here are the Matlab code and images:
load('trans.mat');
imageA = imread('room.jpg');
imageB = abs(response); % loaded from MAT file
[m,n] = size(imageA);
image1 = rgb2gray( imresize(im2double(imageA), [m n]) );
image2 = imresize(im2double(imageB), [m n]);
figure; imagesc(image1); colormap gray; title('Image 1 of Room')
colorbar
figure; imagesc(image2); colormap gray; title('Image 2 of Response')
colorbar
% This is image1 and image2 multiplied together (point-by-point)
trans = image1 .* image2;
figure; imagesc(trans); colormap gray; title('Transformed Image')
colorbar
UPDATE
There are a number of ways to approach this problem. Here are the results of my experiments. Thank you to all who responded to my question!
1. Low-pass filtering of image
As noted by duskwuff, taking a low-pass filter of the transformed image returns an approximation of Image 2. In this case, the low-pass filter was Gaussian. You can see that it is possible to identify the multiplicative noise in the image using the low-pass filter.
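A sketch of that idea in OpenCV C++ (file name and Gaussian sigma are assumptions; once Image 2 is approximated, dividing inverts the multiplication):
#include <opencv2/opencv.hpp>

int main()
{
    // Assumed input: the transformed image, converted to floating point.
    cv::Mat trans = cv::imread("trans.png", cv::IMREAD_GRAYSCALE);
    trans.convertTo(trans, CV_32F, 1.0 / 255.0);

    // Strong Gaussian low-pass: approximates the smooth Image 2.
    cv::Mat image2est;
    cv::GaussianBlur(trans, image2est, cv::Size(0, 0), 25.0); // sigma is a guess

    // Invert the multiplication to estimate Image 1 (epsilon avoids /0).
    cv::Mat image1est;
    cv::divide(trans, image2est + 1e-6, image1est);

    cv::imshow("estimated Image 2", image2est);
    cv::imshow("estimated Image 1", image1est);
    cv::waitKey();
    return 0;
}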
2. Homomorphic Filtering
As suggested by EitenT, I examined homomorphic filtering. Knowing the name of this type of image filtering, I managed to find a number of references that I think would be useful for solving similar problems.
S. P. Banks, Signal processing, image processing, and pattern recognition. New York: Prentice Hall, 1990.
A. Oppenheim, R. Schafer, and T. Stockham, "Nonlinear filtering of multiplied and convolved signals," IEEE Transactions on Audio and Electroacoustics, vol. 16, no. 3, pp. 437–466, Sep. 1968.
Blind Image Deconvolution: Theory and Applications. Boca Raton: CRC Press, 2007.
Chapter 5 of the Blind image deconvolution book is particularly good, and contains many references to homomorphic filtering. This is perhaps the most generalized approach that will work well in many different applications.
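The core homomorphic trick for a multiplicative combination is to take the logarithm (turning the product into a sum), separate the components with ordinary linear filtering, and exponentiate back. A minimal sketch along those lines (file name and filter strength are assumptions):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat trans = cv::imread("trans.png", cv::IMREAD_GRAYSCALE); // assumed file
    trans.convertTo(trans, CV_32F, 1.0 / 255.0);

    // log(Image1 .* Image2) = log(Image1) + log(Image2); epsilon keeps log finite.
    cv::Mat logT;
    cv::log(trans + 1e-6, logT);

    // The smooth component (log of Image 2) lives in the low frequencies.
    cv::Mat logLow;
    cv::GaussianBlur(logT, logLow, cv::Size(0, 0), 25.0); // sigma is a guess

    // Exponentiate each part to recover the two estimated factors.
    cv::Mat image2est, image1est;
    cv::exp(logLow, image2est);
    cv::exp(logT - logLow, image1est);

    cv::imshow("estimated Image 2", image2est);
    cv::imshow("estimated Image 1", image1est);
    cv::waitKey();
    return 0;
}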
3. Optimization using fminsearch
As suggested by Serg, I used an objective function with fminsearch. Since I know the mathematical model of the noise, I was able to use it as input to an optimization algorithm. This approach is entirely problem-specific and may not be useful in all situations.
Here is a reconstruction of Image 2:
Here is a reconstruction of Image 1, formed by dividing by the reconstruction of Image 2:
Here is the image containing the noise:
Source code
Here is the source code for my problem. As shown by the code, this is a very specific application, and will not work well in all situations.
N = 1001;
q = zeros(N, 1);
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
[R, ~, ~] = get_response(N, q, dt, wSize, Glim, ginv);
rows = wSize;
cols = N;
cut_val = 200;
figure; imagesc(abs(R)); title('Matrix output of algorithm')
colorbar
figure;
imagesc(abs(R)); title('abs(response)')
figure;
imagesc(imag(R)); title('imag(response)')
imageA = imread('room.jpg');
% images should be of the same size
[m,n] = size(R);
image1 = rgb2gray( imresize(im2double(imageA), [m n]) );
% here is the multiplication (with the image in complex space)
trans = ((image1.*1i)) .* (R(end:-1:1, :));
figure;
imagesc(abs(trans)); colormap(gray);
% take the imaginary part of the response
imagLogR = imag(log(trans));
% The beginning and end points are not usable
Mderiv = zeros(rows, cols-2);
for k = 1:rows
    val = deriv_3pt(imagLogR(k,:), dt);
    val(val > cut_val) = 0;
    Mderiv(k,:) = val(1:end-1);
end
% This is the derivative of the imaginary part of R
% d/dtau(imag((log(R)))
% Do we need to remove spurious values from the matrix?
figure;
imagesc(abs(log(Mderiv)));
disp('Running iteration');
% Apply curve-fitting to get back the values
% by cycling over the cols
q0 = 10;
q1 = 500;
NN = cols - 2;
qout = zeros(NN, 1);
for k = 1:NN
    data = Mderiv(:,k);
    qout(k) = fminbnd(@(q) curve_fit_to_get_q(q, dt, rows, data), q0, q1);
end
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure;
plot(qout); title('Reconstructed q')
ylim([0 200]); xlim([0 1001])
% make the vector the same size as the other
qout2 = [qout(1); qout; qout(end)];
% get the reconstructed response
[RR, ~, ~] = get_response(N, qout2, dt, wSize, Glim, ginv);
RR = RR(end:-1:1,:);
figure; imagesc(abs(RR)); colormap gray
title('Reconstructed Image 2')
colorbar;
% here is the reconstructed image of the room
% NOTE the division in the imagesc function
check0 = image1 .* abs(R(end:-1:1, :));
figure; imagesc(check0./abs(RR)); colormap gray
title('Reconstructed Image 1')
colorbar;
figure; imagesc(check0); colormap gray
title('Original image with noise pattern')
colorbar;
function [response, L, inte] = get_response(N, Q, dt, wSize, Glim, ginv)
fs = 1 / dt;
Npad = wSize - 1;
N1 = wSize + Npad;
N2 = floor(N1 / 2 + 1);
f = (fs/2)*linspace(0,1,N2);
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
sigma2 = exp(-(0.23*Glim + 1.63));
sign = 1;
if(ginv == 1)
    sign = -1;
end
ratio = omega ./ omegah;
rs_r = zeros(N2, 1);
rs_i = zeros(N2, 1);
termr = zeros(N2, 1);
termi = zeros(N2, 1);
termr_sub1 = zeros(N2, 1);
termi_sub1 = zeros(N2, 1);
response = zeros(N2, N);
L = zeros(N2, N);
inte = zeros(N2, N);
% cycle over cols of matrix
for ti = 1:N
    term0 = omega ./ (2 .* Q(ti));
    gamma = 1 / (pi * Q(ti));
    % calculate for the real part
    if(ti == 1)
        Lambda = ones(N2, 1);
        termr_sub1(1) = 0;
        termr_sub1(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
    else
        termr(1) = 0;
        termr(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
        rs_r = rs_r - dt.*(termr + termr_sub1);
        termr_sub1 = termr;
        Beta = exp( -1 .* -0.5 .* rs_r );
        Lambda = (Beta + sigma2) ./ (Beta.^2 + sigma2); % vector
    end
    % calculate for the complex part
    if(ginv == 1)
        termi(1) = 0;
        termi(2:end) = (ratio(2:end).^(sign .* gamma) - 1) .* omega(2:end);
    else
        termi = (ratio.^(sign .* gamma) - 1) .* omega;
    end
    rs_i = rs_i - dt.*(termi + termi_sub1);
    termi_sub1 = termi;
    integrand = exp( 1i .* -0.5 .* rs_i );
    L(:,ti) = Lambda;
    inte(:,ti) = integrand;
    if(ginv == 1)
        response(:,ti) = Lambda .* integrand;
    else
        response(:,ti) = (1 ./ Lambda) .* integrand;
    end
end % ti loop
function sse = curve_fit_to_get_q(q, dt, rows, data)
% q = trial q value
% dt = timestep
% rows = number of rows
% data = actual dataset
fs = 1 / dt;
N2 = rows;
f = (fs/2)*linspace(0,1,N2); % vector for frequency along cols
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
ratio = omega ./ omegah;
gamma = 1 / (pi * q);
% calculate for the complex part
termi = ((ratio.^(gamma)) - 1) .* omega;
% for now, just reverse termi
termi = termi(end:-1:1);
%
% Do non-linear curve-fitting
% termi is a column-vector with the generated noise pattern
% data is the log-transformed image
% sse is the value that is returned to fminsearchbnd
Error_Vector = termi - data;
sse = sum(Error_Vector.^2);
function output = deriv_3pt(x, dt)
N = length(x);
N0 = N - 1;
output = zeros(N0, 1);
denom = 2 * dt;
for k = 2:N0
    output(k - 1) = (x(k+1) - x(k-1)) / denom;
end
This is going to be a difficult, unreliable process, as you're fundamentally trying to extract information (the separation of the two images) which has been destroyed. Bringing it back perfectly is impossible; the best you can do is guess.
If the second image is always going to be relatively "smooth", you may be able to reconstruct it (or, at least, an approximation of it) by applying a strong low-pass filter to the transformed image. With that in hand, you can invert the multiplication, or equivalently use a complementary high-pass filter to get the first image. It won't be quite the same, but it'll at least be something.
I would try constrained optimization (fmincon in Matlab).
If you understand the source / nature of the second image, you can probably define a multivariate function that generates similar noise patterns. The objective function can be the correlation between the generated noise image and the transformed image.
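For illustration, the objective might look something like the sketch below. generateNoiseImage is a hypothetical stand-in for whatever model produces the Image 2 pattern, and the score is negative normalized correlation so that a minimizer maximizes the match:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Hypothetical model: synthesizes a candidate Image 2 from trial parameters.
cv::Mat generateNoiseImage(const std::vector<double>& params, cv::Size size);

// Objective for a generic minimizer: negative normalized cross-correlation
// between the synthesized noise image and the observed transformed image.
double objective(const std::vector<double>& params, const cv::Mat& observed)
{
    cv::Mat model = generateNoiseImage(params, observed.size());

    cv::Mat a = observed - cv::mean(observed);
    cv::Mat b = model - cv::mean(model);

    double num = a.dot(b);
    double den = std::sqrt(a.dot(a) * b.dot(b));
    return den > 0.0 ? -num / den : 0.0;
}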

OpenCV detect if points lie along line/plane

I am working on a form of autocalibration for an optics device which is currently performed manually. The first part of the calibration is to determine whether a light beam has illuminated the set of 'calibration' points.
I am using OpenCV and have thresholded and cropped the image to leave only the possibly relevant points. I now want to determine if these points lie along a straight (horizontal) line; if a sufficient number do, the beam is in the correct position! (The points lie in a straight line, but the beam is often bent, so hitting most of the points suffices; there are 21 points, which show up as white circles when thresholded.)
I have tried using a histogram, but on the thresholded image the results are not correct, and I am now looking at Hough lines; however, that detects straight lines from edges, whereas I want to establish whether detected points lie on a line.
This is the threshold code I use:
cvThreshold(output, output, 150, 256, CV_THRESH_BINARY);
The histogram results, with anywhere from 1 to 640 bins (the image width), are two spikes of near-maximum value: one at 0 and one about two-thirds of the way through. This is not the distribution expected, or the one obtained without thresholding.
Some pictures to try to illustrate the point (note the 'noisy' light spots, which are a feature of the system setup and cannot be overcome):
12 points in a straight line next to one another (beam in correct position)
The sort of output wanted (for illustration; if the points are on the line, this is all I need to know!)
Any help would be greatly appreciated. One thought was to extract the coordinates of the points and compare them, but I don't know how to do this.
In case it helps anyone, here is a very basic first draft of some simple linear regression code I used.
// Calculate the averages of arrays x and y
double xa = 0, ya = 0;
for(int i = 0; i < n; i++)
{
    xa += x[i];
    ya += y[i];
}
xa /= n;
ya /= n;
// Summation of all X and Y values
double sumX = 0;
double sumY = 0;
// Summation of all X*Y values
double sumXY = 0;
// Summation of all X^2 and Y^2 values
double sumXs = 0;
double sumYs = 0;
for(int i = 0; i < n; i++)
{
    sumX = sumX + x[i];
    sumY = sumY + y[i];
    sumXY = sumXY + (x[i] * y[i]);
    sumXs = sumXs + (x[i] * x[i]);
    sumYs = sumYs + (y[i] * y[i]);
}
// (Sum of X)^2 and (Sum of Y)^2
double Xs = sumX * sumX;
double Ys = sumY * sumY;
// Calculate slope, m
slope = (n * sumXY - sumX * sumY) / (n* sumXs - Xs);
// Calculate intercept
intercept = ceil((sumY - slope * sumX) / n);
// Calculate regression index, r^2
double r_top = (n * sumXY - sumX * sumY);
double r_bottom = sqrt((n* sumXs - Xs) * (n* sumYs - Ys));
double r = 0;
// Check line is not perfectly vertical or horizontal
if(r_top == 0 || r_bottom == 0)
    r = 0;
else
    r = r_top / r_bottom;
There are more efficient ways of doing this (see CodeCogs or ALGLIB), but as a quick fix this code seems to work.
To detect circles in OpenCV I dropped the Hough transform and adapted code from this post:
Detection of coins (and fit ellipses) on an image
It is then a case of refining the coordinates (removing any outliers, etc.) to determine whether the circles lie on a horizontal line, using the slope and intercept values from the regression.
Obtain the x,y coordinates of the thresholded points, then perform a linear regression to find a best-fit line. With that line, you can determine the r^2 value, which effectively gives you the quality of the fit. Based on that fitness measure, you can decide whether the calibration succeeded.
Here is a good discussion.
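To make the coordinate-extraction step concrete, here is a small OpenCV C++ sketch (threshold value and file name are assumptions). It collects the white pixel coordinates with findNonZero and fits a line with fitLine; a near-zero slope then indicates a horizontal line:
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("beam.png", cv::IMREAD_GRAYSCALE); // assumed file

    cv::Mat bw;
    cv::threshold(img, bw, 150, 255, cv::THRESH_BINARY);

    // Extract the coordinates of all thresholded (white) pixels.
    std::vector<cv::Point> pts;
    cv::findNonZero(bw, pts);
    if (pts.size() < 2) return 1; // nothing to fit

    // Least-squares line fit: line = (vx, vy, x0, y0).
    cv::Vec4f line;
    cv::fitLine(pts, line, cv::DIST_L2, 0, 0.01, 0.01);

    float slope = line[1] / line[0]; // undefined for a vertical line
    std::printf("slope = %f (near 0 means horizontal)\n", slope);
    return 0;
}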
You could do something like this, although it is an approximation:
int dw = ...;      // decide on a medium dot width in pixels
int maxdots = 0;
for (int y = 0; y < img.rows; ++y)            // each line of the image
{
    int dots = 0;
    for (int x = 0; x < img.cols; x += dw)    // scan by incrementing x by dw
        if (img.at<uchar>(y, x) == dotcolor) dots++;
    if (dots > maxdots) maxdots = dots;
}
maxdots would be the best result...