OpenCV detect if points lie along line/plane - c++

I am working on a form of autocalibration for an optics device which is currently performed manually. The first part of the calibration is to determine whether a light beam has illuminated the set of 'calibration' points.
I am using OpenCV and have thresholded and cropped the image to leave only the possible relevant points. I now want to determine whether these points lie along a straight (horizontal) line; if a sufficient number do, the beam is in the correct position! (The points lie in a straight line, but the beam is often bent, so hitting most of the points suffices; there are 21 points, which show up as white circles when thresholded.)
I have tried using a histogram, but on the thresholded image the results are not correct, and I am now looking at Hough lines; however, that detects straight lines from edges, whereas I want to establish whether detected points lie on a line.
This is the threshold code I use:
cvThreshold(output, output, 150, 256, CV_THRESH_BINARY);
The histogram result, with anywhere from 1 to 640 bins (the image width), is two spikes: one at 0 and one about two-thirds of the way across, both of near-maximum value. This is not the distribution expected, nor the one obtained without thresholding.
Some pictures to try to illustrate the point (note the 'noisy' light spots, which are a feature of the system setup and cannot be overcome):
12 points in a straight line next to one another (beam in correct position)
The sort of output wanted (for illustration; if the points are on the line this is all I need to know!)
Any help would be greatly appreciated. One thought was to extract the co-ordinates of the points and compare them but I don't know how to do this.

In case it helps anyone, here is a very basic first draft of the simple linear regression code I used.
// x and y are arrays of the n point coordinates; slope and intercept are doubles declared elsewhere.
// Calculate the averages of arrays x and y
double xa = 0, ya = 0;
for(int i = 0; i < n; i++)
{
    xa += x[i];
    ya += y[i];
}
xa /= n;
ya /= n;
// Summation of all X and Y values
double sumX = 0;
double sumY = 0;
// Summation of all X*Y values
double sumXY = 0;
// Summation of all X^2 and Y^2 values
double sumXs = 0;
double sumYs = 0;
for(int i = 0; i < n; i++)
{
    sumX += x[i];
    sumY += y[i];
    sumXY += (x[i] * y[i]);
    sumXs += (x[i] * x[i]);
    sumYs += (y[i] * y[i]);
}
// (Sum of X)^2 and (Sum of Y)^2
double Xs = sumX * sumX;
double Ys = sumY * sumY;
// Calculate slope, m
slope = (n * sumXY - sumX * sumY) / (n * sumXs - Xs);
// Calculate intercept
intercept = ceil((sumY - slope * sumX) / n);
// Calculate correlation coefficient r (square it for the regression index, r^2)
double r_top = (n * sumXY - sumX * sumY);
double r_bottom = sqrt((n * sumXs - Xs) * (n * sumYs - Ys));
double r = 0;
// Check line is not perfectly vertical or horizontal (avoids division by zero)
if(r_top == 0 || r_bottom == 0)
    r = 0;
else
    r = r_top / r_bottom;
There are more efficient ways of doing this (see CodeCogs or ALGLIB), but as a quick fix this code seems to work.
To detect circles in OpenCV I dropped the Hough transform and adapted code from this post:
Detection of coins (and fit ellipses) on an image
It is then a case of refining the co-ordinates (removing any outliers, etc.) to determine whether the circles lie on a horizontal line, using the slope and intercept values of the regression.

Obtain the x,y coordinates of the thresholded points, then perform a linear regression to find a best-fit line. With that line, you can determine the r^2 value which effectively gives you the quality of fit. Based on that fitness measure, you can determine your calibration success.
Here is a good discussion.
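For illustration, here is a minimal sketch of that approach using the C++ OpenCV API; the function name, the thresholds and the minimum point count are assumptions to be tuned for the actual setup.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Hypothetical helper: decide whether enough thresholded blobs lie close to a
// (roughly horizontal) straight line. All thresholds are assumptions to tune.
bool beamOnCalibrationLine(const cv::Mat& binary, int minPoints = 12,
                           double maxSlope = 0.05, double maxResidual = 3.0)
{
    // 1. One (x, y) coordinate per white blob, via contour centroids.
    cv::Mat work = binary.clone();
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Point2f> centers;
    for (const auto& c : contours) {
        cv::Moments m = cv::moments(c);
        if (m.m00 > 0)
            centers.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    if ((int)centers.size() < minPoints)
        return false;
    // 2. Least-squares line through the centroids.
    cv::Vec4f line;
    cv::fitLine(centers, line, cv::DIST_L2, 0, 0.01, 0.01);
    float vx = line[0], vy = line[1], x0 = line[2], y0 = line[3];
    // 3. Require a nearly horizontal line and count centroids close to it.
    if (std::fabs(vx) < 1e-6f || std::fabs(vy / vx) > maxSlope)
        return false;
    int inliers = 0;
    for (const auto& p : centers)
        if (std::fabs((p.x - x0) * vy - (p.y - y0) * vx) <= maxResidual)
            ++inliers;   // perpendicular distance; (vx, vy) from fitLine is unit length
    return inliers >= minPoints;
}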

You could do something like this, although it is an approximation:
var dw = decide a medium dot width in pixels
maxdots = 0;
for each line of the image {
    var dots = 0;
    scan by incrementing x by dw {
        if (color == dotcolor) dots++;
    }
    if (dots > maxdots) maxdots = dots;
}
maxdots would be the best result...
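Translated into C++ against a thresholded single-channel cv::Mat, that pseudocode might look roughly like this; the step size dotWidth and the assumption that dots are non-zero after thresholding are placeholders:
#include <opencv2/opencv.hpp>
#include <algorithm>

// Sketch: count bright samples per image row, stepping by an assumed dot width,
// and return the best row's count.
int bestRowDotCount(const cv::Mat& binary, int dotWidth)
{
    int maxDots = 0;
    for (int y = 0; y < binary.rows; ++y) {
        const uchar* row = binary.ptr<uchar>(y);
        int dots = 0;
        for (int x = 0; x < binary.cols; x += dotWidth)
            if (row[x] > 0)      // any thresholded (white) sample counts towards a dot
                ++dots;
        maxDots = std::max(maxDots, dots);
    }
    return maxDots;              // highest number of dot samples found on a single row
}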

Related

Transform images with bezier curves

I'm using this article: nonlingr as a reference to understand non-linear transformations. In the section GLYPHS ALONG A PATH he explains how to use a parametric curve to transform an image. I'm trying to apply a cubic Bezier to an image, but I have been unsuccessful. This is my code:
OUT.aloc(IN.width(), IN.height());
//get the control points...
wVector p0(values[vindex], values[vindex+1], 1);
wVector p1(values[vindex+2], values[vindex+3], 1);
wVector p2(values[vindex+4], values[vindex+5], 1);
wVector p3(values[vindex+6], values[vindex+7], 1);
//this is to calculate t based on x
double trange = 1 / (OUT.width()-1);
//curve coefficients
double A = (-p0[0] + 3*p1[0] - 3*p2[0] + p3[0]);
double B = (3*p0[0] - 6*p1[0] + 3*p2[0]);
double C = (-3*p0[0] + 3*p1[0]);
double D = p0[0];
double E = (-p0[1] + 3*p1[1] - 3*p2[1] + p3[1]);
double F = (3*p0[1] - 6*p1[1] + 3*p2[1]);
double G = (-3*p0[1] + 3*p1[1]);
double H = p0[1];
//apply the transformation
for(long i = 0; i < OUT.height(); i++){
    for(long j = 0; j < OUT.width(); j++){
        //t = x / width
        double t = trange * j;
        //apply the article given formulas
        double x_path_d = 3*t*t*A + 2*t*B + C;
        double y_path_d = 3*t*t*E + 2*t*F + G;
        double angle = 3.14159265/2.0 + std::atan(y_path_d / x_path_d);
        mapped_point.Set((t*t*t)*A + (t*t)*B + t*C + D + i*std::cos(angle),
                         (t*t*t)*E + (t*t)*F + t*G + H + i*std::sin(angle),
                         1);
        //test if the point is inside the image
        if(mapped_point[0] < 0 ||
           mapped_point[0] >= OUT.width() ||
           mapped_point[1] < 0 ||
           mapped_point[1] >= IN.height())
            continue;
        OUT.setPixel(
            long(mapped_point[0]),
            long(mapped_point[1]),
            IN.getPixel(j, i));
    }
}
Applying this code to a 300x196 RGB image, all I get is a black screen no matter what control points I use. It is hard to find information about this kind of transformation; searching for parametric curves, all I find is how to draw them, not how to apply them to images. Can someone help me with transforming an image with a Bezier curve?
IMHO applying a curve to an image sounds like using a LUT. You will need to look up the value of the curve for each image value and then replace the image value with the one on the curve. So, create a look-up table covering every possible value in the image (e.g. 0, 1, ..., 255 for an 8-bit grayscale image); that is a 2x256 matrix where the first column has the values from 0 to 255 and the second column the corresponding value of the curve.
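As a rough sketch of that idea (assuming an 8-bit grayscale image and a user-supplied curve() mapping an input intensity in [0,1] to an output in [0,1]):
#include <opencv2/opencv.hpp>

// Build a 256-entry look-up table from an arbitrary tone curve and apply it
// to every pixel with cv::LUT.
cv::Mat applyCurve(const cv::Mat& gray8, double (*curve)(double))
{
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(255.0 * curve(i / 255.0));
    cv::Mat out;
    cv::LUT(gray8, lut, out);   // per-pixel table lookup
    return out;
}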

Faster mean calculation (openCV, C++)

I am calculating the mean value of multiple square regions inside an image in C++. To do this I shift a square region over the image and compute the mean, originally with the OpenCV "mean" function, which I then replaced with a standard mean calculation (see below) that was unexpectedly faster. Nevertheless, it takes ~8 ms on an Android device, since the mean calculation is called about 400 times (each mean calculation takes ~0.025 ms).
uchar rectSize = 10;
Rect roi(0,0,rectSize, rectSize);
int pxNumber = rectSize * rectSize;
uchar value;
//Shifting the region to the bottom
for(uchar y = 0; y < NumberOfRectangles_Y; y++)
{
    p = outBitMat.ptr<uchar>(y);
    roi.x = rectSize;
    //Shifting the region to the right
    for(uchar x = 0; x < NumberOfRectangles_X; x++, ++p)
    {
        meanCalc(normalized(roi), rectSize, pxNumber, value);
        roi.x += rectSize;
    }
    roi.y += rectSize;
}
void meanCalc(const cv::Mat& normalized, uchar& rectSize, int& pxNumber, uchar& value)
{
    int sum = 0;            // running sum of all pixels in the window
    const uchar* p;
    for(uchar y = 0; y < rectSize; y++)
    {
        p = normalized.ptr<uchar>(y);
        for(uchar x = 0; x < rectSize; x++, ++p)
        {
            sum += *p;
        }
    }
    value = sum / (float)pxNumber;
}
Is there a way to speed up this mean calculation for each rectangular window inside the image? Can I do some kind of pre-computation over the pixels so that each mean only has to be calculated once, making it faster?
Thanks in advance
Update
Based on the answer of User 6502 and the use of a sum table I resulted in the following:
Mat tab;
integral(image,tab);
int* input = (int*)(tab.data);
value = (input[yStart*tabWidth + xStart] + input[(yStart+rectSize)*tabWidth + xStart+rectSize]
- input[yStart*tabWidth + xStart+rectSize] - input[(yStart+rectSize)*tabWidth + xStart]) / (double)pxNumber;
However, this version needs almost the same time. Can it be that the sum table is only useful when calculating a lot of overlapping regions? In my case, each pixel is only touched by one calculation.
You can compute the mean over any rectangle in constant time (independently of the rectangle size) by pre-computing a "summed area table".
You need to compute a table where the element (i, j) is the sum of all original data in the rectangle from (0, 0) to (i, j) and this can be done in a single pass over the data.
Once you have the table the sum of values between (x0, y0) and (x1, y1) can be computed in constant time with:
tab(x0, y0) + tab(x1, y1) - tab(x0, y1) - tab(x1, y0)
To understand how the algorithm works, it's easier to think first of the one-dimensional case: to compute in constant time the sum of values from v[x0] to v[x1] you can pre-compute a table with
st[0] = v[0];
for (int i=1; i<n; i++) st[i] = st[i-1] + v[i];
and then you can take the difference st[x1] - st[x0] to know the sum of any interval of the original data.
The algorithm can indeed be easily extended to n dimensions.
Moreover, it may not be evident at first sight, but the precomputation of the summed area table can be implemented for a multi-core architecture to take advantage of parallel execution.
For 2D, a simple decomposition is to observe that computing the 2D sum table is the same as computing a 1D sum table on each row and then computing a 1D sum table for each column of the result.
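As an illustration, building and querying such a table for an 8-bit image stored as nested vectors might look like the sketch below; the extra leading row and column of zeros let the four-term query above work without special cases:
#include <cstdint>
#include <vector>

// sat[y][x] holds the sum of img over rows [0, y) and columns [0, x).
std::vector<std::vector<int64_t>> buildSAT(const std::vector<std::vector<uint8_t>>& img)
{
    size_t h = img.size(), w = img[0].size();
    std::vector<std::vector<int64_t>> sat(h + 1, std::vector<int64_t>(w + 1, 0));
    for (size_t y = 0; y < h; ++y)
        for (size_t x = 0; x < w; ++x)
            sat[y + 1][x + 1] = img[y][x] + sat[y][x + 1] + sat[y + 1][x] - sat[y][x];
    return sat;
}

// Sum over the half-open rectangle rows [y0, y1), columns [x0, x1) in O(1).
int64_t rectSum(const std::vector<std::vector<int64_t>>& sat,
                size_t x0, size_t y0, size_t x1, size_t y1)
{
    return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0];
}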
You can use an optimized version of your function meanCalc from the Simd Library:
SIMD_API void SimdValueSum(const uint8_t * src, size_t stride,
size_t width, size_t height, uint64_t * sum);
It is much faster because it uses SIMD instruction sets such as SSE, AVX and so on.

Improving C++ algorithm for finding all points within a sphere of radius r

Language/Compiler: C++ (Visual Studio 2013)
Experience: ~2 months
I am working in a rectangular grid in 3D space (size: xdim by ydim by zdim), where "xgrid", "ygrid", and "zgrid" are 3D arrays of the x-, y-, and z-coordinates, respectively. Now, I am interested in finding all points that lie within a sphere of radius "r" centered about the point "(vi,vj,vk)". I want to store the index locations of these points in the vectors "xidx,yidx,zidx". For a single point this algorithm works and is fast enough, but when I wish to iterate over many points within the 3D space I run into very long run times.
Does anyone have any suggestions on how I can improve the implementation of this algorithm in C++? After running some profiling software I found online (very sleepy, Luke stackwalker) it seems that the "std::vector::size" and "std::vector::operator[]" member functions are bogging down my code. Any help is greatly appreciated.
Note: Since I do not know a priori how many voxels are within the sphere, I set the length of vectors xidx,yidx,zidx to be larger than necessary and then erase all the excess elements at the end of the function.
void find_nv(int vi, int vj, int vk, vector<double> &xidx, vector<double> &yidx, vector<double> &zidx, double*** &xgrid, double*** &ygrid, double*** &zgrid, int r, double xdim,double ydim,double zdim, double pdim)
{
double xcor, ycor, zcor,xval,yval,zval;
vector<double>xyz(3);
xyz[0] = xgrid[vi][vj][vk];
xyz[1] = ygrid[vi][vj][vk];
xyz[2] = zgrid[vi][vj][vk];
int counter = 0;
// Confine loop to be within boundaries of sphere
int istart = vi - r;
int iend = vi + r;
int jstart = vj - r;
int jend = vj + r;
int kstart = vk - r;
int kend = vk + r;
if (istart < 0) {
istart = 0;
}
if (iend > xdim-1) {
iend = xdim-1;
}
if (jstart < 0) {
jstart = 0;
}
if (jend > ydim - 1) {
jend = ydim-1;
}
if (kstart < 0) {
kstart = 0;
}
if (kend > zdim - 1)
kend = zdim - 1;
//-----------------------------------------------------------
// Begin iterating through all points
//-----------------------------------------------------------
for (int k = kstart; k < kend+1; ++k)
{
for (int j = jstart; j < jend+1; ++j)
{
for (int i = istart; i < iend+1; ++i)
{
if (i == vi && j == vj && k == vk)
continue;
else
{
xcor = pow((xgrid[i][j][k] - xyz[0]), 2);
ycor = pow((ygrid[i][j][k] - xyz[1]), 2);
zcor = pow((zgrid[i][j][k] - xyz[2]), 2);
double rsqr = pow(r, 2);
double sphere = xcor + ycor + zcor;
if (sphere <= rsqr)
{
xidx[counter]=i;
yidx[counter]=j;
zidx[counter] = k;
counter = counter + 1;
}
else
{
}
//cout << "counter = " << counter - 1;
}
}
}
}
// erase all appending zeros that are not voxels within sphere
xidx.erase(xidx.begin() + (counter), xidx.end());
yidx.erase(yidx.begin() + (counter), yidx.end());
zidx.erase(zidx.begin() + (counter), zidx.end());
}
You already appear to have used my favourite trick for this sort of thing, getting rid of the relatively expensive square root functions and just working with the squared values of the radius and center-to-point distance.
One other possibility which may speed things up (a) is to replace all the:
xyzzy = pow (plugh, 2)
calls with the simpler:
xyzzy = plugh * plugh
You may find the removal of the function call could speed things up, however marginally.
Another possibility, if you can establish the maximum size of the target array, is to use a real array rather than a vector. I know they make the vector code as insanely optimal as possible, but it still won't match a fixed-size array for performance (since it has to do everything the fixed-size array does plus handle possible expansion).
Again, this may only offer very marginal improvement at the cost of more memory usage but trading space for time is a classic optimisation strategy.
Other than that, ensure you're using the compiler optimisations wisely. The default build in most cases has a low level of optimisation to make debugging easier. Ramp that up for production code.
(a) As with all optimisations, you should measure, not guess! These suggestions are exactly that: suggestions. They may or may not improve the situation, so it's up to you to test them.
One of your biggest problems, and one that is probably preventing the compiler from making a lot of optimisations, is that you are not using the regular nature of your grid.
If you are really using a regular grid then
xgrid[i][j][k] = x_0 + i * dxi + j * dxj + k * dxk
ygrid[i][j][k] = y_0 + i * dyi + j * dyj + k * dyk
zgrid[i][j][k] = z_0 + i * dzi + j * dzj + k * dzk
If your grid is axis aligned then
xgrid[i][j][k] = x_0 + i * dxi
ygrid[i][j][k] = y_0 + j * dyj
zgrid[i][j][k] = z_0 + k * dzk
Replacing these inside your core loop should result in significant speedups.
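For illustration, here is a sketch of what the whole search could look like on an axis-aligned regular grid; the spacings dxi, dyj, dzk and the choice to measure r in world units are assumptions about how the grid was constructed:
#include <algorithm>
#include <vector>

// Indices of all voxels within a sphere of radius r (world units) centred on
// voxel (vi, vj, vk) of an axis-aligned regular grid with spacings dxi, dyj, dzk.
void findNeighboursRegular(int vi, int vj, int vk, double r,
                           int xdim, int ydim, int zdim,
                           double dxi, double dyj, double dzk,
                           std::vector<int>& xidx, std::vector<int>& yidx, std::vector<int>& zidx)
{
    const double r2 = r * r;
    // Bounding box of the sphere in index space, clamped to the grid.
    int istart = std::max(0, int(vi - r / dxi)), iend = std::min(xdim - 1, int(vi + r / dxi));
    int jstart = std::max(0, int(vj - r / dyj)), jend = std::min(ydim - 1, int(vj + r / dyj));
    int kstart = std::max(0, int(vk - r / dzk)), kend = std::min(zdim - 1, int(vk + r / dzk));
    for (int k = kstart; k <= kend; ++k) {
        double dz = (k - vk) * dzk;
        for (int j = jstart; j <= jend; ++j) {
            double dy = (j - vj) * dyj;
            for (int i = istart; i <= iend; ++i) {
                if (i == vi && j == vj && k == vk)
                    continue;
                double dx = (i - vi) * dxi;
                if (dx * dx + dy * dy + dz * dz <= r2) {
                    xidx.push_back(i);
                    yidx.push_back(j);
                    zidx.push_back(k);
                }
            }
        }
    }
}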
You could do two things: reduce the number of points you are testing for inclusion, and simplify the problem to multiple 2D tests.
If you take the sphere and look at it down the z axis, you have all the points from y+r to y-r in the sphere. Using each of these y values you can slice the sphere into circles, each containing all the points in the x/z plane limited to the circle's radius at that specific y you are testing. Calculating the radius of the circle is a simple matter of solving for the length of the base of a right-angled triangle.
Right now you are testing all the points in a cube, but towards the top and bottom of the sphere most of those points are excluded. The idea behind the above algorithm is that you can limit the points tested at each level of the sphere to the square containing the circle at that height.
Here is a simple hand-drawn graphic showing the sphere from the side view.
Here we are looking at the slice of the sphere that has the radius ab. Since you know the lengths ac and bc of the right-angled triangle, you can calculate ab using Pythagoras' theorem. Now you have a simple circle that you can test the points in; then move down a level, reduce the length ac, recalculate ab, and repeat.
Now once you have that you can actually do a little more optimization. Firstly, you do not need to test every point against the circle; you only need to test one quarter of the points. If you test the points in the upper left quadrant of the circle (the slice of the sphere), then the points in the other three quadrants are just mirror images of that same point, offset either to the right, the bottom, or diagonally from the point determined to be in the first quadrant.
Then finally, you only need to do the circle slices of the top half of the sphere, because the bottom half is just a mirror of the top half. In the end you only test a quarter of the points for containment in the sphere. This should be a huge performance boost.
I hope that makes sense; I am not at a machine right now where I can provide a sample.
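Since no sample could be included above, here is a rough sketch of the slicing idea on a unit-spaced grid; clamping to the grid bounds and the quadrant/half mirroring optimisations are omitted for brevity:
#include <array>
#include <cmath>
#include <vector>

// For each y level inside the sphere, scan only the square bounding the
// circular slice at that height (radius from Pythagoras: ab^2 = r^2 - dy^2).
void pointsInSphereBySlices(int cx, int cy, int cz, int r,
                            std::vector<std::array<int, 3>>& out)
{
    for (int dy = -r; dy <= r; ++dy) {
        int sliceR = int(std::floor(std::sqrt(double(r * r - dy * dy))));
        for (int dz = -sliceR; dz <= sliceR; ++dz)
            for (int dx = -sliceR; dx <= sliceR; ++dx)
                if (dx * dx + dy * dy + dz * dz <= r * r)
                    out.push_back({cx + dx, cy + dy, cz + dz});
    }
}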
The simple thing here would be a 3D flood fill from the center of the sphere, rather than iterating over the enclosing cube, as you visit fewer points. Moreover, you should implement the iterative version of the flood fill to get more efficiency.
Flood Fill
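For completeness, a minimal sketch of an iterative (queue-based) 3D flood fill that starts at the centre voxel and only expands to neighbours still inside the sphere; unit grid spacing and the flat visited array are assumptions:
#include <array>
#include <queue>
#include <vector>

void floodFillSphere(int vi, int vj, int vk, int r,
                     int xdim, int ydim, int zdim,
                     std::vector<std::array<int, 3>>& out)
{
    auto inSphere = [&](int i, int j, int k) {
        int dx = i - vi, dy = j - vj, dz = k - vk;
        return dx * dx + dy * dy + dz * dz <= r * r;
    };
    auto idx = [&](int i, int j, int k) { return (size_t(k) * ydim + j) * xdim + i; };
    std::vector<char> visited(size_t(xdim) * ydim * zdim, 0);
    std::queue<std::array<int, 3>> q;
    q.push({vi, vj, vk});
    visited[idx(vi, vj, vk)] = 1;
    const int step[6][3] = {{1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};
    while (!q.empty()) {
        std::array<int, 3> p = q.front();
        q.pop();
        out.push_back(p);                      // p is guaranteed to be inside the sphere
        for (const auto& s : step) {
            int i = p[0] + s[0], j = p[1] + s[1], k = p[2] + s[2];
            if (i < 0 || j < 0 || k < 0 || i >= xdim || j >= ydim || k >= zdim)
                continue;
            if (visited[idx(i, j, k)] || !inSphere(i, j, k))
                continue;
            visited[idx(i, j, k)] = 1;
            q.push({i, j, k});
        }
    }
}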

How to fit the 2D scatter data with a line with C++

I used to work with MATLAB, and for the question I raised I can use p = polyfit(x,y,1) to estimate the best-fit line for the scatter data in a plane. I was wondering which resources I can rely on to implement the line fitting algorithm in C++. I understand there are a lot of algorithms for this subject, and I expect the algorithm to be fast while obtaining accuracy comparable to MATLAB's polyfit function.
This page describes the algorithm more simply than Wikipedia, without the extra steps of calculating the means, etc.: http://faculty.cs.niu.edu/~hutchins/csci230/best-fit.htm . Almost quoted from there, in C++ it's:
#include <vector>
#include <cmath>
struct Point {
double _x, _y;
};
struct Line {
double _slope, _yInt;
double getYforX(double x) {
return _slope*x + _yInt;
}
// Construct line from points
bool fitPoints(const std::vector<Point> &pts) {
int nPoints = pts.size();
if( nPoints < 2 ) {
// Fail: infinitely many lines passing through this single point
return false;
}
double sumX=0, sumY=0, sumXY=0, sumX2=0;
for(int i=0; i<nPoints; i++) {
sumX += pts[i]._x;
sumY += pts[i]._y;
sumXY += pts[i]._x * pts[i]._y;
sumX2 += pts[i]._x * pts[i]._x;
}
double xMean = sumX / nPoints;
double yMean = sumY / nPoints;
double denominator = sumX2 - sumX * xMean;
// You can tune the eps (1e-7) below for your specific task
if( std::fabs(denominator) < 1e-7 ) {
// Fail: it seems a vertical line
return false;
}
_slope = (sumXY - sumX * yMean) / denominator;
_yInt = yMean - _slope * xMean;
return true;
}
};
Please, be aware that both this algorithm and the algorithm from Wikipedia ( http://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line ) fail in case the "best" description of points is a vertical line. They fail because they use
y = k*x + b
line equation, which is intrinsically incapable of describing vertical lines. If you also want to cover the cases when the data points are "best" described by vertical lines, you need a line fitting algorithm which uses
A*x + B*y + C = 0
line equation. You can still modify the current algorithm to produce that equation:
y = k*x + b <=>
y - k*x - b = 0 <=>
B=1, A=-k, C=-b
In terms of the above code:
B=1, A=-_slope, C=-_yInt
And in "then" block of the if checking for denominator equal to 0, instead of // Fail: it seems a vertical line, produce the following line equation:
x = xMean <=>
x - xMean = 0 <=>
A=1, B=0, C=-xMean
I've just noticed that the original article I was referring to has been deleted. This web page proposes a slightly different formula for line fitting: http://hotmath.com/hotmath_help/topics/line-of-best-fit.html
double denominator = sumX2 - 2 * sumX * xMean + nPoints * xMean * xMean;
...
_slope = (sumXY - sumY*xMean - sumX * yMean + nPoints * xMean * yMean) / denominator;
The formulas are identical because nPoints*xMean == sumX and nPoints*xMean*yMean == sumX * yMean == sumY * xMean.
I would suggest coding it from scratch. It is a very simple implementation in C++. You can code up both the intercept and gradient for least-squares fit (the same method as polyfit) from your data directly from the formulas here
http://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line
These are closed form formulas that you can easily evaluate yourself using loops. If you were using higher degree fits then I would suggest a matrix library or more sophisticated algorithms but for simple linear regression as you describe above this is all you need. Matrices and linear algebra routines would be overkill for such a problem (in my opinion).
The equation of a line is A*x + B*y + C = 0.
It can easily (when B is not too close to zero) be converted to y = (-A/B)*x + (-C/B).
typedef double scalar_type;
typedef std::array< scalar_type, 2 > point_type;
typedef std::vector< point_type > cloud_type;
bool fit( scalar_type & A, scalar_type & B, scalar_type & C, cloud_type const& cloud )
{
if( cloud.size() < 2 ){ return false; }
scalar_type X=0, Y=0, XY=0, X2=0, Y2=0;
for( auto const& point: cloud )
{ // Do all calculation symmetric regarding X and Y
X += point[0];
Y += point[1];
XY += point[0] * point[1];
X2 += point[0] * point[0];
Y2 += point[1] * point[1];
}
X /= cloud.size();
Y /= cloud.size();
XY /= cloud.size();
X2 /= cloud.size();
Y2 /= cloud.size();
A = - ( XY - X * Y ); //!< Common for both solution
scalar_type Bx = X2 - X * X;
scalar_type By = Y2 - Y * Y;
if( fabs( Bx ) < fabs( By ) ) //!< Test verticality/horizontality
{ // Line is more Vertical.
B = By;
std::swap(A,B);
}
else
{ // Line is more Horizontal.
// Classical solution, when we expect more horizontal-like line
B = Bx;
}
C = - ( A * X + B * Y );
//Optional normalization:
// scalar_type D = sqrt( A*A + B*B );
// A /= D;
// B /= D;
// C /= D;
return true;
}
You can also use or go over this implementation; there is also documentation here.
Fitting a line can be accomplished in different ways.
Least squares means minimizing the sum of the squared distances.
But you could take another cost function, for example the (not squared) distance. Normally, though, you use the squared distance (least squares).
There is also the possibility of defining the distance in different ways. Normally you just use the "y"-axis for the distance. But you could also use the total/orthogonal distance, where the distance is calculated in both the x- and y-directions. This can be a better fit if you also have errors in the x direction (let it be the time of measurement) and you didn't start the measurement at the exact time recorded in the data. Closed-form algorithms exist for both the least squares and the total least squares line fits. So if you fit with one of those, you will get the line with the minimal sum of squared distances to the data points. You can't fit a better line in the sense of your definition; you could only change the definition, for example by taking another cost function or by defining the distance in another way.
There is a lot of material on fitting models to data that you could look into, but normally they all use the "least squares line fit" and you should be fine most of the time. But if you have a special case, it can be necessary to think about what you're doing. Doing least squares takes maybe a few minutes; thinking about which method best fits the problem involves understanding the math, which can take an indefinite amount of time :-).
Note: This answer is NOT AN ANSWER TO THIS QUESTION but to this one, "Line closest to a set of points", which has been flagged as a "duplicate" of this one (incorrectly, in my opinion); there is no way to add new answers to it.
The question asks for:
Find the line whose distance from all the points is minimum? By distance I mean the shortest distance between the point and the line.
The most usual interpretation of distance "between the point and the line" is the euclidean distance and the most common interpretation of "from all points" is the sum of distances (in absolute or squared value).
When the target is to minimize the sum of squared euclidean distances, linear regression (least squares) is not the algorithm to use. In addition, linear regression cannot result in a vertical line. The algorithm to be used is "total least squares". See for example Wikipedia for the problem description and this answer on Math Stack Exchange for details about the formulation.
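For reference, here is a small closed-form sketch of a total least squares (orthogonal) line fit in the A*x + B*y + C = 0 form; it takes the line direction as the principal axis of the point covariance, so vertical lines are handled naturally:
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Orthogonal (total least squares) line fit: A*x + B*y + C = 0, with (A, B) a unit normal.
bool fitLineTLS(const std::vector<Pt>& pts, double& A, double& B, double& C)
{
    size_t n = pts.size();
    if (n < 2) return false;
    double mx = 0, my = 0;
    for (const auto& p : pts) { mx += p.x; my += p.y; }
    mx /= n; my /= n;
    double sxx = 0, syy = 0, sxy = 0;
    for (const auto& p : pts) {
        sxx += (p.x - mx) * (p.x - mx);
        syy += (p.y - my) * (p.y - my);
        sxy += (p.x - mx) * (p.y - my);
    }
    // Orientation of the principal (largest-variance) axis of the point cloud.
    double theta = 0.5 * std::atan2(2.0 * sxy, sxx - syy);
    A = -std::sin(theta);
    B =  std::cos(theta);
    C = -(A * mx + B * my);   // the fitted line passes through the centroid
    return true;
}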
To fit a line y = param[0]*x + param[1], simply do this:
double sum_x = 0, sum_y = 0, sum_xy = 0, sum_x2 = 0;
// loop over the ninliers data points:
{
    sum_x  += x[i];
    sum_y  += y[i];
    sum_xy += x[i] * y[i];
    sum_x2 += x[i] * x[i];
}
// means
double mean_x = sum_x / ninliers;
double mean_y = sum_y / ninliers;
float varx = sum_x2 - sum_x * mean_x;
float cov  = sum_xy - sum_x * mean_y;
// check for zero varx before dividing
param[0] = cov / varx;
param[1] = mean_y - param[0] * mean_x;
More on the topic: http://easycalculation.com/statistics/learn-regression.php
(the formulas are the same, just multiplied and divided by N, the sample size). If you want to fit a plane to 3D data, use a similar approach:
http://www.mymathforum.com/viewtopic.php?f=13&t=8793
Disclaimer: all quadratic fits are linear and optimal in the sense that they reduce the noise in the parameters. However, you might be interested in reducing the noise in the data instead. You might also want to ignore outliers, since they can bias your solutions greatly. Both problems can be solved with RANSAC. See my post at:

Optimized float Blur variations

I am looking for optimized functions in C++ for calculating areal averages of floats. The function is passed a source float array, a destination float array (the same size as the source array), the array width and height, and the "blurring" area width and height.
The function should "wrap around" the edges for the blurring/average calculations.
Here is example code that blurs with a rectangular shape:
/*****************************************
* Find averages extended variations
*****************************************/
void findaverages_ext(float *floatdata, float *dest_data, int fwidth, int fheight, int scale, int aw, int ah, int weight, int xoff, int yoff)
{
printf("findaverages_ext scale: %d, width: %d, height: %d, weight: %d \n", scale, aw, ah, weight);
float total = 0.0;
int spos = scale * fwidth * fheight;
int apos;
int w = aw;
int h = ah;
float* f_temp = new float[fwidth * fheight];
// Horizontal
for(int y=0;y<fheight ;y++)
{
Sleep(10); // Do not burn your processor
total = 0.0;
// Process entire window for first pixel (including wrap-around edge)
for (int kx = 0; kx <= w; ++kx)
if (kx >= 0 && kx < fwidth)
total += floatdata[y*fwidth + kx];
// Wrap
for (int kx = (fwidth-w); kx < fwidth; ++kx)
if (kx >= 0 && kx < fwidth)
total += floatdata[y*fwidth + kx];
// Store first window
f_temp[y*fwidth] = (total / (w*2+1));
for(int x=1;x<fwidth ;x++) // x width changes with y
{
// Subtract pixel leaving window
if (x-w-1 >= 0)
total -= floatdata[y*fwidth + x-w-1];
// Add pixel entering window
if (x+w < fwidth)
total += floatdata[y*fwidth + x+w];
else
total += floatdata[y*fwidth + x+w-fwidth];
// Store average
apos = y * fwidth + x;
f_temp[apos] = (total / (w*2+1));
}
}
// Vertical
for(int x=0;x<fwidth ;x++)
{
Sleep(10); // Do not burn your processor
total = 0.0;
// Process entire window for first pixel
for (int ky = 0; ky <= h; ++ky)
if (ky >= 0 && ky < fheight)
total += f_temp[ky*fwidth + x];
// Wrap
for (int ky = fheight-h; ky < fheight; ++ky)
if (ky >= 0 && ky < fheight)
total += f_temp[ky*fwidth + x];
// Store first if not out of bounds
dest_data[spos + x] = (total / (h*2+1));
for(int y=1;y< fheight ;y++) // y width changes with x
{
// Subtract pixel leaving window
if (y-h-1 >= 0)
total -= f_temp[(y-h-1)*fwidth + x];
// Add pixel entering window
if (y+h < fheight)
total += f_temp[(y+h)*fwidth + x];
else
total += f_temp[(y+h-fheight)*fwidth + x];
// Store average
apos = y * fwidth + x;
dest_data[spos+apos] = (total / (h*2+1));
}
}
delete[] f_temp;
}
What I need are similar functions that, for each pixel, find the average (blur) of pixels over shapes other than rectangles.
The specific shapes are: "S" (sharp edges), "O" (rectangular but hollow), "+" and "X", where the average float is stored at the center pixel of the destination data array. The size of the blur shape should be variable in width and height.
The functions do not need to be pixel-perfect, only optimized for performance. There could be separate functions for each shape.
I am also happy if anyone can give me tips on how to optimize the example function above for rectangular blurring.
What you are trying to implement are various sorts of digital filters for image processing. This is equivalent to convolving two signals, where the second one would be the filter's impulse response. So far, you have recognized that a "rectangular average" is separable. By separable I mean you can split the filter into two parts: one that operates along the X axis and one that operates along the Y axis, in each case a 1D filter. This is nice and can save you lots of cycles. But not every filter is separable. Averaging along other shapes (S, O, +, X) is not separable. You need to actually compute a 2D convolution for these.
As for performance, you can speed up your 1D averages by properly implementing a "moving average". A proper "moving average" implementation requires only a fixed, small amount of work per pixel regardless of the averaging "window". This can be done by recognizing that neighbouring pixels of the target image are computed from an average of almost the same source pixels. You can reuse these sums for the neighbouring target pixel by adding one new pixel intensity and subtracting an older one (for the 1D case).
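For example, a 1D moving average along one row with wrap-around could be sketched like this; w is the half-width, so the window has 2*w+1 taps:
#include <vector>

// O(1) work per output sample: reuse the previous window sum by adding the
// sample entering the window and subtracting the one leaving it.
void movingAverageWrap(const std::vector<float>& in, std::vector<float>& out, int w)
{
    const int n = int(in.size());
    out.resize(n);
    auto wrap = [n](int i) { return (i % n + n) % n; };
    float total = 0.0f;
    for (int k = -w; k <= w; ++k)           // full window for the first sample only
        total += in[wrap(k)];
    out[0] = total / (2 * w + 1);
    for (int x = 1; x < n; ++x) {
        total += in[wrap(x + w)] - in[wrap(x - w - 1)];
        out[x] = total / (2 * w + 1);
    }
}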
In case of arbitrary non-separable filters your best bet performance-wise is "fast convolution", which is FFT-based. Check out www.dspguide.com. If I recall correctly, there is even a chapter on how to properly do "fast convolution" using the FFT algorithm. Although they explain it for one-dimensional signals, it also applies to two-dimensional signals. For images you have to perform 2D FFT/iFFT transforms.
To add to sellibitze's answer, you can use a summed area table for your O, S and + kernels (not for the X one though). That way you can convolve a pixel in constant time, and it's probably the fastest method to do it for kernel shapes that allow it.
Basically, a SAT is a data structure that lets you calculate the sum of any axis-aligned rectangle. For the O kernel, after you've built a SAT, you'd take the sum of the outer rect's pixels and subtract the sum of the inner rect's pixels. The S and + kernels can be implemented similarly.
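For instance, the mean over an "O" (hollow rectangle) neighbourhood comes down to two table queries; this sketch assumes a summed-area table sat where sat[y][x] holds the image sum over rows [0, y) and columns [0, x), and it omits border handling:
#include <cstdint>
#include <vector>

// Mean over a hollow square ring centred at (cx, cy): outer sum minus inner sum.
double ringMean(const std::vector<std::vector<int64_t>>& sat,
                int cx, int cy, int outerR, int innerR)
{
    auto rectSum = [&](int x0, int y0, int x1, int y1) {   // half-open rectangle
        return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0];
    };
    int64_t outer = rectSum(cx - outerR, cy - outerR, cx + outerR + 1, cy + outerR + 1);
    int64_t inner = rectSum(cx - innerR, cy - innerR, cx + innerR + 1, cy + innerR + 1);
    int count = (2 * outerR + 1) * (2 * outerR + 1) - (2 * innerR + 1) * (2 * innerR + 1);
    return double(outer - inner) / count;
}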
For the X kernel you can use a different approach. A skewed box filter is separable:
You can convolve with two long, thin, skewed box filters, then add the two resulting images together. The center of the X will be counted twice, so you will need to convolve with another skewed box filter and subtract that.
Apart from that, you can optimize your box blur in many ways.
Remove the two ifs from the inner loop by splitting that loop into three loops - two short loops that do checks, and one long loop that doesn't. Or you could pad your array with extra elements from all directions - that way you can simplify your code.
Calculate values like h * 2 + 1 outside the loops.
An expression like f_temp[ky*fwidth + x] does two adds and one multiplication. You can initialize a pointer to &f_temp[ky*fwidth] outside the loop, and just increment that pointer in the loop.
Don't do the division by w * 2 + 1 in the horizontal step. Instead, divide by (w * 2 + 1) * (h * 2 + 1) in the vertical step.