Gaussian blur not uniform - C++

I have been trying to implement a simple Gaussian blur algorithm for my image editing program. However, I have been having some trouble making it work, and I think the problem lies in the snippet below:
for( int j = 0; j < pow( kernel_size, 2 ); j++ )
{
    int idx = ( i + kx + ( ky * img.width ));
    //Try and overload this whenever possible
    valueR += ( img.p_pixelArray[ idx ].r * kernel[ j ] );
    valueG += ( img.p_pixelArray[ idx ].g * kernel[ j ] );
    valueB += ( img.p_pixelArray[ idx ].b * kernel[ j ] );
    if( kx == kernel_limit )
    {
        kx = -kernel_limit;
        ky++;
    }
    else
    {
        kx++;
    }
}
kx = -kernel_limit;
ky = -kernel_limit;
A brief explanation of the code above: kernel_size is the size of the kernel (or matrix) generated by the Gaussian blur formula. kx and ky are variables used for iterating over the kernel. i comes from the parent loop, which nests this one and goes over every pixel in the image. Each value variable simply holds a float R, G, or B value and is used afterwards to obtain the final result. The if-else is used to advance kx and ky, and idx is used to find the correct pixel. kernel_limit is a variable set to
( kernel_size - 1 ) / 2
so I can have kx going from -1 (with a 3x3 kernel) to +1, and the same with ky. I think the problem lies with the line
int idx = ( i + kx + ( ky * img.width ));
But I am not sure. In the image I get (omitted here), the color is blurred in a diagonal direction, and it looks more like some kind of motion blur than a Gaussian blur. If someone could help out, I would be very grateful.
EDIT:
The way I fill the kernel is as follows:
for( int i = 0; i < pow( kernel_size, 2 ); i++ )
{
    // This. Is. Lisp.
    kernel[i] = (( 1 / ( 2 * pi * pow( sigma, 2 ))) * pow (e, ( -((( pow( kx, 2 ) + pow( ky, 2 )) / 2 * pow( sigma, 2 ))))));
    if(( kx + 1 ) == kernel_size )
    {
        kx = 0;
        ky++;
    }
    else
    {
        kx++;
    }
}

A few problems:
Your Gaussian is missing brackets (even though you already have plenty) around 2 * pow( sigma, 2 ); as written, you multiply by the variance instead of dividing by it.
But your real problem is that your Gaussian is centered at kx = ky = 0, while you let the indices run from 0 to kernel_size - 1 instead of from -kernel_limit to kernel_limit, so the peak of the kernel ends up in a corner. This results in the diagonal blurring. Something like the following should work better:
kx = -kernel_limit;
ky = -kernel_limit;
int kernel_size_sq = kernel_size * kernel_size;
for( int i = 0; i < kernel_size_sq; i++ )
{
    double sigma_sq = sigma * sigma;
    double kx_sq = kx * kx;
    double ky_sq = ky * ky;
    kernel[i] = 1.0 / ( 2 * pi * sigma_sq ) * exp( -( kx_sq + ky_sq ) / ( 2 * sigma_sq ));
    if( kx == kernel_limit )
    {
        kx = -kernel_limit;
        ky++;
    }
    else
    {
        kx++;
    }
}
Also note how I got rid of your Lisp-ness, along with some other improvements: use intermediate variables for clarity (the compiler will optimize them away if you ask it to); a simple multiplication is faster than pow(x, 2); and pow(e, x) == exp(x).
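One more note that is easy to miss: because the sampled kernel only approximates the continuous Gaussian, its weights will not sum exactly to 1, which slightly brightens or darkens the image. A minimal sketch of the usual fix, normalizing after filling the kernel:

double sum = 0.0;
for( int i = 0; i < kernel_size_sq; i++ )
    sum += kernel[i];
for( int i = 0; i < kernel_size_sq; i++ )
    kernel[i] /= sum;   // weights now sum to 1, preserving overall brightness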

Related

Algorithm for adjustment of image levels

I need to implement in C++ an algorithm for adjusting image levels that works like the Levels function in Photoshop or GIMP. That is, the inputs are: a color RGB image to be adjusted, white point, black point, midtone point, and output from/to values. But I haven't yet found any information on how to perform this adjustment. Could someone recommend an algorithm description or materials to study?
So far I've come up with the following code myself, but it doesn't give the expected result: compared to what I see in GIMP, for example, the image becomes too light. Below is the current fragment of the code:
const int normalBlackPoint = 0;
const int normalMidtonePoint = 127;
const int normalWhitePoint = 255;
const double normalLowRange = normalMidtonePoint - normalBlackPoint + 1;
const double normalHighRange = normalWhitePoint - normalMidtonePoint;
int blackPoint = 53;
int midtonePoint = 110;
int whitePoint = 168;
int outputFrom = 0;
int outputTo = 255;
double outputRange = outputTo - outputFrom + 1;
double lowRange = midtonePoint - blackPoint + 1;
double highRange = whitePoint - midtonePoint;
double fullRange = whitePoint - blackPoint + 1;
double lowPart = lowRange / fullRange;
double highPart = highRange / fullRange;
int dim(256);
cv::Mat lut(1, &dim, CV_8U);
for(int i = 0; i < 256; ++i)
{
    double p = i > normalMidtonePoint
        ? (static_cast<double>(i - normalMidtonePoint) / normalHighRange) * highRange * highPart + lowPart
        : (static_cast<double>(i + 1) / normalLowRange) * lowRange * lowPart;
    int v = static_cast<int>(outputRange * p) + outputFrom - 1;
    if(v < 0) v = 0;
    else if(v > 255) v = 255;
    lut.at<uchar>(i) = v;
}
....
cv::Mat sourceImage = cv::imread(inputFileName, CV_LOAD_IMAGE_COLOR);
if(!sourceImage.data)
{
    std::cerr << "Error: couldn't load image " << inputFileName << "." << std::endl;
    continue;
}
#if 0
const int forwardConversion = CV_BGR2YUV;
const int reverseConversion = CV_YUV2BGR;
#else
const int forwardConversion = CV_BGR2Lab;
const int reverseConversion = CV_Lab2BGR;
#endif
cv::Mat convertedImage;
cv::cvtColor(sourceImage, convertedImage, forwardConversion);
// Extract the L channel
std::vector<cv::Mat> convertedPlanes(3);
cv::split(convertedImage, convertedPlanes);
cv::LUT(convertedPlanes[0], lut, convertedPlanes[0]);
//dst.copyTo(convertedPlanes[0]);
cv::merge(convertedPlanes, convertedImage);
cv::Mat resImage;
cv::cvtColor(convertedImage, resImage, reverseConversion);
cv::imwrite(outputFileName, resImage);
Pseudocode for Photoshop's Levels Adjustment
First, calculate the gamma correction value to use for the midtone adjustment (if desired). The following roughly simulates Photoshop's technique, which applies gamma 9.99-1.00 for midtone values 0-128, and 1.00-0.01 for 128-255.
Apply gamma correction:
Gamma = 1
MidtoneNormal = Midtones / 255
If Midtones < 128 Then
    MidtoneNormal = MidtoneNormal * 2
    Gamma = 1 + ( 9 * ( 1 - MidtoneNormal ) )
    Gamma = Min( Gamma, 9.99 )
Else If Midtones > 128 Then
    MidtoneNormal = ( MidtoneNormal * 2 ) - 1
    Gamma = 1 - MidtoneNormal
    Gamma = Max( Gamma, 0.01 )
End If
GammaCorrection = 1 / Gamma
Then, for each channel value R, G, B (0-255) for each pixel, do the following in order.
Apply the input levels:
ChannelValue = 255 * ( ( ChannelValue - ShadowValue ) /
( HighlightValue - ShadowValue ) )
Apply the midtones:
If Midtones <> 128 Then
    ChannelValue = 255 * ( Pow( ( ChannelValue / 255 ), GammaCorrection ) )
End If
Apply the output levels:
ChannelValue = ( ChannelValue / 255 ) *
( OutHighlightValue - OutShadowValue ) + OutShadowValue
Where:
All channel and adjustment parameter values are integers, 0-255 inclusive
Shadow/Midtone/HighlightValue are the input adjustment values (defaults 0, 128, 255)
OutShadow/HighlightValue are the output adjustment values (defaults 0, 255)
You should optimize things and make sure values are kept in bounds (0-255 for each channel).
For a more accurate simulation of Photoshop, you can use a non-linear interpolation curve if Midtones < 128. Photoshop also chops off the darkest and lightest 0.1% of the values by default.
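As a rough sketch (my own, not from the pseudocode's author), the whole adjustment can be baked into a 256-entry lookup table; all the names here are mine:

#include <cmath>

void buildLevelsLut( unsigned char lut[256],
                     int shadow, int midtones, int highlight,
                     int outShadow, int outHighlight )
{
    // Gamma correction derived from the midtone value, as in the pseudocode
    double gamma = 1.0;
    double mid = midtones / 255.0;
    if( midtones < 128 )
    {
        gamma = 1.0 + 9.0 * ( 1.0 - mid * 2.0 );
        if( gamma > 9.99 ) gamma = 9.99;
    }
    else if( midtones > 128 )
    {
        gamma = 1.0 - ( mid * 2.0 - 1.0 );
        if( gamma < 0.01 ) gamma = 0.01;
    }
    double gammaCorrection = 1.0 / gamma;
    for( int i = 0; i < 256; i++ )
    {
        // Input levels
        double v = 255.0 * ( i - shadow ) / (double)( highlight - shadow );
        if( v < 0.0 ) v = 0.0;
        if( v > 255.0 ) v = 255.0;
        // Midtones
        if( midtones != 128 )
            v = 255.0 * std::pow( v / 255.0, gammaCorrection );
        // Output levels
        v = ( v / 255.0 ) * ( outHighlight - outShadow ) + outShadow;
        if( v < 0.0 ) v = 0.0;
        if( v > 255.0 ) v = 255.0;
        lut[i] = (unsigned char)( v + 0.5 );
    }
}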
Ignoring the midtone/gamma adjustment, the Levels function is a simple linear scaling.
All input values are first linearly scaled so that all values less than or equal to the "black point" are set to 0, and all values greater than or equal to the white point are set to 255.
Then all values are linearly scaled from 0/255 to the output range.
As for the mid-point, it depends on what you actually mean by that.
In GIMP, there is a Gamma value. The Gamma value is a simple exponent of the input values (after restricting to the black/white points).
For Gamma == 1, the values are not changed.
For Gamma < 1, the values are darkened.
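A minimal sketch of the linear part only (ignoring gamma; the function and parameter names are mine):

int levelsLinear( int v, int blackPoint, int whitePoint, int outFrom, int outTo )
{
    // Restrict to the black/white points...
    double t = ( v - blackPoint ) / (double)( whitePoint - blackPoint );
    if( t < 0.0 ) t = 0.0;   // values at or below the black point map to the low end
    if( t > 1.0 ) t = 1.0;   // values at or above the white point map to the high end
    // ...then scale linearly to the output range.
    return (int)( outFrom + t * ( outTo - outFrom ) + 0.5 );
}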

Meanshift algorithm for tracking objects issue computing centroid update of search window

I have been trying to implement the meanshift algorithm for tracking objects, and have gone through the concepts involved.
So far I have managed to successfully generate a backprojected stream from my camera, using a single-channel hue ROI histogram and a single-channel hue video stream, and it seems fine. I know there is a meanshift function in the OpenCV library, but I am trying to implement one myself using the data structures provided by OpenCV, calculating the moments and computing the mean centroid of the search window.
But for some reason I am unable to locate the problem in my code: it keeps converging to the upper-left corner of my video stream for any input ROI (region of interest) to be tracked. The following is a snippet of the function for calculating the centroid of the search window, where I feel the problem lies, but I am not sure what it is. I would really appreciate it if someone could point me in the right direction:
void moment(Mat &backproj, Rect &win){
    int x_c, y_c, x_c_new, y_c_new;
    int idx_row, idx_col;
    double m00 = 0.0, m01 = 0.0, m10 = 0.0;
    double res = 1.0, TOL = 0.003;
    //Set the center of search window as the center of the probabilistic image:
    y_c = (int) backproj.rows / 2;
    x_c = (int) backproj.cols / 2;
    //Centroid search solver until residual below certain tolerance:
    while (res > TOL){
        win.width = (int) 80;
        win.height = (int) 60;
        //First array element at position (x,y) "lower left corner" of the search window:
        win.x = (int) (x_c - win.width / 2);
        win.y = (int) (y_c - win.height / 2);
        //Modulo correction since modulo of negative integer is negative in C:
        if (win.x < 0)
            win.x = win.x % backproj.cols + backproj.cols;
        if (win.y < 0)
            win.y = win.y % backproj.rows + backproj.rows;
        for (int i = 0; i < win.height; i++ ){
            //Traverse along y-axis (height) i.e. rows ensuring wrap around top/bottom boundaries:
            idx_row = (win.y + i) % (int)backproj.rows;
            for (int j = 0; j < win.width; j++ ){
                //Traverse along x-axis (width) i.e. cols ensuring wrap around left/right boundaries:
                idx_col = (win.x + j) % (int)backproj.cols;
                //Compute Moments:
                m00 += (double) backproj.at<uchar>(idx_row, idx_col);
                m10 += (double) backproj.at<uchar>(idx_row, idx_col) * i;
                m01 += (double) backproj.at<uchar>(idx_row, idx_col) * j;
            }
        }
        //Compute new centroid coordinates of the search window:
        x_c_new = (int) ( m10 / m00 );
        y_c_new = (int) ( m01 / m00 );
        //Compute the residual:
        res = sqrt( pow((x_c_new - x_c), 2.0) + pow((y_c_new - y_c), 2.0) );
        //Set new search window centroid coordinates:
        x_c = x_c_new;
        y_c = y_c_new;
    }
}
It's my second-ever query on Stack Overflow, so please excuse me if I forgot to follow any guidelines.
EDIT
Changed m00, m01, m10 to block-level variables within the while loop instead of function-level variables (thanks to Daniel Strul for pointing it out), but the problem still remains: now the search window jumps around the frame boundaries instead of focusing on the ROI.
void moment(Mat &backproj, Rect &win){
    int x_c, y_c, x_c_new, y_c_new;
    int idx_row, idx_col;
    double m00, m01, m10;
    double res = 1.0, TOL = 0.003;
    //Set the center of search window as the center of the probabilistic image:
    y_c = (int) backproj.rows / 2;
    x_c = (int) backproj.cols / 2;
    //Centroid search solver until residual below certain tolerance:
    while (res > TOL){
        m00 = 0.0, m01 = 0.0, m10 = 0.0;
        win.width = (int) 80;
        win.height = (int) 60;
        //First array element at position (x,y) "lower left corner" of the search window:
        win.x = (int) (x_c - win.width / 2);
        win.y = (int) (y_c - win.height / 2);
        //Modulo correction since modulo of negative integer is negative in C:
        if (win.x < 0)
            win.x = win.x % backproj.cols + backproj.cols;
        if (win.y < 0)
            win.y = win.y % backproj.rows + backproj.rows;
        for (int i = 0; i < win.height; i++ ){
            //Traverse along y-axis (height) i.e. rows ensuring wrap around top/bottom boundaries:
            idx_row = (win.y + i) % (int)backproj.rows;
            for (int j = 0; j < win.width; j++ ){
                //Traverse along x-axis (width) i.e. cols ensuring wrap around left/right boundaries:
                idx_col = (win.x + j) % (int)backproj.cols;
                //Compute Moments:
                m00 += (double) backproj.at<uchar>(idx_row, idx_col);
                m10 += (double) backproj.at<uchar>(idx_row, idx_col) * i;
                m01 += (double) backproj.at<uchar>(idx_row, idx_col) * j;
            }
        }
        //Compute new centroid coordinates of the search window:
        x_c_new = (int) ( m10 / m00 );
        y_c_new = (int) ( m01 / m00 );
        //Compute the residual:
        res = sqrt( pow((x_c_new - x_c), 2.0) + pow((y_c_new - y_c), 2.0) );
        //Set new search window centroid coordinates:
        x_c = x_c_new;
        y_c = y_c_new;
    }
}
The reason your algorithm always converges to the upper-left corner, independently of the input data, is that m00, m10 and m01 are never reset to zero:
On iteration 0, for each moment variable m00, m10 and m01, you compute the right value m0
Between iteration 0 and iteration 1, the moment variables are not reset and keep their previous values
Thus, on iteration 1, for each moment variable m00, m10 and m01, you actually sum the new moment with the old one and obtain ( m0 + m1 )
On iteration 2, you carry on summing the new moments on top of the previous ones and obtain ( m0 + m1 + m2 )
And so on, iteration by iteration.
At the very least, the moment variables should be reset at the beginning of each iteration.
Ideally, they should not be function-level variables but rather block-level variables, as they have no use outside the loop iterations (except for debugging purposes):
while (res > TOL){
    ...
    double m00 = 0.0, m01 = 0.0, m10 = 0.0;
    for (int i = 0; i < win.height; i++ ){
        ...
EDIT 1
The reason for the second problem you encounter (the ROI jumping all around the place) is that the computations of the moments are based on the relative coordinates i and j.
Thus, what you compute is [ avg(j), avg(i) ], whereas what you really want is [ avg(y), avg(x) ]. To solve this issue, I had proposed a first solution; I've replaced it with a much simpler solution below.
EDIT 2
The simplest solution is to add the coordinates of the ROI corner right at the end of each iteration:
x_c_new = win.x + (int) ( m10 / m00 ) ;
y_c_new = win.y + (int) ( m01 / m00 );
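Putting both fixes together, the core of the solver would look roughly like this (my own assembly of the two edits above, not verbatim code from the answer; note that here m10 weights the x offset j and m01 the y offset i, following the usual moment convention, whereas the original snippet had them the other way around):

while (res > TOL){
    double m00 = 0.0, m10 = 0.0, m01 = 0.0;   // reset every iteration
    win.x = x_c - win.width / 2;
    win.y = y_c - win.height / 2;
    if (win.x < 0) win.x = win.x % backproj.cols + backproj.cols;
    if (win.y < 0) win.y = win.y % backproj.rows + backproj.rows;
    for (int i = 0; i < win.height; i++ ){
        int idx_row = (win.y + i) % backproj.rows;
        for (int j = 0; j < win.width; j++ ){
            int idx_col = (win.x + j) % backproj.cols;
            double w = (double) backproj.at<uchar>(idx_row, idx_col);
            m00 += w;
            m10 += w * j;   // j is the x offset inside the window
            m01 += w * i;   // i is the y offset inside the window
        }
    }
    //Convert window-relative centroids back to absolute image coordinates:
    x_c_new = win.x + (int) ( m10 / m00 );
    y_c_new = win.y + (int) ( m01 / m00 );
    res = sqrt( pow((double)(x_c_new - x_c), 2.0) + pow((double)(y_c_new - y_c), 2.0) );
    x_c = x_c_new;
    y_c = y_c_new;
}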

How can I calculate the curvature of an extracted contour by opencv?

I used the findContours() method to extract a contour from the image, but I have no idea how to calculate the curvature from a set of contour points. Can somebody help me? Thank you very much!
While the theory behind Gombat's answer is correct, there are some errors in the code as well as in the formulae (the denominator t+n-x should be t+n-t). I have made several changes:
use symmetric derivatives to get more precise locations of curvature maxima
allow a step size for the derivative calculation (can be used to reduce noise from noisy contours)
works with closed contours
Fixes:
* return infinity as curvature if the denominator is 0 (instead of returning 0, as the original code did)
* added the missing square calculation in the denominator
* corrected the check for a 0 divisor
std::vector<double> getCurvature(std::vector<cv::Point> const& vecContourPoints, int step)
{
    std::vector< double > vecCurvature( vecContourPoints.size() );
    if ((int)vecContourPoints.size() < step)
        return vecCurvature;
    auto frontToBack = vecContourPoints.front() - vecContourPoints.back();
    bool isClosed = ((int)std::max(std::abs(frontToBack.x), std::abs(frontToBack.y))) <= 1;
    cv::Point2f pplus, pminus;
    cv::Point2f f1stDerivative, f2ndDerivative;
    for (int i = 0; i < (int)vecContourPoints.size(); i++ )
    {
        const cv::Point2f& pos = vecContourPoints[i];
        int maxStep = step;
        if (!isClosed)
        {
            maxStep = std::min(std::min(step, i), (int)vecContourPoints.size()-1-i);
            if (maxStep == 0)
            {
                vecCurvature[i] = std::numeric_limits<double>::infinity();
                continue;
            }
        }
        int iminus = i-maxStep;
        int iplus = i+maxStep;
        pminus = vecContourPoints[iminus < 0 ? iminus + vecContourPoints.size() : iminus];
        pplus = vecContourPoints[iplus >= (int)vecContourPoints.size() ? iplus - vecContourPoints.size() : iplus];
        f1stDerivative.x = (pplus.x - pminus.x) / (iplus-iminus);
        f1stDerivative.y = (pplus.y - pminus.y) / (iplus-iminus);
        f2ndDerivative.x = (pplus.x - 2*pos.x + pminus.x) / ((iplus-iminus)/2*(iplus-iminus)/2);
        f2ndDerivative.y = (pplus.y - 2*pos.y + pminus.y) / ((iplus-iminus)/2*(iplus-iminus)/2);
        double curvature2D;
        double divisor = f1stDerivative.x*f1stDerivative.x + f1stDerivative.y*f1stDerivative.y;
        if ( std::abs(divisor) > 10e-8 )
        {
            curvature2D = std::abs(f2ndDerivative.y*f1stDerivative.x - f2ndDerivative.x*f1stDerivative.y) /
                          pow(divisor, 3.0/2.0);
        }
        else
        {
            curvature2D = std::numeric_limits<double>::infinity();
        }
        vecCurvature[i] = curvature2D;
    }
    return vecCurvature;
}
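A hypothetical usage sketch (mine, not part of the answer): extract dense contours with cv::findContours and feed them in; a step greater than 1 damps pixel-quantization noise in the derivatives:

#include <opencv2/opencv.hpp>

void curvatureDemo( const cv::Mat& binaryImage )   // assumed to be 8UC1
{
    cv::Mat work = binaryImage.clone();
    std::vector<std::vector<cv::Point>> contours;
    // CHAIN_APPROX_NONE keeps every contour point, which the derivatives need
    cv::findContours( work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE );
    for( const auto& c : contours )
    {
        std::vector<double> curvature = getCurvature( c, 5 );   // step = 5
        // ... use the per-point curvature values ...
    }
}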
For me, curvature is:

k(t) = | x'(t) * y''(t) - y'(t) * x''(t) | / ( x'(t)^2 + y'(t)^2 )^(3/2)

where t is the position inside the contour and x(t) resp. y(t) return the related x resp. y value.
So, according to my definition of curvature, one can implement it this way:
std::vector< float > vecCurvature( vecContourPoints.size() );
cv::Point2f posOld, posOlder;
cv::Point2f f1stDerivative, f2ndDerivative;
for (size_t i = 0; i < vecContourPoints.size(); i++ )
{
    const cv::Point2f& pos = vecContourPoints[i];
    if ( i == 0 ){ posOld = posOlder = pos; }
    f1stDerivative.x = pos.x - posOld.x;
    f1stDerivative.y = pos.y - posOld.y;
    f2ndDerivative.x = - pos.x + 2.0f * posOld.x - posOlder.x;
    f2ndDerivative.y = - pos.y + 2.0f * posOld.y - posOlder.y;
    float curvature2D = 0.0f;
    if ( std::abs(f2ndDerivative.x) > 10e-4 && std::abs(f2ndDerivative.y) > 10e-4 )
    {
        curvature2D = sqrt( std::abs(
            pow( f2ndDerivative.y*f1stDerivative.x - f2ndDerivative.x*f1stDerivative.y, 2.0f ) /
            pow( f2ndDerivative.x + f2ndDerivative.y, 3.0 ) ) );
    }
    vecCurvature[i] = curvature2D;
    posOlder = posOld;
    posOld = pos;
}
It works on non-closed point lists as well. For closed contours, you may want to change the boundary behavior (for the first iterations).
UPDATE:
Explanation of the derivatives:
A derivative of a continuous one-dimensional function f(t) is defined as:

f'(t) = lim_{h -> 0} ( f(t + h) - f(t) ) / h

But we are in a discrete space and have two discrete functions f_x(t) and f_y(t), where the smallest step for t is one, so the first derivative is approximated by:

f'(t) ≈ f(t) - f(t - 1)

The second derivative is the derivative of the first derivative:

f''(t) ≈ f'(t) - f'(t - 1)

Using the approximation of the first derivative, this yields:

f''(t) ≈ f(t) - 2 * f(t - 1) + f(t - 2)
There are other approximations for the derivatives, if you google it, you will find a lot.
Here's a Python implementation mainly based on Philipp's C++ code. For those interested, more details on the derivation can be found in Chapter 10.4.2 of:
Klette & Rosenfeld, 2004: Digital Geometry
import numpy as np

def getCurvature(contour, stride=1):
    curvature = []
    assert stride < len(contour), "stride must be shorter than length of contour"
    for i in range(len(contour)):
        before = i - stride + len(contour) if i - stride < 0 else i - stride
        after = i + stride - len(contour) if i + stride >= len(contour) else i + stride
        f1x, f1y = (contour[after] - contour[before]) / stride
        f2x, f2y = (contour[after] - 2 * contour[i] + contour[before]) / stride**2
        denominator = (f1x**2 + f1y**2)**3 + 1e-11
        curvature_at_i = np.sqrt(4 * (f2y * f1x - f2x * f1y)**2 / denominator) if denominator > 1e-12 else -1
        curvature.append(curvature_at_i)
    return curvature
EDIT:
You can use convexityDefects from OpenCV. Here's a code example that finds fingers based on their contour (variable res):
import math
import cv2

def calculateFingers(res, drawing):  # -> finished bool, cnt: finger count
    # convexity defect
    hull = cv2.convexHull(res, returnPoints=False)
    if len(hull) > 3:
        defects = cv2.convexityDefects(res, hull)
        if type(defects) != type(None):  # avoid crashing. (BUG not found)
            cnt = 0
            for i in range(defects.shape[0]):  # calculate the angle
                s, e, f, d = defects[i][0]
                start = tuple(res[s][0])
                end = tuple(res[e][0])
                far = tuple(res[f][0])
                a = math.sqrt((end[0] - start[0]) ** 2 + (end[1] - start[1]) ** 2)
                b = math.sqrt((far[0] - start[0]) ** 2 + (far[1] - start[1]) ** 2)
                c = math.sqrt((end[0] - far[0]) ** 2 + (end[1] - far[1]) ** 2)
                angle = math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c))  # cosine theorem
                if angle <= math.pi / 2:  # angle less than 90 degrees, treat as a finger
                    cnt += 1
                    cv2.circle(drawing, far, 8, [211, 84, 0], -1)
            return True, cnt
    return False, 0
In my case, I used roughly the same function to estimate the bending of a board while extracting the contour.
OLD COMMENT:
I am currently working on roughly the same problem; great information in this post. I'll come back with a solution when I have it ready.
From Jonasson's answer: shouldn't there be a tuple on the right side too? I believe it won't unpack:
f1x,f1y=(contour[after]-contour[before])/stride
f2x,f2y=(contour[after]-2*contour[i]+contour[before])/stride**2
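For what it's worth (my note, not from the thread): contours returned by cv2.findContours have shape (N, 1, 2), so the differences above are (1, 2) arrays that won't unpack into two scalars. Squeezing the contour first makes the unpacking work:

contour = np.squeeze(contour)  # (N, 1, 2) -> (N, 2); now contour[i] unpacks to x, y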

Trying to use a rotation matrix, and failing

I have been trying to use a rotation matrix to rotate an image. Below is the code I've been using. I have been trying to do so for days now, and every time it seems there is something wrong, but I can't see what I am doing wrong. For example, my image is getting slanted instead of rotating...
The code below is divided into two parts: the actual rotation, and moving the picture upwards to make it appear in the correct spot (it needs to have all its points above 0 to be saved properly). It takes as input an array of pixels (containing position information (x, y) and colour information (r, g, b)), an image (used solely to get its pixel count, i.e. the array size, and its width), and a value in radians for the rotation.
The part responsible for the rotation itself is the one above the line, while the part below the line is responsible for calculating the lowest point in the image and moving all pixels up or to the right so they all fit (I still need to implement a function to change the image size when an image is rotated by 45 degrees, or similar).
void Rotate( Pixel *p_pixelsToRotate, prg::Image* img, float rad )
{
    int imgLength = img->getPixelCount();
    int width = img->getWidth();
    int x { 0 }, y { 0 };
    for( int i = 0; i < imgLength; i++ )
    {
        x = p_pixelsToRotate[i].x;
        y = p_pixelsToRotate[i].y;
        p_pixelsToRotate[i].x = round( cos( rad ) * x - sin( rad ) * y );
        p_pixelsToRotate[i].y = round( sin( rad ) * x + sin( rad ) * y );
    }
===========================================================================
    Pixel* P1 = &p_pixelsToRotate[ width - 1 ]; // Definitions of these are in the supporting docs
    Pixel* P3 = &p_pixelsToRotate[ imgLength - 1 ];
    int xDiff = 0;
    int yDiff = 0;
    if( P1->x < 0 || P3->x < 0 )
    {
        (( P1->x < P3->x )) ? ( xDiff = abs( P1->x )) : ( xDiff = abs( P3->x ));
    }
    if( P1->y < 0 || P3->y < 0 )
    {
        (( P1->y < P3->y )) ? ( yDiff = abs( P1->y )) : ( yDiff = abs( P3->y ));
    }
    for( int i = 0; i < imgLength; i++ )
    {
        p_pixelsToRotate[i].x += xDiff;
        p_pixelsToRotate[i].y += yDiff;
    }
}
I would prefer fixing this myself, but I have been unable to do so for more than a week now. I don't see why the function is not rotating the position information for the array of input pixels. If someone could have a look and maybe spot why my logic isn't working, I would be immensely grateful. Thank you.
Seems you just made a mistake in the rotation matrix itself:
p_pixelsToRotate[i].y = round( sin( rad ) * x + sin( rad ) * y );
^^^---------------change to cos
For one thing, this is a mistake:
p_pixelsToRotate[i].x = round( cos( rad ) * x - sin( rad ) * y );
p_pixelsToRotate[i].y = round( sin( rad ) * x + >>>sin<<<( rad ) * y );
The >>>sin<<< should be cos. This would explain getting a shear rather than a rotation.
Other comments: Storing pixel coordinates in bitmap data is an extremely expensive way to solve the problem of bitmap rotation. The better way is inverse transform sampling. With a source image X and wishing to rotate it with transform R to get Y, you are currently thinking
Y = R X
where X and Y have the pixel coordinates explicitly stored. To use inverse sampling, think instead of the same equation multiplied on both sides by the inverse of R.
R^(-1) Y = X
where the coordinates are implicit. That is, to produce Y[j][i], transform (j,i) with the inverse R^(-1) to get a coordinate (x,y) in the X image. Use this to sample the nearest pixel X[round(x)][round(y)] in X and assign that as Y[j][i].
(Actually, rather than simple rounding, a more sophisticated algorithm will take a weighted average of the X pixels around (x,y) to get a smoother result. How to choose the weights is a big additional topic.)
After you have this working, you can go a step farther. Instead of doing a full matrix-vector multiplication for each pixel, some algebra will show that the previous sampling coordinate can be updated to get an adjacent one (next to the right or left, up or down) with just a couple of additions. This speeds things up considerably.
The inverse of a rotation is trivial to compute! Just negate the rotation angle.
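To make the idea concrete, here is a minimal sketch of inverse sampling for a pure rotation (my illustration, not the answerer's code; the color-only Pixel struct and the row-major src/dst layout are assumptions):

#include <cmath>

struct Pixel { unsigned char r, g, b; };   // coordinates are now implicit

void RotateBySampling( const Pixel* src, Pixel* dst, int width, int height, float rad )
{
    // The inverse of a rotation by rad is a rotation by -rad.
    float c = std::cos( -rad ), s = std::sin( -rad );
    for( int j = 0; j < height; j++ )
    {
        for( int i = 0; i < width; i++ )
        {
            // Map destination pixel (i, j) back into the source image and
            // sample the nearest source pixel (nearest-neighbour sampling).
            int x = (int) std::round( c * i - s * j );
            int y = (int) std::round( s * i + c * j );
            if( x >= 0 && x < width && y >= 0 && y < height )
                dst[ j * width + i ] = src[ y * width + x ];
        }
    }
}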
A last note is that your use of ternary operators o ? o : o to select assignments is truly terrible style. Instead of this:
(( P1->x < P3->x )) ? ( xDiff = abs( P1->x )) : ( xDiff = abs( P3->x ));
say
xDiff = ( P1->x < P3->x ) ? abs( P1->x ) : abs( P3->x );

Taylor McLaughlin Series to estimate the distance of two points

Distance from point to point: dist = sqrt(dx * dx + dy * dy);
But sqrt is too slow and I can't accept that. I found a method in a book, called the Taylor McLaughlin series (presumably the Maclaurin series), to estimate the distance between two points, but I can't comprehend the following code. Thanks to anyone who helps me.
#define MIN(a, b) ((a < b) ? a : b)
int FastDistance2D(int x, int y)
{
    // This function computes the distance from 0,0 to x,y with 3.5% error
    // First compute the absolute value of x, y
    x = abs(x);
    y = abs(y);
    // Compute the minimum of x, y
    int mn = MIN(x, y);
    // Return the distance
    return x + y - (mn >> 1) - (mn >> 2) + (mn >> 4);
}
I have consulted related material about the Maclaurin series, but I still can't comprehend how the return value uses it to estimate the distance. Thanks, everyone!
This task is almost a duplicate of another one:
Very fast 3D distance check?
And there is a link to a great article:
http://www.azillionmonkeys.com/qed/sqroot.html
In the article you can find different approaches for approximating the square root. For example, maybe this one is suitable for you:
int isqrt (long r) {
    float tempf, x, y, rr;
    int is;
    rr = (long) r;
    y = rr*0.5;
    // Bit-level trick: assumes 32-bit long and IEEE-754 float
    *(unsigned long *) &tempf = (0xbe6f0000 - *(unsigned long *) &rr) >> 1;
    x = tempf;
    x = (1.5*x) - (x*x)*(x*y);
    if (r > 101123) x = (1.5*x) - (x*x)*(x*y);
    is = (int) (x*rr + 0.5);
    // Parentheses added so the shift applies to the correction term only
    return is + (((signed int) (r - is*is)) >> 31);
}
If you can calculate the root operation quickly, then you can calculate the distance the regular way:
return isqrt(a*a + b*b);
And one more link:
http://www.flipcode.com/archives/Fast_Approximate_Distance_Functions.shtml
#include <stdint.h>
typedef uint32_t u32;   // assumed typedefs; the original article defines its own
typedef int32_t s32;

u32 approx_distance( s32 dx, s32 dy )
{
    u32 min, max;
    if ( dx < 0 ) dx = -dx;
    if ( dy < 0 ) dy = -dy;
    if ( dx < dy )
    {
        min = dx;
        max = dy;
    } else {
        min = dy;
        max = dx;
    }
    // coefficients equivalent to ( 123/128 * max ) and ( 51/128 * min )
    return ((( max << 8 ) + ( max << 3 ) - ( max << 4 ) - ( max << 1 ) +
             ( min << 7 ) - ( min << 5 ) + ( min << 3 ) - ( min << 1 )) >> 8 );
}
You are right, sqrt is quite a slow function. But do you really need to compute the distance?
In a lot of cases you can use the squared distance instead.
E.g.:
If you want to find out which distance is shorter, you can compare the squares of the distances just as well as the real distances.
If you want to check whether 100 > distance, you can just as well check whether 10000 > distanceSquared.
Using the distance squared in your program instead of the distance, you can often avoid calculating sqrt.
It depends on your application whether this is an option for you, but it is always worth considering.
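A tiny sketch of the idea (my illustration):

// true if (dx1, dy1) is closer to the origin than (dx2, dy2);
// squaring preserves the ordering of distances, so no sqrt is needed
bool isCloser( int dx1, int dy1, int dx2, int dy2 )
{
    return dx1 * dx1 + dy1 * dy1 < dx2 * dx2 + dy2 * dy2;
}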