Fast sin() function to improve fps? - c++

I am rendering 500x500 points in real-time.
I have to compute the position of the points using atan() and sin():

float thetaC = atan(value);
float h = value / sin(thetaC);

Using both atan() and sin() I get 24 fps (frames per second). If I don't use sin() I get 52 fps, and if I don't use atan() I get 30 fps.
So the big problem is with sin(). How can I use a fast version of sin()? Can I create a lookup table for it? I don't have any specific values from which to build a LUT. What can I do in this situation?
PS: I have also tried fast sin function of ASM but not getting any difference.
Thanks.

Hang on a second....
You have a triangle, and you're computing the hypotenuse. First, you're taking atan(value) to get the angle, and then using value again with sin to compute h. So we have the scenario where one side of the triangle is 1:

        /|
   h  /  | value
    /    |
  /C_____|
      1
All you really need to do is calculate h = sqrt(value*value + 1); ... But then, sqrt isn't the fastest function around either.
Perhaps I've missed the point or you've left something out. I've always used lookup tables for sin and cos, and found them to be fast. If you don't know the values ahead of time then you need to approximate, but this means a multiplication, truncation to integer (and possibly sign conversion) in order to get the array index. If you can convert your units to work in integers (effectively making your floats into fixed-point), it makes the lookup even quicker.
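For illustration, here is a minimal sketch of that fixed-point lookup idea; the table size, the power-of-two wrap trick, and all names are my own choices, not from the answer above:

#include <cmath>

// Power-of-two table over one full period, so wrapping is a single bitmask.
static const int TABLE_SIZE = 4096;
static float sin_table[TABLE_SIZE];
static const float RAD_TO_INDEX = TABLE_SIZE / 6.2831853f; // table steps per radian

void init_sin_table() {
    for (int i = 0; i < TABLE_SIZE; ++i)
        sin_table[i] = std::sin(i * 6.283185307179586 / TABLE_SIZE);
}

inline float lut_sin(float radians) {
    // multiply, truncate to int, mask: on two's-complement machines the mask
    // also wraps negative angles into range (to within one table step)
    int index = (int)(radians * RAD_TO_INDEX) & (TABLE_SIZE - 1);
    return sin_table[index];
}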

It depends on the accuracy that you need. The maximum derivative of sin is 1, so if x1 and x2 are within epsilon of one another, then sin(x1) and sin(x2) are also within epsilon. If you just need accuracy to within, say, 0.001, then you can create a lookup table of 1000 * PI ≈ 3142 points, and just look up the value closest to the one you need. This can be faster than what the native code does, since the native code (probably) uses a lookup table for polynomial coefficients and then interpolates, and since your table can be small enough to stay in cache easily.
If you need complete accuracy over the whole range, then there's probably nothing better that you can do.
If you wanted, you could also create a lookup table over (1/sin(x)), since that's your actual function of interest. Either way, you'll want to be careful around sin(x) = 0, since a small error in sin(x) can cause a big error in 1/sin(x). Defining your error tolerance is important for figuring out what shortcuts you can take.
You'd create the lookup table with something like:

float *table = (float*)malloc(1000 * sizeof(float));
for (int i = 0; i < 1000; i++) {
    table[i] = sin(i / 1000.0);
}

and would access it something like:

float fastSin(float x) {
    int index = (int)(x * 1000.0f);
    return table[index];
}

This code isn't complete (and will crash for anything outside of 0 <= x < 1, because of array bounds), but should get you started.
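A slightly safer variant (still a sketch, with the same 0 <= x < 1 domain assumption) clamps the index so out-of-range inputs don't read past the array:

float fastSinClamped(float x) {
    int index = (int)(x * 1000.0f);
    if (index < 0)   index = 0;    // clamp instead of crashing
    if (index > 999) index = 999;
    return table[index];
}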

For sin (but not atan) you can actually get simpler than a table: just create

float sin_arr[31416]; // or as much precision as you need
for (int i = 0; i < 31416; ++i)
    sin_arr[i] = sin(i / 10000.0);
//...
// caution: in C++ the % of a negative value is negative, so a negative
// thetaC must be wrapped into range before indexing
float h = value / sin_arr[ (int)(thetaC * 10000.0) % 31416 ];

My guess is that this will give you a speed improvement.

Related

Calculating sine and cosine in one shot

I have a scientific code that uses both sine and cosine of the same argument (I basically need the complex exponential of that argument). I was wondering if it were possible to do this faster than calling sine and cosine functions separately.
Also I only need about 0.1% precision. So is there any way I can find the default trig functions and truncate the power series for speed?
One other thing I have in mind: is there any way to perform the remainder operation such that the result is always positive? In my own algorithm I used x = fmod(x, 2*pi); but then I would need to add 2*pi if x is negative (a smaller domain means I can use a shorter power series).
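For the positive-remainder question, one common pattern (a sketch of my own, not from the answers below) is a conditional fix-up after fmod:

#include <cmath>

// Reduce x to [0, 2*pi); fmod can return a negative value for negative x,
// so add one period back in that case.
double pos_mod_2pi(double x) {
    const double twopi = 6.283185307179586;
    double r = std::fmod(x, twopi);
    return (r < 0.0) ? r + twopi : r;
}
// Equivalent, without the branch: x - std::floor(x / twopi) * twopi,
// which is what the EDIT below uses.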
EDIT: The LUT turned out to be the best approach for this; however, I am glad I learned about the other approximation techniques. I will also advise using an explicit midpoint approximation. This is what I ended up doing:
const int N = 10000; // about 3e-4 error for 1000, 3e-5 for 10 000, 3e-6 for 100 000
double *cs = new double[N];
double *sn = new double[N];
for (int i = 0; i < N; i++) {
    double A = (i + 0.5) * 2 * pi / N;
    cs[i] = cos(A);
    sn[i] = sin(A);
}
The following part approximates (at the midpoint) sincos(2*pi*(wc2 + t[j]*(cotp*t[j] - wc))):

double A = wc2 + t[j]*(cotp*t[j] - wc);
int B = (int)(N * (A - floor(A))); // fractional part of A, scaled to the table
re += cs[B]*f[j];
im += sn[B]*f[j];
Another approach could have been using a Chebyshev decomposition. You can use the orthogonality property to find the coefficients. Optimized for the exponential, it looks like this:
double fastsin(double x) {
    // reduce to [-pi, pi); this line can be improved, both inside this
    // function and before you input x into the function
    x = x - floor(x / 2 / pi) * 2 * pi - pi;
    double x2 = x * x;
    // 7th-order Chebyshev approximation
    return (((0.00015025063885163012 * x2 - 0.008034350857376128) * x2
             + 0.1659789684145034) * x2 - 0.9995812174943602) * x;
}
If you seek fast evaluation with good (but not high) accuracy via power series, you should use an expansion in Chebyshev polynomials: tabulate the coefficients (you'll need VERY few for 0.1% accuracy) and evaluate the expansion with the recursion relations for these polynomials (it's really very easy).
References:
Tabulated coefficients: http://www.ams.org/mcom/1980-34-149/S0025-5718-1980-0551302-5/S0025-5718-1980-0551302-5.pdf
Evaluation of a Chebyshev expansion: https://en.wikipedia.org/wiki/Chebyshev_polynomials
You'll need to (a) get the "reduced" argument into the range -pi/2..+pi/2 and consequently (b) handle the sign in your results when the argument actually should have been in the "other" half of the full elementary interval -pi..+pi. These aspects should not pose a major problem:

1. Determine (and "remember" as an integer 1 or -1) the sign of the original angle and proceed with the absolute value.
2. Use a modulo function to reduce to the interval 0..2*pi.
3. Determine (and "remember" as an integer 1 or -1) whether the angle is in the "second" half and, if so, subtract pi*3/2, otherwise subtract pi/2. Note: this effectively interchanges sine and cosine (apart from signs); take this into account in the final evaluation.

This completes the steps to get an angle in -pi/2..+pi/2. After evaluating sine and cosine with the Chebyshev expansions, apply the "flags" of steps 1 and 3 above to get the right signs in the values, as in the sketch below.
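A sketch of those steps in C++; the naming and flag bookkeeping are mine, and the two std:: calls stand in for the tabulated Chebyshev expansions:

#include <cmath>

void reduced_sincos(double x, double &sin_x, double &cos_x)
{
    const double pi = 3.14159265358979323846;
    int sign = (x < 0) ? -1 : 1;                 // step 1: remember the sign
    x = std::fabs(x);
    x = std::fmod(x, 2.0 * pi);                  // step 2: reduce to 0..2*pi
    int half = (x >= pi) ? -1 : 1;               // step 3: which half?
    double t = x - (half == 1 ? 0.5 : 1.5) * pi; // now t is in -pi/2..+pi/2
    // sine and cosine are interchanged by the shift:
    double s = std::sin(t), c = std::cos(t);     // stand-ins for the expansions
    sin_x = sign * half * c;                     // apply the flags from steps 1 and 3
    cos_x = -half * s;
}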
Just create a lookup table. The following will let you look up the sin and cos of any radian value between -2*PI and 2*PI.

// LOOK UP TABLE
var LUT_SIN_COS = [];
var N = 14400;
var HALF_N = N >> 1;
var STEP = 4 * Math.PI / N;
var INV_STEP = 1 / STEP;

// BUILD LUT
for (var i = 0, r = -2 * Math.PI; i < N; i++, r += STEP) {
    LUT_SIN_COS[2 * i] = Math.sin(r);
    LUT_SIN_COS[2 * i + 1] = Math.cos(r);
}

You index into the lookup table by:

var index = ((r * INV_STEP) + HALF_N) << 1;
var sin = LUT_SIN_COS[index];
var cos = LUT_SIN_COS[index + 1];
Here's a fiddle that displays the % error you can expect from different sized LUTS http://jsfiddle.net/77h6tvhj/
EDIT: Here's an ideone (C++) with a benchmark vs the library sin and cos: http://ideone.com/SGrFVG For whatever a benchmark on ideone.com is worth, the LUT is 5 times faster.
One way to go would be to learn how to implement the CORDIC algorithm. It is not difficult and pretty interesting intellectually, and it gives you both the cosine and the sine. Wikipedia gives a MATLAB example that should be easy to adapt to C++.
Note that you can increase speed and reduce precision simply by lowering the parameter n.
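As a rough illustration (not the Wikipedia code), here is a minimal rotation-mode CORDIC sketch in C++; in a real implementation the atan(2^-i) values and the gain would be precomputed tables, and the multiplies by pow2 would be shifts in fixed point:

#include <cmath>

// Computes sin and cos of an angle in [-pi/2, pi/2] with n shift-add steps.
void cordic_sincos(double angle, int n, double &s, double &c)
{
    double x = 1.0, y = 0.0, pow2 = 1.0, gain = 1.0;
    for (int i = 0; i < n; ++i) {
        double sigma = (angle >= 0.0) ? 1.0 : -1.0;   // rotate toward the target
        double xn = x - sigma * y * pow2;             // pow2 = 2^-i
        y = y + sigma * x * pow2;
        x = xn;
        angle -= sigma * std::atan(pow2);             // table entry in practice
        gain /= std::sqrt(1.0 + pow2 * pow2);         // accumulate scale correction
        pow2 *= 0.5;
    }
    c = x * gain;
    s = y * gain;
}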
About your second question, it has already been asked here (in C). It seems that there is no simple way.
You can also calculate sine using a square root, given the angle and the cosine.
The example below assumes the angle ranges from 0 to 2π:
double c = cos(angle);
double s = sqrt(1.0 - c*c);
if (angle > pi) s = -s;
For single-precision floats, Microsoft uses an 11-degree polynomial approximation for sine and a 10-degree one for cosine: XMScalarSinCos.
They also have a faster version, XMScalarSinCosEst, that uses lower-degree polynomials.
If you aren't on Windows, you'll find the same code and coefficients on geometrictools.com under the Boost license.

Quadrature routines for probability densities

I want to integrate a probability density function over (-\infty, a] because the cdf is not available in closed form. But I'm not sure how to do this in C++.
This task is pretty simple in Mathematica; All I need to do is define the function,
f[x_, lambda_, alpha_, beta_, mu_] :=
Module[{gamma},
gamma = Sqrt[alpha^2 - beta^2];
(gamma^(2*lambda)/((2*alpha)^(lambda - 1/2)*Sqrt[Pi]*Gamma[lambda]))*
Abs[x - mu]^(lambda - 1/2)*
BesselK[lambda - 1/2, alpha Abs[x - mu]] E^(beta (x - mu))
];
and then call the NIntegrate Routine to numerically integrate it.
F[x_, lambda_, alpha_, beta_, mu_] :=
NIntegrate[f[t, lambda, alpha, beta, mu], {t, -\[Infinity], x}]
Now I want to achieve the same thing in C++. I am using the routine gsl_integration_qagil from the GSL numerics library. It is designed to integrate functions over semi-infinite intervals (-\infty, a], which is just what I want. But unfortunately I can't get it to work.
This is the density function in C++. (GSL integrands must have the signature double(double x, void *params), so the unused params argument is needed here; without it, the F.function = &density assignment below would not compile.)

double density(double x, void * /*params*/)
{
    using namespace boost::math;
    if (x == _mu)
        return std::numeric_limits<double>::infinity();
    return pow(_gamma, 2*_lambda) / (pow(2*_alpha, _lambda - 0.5)*sqrt(_pi)*tgamma(_lambda))
         * pow(fabs(x - _mu), _lambda - 0.5)
         * cyl_bessel_k(_lambda - 0.5, _alpha*fabs(x - _mu))
         * exp(_beta*(x - _mu));
}
Then I try to integrate to obtain the cdf by calling the GSL routine:

double cdf(double x)
{
    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);

    double result, error;
    gsl_function F;
    F.function = &density;

    double epsabs = 0;
    double epsrel = 1e-12;
    gsl_integration_qagil(&F, x, epsabs, epsrel, 1000, w, &result, &error);

    printf("result = % .18f\n", result);
    printf("estimated error = % .18f\n", error);
    printf("intervals = %zu\n", w->size);

    gsl_integration_workspace_free(w);
    return result;
}
However, gsl_integration_qagil returns an error: the number of iterations was insufficient.
double mu = 0.0;
double lambda = 3.0;
double alpha = 265.0;
double beta = -5.0;
cout << cdf(0.01) << endl;
If I increase the size of the workspace, then the Bessel function will not evaluate.
I was wondering if anyone could give me any insight into my problem. A call to the corresponding Mathematica function F above with x = 0.01 returns 0.904384.
Could it be that the density is concentrated around a very small interval (i.e. outside of [-0.05, 0.05] the density is almost 0; a plot in the original question showed this)? If so, what can be done about it? Thanks for reading.
Re: integrating to +/- infinity:
I would use Mathematica to find an empirical bound for |x - μ| >> K, where K represents the "width" around the mean and is a function of alpha, beta, and lambda; for example, the density is bounded above by something like a·(x-μ)^(-2) or a·e^(-b·(x-μ)^2) or whatever. These tail functions have known integrals out to infinity, which you can evaluate analytically. Then you can integrate numerically out to K, and use the bounded approximation to get from K to infinity.
Figuring out K may be a bit tricky; I'm not very familiar with Bessel functions so I can't help you much there.
In general, I've found that for numerical calculation that's not obvious, the best way is to do as much analytical math as you can before you do numerical evaluation. (Kind of like an autofocus camera -- get it close to where you want, then let the camera do the rest.)
I haven't tried the C++ code, but by checking out the function in Mathematica, it does seem extremely peaked around mu, with the spread of the peak determined by the parameters lambda,alpha,beta.
What I would do is a preliminary search of the pdf: look to the right and to the left of x = mu until you find the first value below a given tolerance. Use these as the bounds for your cdf, instead of negative infinity.
Pseudocode follows:

xtest = mu
step = 0.000001
adaptive_step(y_value) -> returns a small step size if close to 0, and larger if far
pdf_current = pdf(xtest - step)
while (pdf_current > tolerance):
    step = adaptive_step(pdf_current)
    xtest = xtest - step
    pdf_current = pdf(xtest)
left_bound = xtest
// repeat, stepping in the other direction, for the right bound
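A C++ rendering of that search; this is a sketch only, and the tolerance handling, the step heuristic, and the starting offset are my own choices:

#include <algorithm>
#include <cmath>

// Walk left from mu until the density drops below `tolerance`.
double find_left_bound(double mu, double tolerance)
{
    double xtest = mu - 1e-9;             // start just off the peak at mu
    double p = density(xtest, nullptr);
    while (p > tolerance) {
        // crude adaptive step: small near zero density, larger when far from it
        double step = std::min(0.01, std::max(1e-9, 1e-4 * p));
        xtest -= step;
        p = density(xtest, nullptr);
    }
    return xtest;                         // mirror the loop for the right bound
}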
Given how tightly peaked this function seems to be, tightening the bounds would probably save you a lot of computer time that's currently wasted calculating zeros. Also, you'd be able to use a bounded integration routine, rather than (-\infty, b].
Just a thought...
PS: Mathematica gives me F[0.01, 3, 265, -5, 0] = 0.884505
I found a complete description of this GSL routine at http://linux.math.tifr.res.in/manuals/html/gsl-ref-html/gsl-ref_16.html; you may find useful information there.
Since I'm not a GSL expert I did not focus on your problem from the math point of view, but rather I want to remind you of some key aspects of floating point programming.
You can't represent numbers accurately using the IEEE 754 standard. Mathematica hides this fact by using an arbitrary-precision number representation, in order to give you rounding-error-free results; this is the reason why it's slow compared to native code.
I strongly recommend this link for anyone involved in scientific computing on an FPU:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Assuming you've enjoyed that article, I noticed this on the GSL link above: "The routines will fail to converge if the error bounds are too stringent".
Your bounds may be too stringent if the difference between the upper and the lower is less than the minimum representable difference for a double, that is std::numeric_limits<double>::epsilon().
In addition, remember (from the second link) that in C/C++ the default rounding mode for float-to-integer conversion is "truncate"; this introduces subtle errors leading to wrong results. I had this problem with a simple Liang-Barsky line clipper, first order! So imagine the mess in this line:
return pow(_gamma, 2*_lambda)/(pow(2*_alpha, _lambda-0.5)*sqrt(_pi)*tgamma(_lambda))* pow(abs(x-_mu), _lambda - 0.5) * cyl_bessel_k(_lambda-0.5, _alpha*abs(x - _mu)) * exp(_beta*(x - _mu));
As a general rule, it is wise in C/C++ to add additional variables holding intermediate results, so you can debug step by step and spot any rounding error; you shouldn't write an expression like that on one line in any native programming language, and the compiler will optimize the intermediate variables away anyway.
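For instance, the return expression above could be rewritten with named intermediates; this is a sketch, and the variable names are mine:

double density(double x, void * /*params*/)
{
    using namespace boost::math;
    double r      = fabs(x - _mu);
    double norm   = pow(_gamma, 2 * _lambda)
                  / (pow(2 * _alpha, _lambda - 0.5) * sqrt(_pi) * tgamma(_lambda));
    double power  = pow(r, _lambda - 0.5);
    double bessel = cyl_bessel_k(_lambda - 0.5, _alpha * r);  // Boost's Bessel K
    double expo   = exp(_beta * (x - _mu));
    return norm * power * bessel * expo;  // each factor can now be inspected
}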
Finally, as a general rule, you should multiply everything first and then divide, unless you are confident about the dynamic range of your computation.
Good luck.

Fast equivalent to sin() for DSP referenced in STK

I'm using bits of Perry Cook's Synthesis Toolkit (STK) to generate saw and square waves. STK includes this BLIT-based sawtooth oscillator:
inline StkFloat BlitSaw::tick( void ) {
    StkFloat tmp, denominator = sin( phase_ );
    if ( fabs(denominator) <= std::numeric_limits<StkFloat>::epsilon() )
        tmp = a_;
    else {
        tmp = sin( m_ * phase_ );
        tmp /= p_ * denominator;
    }

    tmp += state_ - C2_;
    state_ = tmp * 0.995;

    phase_ += rate_;
    if ( phase_ >= PI )
        phase_ -= PI;

    lastFrame_[0] = tmp;
    return lastFrame_[0];
}
The square wave oscillator is broadly similar. At the top, there's this comment:
// A fully optimized version of this code would replace the two sin
// calls with a pair of fast sin oscillators, for which stable fast
// two-multiply algorithms are well known.
I don't know where to start looking for these "fast two-multiply algorithms" and I'd appreciate some pointers. I could use a lookup table instead, but I'm keen to learn what these 'fast sin oscillators' are. I could also use an abbreviated Taylor series, but that's way more than two multiplies. Searching hasn't turned up much, although I did find this approximation:
#define AD_SIN(n) (n*(2.f- fabs(n)))
Plotting it out shows that it's not really a close approximation outside the range of -1 to 1, so I don't think I can use it when phase_ is in the range -pi to pi. (In the plot from the original question, sine is the blue line and the approximation is the purple line.)
Profiling my code reveals that the calls to sin() are far and away the most time-consuming calls, so I really would like to optimise this piece.
Thanks
EDIT Thanks for the detailed and varied answers. I will explore these and accept one at the weekend.
EDIT 2 Would the anonymous close voter please kindly explain their vote in the comments? Thank you.
Essentially the sinusoidal oscillator is one (or more) state variables that change with each DSP step, rather than getting recalculated from scratch.
The simplest are based on the following trig identities (where d is constant, and thus so are cos(d) and sin(d)):

sin(x+d) = sin(x) cos(d) + cos(x) sin(d)
cos(x+d) = cos(x) cos(d) - sin(x) sin(d)

This requires two state variables (one for sin and one for cos) and 4 multiplications per update, as in the sketch below, but it will still be far faster than calculating a full sine at each step.
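A sketch of that two-variable, four-multiply update in C++; the struct and member names are my own:

#include <cmath>

struct SinCosOsc {
    double s, c;    // current sin(x) and cos(x)
    double sd, cd;  // sin(d) and cos(d) for the fixed step d

    void start(double x, double d) {
        s  = std::sin(x);  c  = std::cos(x);
        sd = std::sin(d);  cd = std::cos(d);
    }
    // Advance x by d and return the new sin(x): 4 multiplies, 2 adds.
    double tick() {
        double ns = s * cd + c * sd;
        double nc = c * cd - s * sd;
        s = ns; c = nc;
        return s;
    }
};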
The solution by Oli Charlesworth is based on solutions to the general recurrence

A_{n+1} = a A_n - A_{n-1}

Looking for a solution of the form A_n = k e^(i theta n) gives an equation for theta:

e^(i theta (n+1)) = a e^(i theta n) - e^(i theta (n-1))

which simplifies to

e^(i theta) + e^(-i theta) = a
2 cos(theta) = a

giving

A_{n+1} = 2 cos(theta) A_n - A_{n-1}
Whichever approach you use you'll either need to use one or two of these oscillators for each frequency, or use another trig identity to derive the higher or lower frequencies.
How accurate do you need this?
This function, f(x) = 0.398*x*(3.1076 - |x|), does a reasonably good job for x between -pi and pi.
Edit
An even better approximation is f(x) = 0.38981969947653056*x*(pi - |x|), which keeps the absolute error to 0.038158444604 or less for x between -pi and pi.
A least squares minimization will yield a slightly different function.
It's not possible to generate one-off sin calls with just two multiplies (well, not a useful approximation, at any rate). But it is possible to generate an oscillator with low complexity, i.e. where each value is calculated in terms of the preceding ones.
For instance, consider that the following difference equation will give you a sinusoid:
y[n] = 2*cos(phi)*y[n-1] - y[n-2]
(where cos(phi) is a constant)
(From the original author of the VST BLT code.)
As a matter of fact, I was porting the VST BLT oscillators to C#, so I was googling for good sin oscillators. Here's what I came up with. Translation to C++ is straightforward. See the notes at the end about accumulated round-off errors.
public class FastOscillator
{
    private double b1;
    private double y1, y2;
    private double fScale;

    public void Initialize(int sampleRate)
    {
        fScale = AudioMath.TwoPi / sampleRate;
    }

    // frequency in Hz. phase in radians.
    public void Start(float frequency, double phase)
    {
        double w = frequency * fScale;
        b1 = 2.0 * Math.Cos(w);
        y1 = Math.Sin(phase - w);
        y2 = Math.Sin(phase - w * 2);
    }

    public double Tick()
    {
        double y0 = b1 * y1 - y2;
        y2 = y1;
        y1 = y0;
        return y0;
    }
}
Note that this particular oscillator implementation will drift over time, so it needs to be re-initialized periodically. In this particular implementation, the magnitude of the sin wave decays over time. The original comments in the STK code suggested a two-multiply oscillator. There are, in fact, two-multiply oscillators that are reasonably stable over time. But in retrospect, the need to keep the sin(phase) and sin(m*phase) oscillators tightly in sync probably means that they have to be re-synched anyway. Round-off errors between phase and m*phase mean that even if the oscillators were stable, they would drift eventually, running a significant risk of producing large spikes in values near the zeros of the BLT functions. May as well use a one-multiply oscillator.
These particular oscillators should probably be re-initialized every 30 to 100 cycles (or so). My C# implementation is frame-based (i.e. it calculates a float[] array of results in a void Tick(int count, float[] result) method). The oscillators are re-synched at the end of each Tick call. Something like this:
void Tick(int count, float[] result)
{
    for (int i = 0; i < count; ++i)
    {
        ...
        result[i] = bltResult;
    }
    // re-initialize the oscillators to avoid accumulated drift.
    this.phase = (this.phase + this.dPhase * count) % AudioMath.TwoPi;
    this.sinOsc.Start(frequency, this.phase);
    this.mSinOsc.Start(frequency * m, this.phase * m);
}
This is probably missing from the STK code, and you might want to investigate it: the original code provided to the STK did this. Gary Scavone tweaked the code a bit, and I think the optimization was lost. I do know that the STK implementations suffer from DC drift, which can be almost entirely eliminated when implemented properly.
There's a peculiar hack that prevents DC drift of the oscillators, even when sweeping the frequency of the oscillators. The trick is that the oscillators should be started with an initial phase adjustment of dPhase/2. That just so happens to start the oscillators off with zero DC drift, without having to figure out the correct initial state for the various integrators in each of the BLT oscillators.
Strangely, if the adjustment is re-applied whenever the frequency of the oscillator changes, then this also prevents wild DC drift of the output when sweeping the frequency of the oscillator. Whenever the frequency changes, subtract dPhase/2 from the previous phase value, recalculate dPhase for the new frequency, and then add dPhase/2. I rather suspect this could be formally proven, but I have not been able to do so. All I know is that It Just Works.
For a block implementation, the oscillators should actually be initialized as follows, instead of carrying the phase adjustment in the current this.phase value:

this.sinOsc.Start(frequency, phase + dPhase * 0.5);
this.mSinOsc.Start(frequency * m, (phase + dPhase * 0.5) * m);
You might want to take a look here:
http://devmaster.net/forums/topic/4648-fast-and-accurate-sinecosine/
There's some sample code there that calculates a very good approximation of sin/cos using only multiplies, additions and the abs() function. Quite fast too. The comments are also a good read.
It essentially boils down to this:
float sine(float x)
{
    const float pi = 3.14159265f;
    const float B = 4.0f / pi;
    const float C = -4.0f / (pi * pi);
    const float P = 0.225f;

    float y = B * x + C * x * fabsf(x);  // parabolic approximation
    return P * (y * fabsf(y) - y) + y;   // refinement for extra accuracy
}
and works for a range of -PI to PI
If you can, you should consider memoization-based techniques: essentially, store sin(x) and cos(x) values for a bunch of arguments. To calculate sin(y), find a and b for which precomputed values exist such that a <= y <= b. Now, using sin(a), sin(b), cos(a), cos(b), y-a and y-b, approximately calculate sin(y), as in the sketch below.
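A sketch of that idea using the angle-addition identity sin(y) = sin(a)cos(y-a) + cos(a)sin(y-a), with cos(h) ≈ 1 and sin(h) ≈ h for the small offset h = y-a; the table size and all names are my own choices:

#include <cmath>

// Tabulated sin/cos at N+1 evenly spaced points over [0, 2*pi].
// (Large arrays: make a single static instance rather than stack copies.)
struct SinInterpTable {
    static const int N = 4096;
    double s[N + 1], c[N + 1], step;

    SinInterpTable() : step(2.0 * 3.14159265358979323846 / N) {
        for (int i = 0; i <= N; ++i) {
            s[i] = std::sin(i * step);
            c[i] = std::cos(i * step);
        }
    }
    // Assumes 0 <= y < 2*pi; first-order correction around the nearest
    // tabulated point a = i*step below y.
    double sin_at(double y) const {
        int i = (int)(y / step);
        double h = y - i * step;
        return s[i] + c[i] * h;
    }
};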
The general idea of getting periodically sampled results from the sine or cosine function is to use a trig recursion or an initialized (barely) stable IIR filter (which can end up being pretty much the same computations). There are bunches of these in the DSP literature, of varying accuracy and stability. Choose carefully.

fastest way to compute angle with x-axis

What is the fastest way to calculate the angle between a line and the x-axis?
I need to define a function which is injective on the [pi, 2*pi] interval (I have to compute the angle between the point which is the uppermost point and any point below it).
PointType * top = UPPERMOST_POINT;
PointType * targ = TARGET_POINT;
double targetVectorX = targ->x - top->x;
double targetVectorY = targ->y - top->y;
First try:

//#1
double magnitudeTarVec = sqrt(targetVectorX*targetVectorX + targetVectorY*targetVectorY);
angle = targetVectorX / magnitudeTarVec;

Second try:

//#2 slower
angle = atan2(targetVectorY, targetVectorX);
I do not need the angle directly (radians or degrees); any value is fine, as long as by comparing two such values I can tell which angle is bigger. (For example, the value in the first try is between -1 and 1: it is the cosine of the angle.)
Check for y being zero, as atan2 does; then the quotient x/y will be plenty fine (assuming I understand you correctly).
I just wrote Fastest way to sort vectors by angle without actually computing that angle about the general question of finding functions monotonic in the angle, without any code or connections to C++ or the likes. Based on the currently accepted answer there I'd now suggest
double angle = copysign( // magnitude of first argument with sign of second
1. - targetVectorX/(fabs(targetVectorX) + fabs(targetVectorY)),
targetVectorY);
The great benefit compared to the currently accepted answer here is the fact that you won't have to worry about infinite values, since all non-zero vectors (i.e. targetVectorX and targetVectorY are not both equal to zero at the same time) will result in finite pseudoangle values. The resulting pseudoangles will be in the range [−2 … 2] for real angles [−π … π], so the signs and the discontinuity are just like you'd get them from atan2.

Very fast 3D distance check?

Is there a way to do a quick and dirty 3D distance check where the results are rough, but it is very very fast? I need to do depth sorting. I use STL sort like this:
bool sortfunc(CBox* a, CBox* b)
{
    return a->Get3dDistance(Player.center, a->center) <
           b->Get3dDistance(Player.center, b->center);
}

float CBox::Get3dDistance( Vec3 c1, Vec3 c2 )
{
    // (Dx*Dx + Dy*Dy + Dz*Dz)^.5
    float dx = c2.x - c1.x;
    float dy = c2.y - c1.y;
    float dz = c2.z - c1.z;
    return sqrt((float)(dx * dx + dy * dy + dz * dz));
}
Is there possibly a way to do it without a square root or possibly without multiplication?
You can leave out the square root because for all non-negative numbers x and y, if sqrt(x) < sqrt(y) then x < y. Since you're summing squares of real numbers, and the square of every real number is non-negative, the sums being compared are non-negative, so the condition holds. A sketch follows.
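Concretely, a sketch of the comparator without the square root; the names follow the question's code, and the helper function name is mine:

// Squared distance is monotonic in the true distance, so it sorts identically.
float GetSqr3dDistance( Vec3 c1, Vec3 c2 )
{
    float dx = c2.x - c1.x;
    float dy = c2.y - c1.y;
    float dz = c2.z - c1.z;
    return dx * dx + dy * dy + dz * dz;  // no sqrt
}

bool sortfunc(CBox* a, CBox* b)
{
    return GetSqr3dDistance(Player.center, a->center) <
           GetSqr3dDistance(Player.center, b->center);
}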
You cannot eliminate the multiplication, however, without changing the algorithm. Here's a counterexample: if x is (3, 1, 1) and y is (4, 0, 0), |x| < |y| because sqrt(1*1+1*1+3*3) < sqrt(4*4+0*0+0*0) and 1*1+1*1+3*3 < 4*4+0*0+0*0, but 1+1+3 > 4+0+0.
Since modern CPUs can compute a dot product faster than they can actually load the operands from memory, it's unlikely that you would have anything to gain by eliminating the multiply anyway (I think the newest CPUs have a special instruction that can compute a dot product every 3 cycles!).
I would not consider changing the algorithm without doing some profiling first. Your choice of algorithm will heavily depend on the size of your dataset (does it fit in cache?), how often you have to run it, and what you do with the results (collision detection? proximity? occlusion?).
What I usually do is filter first by Manhattan distance:

bool CBox::Within3DManhattanDistance( Vec3 c1, Vec3 c2, float distance )
{
    float dx = fabs(c2.x - c1.x);
    float dy = fabs(c2.y - c1.y);
    float dz = fabs(c2.z - c1.z);

    if (dx > distance) return false; // too far in x direction
    if (dy > distance) return false; // too far in y direction
    if (dz > distance) return false; // too far in z direction

    return true; // we're within the cube
}
Actually you can optimize this further if you know more about your environment. For example, in an environment where there is a ground like a flight simulator or a first person shooter game, the horizontal axis is very much larger than the vertical axis. In such an environment, if two objects are far apart they are very likely separated more by the x and y axis rather than the z axis (in a first person shooter most objects share the same z axis). So if you first compare x and y you can return early from the function and avoid doing extra calculations:
bool CBox::Within3DManhattanDistance( Vec3 c1, Vec3 c2, float distance )
{
    float dx = fabs(c2.x - c1.x);
    if (dx > distance) return false; // too far in x direction

    float dy = fabs(c2.y - c1.y);
    if (dy > distance) return false; // too far in y direction

    // since the x and y distances are likely to be larger than
    // the z distance most of the time, we usually don't need to
    // execute the code below:
    float dz = fabs(c2.z - c1.z);
    if (dz > distance) return false; // too far in z direction

    return true; // we're within the cube
}
Sorry, I didn't realize the function is used for sorting. You can still use Manhattan distance to get a very rough first sort:
float CBox::ManhattanDistance( Vec3 c1, Vec3 c2 )
{
    float dx = fabs(c2.x - c1.x);
    float dy = fabs(c2.y - c1.y);
    float dz = fabs(c2.z - c1.z);
    return dx + dy + dz;
}
After the rough first sort you can then take the topmost results, say the top 10 closest players, and re-sort using proper distance calculations.
Here's an equation that might help you get rid of both sqrt and multiply:
max(|dx|, |dy|, |dz|) <= distance(dx,dy,dz) <= |dx| + |dy| + |dz|
This gets you a range estimate for the distance which pins it down to within a factor of 3 (the upper and lower bounds can differ by at most 3x). You can then sort on, say, the lower number. You then need to process the array until you reach an object which is 3x farther away than the first obscuring object. You are then guaranteed to not find any object that is closer later in the array.
By the way, sorting is overkill here. A more efficient way would be to make a series of buckets with different distance estimates, say [1-3], [3-9], [9-27], .... Then put each element in a bucket. Process the buckets from smallest to largest until you reach an obscuring object. Process 1 additional bucket just to be sure.
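A sketch of the bucket assignment; the geometric bands and the max-coordinate lower bound come from the two paragraphs above, while the function itself and its cutoffs are my own:

#include <algorithm>
#include <cmath>

// Map an object to a distance band [1,3), [3,9), [9,27), ... using the
// cheap lower bound max(|dx|, |dy|, |dz|).
int bucket_index(float dx, float dy, float dz)
{
    float lower = std::max(std::fabs(dx),
                  std::max(std::fabs(dy), std::fabs(dz)));
    if (lower < 1.0f) return 0;
    return 1 + (int)(std::log(lower) / std::log(3.0f)); // band k covers [3^(k-1), 3^k)
}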
By the way, floating point multiply is pretty fast nowadays. I'm not sure you gain much by converting it to absolute value.
I'm disappointed that the great old mathematical tricks seem to be getting lost. Here is the answer you're asking for. The source is Paul Hsieh's excellent web site: http://www.azillionmonkeys.com/qed/sqroot.html. Note that you don't care about the distance itself; you will do fine for your sort with the square of the distance, which will be much faster.
In 2D, we can get a crude approximation of the distance metric without a square root with the formula:

distanceapprox(x, y) = [formula shown as an image in the original; see the linked page]

which will deviate from the true answer by at most about 8%. A similar derivation for 3 dimensions leads to:

distanceapprox(x, y, z) = [formula shown as an image in the original; see the linked page]

with a maximum error of about 16%.
However, something that should be pointed out is that often the distance is only required for comparison purposes. For example, in the classical Mandelbrot set (z ← z² + c) calculation, the magnitude of a complex number is typically compared to a boundary radius of length 2. In these cases, one can simply drop the square root by essentially squaring both sides of the comparison (since distances are always non-negative). That is to say:

√(Δx² + Δy²) < d is equivalent to Δx² + Δy² < d², if d ≥ 0
I should also mention that Chapter 13.2 of Richard G. Lyons's "Understanding Digital Signal Processing" has an incredible collection of 2D distance algorithms (a.k.a. complex number magnitude approximations). As one example (with x and y the absolute values of the two components):

Max = x > y ? x : y;
Min = x < y ? x : y;
if ( Min < 0.04142135 * Max )
    |V| = 0.99 * Max + 0.197 * Min;
else
    |V| = 0.84 * Max + 0.561 * Min;
which has a maximum error of 1.0% from the actual distance. The penalty of course is that you're doing a couple branches; but even the "most accepted" answer to this question has at least three branches in it.
If you're serious about doing a super fast distance estimate to a specific precision, you could do so by writing your own simplified fsqrt() estimate using the same basic method as the compiler vendors do, but at a lower precision, by doing a fixed number of iterations. For example, you can eliminate the special-case handling for extremely small or large numbers, and/or reduce the number of Newton-Raphson iterations. This was the key strategy underlying the so-called "Quake 3" fast inverse square root implementation: it's the classic Newton algorithm with exactly one iteration.
Do not assume that your fsqrt() implementation is slow without benchmarking it and/or reading the sources. Most modern fsqrt() library implementations are branchless and really damned fast. Here for example is an old IBM floating point fsqrt implementation. Premature optimization is, and always will be, the root of all evil.
Note that for 2 (non-negative) distances A and B, if sqrt(A) < sqrt(B), then A < B. Create a specialized version of Get3dDistance() (GetSqrOf3DDistance()) that does not call sqrt(), to be used only in sortfunc().
If you worry about performance, you should also take care of the way you send your arguments:
float Get3dDistance( Vec3 c1, Vec3 c2 );
implies two copies of Vec3 structure. Use references instead:
float Get3dDistance( Vec3 const & c1, Vec3 const & c2 );
You could compare squares of distances instead of the actual distances, since d² = (x1-x2)² + (y1-y2)² + (z1-z2)². It doesn't get rid of the multiplication, but it does eliminate the square root operation.
How often are the input vectors updated and how often are they sorted? Depending on your design, it might be quite efficient to extend the "Vec3" class with a pre-calculated distance and sort on that instead. Especially relevant if your implementation allows you to use vectorized operations.
Other than that, see the flipcode.com article on approximating distance functions for a discussion on yet another approach.
Depending slightly on the number of points you are comparing, what is below is pretty much guaranteed to get the list of points in approximate order, assuming all points change at every iteration.
1) Rewrite the array into a single list of Manhattan distances with
out[ i ] = abs( posn[ i ].x - player.x ) + abs( posn[ i ].y - player.y ) + abs( posn[ i ].z - player.z );
2) Now you can use radix sort on floating point numbers to order them.
Note that in practice this is going to be a lot faster than sorting the list of 3D positions, because it significantly reduces the memory bandwidth requirements of the sort operation, which is where all of the time is going to be spent and in which unpredictable accesses and writes occur. This will run in O(N) time.
If many of the points are stationary at each iteration, there are far faster algorithms, like using KD-trees, although the implementation is quite a bit more complex and it is much harder to get good memory access patterns.
If this is simply a value for sorting, then you can swap the sqrt() for an abs(). If you need to compare distances against set values, square those values instead.
E.g. instead of checking sqrt(...) against a, you can compare the sum inside against a*a.
You may want to consider caching the distance between the player and the object as you calculate it, and then use that in your sortfunc. This would depend upon how many times your sort function looks at each object, so you might have to profile to be sure.
What I'm getting at is that your sort function might do something like this:
compare(a,b);
compare(a,c);
compare(a,d);
and you would calculate the distance between the player and 'a' every time.
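One way to do that caching (a sketch; the cachedDist member and the squared-distance helper are my own additions, not part of the question's CBox):

#include <algorithm>
#include <vector>

// Compute each box's squared distance to the player once, then sort on the
// cached value so the comparator itself does no arithmetic.
void sortByDistance(std::vector<CBox*>& boxes)
{
    for (CBox* box : boxes)
        box->cachedDist = GetSqr3dDistance(Player.center, box->center); // hypothetical member

    std::sort(boxes.begin(), boxes.end(),
              [](CBox* a, CBox* b) { return a->cachedDist < b->cachedDist; });
}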
As others have mentioned, you can leave out the sqrt in this case.
If you could center your coordinates around the player, you could use spherical coordinates and sort by the radius.
That's a big if, though.
If your operation happens a lot, it might be worth putting it into some 3D data structure. You probably need the distance sorting to decide which object is visible, or some similar task. In order of complexity you can use:

Uniform (cubic) subdivision
Divide the used space into cells, and assign the objects to the cells. Fast access to elements; neighbours are trivial; but empty cells take up a lot of space.

Quadtree
Given a threshold, divide the used space recursively into four quads until fewer than the threshold number of objects are inside. Logarithmic element access if objects don't stack upon each other; neighbours are not hard to find; a space-efficient solution.

Octree
Same as quadtree, but divides into 8; optimal even if objects are above each other.

Kd-tree
Given some heuristic cost function and a threshold, split space into two halves with a plane where the cost function is minimal (e.g. the same number of objects on each side). Repeat recursively until the threshold is reached. Always logarithmic; neighbours are harder to get; space efficient (and works in all dimensions).

Using any of the above data structures, you can start from a position and go from neighbour to neighbour to list the objects in increasing distance. You can stop at the desired cut distance. You can also skip cells that cannot be seen from the camera.

For the distance check alone you can use one of the routines mentioned earlier, but ultimately those won't scale well with an increasing number of objects. These structures, on the other hand, can be used to display data sets that take hundreds of gigabytes of hard disc space.