Given r^2, is there an efficient way to compute r^3? - c++

double r2 = dx * dx + dy * dy;
double r3 = r2 * sqrt(r2);
Can the second line be replaced by something faster? Something that does not involve sqrt?

How about
double r3 = pow(r2,1.5);
If sqrt is implemented as a special case of pow, that will save you a multiplication. Not much in the grand scheme of things, mind!
If you are really looking for greater efficiency, consider whether you really need r^3. If, for example, you are only testing it (or something derived from it) to see whether it exceeds a certain threshold, then test r2 instead, e.g.
const double r3_threshold = 9;

// don't do this
if (r3 > r3_threshold)
    ....

// do do this
const double r2_threshold = pow(r3_threshold, 2./3.);
if (r2 > r2_threshold)
    ....
That way pow will be called only once, maybe even at compile time.
EDIT If you do need to recompute the threshold each time, I think the answer concerning Q_rsqrt is worth a look and probably deserves to outrank this one

Use fast inverse sqrt (take the Q_rsqrt function).
You have:
float r2;
// ... r2 gets a value
float invsqrt = Q_rsqrt(r2);
float r3 = r2*r2*invsqrt; // x*x/sqrt(x) = x*sqrt(x)
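For reference, here is a sketch of the well-known Q_rsqrt routine (the Quake III fast inverse square root), written with memcpy rather than the original pointer cast to avoid undefined behavior in C++:

#include <cstdint>
#include <cstring>

float Q_rsqrt(float number)
{
    float x2 = number * 0.5f;
    std::uint32_t i;
    std::memcpy(&i, &number, sizeof i);  // reinterpret the float bits as an int
    i = 0x5f3759df - (i >> 1);           // the famous magic constant
    float y;
    std::memcpy(&y, &i, sizeof y);       // back to float
    y = y * (1.5f - x2 * y * y);         // one Newton-Raphson iteration
    return y;
}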
NOTE: For double types there is a constant like 0x5f3759df which can help you write a function that handles also double data types.
LATER EDIT: Seems like the method has already been discussed here.
LATER EDIT 2: The constant for double was in the Wikipedia link:
Lomont pointed out that the "magic number" for 64 bit IEEE754 size
type double is 0x5fe6ec85e7de30da, but in fact it is close to
0x5fe6eb50c7aa19f9.

I think another way to look at your question would be "how to calculate (or approximate) sqrt(n)". From there your question would be trivial (n * sqrt(n)). Of course, you'd have to define how much error you could live with. Wikipedia gives you many options:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
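For instance, here is a minimal sketch of one of those options, Newton's (Babylonian) method; the starting guess and iteration cap are illustrative choices:

double approxSqrt(double n)
{
    if (n <= 0.0) return 0.0;                  // domain guard
    double guess = n > 1.0 ? n * 0.5 : 1.0;    // crude starting point
    for (int i = 0; i < 20; ++i) {
        double next = 0.5 * (guess + n / guess);
        if (next == guess) break;              // converged to machine precision
        guess = next;
    }
    return guess;
}

With that, r^3 is simply r2 * approxSqrt(r2).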

Gecode: constraining integer variables using a float value

I use Gecode through its C++ API in a kind of learning context with positive and negative examples.
In this context I have two BoolVarArray: positive_bags_ and negative_bags_.
And what I want to do seems very simple: I want to constrain these bags with a minimal growth rate constraint based on a user parameter gmin.
Thereby, the constraint should look like: sum(positive_bags_) >= gmin * sum(negative_bags_).
It works using the rel function defined like this: rel(*this, sum(positive_bags_) >= gmin * sum(negative_bags_)), but my problem is that in my case gmin is a float, and it is cast by rel to an integer.
Therefore I can only constrain positive_bags_ to be 2, 3, ... times bigger than negative_bags_ but I need for my experiments to define gmin as 1.5 for example.
I checked the documentation and did not find a definition of linear that uses both Boolean/integer and float variables.
Is there some way to define this constraint using a float gmin?
Thanks in advance!
If your factor gmin can be expressed as a reasonably small rational n/d (3/2 in your example), then you could use
d * sum(positive_bags_) >= n * sum(negative_bags_)
as your constraint. If there is no small rational that is suitable, then you need to channel your variables to FloatVars and use the FloatVar linear constraint.
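For example, with gmin = 1.5 the rational is 3/2, so the constraint could be posted like this (a sketch, using the same rel/sum API that appears in the question):

// sum(positive_bags_) >= 1.5 * sum(negative_bags_)
// <=> 2 * sum(positive_bags_) >= 3 * sum(negative_bags_)
rel(*this, 2 * sum(positive_bags_) >= 3 * sum(negative_bags_));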
If implicit type-casting is an issue you can try:
(float) sum(positive_bags_) >= (gmin * (float) sum(negative_bags_))
Assuming gmin is a float.
Implicit casting will convert your float to an int. If you want to control what type of rounding is applied, wrap the result in <math.h>'s roundf or a rounding function of your choice, depending on the type.

If I need to divide several numbers by the same value, is it better that I calculate the inverse first?

Every now and then in code it comes up that I need to divide several numbers by the same value:
double d = divisor();
double a = firstNum() / d;
double b = secondNum() / d;
double c = thirdNum() / d;
Since multiplication is faster than division I will often write this as
double di = 1 / divisor();
double a = firstNum() * di;
double b = secondNum() * di;
double c = thirdNum() * di;
I'm wondering if I'm really saving any time by doing this. Would my compiler be smart enough to do this automatically? Is it worth making my code a little less readable?
The compiler is not allowed to transform the first fragment into the second, or vice versa, because floating-point arithmetic is finicky and the two fragments are not precisely equivalent (unless you explicitly opt into unsafe transformations with flags like -ffast-math).
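For instance, this tiny program (with illustrative values) can show the two forms disagreeing in the last bit:

#include <cstdio>

int main()
{
    double d = 3.0;
    double x = 10.0;
    double direct  = x / d;          // one rounding
    double inverse = x * (1.0 / d);  // 1/d is rounded, then the product is rounded again
    std::printf("%s\n", direct == inverse ? "equal" : "different");  // can print "different"
    return 0;
}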
Whether or not you are saving anything by doing it yourself depends on the hardware and other factors. Only testing with your compiler on your hardware within your larger software can tell. Chances are, if you are wondering which one is faster, then the difference is not noticeable.
If you know for sure that the more readable code is so much slower that it fails to fulfil performance requirements, you may consider changing it to less readable faster code.

Source code for trigonometric functions calculations

For a program that needs to be deterministic and produce the same results on different platforms (compilers), the built-in trigonometric functions can't be used, since the algorithm that computes them differs from system to system. It was tested that the result values are different.
(Edit: the results need to be exactly the same down to the last bit, as they are used in a game simulation that runs on all the clients. These clients need to have exactly the same simulation state for it to work. Any small error could grow into a bigger and bigger error over time, and the CRC of the game state is used as a synchronisation check.)
So the only solution I came up with was to use our own custom code to calculate these values. The problem is that (surprisingly) it is very hard to find easy-to-use source code for the full set of trigonometric functions.
This is my modification of the code I found (https://codereview.stackexchange.com/questions/5211/sine-function-in-c-c) for the sin function. It is deterministic across platforms, and its value is almost the same as that of the standard sin (both tested).
#define M_1_2_PI 0.159154943091895335769 // 1 / (2 * pi)

double Math::sin(double x)
{
    // Normalize x to be in [-pi, pi]
    x += M_PI;
    x *= M_1_2_PI;
    double notUsed;
    x = modf(modf(x, &notUsed) + 1, &notUsed);
    x *= M_PI * 2;
    x -= M_PI;

    // The algorithm works for [-pi/2, pi/2], so change the value of x
    // to fit in that interval while keeping the same value of sin(x)
    if (x < -M_PI_2)
        x = -M_PI - x;
    else if (x > M_PI_2)
        x = M_PI - x;

    // Useful to pre-calculate
    double x2 = x * x;
    double x4 = x2 * x2;

    // Calculate the terms.
    // As long as abs(x) < sqrt(6), which is about 2.45, all terms are positive;
    // values outside this range should be reduced to [-pi/2, pi/2] anyway for accuracy.
    // Some care has to be given to the factorials:
    // they can be pre-calculated by the compiler,
    // but the values of the higher ones exceed the storage capacity of int,
    // so force the compiler to use unsigned long longs (if available) or doubles.
    double t1 = x * (1.0 - x2 / (2*3));
    double x5 = x * x4;
    double t2 = x5 * (1.0 - x2 / (6*7)) / (1.0 * 2*3*4*5);
    double x9 = x5 * x4;
    double t3 = x9 * (1.0 - x2 / (10*11)) / (1.0 * 2*3*4*5*6*7*8*9);
    double x13 = x9 * x4;
    double t4 = x13 * (1.0 - x2 / (14*15)) / (1.0 * 2*3*4*5*6*7*8*9*10*11*12*13);
    // Add more terms if your accuracy requires them,
    // but remember that x is smaller than 2 and the factorials grow very fast,
    // so I doubt that 2^17 / 17! will add anything.
    // Even t4 might already be too small to matter when compared with t1.

    // Sum backwards (smallest terms first)
    double result = t4;
    result += t3;
    result += t2;
    result += t1;
    return result;
}
But I didn't find anything suitable for the other functions, like asin, atan, tan, etc. (other than sin/cos).
These functions don't have to be as precise as the standard ones, but at least 8 significant figures would be nice.
"It was tested, that the result values are different."
How different is different enough to matter? You claim to want 8 significant (decimal?) digits of agreement. I don't believe that you've found less than that in any implementation that conforms to ISO/IEC 10967-3:2006 ยง5.3.2.
Do you understand how trivial a trigonometric error of one part per billion represents? It would be under 3 kilometers on a circle the size of the earth's orbit. Unless you are planning voyages to Mars, and using sub-standard implementation, your claimed "different" ain't going to matter.
added in response to comment:
What Every Programmer Should Know About Floating-Point Arithmetic. Read it. Seriously.
Since you claim that:
precision isn't as important as bit for bit equality
you need only 8 significant digits
then you should truncate your values to 8 significant digits.
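For illustration, one way to read "truncate to 8 significant digits" is a round-trip through text; this is only a sketch, and whether it is appropriate depends on how the values are used:

#include <cstdio>
#include <cstdlib>

double to8SignificantDigits(double v)
{
    char buf[32];
    std::snprintf(buf, sizeof buf, "%.8g", v);  // 8 significant decimal digits
    return std::strtod(buf, nullptr);
}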
I guess the easiest would be to pick a liberally licensed runtime library which implements the required math functions:
FreeBSD
Go (will need transliteration, but I think all functions have a non-assembly implementation)
MinGW-w64
...
And just use their implementations. Note that the ones listed above are public domain, BSD licensed, or under some other liberal license. Make sure to abide by the licenses if you use the code.
You can use Taylor series (actually, it seems that is what you are already using, maybe without knowing it).
Take a look on Wikipedia (or anywhere else):
https://en.wikipedia.org/wiki/Taylor_series
You have here the list for the most common functions (exp, log, cos, sin, etc.): https://en.wikipedia.org/wiki/Taylor_series#List_of_Maclaurin_series_of_some_common_functions
But with some mathematical knowledge you can find/calculate almost anything (OK, clearly not everything, but...).
Some examples (there are many others):
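From the Wikipedia list linked above, for instance, the standard Maclaurin series:

sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...
atan(x) = x - x^3/3 + x^5/5 - x^7/7 + ... (for |x| <= 1)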
Notes:
the more terms you add, the more precision you get.
I don't think it's the most efficient way to calculate what you need, but it's quite a simple one (the idea, I mean).
A factorial(n) function could be really useful if you decide to go this way.
I hope it will help.
I'd suggest looking into using lookup tables and linear/bicubic interpolation.
That way you control exactly the values at each point, and you don't have to perform an awful lot of multiplications.
Taylor expansions for sin/cos functions suck anyway.
Spring RTS fought for ages against this kind of desync error: try posting on their forum; not many old developers remain, but those that do should still remember the issues and the fixes.
In this thread http://springrts.com/phpbb/viewtopic.php?f=1&t=8265 they talk specifically about libm determinism (but different OSes might ship different libcs with subtle optimization differences, so you need to take the approach and throw away the library).
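A minimal sketch of a sine lookup table with linear interpolation (the table size is an illustrative choice; note that for cross-platform determinism the table would have to be generated once, offline, and shipped as constants — it is built with std::sin here only for brevity, which would reintroduce the platform differences):

#include <cmath>
#include <vector>

static const int kTableSize = 4096;
static const double kTwoPi = 6.28318530717958647692;

double sinLookup(double x)
{
    // Build the table on first use (ship precomputed constants in practice).
    static const std::vector<double> table = [] {
        std::vector<double> t(kTableSize + 1);
        for (int i = 0; i <= kTableSize; ++i)
            t[i] = std::sin(kTwoPi * i / kTableSize);
        return t;
    }();

    double turns = x / kTwoPi;
    turns -= std::floor(turns);          // wrap into [0, 1)
    double pos = turns * kTableSize;
    int i = static_cast<int>(pos);
    double frac = pos - i;               // position between adjacent entries
    return table[i] + frac * (table[i + 1] - table[i]);  // linear interpolation
}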

C++, Variables reverting to initial values

I have a C++ class, shapeObject, that is somewhat complicated but this question is only regarding two particular fields of the class, dX and dY.
header:
class shapeObject {
public:
    shapeObject();
    ...
private:
    float dX, dY;
    ...
};

cpp file:

shapeObject::shapeObject() {
    ...
    dX = 0;
    dY = 0;
    ...
}
These fields are only modified by one function, which adds or subtracts a small float value to dX or dY. However, the next time the value of dX or dY is read, it has reverted to its original value of 0.0.
Since the fields are not modified anywhere but in that one function, and that function never sets them to 0.0, I'm not sure why they revert to the original value. Any ideas why this could be happening?
My psychic debugging skills indicate some possibilities:
You're shadowing the members when you assign them float dX = small_float_value; instead of dX = small_float_value;
You're working with different copies of shapeObject and the modified one gets thrown away by accident (or its copy constructor doesn't do the obvious thing).
The values only appear to still be zero in printing but are in fact the small float value you want.
Somehow the small float value gets truncated to zero.
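For illustration, the shadowing case from the first bullet might look like this (nudge is a hypothetical member function, not from the question):

void shapeObject::nudge(float smallValue)
{
    float dX = smallValue;  // BUG: declares a NEW local dX that shadows the
                            // member; it disappears when the function returns
    // dX = smallValue;     // correct: assigns to the member shapeObject::dX
}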
When adding or subtracting a small value, try using an explicit float cast, like dX = dX + (float)0.001;
Better, try using double instead of float; float has precision problems.
Perhaps somewhere the small float value is being cast to an int before being added/subtracted so the resulting change is zero?
Need to see more code to do anything more than stab in the dark.

Hash function for floats

I'm currently implementing a hash table in C++ and I'm trying to make a hash function for floats...
I was going to treat floats as integers by padding out the decimal places, but then I realized that I would probably hit overflow with big numbers...
Is there a good way to hash floats?
You don't have to give me the function directly, but I'd like to see/understand different concepts...
Notes:
I don't need it to be really fast, just evenly distributed if possible.
I've read that floats should not be hashed because of the speed of computation. Can someone confirm/explain this and give me other reasons why floats should not be hashed? I don't really understand why (besides the speed).
It depends on the application, but most of the time floats should not be hashed, because hashing is used for fast lookup of exact matches, and most floats are the result of calculations that produce only an approximation to the correct answer. The usual way to check for floating-point equality is to check whether a value is within some delta (in absolute value) of the correct answer. This type of check does not lend itself to hashed lookup tables.
EDIT:
Normally, because of rounding errors and inherent limitations of floating point arithmetic, if you expect that floating point numbers a and b should be equal to each other because the math says so, you need to pick some relatively small delta > 0, and then you declare a and b to be equal if abs(a-b) < delta, where abs is the absolute value function. For more detail, see this article.
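In code, that check might look like this (a trivial sketch; choosing delta well is the hard part and depends on the magnitudes involved):

#include <cmath>

bool almostEqual(double a, double b, double delta = 1e-9)
{
    // the absolute-difference comparison described above
    return std::fabs(a - b) < delta;
}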
Here is a small example that demonstrates the problem:
float x = 1.0f;
x = x / 41;
x = x * 41;
if (x != 1.0f)
{
    std::cout << "ooops...\n";
}
Depending on your platform, compiler and optimization levels, this may print ooops... to your screen, meaning that the mathematical equation x / y * y = x does not necessarily hold on your computer.
There are cases where floating point arithmetic produces exact results, e.g. reasonably sized integers and rationals with power-of-2 denominators.
If your hash function did the following you'd get some degree of fuzziness on the hash lookup
unsigned int Hash( float f )
{
    unsigned int ui;
    memcpy( &ui, &f, sizeof( float ) );
    return ui & 0xfffff000;
}
This way you mask off the 12 least significant bits, allowing for a degree of uncertainty... It really depends on your application, however.
You can use std::hash; it's not bad:

std::size_t myHash = std::hash<float>{}(myFloat);
unsigned hash(float x)
{
    union
    {
        float f;
        unsigned u;
    };
    f = x;
    return u;
}

Technically undefined behavior, but most compilers support this. Alternative solution:

unsigned hash(float x)
{
    return (unsigned&)x;
}
Both solutions depend on the endianness of your machine, so for example on x86 and SPARC, they will produce different results. If that doesn't bother you, just use one of these solutions.
You can of course represent a float as an int type of the same size to hash it; however, this naive approach has some pitfalls you need to be careful of...
Simply converting to a binary representation is error prone, since values which compare equal won't necessarily have the same binary representation.
An obvious case: -0.0 won't match 0.0, for example. *
Further, simply converting to an int of the same size won't give a very even distribution, which is often important (when implementing a hash/set that uses buckets, for example).
Suggested steps for implementation:
filter out the non-finite cases (nan, inf) and (0.0, -0.0); whether you need to do this explicitly or not depends on the method used.
convert to an int of the same size (that is, use a union, for example, to represent the float as an int; don't simply cast it to an int).
re-distribute the bits (intentionally vague here!); this is basically a speed vs quality tradeoff. But if you have many values in a small range, you probably don't want them to hash into a similar range too.
*: You may want to check for (nan and -nan) too. How to handle those exactly depends on your use case (you may want to ignore the sign for all nans, as CPython does).
Python's _Py_HashDouble is a good reference for how you might hash a float in production code (ignore the -1 check at the end, since that's a special value for Python).
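A minimal sketch of those steps in C++ (the bit mixer here is the splitmix64 finalizer, an arbitrary illustrative choice, not CPython's algorithm):

#include <cmath>
#include <cstdint>
#include <cstring>
#include <limits>

std::uint64_t hashDouble(double d)
{
    if (std::isnan(d))
        d = std::numeric_limits<double>::quiet_NaN();  // collapse all nans to one value
    if (d == 0.0)
        d = 0.0;                                       // collapse -0.0 and 0.0
    std::uint64_t u;
    std::memcpy(&u, &d, sizeof u);                     // bit-convert without aliasing UB
    // Redistribute the bits (splitmix64 finalizer).
    u ^= u >> 30; u *= 0xbf58476d1ce4e5b9ULL;
    u ^= u >> 27; u *= 0x94d049bb133111ebULL;
    u ^= u >> 31;
    return u;
}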
If you're interested, I just made a hash function that uses floating point and can hash floats. It also passes SMHasher (which is the main bias test for non-crypto hash functions). It's a lot slower than normal non-cryptographic hash functions due to the float calculations.
I'm not sure if tifuhash will become useful for all applications, but it's interesting to see a simple floating-point function pass both PractRand and SMHasher.
The main state update function is very simple, and looks like:
function q( state, val, numerator, denominator ) {
    // Continued Fraction mixed with Egyptian fraction "Continued Egyptian Fraction"
    // with denominator = val + pos / state[1]
    state[0] += numerator / denominator;
    state[0] = 1.0 / state[0];

    // Standard Continued Fraction with a_i = val, b_i = (a_i-1) + i + 1
    state[1] += val;
    state[1] = numerator / state[1];
}
Anyway, you can get it on npm
Or you can check out the github
Using it is simple:

const tifu = require('tifuhash');

const message = 'The medium is the message.';
const number = 333333333;
const float = Math.PI;

console.log( tifu.hash( message ),
             tifu.hash( number ),
             tifu.hash( float ),
             tifu.hash( ) );
There's a demo of some hashes on runkit here https://runkit.com/593a239c56ebfd0012d15fc9/593e4d7014d66100120ecdb9
Side note: I think that in the future, using floating point, possibly big arrays of floating-point calculations, could be a useful way to make more computationally demanding hash functions. A weird side effect I discovered of using floating point is that the hashes are target dependent, and I surmise they could be used to fingerprint the platforms they were calculated on.
Because of the IEEE byte ordering, the Java Float.hashCode() and Double.hashCode() do not give good results. This problem is well known and can be addressed by this scrambler:
class HashScrambler {
    /**
     * https://sites.google.com/site/murmurhash/
     */
    static int murmur(int x) {
        x ^= x >> 13;
        x *= 0x5bd1e995;
        return x ^ (x >> 15);
    }
}
You then get a good hash function, which also allows you to use Float and Double in hash tables. But you need to write your own hash table, one that allows a custom hash function.
Since in a hash table you also need to test for equality, you need exact equality to make it work. Maybe the latter is what President James K. Polk intends to address?