Floating point arithmetic in Fortran

I've inherited some Fortran code that I'm trying to make sense of. It uses REAL variables in many places that, I think, it shouldn't - but maybe I'm misunderstanding how this works in Fortran (as compared to C++ which I'm much more familiar with), hence this question.
So the variables in question are essentially 'categorical values', 'factors' or 'enums' depending on how you look at it/want to call it. They are of data type REAL and can only take on a finite number of pre-determined, integer values. So say variable a can only be of value 1, 2 or 3. These values are read in from external files; in these external files, they are represented as integers, so it's not a case of 'rounding issues in external data sources' or something.
However in the code, it never does a straight comparison, always a greater than/lower than check. So, instead of doing
if (a == 1) then
it does
if (a > 0.9 .and. a < 1.1) then
You can imagine that this gets very confusing/tiresome to read, especially when it needs to check if a value is one of multiple categories.
So I think this is a case where someone at some point heard 'never compare REAL values for equality' (because of the finite precision of stored floating point values, this same problem exists in every programming language), but then didn't really understand when that advice applies. (I guess the first error is that categorical values should have been represented as INTEGER values, but that situation is what it is for now.)
OTOH maybe I'm just misunderstanding how REAL and INTEGER values are represented and work in Fortran? Could there ever be a case where
b = 1.5
a = REAL(INT(b))
if (a > 0.9 .and. a < 1.1) then
would make sense?

ONLY in the case where you're not performing any operations on the real values (just assigning a value and comparing it for equality with the same literal you assigned, with the same kind parameter) can you do without a tolerance.
The thing is, for a real variable a such as:
Real a
a = 2
You can be sure that
a == 2
will always be .true. But an expression involving arithmetic with another real value b, such as
a / b * b == 2
(or any other operation) is not guaranteed to be .true.
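To make that concrete, here is a small C++ analogue (C++ because the OP mentions being more at home there; the divisor 49 is just an arbitrary value for which the round-trip visibly misses 2):
#include <iostream>

int main() {
    double a = 2.0;                           // assigning a representable value is exact
    std::cout << (a == 2.0) << '\n';          // always prints 1 (true)
    double b = 49.0;
    // a / b * b is mathematically 2, but each operation rounds:
    std::cout << (a / b * b == 2.0) << '\n';  // typically prints 0 for this divisor
    return 0;
}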

If for some reason the variable has to stay REAL then you may use
the intrinsic Fortran function NINT (nearest integer) in the comparisons:
if( nint(a) == 1 ) then
....
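For what it's worth, the C++ counterpart of NINT is std::lround, so the same pattern in the OP's more familiar language (function and category names made up) would be:
#include <cmath>   // std::lround

void handle_category(double a) {
    switch (std::lround(a)) {     // round to nearest integer, like Fortran's NINT
    case 1: /* handle category 1 */ break;
    case 2: /* handle category 2 */ break;
    case 3: /* handle category 3 */ break;
    default: /* unexpected value in the input data */ break;
    }
}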

Related

Comparing double in C++, peer review

I have always had the problem of comparing double values for equality. There are functions around like some fuzzy_compare(double a, double b), but I often enough did not manage to find them in time. So I thought on building a wrapper class for double just for the comparison operator:
typedef union {
    uint64_t i;
    double d;
} number64;

bool Double::operator==(const double value) const {
    number64 a, b;
    a.d = this->value;
    b.d = value;
    if ((a.i & 0x8000000000000000) != (b.i & 0x8000000000000000)) {
        // Signs differ: only equal if both are (+/-) zero.
        if ((a.i & 0x7FFFFFFFFFFFFFFF) == 0 && (b.i & 0x7FFFFFFFFFFFFFFF) == 0)
            return true;
        return false;
    }
    if ((a.i & 0x7FF0000000000000) != (b.i & 0x7FF0000000000000))
        return false;  // different exponents
    uint64_t diff = ((a.i & 0x000FFFFFFFFFFFFF) - (b.i & 0x000FFFFFFFFFFFFF)) & 0x000FFFFFFFFFFFFF;
    return diff < 2; // 2 here is kind of some epsilon, but integer and independent of value range
}
The idea behind it is:
First, compare the sign. If it's different, the numbers are different. Except if all other bits are zero. That is comparing +0.0 with -0.0, which should be equal. Next, compare the exponent. If these are different, the numbers are different. Last, compare the mantissa. If the difference is low enough, the values are equal.
It seems to work, but just to be sure, I'd like a peer review. It could well be that I overlooked something.
And yes, this wrapper class needs all the operator overloading stuff. I skipped that because they're all trivial. The equality operator is the main purpose of this wrapper class.
This code has several problems:
Small values on different sides of zero always compare unequal, no matter how (not) far apart.
More importantly, -0.0 compares unequal with +epsilon but +0.0 compares equal with +epsilon (for some epsilon). That's really bad.
What about NaNs?
Values with different exponents compare unequal, even if one floating point "step" apart (e.g. the double before 1 compares unequal to 1, but the one after 1 compares equal...).
The last point could ironically be fixed by not distinguishing between exponent and mantissa: The binary representations of all positive floats are exactly in the order of their magnitude!
It appears that you want to just check whether two floats are a certain number of "steps" apart. If so, maybe this boost function might help. But I would also question whether that's actually reasonable:
Should the smallest positive non-denormal compare equal to zero? There are still many (denormal) floats between them. I doubt this is what you want.
If you operate on values that are expected to be of magnitude 1e16, then 1 should compare equal to 0, even though half of all positive doubles are between 0 and 1.
It is usually most practical to use a relative + absolute epsilon. But I think it will be most worthwhile to check out this article, which discusses the topic of comparing floats more extensively than I could fit into this answer:
https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
To cite its conclusion:
Know what you’re doing
There is no silver bullet. You have to choose wisely.
If you are comparing against zero, then relative epsilons and ULPs based comparisons are usually meaningless. You’ll need to use an absolute epsilon, whose value might be some small multiple of FLT_EPSILON and the inputs to your calculation. Maybe.
If you are comparing against a non-zero number then relative epsilons or ULPs based comparisons are probably what you want. You’ll probably want some small multiple of FLT_EPSILON for your relative epsilon, or some small number of ULPs. An absolute epsilon could be used if you knew exactly what number you were comparing against.
If you are comparing two arbitrary numbers that could be zero or non-zero then you need the kitchen sink. Good luck and God speed.
Above all you need to understand what you are calculating, how stable the algorithms are, and what you should do if the error is larger than expected. Floating-point math can be stunningly accurate but you also need to understand what it is that you are actually calculating.
You store into one union member and then read from another. That causes an aliasing problem (undefined behaviour) because the C++ language requires that objects of different types do not alias.
There are a few ways to remove the undefined behaviour:
Get rid of the union and just memcpy the double into uint64_t. The portable way (see the sketch after this list).
Mark union member i type with [[gnu::may_alias]].
Insert a compiler memory barrier between storing into union member d and reading from member i.
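A minimal sketch of the portable memcpy route (the helper name is made up; in C++20, std::bit_cast<uint64_t>(d) does the same job):
#include <cstdint>
#include <cstring>   // std::memcpy

uint64_t bits_of(double d) {
    static_assert(sizeof(uint64_t) == sizeof(double), "expects 64-bit double");
    uint64_t i;
    std::memcpy(&i, &d, sizeof i);   // well-defined, unlike reading an inactive union member
    return i;
}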
Frame the question this way:
We have two numbers, a and b, that have been computed with floating-point arithmetic.
If they had been computed exactly with real-number mathematics, we would have two other numbers; call them a′ and b′.
We want to compare a and b and get an answer that tells us whether a′ equals b′.
In other words, you are trying to correct for errors that occurred while computing a and b. In general, that is impossible, of course, because we do not know what a′ and b′ are. We only have the approximations a and b.
The code you propose falls back to another strategy:
If a and b are close to each other, we will accept that a′ equals b′. (In other words: if a is close to b, it is possible that a′ equals b′ and the differences we have are only because of calculation errors, so we will accept that a′ equals b′ without further evidence.)
There are two problems with this strategy:
This strategy will incorrectly accept that a′ equals b′ even when it is not true, just because a and b are close.
We need to decide how close to require a and b to be.
Your code attempts to address the latter: it is establishing some tests about whether a and b are close enough. As others have pointed out, it is severely flawed:
It treats numbers as different if they have different signs, but floating-point arithmetic can cause a to be negative even if a′ is positive, and vice versa.
It treats numbers as different if they have different exponents, but floating-point arithmetic can cause a to have a different exponent from a′.
It treats numbers as different if they differ by more than a fixed number of ULP (units of least precision), but floating-point arithmetic can, in general, cause a to differ from a′ by any amount.
It assumes an IEEE-754 format and needlessly uses aliasing with behavior not defined by the C++ standard.
The approach is fundamentally flawed because it needlessly fiddles with the floating-point representation. The actual way to determine from a and b whether a′ and b′ might be equal is to figure out, given a and b, what sets of values a′ and b′ might lie in and whether those sets have any value in common.
In other words, given a, the value of a′ might be in some interval, (a−e_al, a+e_ar) (that is, all the numbers from a minus some error on the left to a plus some error on the right), and, given b, the value of b′ might be in some interval, (b−e_bl, b+e_br). If so, what you want to test is not some floating-point representation properties but whether the two intervals (a−e_al, a+e_ar) and (b−e_bl, b+e_br) overlap.
To do that, you need to know, or at least have bounds on, the errors e_al, e_ar, e_bl, and e_br. But those errors are not fixed by the floating-point format. They are not 2 ULP or 1 ULP or any number of ULP scaled by the exponent. They depend on how a and b were computed. In general, the errors can range from 0 to infinity, and they can also be NaN.
So, to test whether a and b might be equal, you need to analyze the floating-point arithmetic errors that could have occurred. In general, this is difficult. There is an entire field of mathematics for it, numerical analysis.
If you have computed bounds on the errors, then you can just compare the intervals using ordinary arithmetic. There is no need to take apart the floating-point representation and work with the bits. Just use the normal add, subtract, and comparison operations.
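For example, once you have bounds from your error analysis (assumed symmetric here for simplicity; the bounds themselves are the hard part and are taken as given), the overlap test is plain arithmetic:
#include <algorithm>   // std::max, std::min

// True if the intervals [a - ea, a + ea] and [b - eb, b + eb] overlap,
// i.e. the ideal values a' and b' behind a and b could be equal.
bool might_be_equal(double a, double ea, double b, double eb) {
    return std::max(a - ea, b - eb) <= std::min(a + ea, b + eb);
}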
(The problem is actually more complicated than I allowed above. Given a computed value a, the potential values of a′ do not always lie in a single interval. They could be an arbitrary set of points.)
As I have written previously, there is no general solution for comparing numbers containing arithmetic errors.
Once you figure out error bounds and write a test that returns true if a′ and b′ might be equal, you still have the problem that the test also accepts false positives: it will return true even in cases where a′ and b′ are not equal. In other words, you have just replaced a program that is wrong because it rejects equality even though a′ and b′ would be equal with a program that is wrong in other cases because it accepts equality in cases where a′ and b′ are not equal. This is another reason there is no general solution: in some applications, accepting as equal numbers that are not equal is okay, at least for some situations. In other applications, that is not okay, and using a test like this will break the program.

How do I handle values close to zero in C++?

I am trying to code an iterative function which takes an initial
double t = /*formula 1*/;
and then computes
for (auto i = 0; i < bigNumber; ++i)
{
    double temp = /*formula 2*/;
    t = t*temp;
}
This works fine, except in the cases where the initial t is so small that C++ automatically sets it equal to zero (it is NOT actually supposed to be zero).
Then of course t will forever remain zero since we multiply it by itself, and that's the problem.
I tried solving this by setting t equal to some very small, but non-zero, number in case C++ had set it to zero, but this doesn't work: then I end up with the opposite problem, as t eventually blows up once we have iterated enough times.
How do I solve this problem?
Possibly worth mentioning:
The first formula (formula 1) involves stuff like exp(-verybignumber) and the second formula involves stuff like pow(i, -1), meaning it becomes very small with higher iterations.
Floating-point arithmetic isn't trivial, as you just discovered. This is not really related to C++, but to the IEEE 754 standard.
One of the things you need to ensure is that you stay within the normal numbers. That is, ensure your values throughout your computation do not get too small or too large.
In some cases, this is easy and maybe rescaling the input data is enough. In other cases, maybe you have to rethink your equations (steps) to avoid this.
Sometimes you can simply get away with using a bigger type, e.g. long double or even __float128 (quad; check libquadmath).
Other solutions are to employ arbitrary-precision numbers (use a library like GMP and MPFR; do not attempt to do it yourself as a beginner) or even symbolic computation. It all depends on what performance you require.
Note that there are many other pitfalls when dealing with floating-point arithmetic.
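One common way of rethinking the steps for this particular shape of problem, a long product of small factors, is to accumulate the logarithm instead of the product itself. This is only a sketch under the assumption that the OP's formulas really are products; the -800 and pow(i, -1) stand-ins are made up from the question's hints:
#include <cmath>
#include <cstdio>

int main() {
    // exp(-800) underflows double to 0, but its logarithm is a perfectly
    // ordinary number, so we track log(t) instead of t itself.
    double log_t = -800.0;                    // log of the initial t
    for (int i = 1; i <= 100; ++i)
        log_t += std::log(1.0 / i);           // multiplying by pow(i, -1), in log space
    std::printf("log(t) = %g\n", log_t);      // finite and meaningful
    std::printf("t = %g\n", std::exp(log_t)); // converting back may still underflow to 0
    return 0;
}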

Is it ok to compare floating points to 0.0 without epsilon?

I am aware that to compare two floating point values one needs to use some epsilon precision, as they are not exact. However, I wonder if there are edge cases where I don't need that epsilon.
In particular, I would like to know if it is always safe to do something like this:
#include <iostream>

double foo(double x){
    if (x < 0.0) return 0.0;
    else return somethingelse(x); // somethingelse(x) != 0.0
}
int main(){
    double x = -3.0;
    if (foo(x) == 0.0) {
        std::cout << "^- is this comparison ok?" << std::endl;
    }
}
I know that there are better ways to write foo (e.g. returning a flag in addition), but I wonder if in general is it ok to assign 0.0 to a floating point variable and later compare it to 0.0.
Or more generally, does the following comparison always yield true?
double x = 3.3;
double y = 3.3;
if (x == y) { std::cout << "is an epsilon required here?" << std::endl; }
When I tried it, it seems to work, but it might be that one should not rely on that.
Yes, in this example it is perfectly fine to check for == 0.0. This is not because 0.0 is special in any way, but because you only assign a value and compare it afterwards. You could also set it to 3.3 and compare for == 3.3, this would be fine too. You're storing a bit pattern, and comparing for that exact same bit pattern, as long as the values are not promoted to another type for doing the comparison.
However, calculation results that would mathematically equal zero would not always equal 0.0.
This Q/A has evolved to also include cases where different parts of the program are compiled by different compilers. The question does not mention this, my answer applies only when the same compiler is used for all relevant parts.
C++ 11 Standard,
§5.10 Equality operators
6 If both operands are of arithmetic or enumeration type, the usual
arithmetic conversions are performed on both operands; each of the
operators shall yield true if the specified relationship is true and
false if it is false.
The relationship is not defined further, so we have to use the common meaning of "equal".
§2.13.4 Floating literals
1 [...] If the scaled value is in the range of representable values
for its type, the result is the scaled value if representable, else
the larger or smaller representable value nearest the scaled value,
chosen in an implementation-defined manner. [...]
The compiler has to choose between exactly two values when converting a literal, when the value is not representable. If the same value is chosen for the same literal consistently, you are safe to compare values such as 3.3, because == means "equal".
Yes, if you return 0.0 you can compare it to 0.0; 0 is representable exactly as a floating-point value. If you return 3.3 you have to be much more careful, since 3.3 is not exactly representable, so a conversion from double to float, for example, will produce a different value.
Correction: 0 as a floating point value is not unique, but IEEE 754 defines the comparison 0.0 == -0.0 to be true (any zero, for that matter).
So with 0.0 this works - for every other number it does not. The literal 3.3 in one compilation unit (e.g. a library) and another (e.g. your application) might differ. The standard only requires the compiler to use the same rounding it would use at runtime - but different compilers / compiler settings might use different rounding.
It will work most of the time (for 0), but is very bad practice.
As long as you are using the same compiler with the same settings (e.g. one compilation unit) it will work, because the literal 0.0 or 0.0f will translate to the same bit pattern every time. The representation of zero is not unique, though. So if foo is compiled into a library and the call to it sits in some application, the same function might fail.
You can rescue this very case by using std::fpclassify to check whether the returned value represents a zero. For every finite (non-zero) value you will have to use an epsilon-comparison though unless you stay within one compilation unit and perform no operations on the values.
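A sketch of that rescue (std::fpclassify and FP_ZERO are standard <cmath> facilities):
#include <cmath>   // std::fpclassify, FP_ZERO

bool is_zero(double v) {
    return std::fpclassify(v) == FP_ZERO;   // true for both +0.0 and -0.0
}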
As written, in both cases you are using identical constants in the same file, fed to the same compiler. The string-to-float conversion the compiler uses should return the same bit pattern, so the values should not merely compare equal (as in the plus-or-minus-zero case) but be equal bit by bit.
Were one constant's bit pattern generated at runtime by the operating system's C library (strtof or the like), a different C library could be involved if the binary is transported to a computer other than the one it was compiled on, and then you might have a problem.
Certainly, if you compute 3.3 for one of the terms at runtime and have the other 3.3 computed at compile time, you can and will get failures on the equality comparisons. Some constants are obviously more likely to work than others.
Of course, as written, your 3.3 comparison is dead code and the compiler just removes it if optimizations are enabled.
You didn't specify the floating-point format, nor the standard (if any) for that format, that you were interested in. Some formats have the +/- zero problem, some don't, for example.
It is a common misconception that floating point values are "not exact". In fact each of them is perfectly exact (except, maybe, some special cases such as -0.0 or Inf) and equal to s·2^(e−p+1), where s, e, and p are the significand, exponent, and precision correspondingly, each of them an integer. E.g. in IEEE 754-2008 binary32 format (aka float32) p = 24 and 1 is represented as 0x800000·2^(−23). There are two things that are really not exact when you deal with floating point values:
Representation of a real value using an FP one. Obviously, not all real numbers can be represented using a given FP format, so they have to be somehow rounded. There are several rounding modes, but the most commonly used is "round to nearest, ties to even". If you always use the same rounding mode, which is almost certainly the case, the same real value is always represented with the same FP one. So you can be sure that if two real values are equal, their FP counterparts are exactly equal too (but not the reverse, obviously).
Operations with FP numbers are (mostly) inexact. So if you have some real-valued function φ(ξ) implemented in the computer as a function f(x) of an FP argument, and you want to compare its result with some "true" value y, you need to use some ε in the comparison, because it is very hard (sometimes even impossible) to write a function giving exactly y. And the value of ε strongly depends on the nature of the FP operations involved, so in each particular case there may be a different optimal value.
For more details see D. Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic, and J.-M. Muller et al., Handbook of Floating-Point Arithmetic. Both texts can be found on the Internet.

Fortran - want to round to one decimal point

In Fortran I have to round latitude and longitude to one digit after the decimal point.
I am using gfortran compiler and the nint function but the following does not work:
print *, nint( 1.40 * 10. ) / 10. ! prints 1.39999998
print *, nint( 1.49 * 10. ) / 10. ! prints 1.50000000
Looking for both general and specific solutions here. For example:
How can we display numbers rounded to one decimal place?
How can we store such rounded numbers in fortran. It's not possible in a float variable, but are there other ways?
How can we write such numbers to NetCDF?
How can we write such numbers to a CSV or text file?
As others have said, the issue is the use of floating point representation in the NetCDF file. Using nco utilities, you can change the latitude/longitude to short integers with scale_factor and add_offset. Like this:
ncap2 -s 'latitude=pack(latitude, 0.1, 0); longitude=pack(longitude, 0.1, 0);' old.nc new.nc
There is no way to do what you are asking. The underlying problem is that the rounded values you desire are not necessarily able to be represented using floating point.
For example, if you had the value 10.58, it is stored in IEEE754 float32 as the nearest representable value, approximately 1.3225000 x 2^3 = 10.580000.
When you round this value to one decimal point (however you choose to do so), the result would be 10.6; however, 10.6 does not have an exact representation. The nearest representation is about 1.3250000 x 2^3 = 10.6000004 in float32. So no matter how you deal with the rounding, there is no way to store 10.6 exactly in a float32 value, and no way to write it as a floating point value into a netCDF file.
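You can check this with a couple of printed digits (the exact digits shown can vary with the runtime's formatting, but they will not be exactly 10.6):
#include <cstdio>

int main() {
    float f = 10.6f;              // nearest float32 to 10.6
    std::printf("%.7f\n", f);     // prints 10.6000004: the stored value is not 10.6
    return 0;
}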
YES, IT CAN BE DONE! The "accepted" answer above is correct in its limited range, but is wrong about what you can actually accomplish in Fortran (or various other HGL's).
The only question is what price you are willing to pay if something like a WRITE with F6.1 fails?
From one perspective, your problem is a particularly trivial variation on the subject of "Arbitrary Precision" computing. How do you imagine cryptography is handled when you need to store, manipulate, and perform "math" with, say, 1024 bit numbers, with exact precision?
A simple strategy in this case would be to separate each number into its constituent "LHSofD" (Left Hand Side of Decimal), and "RHSofD" values. For example, you might have an RLon(i,j) = 105.591, and would like to print 105.6 (or any manner of rounding) to your netCDF (or any normal) file. Split this into RLonLHS(i,j) = 105, and RLonRHS(i,j) = 591.
... at this point you have choices that increase generality, but at some expense. To save "money" the RHS might be retained as 0.591 (but you lose generality if you need to do fancier things).
For simplicity, assume the "cheap and cheerful" second strategy.
The LHS is easy (Int()).
Now, for the RHS, multiply by 10 (if you wish to round to 1 decimal), e.g. to arrive at RLonRHS(i,j) = 5.91, and then apply Fortran's "round to nearest Int" NInt() intrinsic ... leaving you with RLonRHS(i,j) = 6.0.
... and Bob's your uncle:
Now you print the LHS and RHS to your netCDF using a suitable Write statement concatenating the "duals", and it will create an EXACT representation as per the required objectives in the OP.
... of course later reading-in those values returns you to the same issues as illustrated above, unless the read-in is also ArbPrec aware.
... we wrote our own ArbPrec lib, but there are several about, also in VBA and other HGL's ... but be warned, a full ArbPrec bit of machinery is a non-trivial matter ... luckily your problem is so simple.
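For illustration only, here is a toy version of that split-and-round idea in C++ (names made up; a real ArbPrec library does far more than this):
#include <cmath>
#include <cstdio>

// Print x rounded to one decimal, by working with exact integer tenths.
void print_rounded_1dp(double x) {
    long scaled = std::lround(x * 10.0);   // e.g. 105.591 -> 1056 tenths
    long lhs = scaled / 10;                // left of the decimal: 105
    long rhs = scaled % 10;                // right of the decimal: 6
    if (rhs < 0) rhs = -rhs;
    std::printf("%s%ld.%ld\n", (scaled < 0 && lhs == 0) ? "-" : "", lhs, rhs);
}
Here print_rounded_1dp(105.591) prints 105.6 exactly, because after the initial lround everything is integer arithmetic.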
There are several aspects one can consider in relation to "rounding to one decimal place". These relate to: internal storage and manipulation; display and interchange.
Display and interchange
The simplest aspects cover how we report stored value, regardless of the internal representation used. As covered in depth in other answers and elsewhere we can use a numeric edit descriptor with a single fractional digit:
print '(F0.1,2X,F0.1)', 10.3, 10.17
end
How the output is rounded is a changeable mode:
print '(RU,F0.1,2X,RD,F0.1)', 10.17, 10.17
end
In this example we've chosen to round up and then down, but we could also round to zero or round to nearest (or let the compiler choose for us).
For any formatted output, whether to screen or file, such edit descriptors are available. A G edit descriptor, such as one may use to write CSV files, will also do this rounding.
For unformatted output this concept of rounding is not applicable as the internal representation is referenced. Equally for an interchange format such as NetCDF and HDF5 we do not have this rounding.
For NetCDF, your attribute convention may specify something like FORTRAN_format, which gives an appropriate format for ultimate display of the (default) real, non-rounded, variable.
Internal storage
Other answers and the question itself mention the impossibility of accurately representing (and working with) decimal digits. However, nothing in the Fortran language requires this to be impossible:
integer, parameter :: rk = SELECTED_REAL_KIND(radix=10)
real(rk) x
x = 0.1_rk
print *, x
end
is a Fortran program which has a radix-10 variable and literal constant. See also IEEE_SELECTED_REAL_KIND(radix=10).
Now, you are exceptionally likely to see that selected_real_kind(radix=10) gives you the value -5, but if you want something positive that can be used as a type parameter you just need to find someone offering you such a system.
If you aren't able to find such a thing then you will need to work accounting for errors. There are two parts to consider here.
The intrinsic real numerical types in Fortran are floating point ones. To use a fixed point numeric type, or a system like binary-coded decimal, you will need to resort to non-intrinsic types. Such a topic is beyond the scope of this answer, but pointers are made in that direction by DrOli.
These efforts will not be computationally/programmer-time cheap. You will also need to take care of managing these types in your output and interchange.
Depending on the requirements of your work, you may find simply scaling by (powers of) ten and working on integers suits. In such cases, you will also want to find the corresponding NetCDF attribute in your convention, such as scale_factor.
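A sketch of that scaled-integer idea (C++ for illustration; the same bookkeeping works with Fortran INTEGER arrays, with scale_factor = 0.1 on the NetCDF side):
#include <cmath>
#include <cstdio>

int main() {
    double longitude = 10.17;                      // raw input
    long tenths = std::lround(longitude * 10.0);   // store 102 tenths: exact integers
    // ... all internal work uses `tenths`; convert only for display:
    std::printf("%.1f\n", tenths / 10.0);          // prints 10.2
    return 0;
}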
Relating to our internal representation concerns we have similar rounding issues to output. For example, if my input data has a longitude of 10.17... but I want to round it in my internal representation to (the nearest representable value to) a single decimal digit (say 10.2/10.1999998) and then work through with that, how do I manage that?
We've seen how nint(10.17*10)/10. gives us this, but we've also learned something about how numeric edit descriptors do this nicely for output, including controlling the rounding mode:
character(10) :: intermediate
real :: rounded
write(intermediate, '(RN,F0.1)') 10.17
read(intermediate, *) rounded
print *, rounded ! This may look not "exact"
end
We can track the accumulation of errors here if this is desired.
The round_x = nint(x*10d0)/10d0 expression rounds x (for abs(x) < 2**31/10; for larger numbers use dnint()) and assigns the rounded value to the round_x variable for further calculations.
As mentioned in the answers above, not all numbers with one significant digit after the decimal point have an exact representation, for example, 0.3 does not.
print *, 0.3d0
Output:
0.29999999999999999
To output a rounded value to a file or the screen, or to convert it to a string with a single significant digit after the decimal point, use the edit descriptor 'Fw.1' (w is the total field width in characters; w = 0 means variable width). For example:
print '(5(1x, f0.1))', 1.30, 1.31, 1.35, 1.39, 345.46
Output:
1.3 1.3 1.4 1.4 345.5
@JohnE, using 'G10.2' is incorrect; it rounds the result to two significant digits, not to one digit after the decimal point. E.g.:
print '(g10.2)', 345.46
Output:
0.35E+03
P.S.
For NetCDF, rounding should be handled by the NetCDF viewer; however, you can output variables as NC_STRING type:
write(NetCDF_out_string, '(F0.1)') 1.49
Or, alternatively, get "beautiful" NC_FLOAT/NC_DOUBLE numbers:
beautiful_float_x = nint(x*10.)/10. + epsilon(1.)*nint(x*10.)/10./2.
beautiful_double_x = dnint(x*10d0)/10d0 + epsilon(1d0)*dnint(x*10d0)/10d0/2d0
P.P.S. @JohnE
The preferred solution is not to round intermediate results in memory or in files; rounding is performed only when the final output of human-readable data is issued:
1. Use print with the edit descriptor 'Fw.1', see above;
2. There are no simple and reliable ways to accurately store rounded numbers (numbers with a decimal fixed point):
2.1. Theoretically, some Fortran implementations can support decimal arithmetic, but I am not aware of implementations in which 'selected_real_kind(4, 4, 10)' returns a value other than -5;
2.2. It is possible to store rounded numbers as strings;
2.3. You can use the Fortran binding of the GMP library. Functions with the mpq_ prefix are designed to work with rational numbers;
3. There are no simple and reliable ways to write rounded numbers to a netCDF file while preserving their properties for the reader of this file:
3.1. netCDF supports 'Packed Data Values', i.e. you can choose an integer type with the attributes 'scale_factor' and 'add_offset' and save arrays of integers. But in the file 'scale_factor' will be stored as a single- or double-precision floating-point number, i.e. the value will differ from 0.1. Accordingly, when the netCDF library computes unpacked_data_value = packed_data_value*scale_factor + add_offset on reading, there will be a rounding error. (You can set scale_factor=0.1*(1.+epsilon(1.)) or scale_factor=0.1d0*(1d0+epsilon(1d0)) to exclude a long run of '9' digits.);
3.2. There are C_format and FORTRAN_format attributes, but it is quite difficult to predict which reader will use which attribute, and whether they will use them at all;
3.3. You can store rounded numbers as strings or user-defined types;
4. Use write() with the edit descriptor 'Fw.1', see above.

Need pow(-1,1.2) to be 1

I am using math.h with GCC and GSL. I was wondering how to get this to evaluate?
I was hoping that the pow function would recognize pow(-1,1.2) as ((-1)^6)^(1/5). But it doesn't.
Does anybody know of a c++ library that will recognize these? Perhaps somebody has a decomposition routine they could share.
Mathematically, pow(-1, 1.2) is simply not defined. There are no powers with fractional exponents of negative numbers (in the reals), and I hope there is no library that will simply return some arbitrary value for such an expression. Would you also expect things like
pow(-1, 0.5) = ((-1)^2)^(1/4) = 1
which obviously isn't desirable.
Moreover, the floating point number 1.2 isn't even exactly equal to 6/5. The closest double precision number to 1.2 is
1.1999999999999999555910790149937383830547332763671875
Given this, what result would you expect now for pow(-1, 1.2)?
If you want to raise negative numbers to powers -- especially fractional powers -- use the C function cpow() from <complex.h> (in C++, the equivalent is std::pow on a std::complex from <complex>).
It seems like you're looking for pow(abs(x), y).
Explanation: you seem to be thinking in terms of
x^y = (x^N)^(y/N)
If we choose N = 2, then you have
(x^2)^(y/2) = ((x^2)^(1/2))^y
But
(x^2)^(1/2) = |x|
Substituting gives
|x|^y
This is a stretch, because the above manipulations only work for non-negative x, but you're the one who chose to use that assumption.
Sounds like you want to perform a complex power (cpow()) and then take the magnitude (abs()) of that after.
>>> abs(cmath.exp(1.2*cmath.log(-1)))
1.0
>>> abs(cmath.exp(1.2*cmath.log(-293.2834)))
913.57662451612202
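The same check in C++ terms, since the question is about C++ (std::abs on a std::complex is the complex magnitude):
#include <complex>
#include <cstdio>

int main() {
    const std::complex<double> minus_one(-1.0, 0.0);
    // |exp(1.2 * log(-1))| = |(-1)^1.2| = 1
    double r = std::abs(std::exp(1.2 * std::log(minus_one)));
    std::printf("%.15g\n", r);   // prints 1 (up to rounding)
    return 0;
}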
pow(a,b) is often thought of, defined as, and implemented as exp(log(a)*b), where log(a) is the natural logarithm of a. log(a) is not defined for a<=0 in the real numbers. So you need to write a function with special cases for negative a with integer b and/or b=1/(some_integer). It's easy to special-case integer b, but b=1/(some_integer) is prone to round-off problems, as Sven Marnach pointed out.
Maybe for your domain pow(-a,b) should always be -pow(a,b)? But then you'd just implement such a function, so I assume the question warrants more explanation.
Like duskwuff suggested, a much more robust and "mathematical" solution is to use the complex functions log and exp, but it's much more "complex" (excuse my pun) than it seems on the surface (even though there's a cpow function). And it'll be much slower if you have to compute a lot of pow()s.
Now there's an important catch with complex numbers that may or may not be relevant to your problem domain: when done right, the result of pow(a,b) is not one, but often a few complex numbers, but in the cases you care about, one of them will be complex number with nearly-zero imaginary part (it'll be non-zero due to roundoff errors) which you can simply ignore and/or not compute in your code.
To demonstrate it, consider what pow(-1,.5) is. It's a number X such that X^2==-1. Guess what? There are 2 such numbers: i and -i. Generally, pow(-1, 1/N) has exactly N solutions, although you're interested in only one of them.
If the imaginary part of all results of pow(a,b) is significant, it means you are passing wrong values. For single-precision floating point values in the range you describe, 1e-6*max(abs(a),abs(b)) would be a good starting point for defining the "significant enough" threshold. The extreme "wrong values" would be pow(-1,0.5) which would return 0 + 1i (0 in real part, 1 in imaginary part). Here the imaginary part is huge relative to the input and real part, so you know you screwed up your input values.
In any reasonable single-return-result implementation of cpow(), cpow(-1, 0.3333) will return the principal value, roughly 0.5 + 0.866i, rather than the nearly-real root -1 + 0.000001i; you have to select the nearly-real root from the N candidates yourself. Once you have it, you can just take its real value and that's your answer.
Use std::complex. Without that, the roots of unity don't make much sense. With it they make a whole lot of sense.
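To see those roots concretely for the OP's case, here is a small sketch enumerating the five values of the multivalued (-1)^1.2 = (-1)^(6/5); the k = 2 branch is the real answer 1 that the OP wants:
#include <complex>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    // The multivalued (-1)^(6/5) has 5 distinct values: exp(i*1.2*pi*(2k+1)).
    for (int k = 0; k < 5; ++k) {
        std::complex<double> w =
            std::exp(std::complex<double>(0.0, 1.2 * pi * (2 * k + 1)));
        std::printf("k=%d: %+.6f %+.6f i\n", k, w.real(), w.imag());
    }
    return 0;
}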