I am running a program in C++ on Windows and on Linux.
The output is meant to be identical.
I am trying to make sure that the only differences are real differences, as opposed to working-environment differences.
So far I have taken care of all the differences that can be caused by \r\n line endings,
but there is one thing that I can't seem to figure out:
in the Windows output there is a 0.000 and on Linux it is -0.000.
Does anyone know what could be making the difference?
Thanks
Probably it comes from differences in how the optimizer handles some FP calculations (this can be configurable - see e.g. here); in one case you get a value slightly less than 0, in the other slightly more. Both are rounded to 0.000 in the output, but they keep their "real" sign.
Since in the IEEE floating point format the sign bit is separate from the value, you have two different values of 0, a positive and a negative one. In most cases it doesn't make a difference; both zeros will compare equal, and they indeed describe the same mathematical value (mathematically, 0 and -0 are the same). Where the difference can be significant is when you have underflow and need to know whether the underflow occurred from a positive or from a negative value. Also if you divide by 0, the sign of the infinity you get depends on the sign of the 0 (i.e. 1/+0.0 gives +Inf, but 1/-0.0 gives -Inf). In other words, most probably it won't make a difference for you.
Note however that the different output does not necessarily mean that the number itself is different. It could well be that the value in Windows is also -0.0, but the output routine on Windows doesn't distinguish between +0.0 and -0.0 (they compare equal, after all).
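For instance, a small illustration of this behaviour (a sketch only, assuming an IEEE-754 platform such as a typical x86/x64 build):

#include <cmath>
#include <cstdio>

int main() {
    double pz = 0.0, nz = -0.0;
    std::printf("pz == nz    : %d\n", pz == nz);          // 1: the two zeros compare equal
    std::printf("signbit(nz) : %d\n", std::signbit(nz));  // 1: but the sign bit is still set
    std::printf("1.0 / pz    : %f\n", 1.0 / pz);          // +inf
    std::printf("1.0 / nz    : %f\n", 1.0 / nz);          // -inf
    std::printf("printed     : %.3f\n", nz);              // may show -0.000 or 0.000, depending on the runtime
}

The last line is exactly the situation described above: whether the minus sign shows up is a property of the output routine, not necessarily of the value.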
Unless you use (unsafe) flags like -ffast-math, the compiler is limited in the assumptions it can make when 'optimizing' IEEE-754 arithmetic. First check that both platforms are using the same rounding mode.
Also, if possible, check that they are using the same floating-point unit, i.e., SSE vs. the x87 FPU on x86. The latter can also be an issue with math library function implementations - especially trigonometric / transcendental functions.
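As a starting point, a minimal check of the rounding mode (a sketch using the standard <cfenv> facilities available in C++11 and later on both platforms):

#include <cfenv>
#include <cstdio>

int main() {
    switch (std::fegetround()) {
        case FE_TONEAREST:  std::puts("rounding: to nearest");   break;
        case FE_DOWNWARD:   std::puts("rounding: toward -inf");  break;
        case FE_UPWARD:     std::puts("rounding: toward +inf");  break;
        case FE_TOWARDZERO: std::puts("rounding: toward zero");  break;
        default:            std::puts("rounding: unknown mode"); break;
    }
}

Running this in both builds tells you whether a rounding-mode difference can be ruled out before digging into code generation.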
Related
I am trying to figure out how to set ROUND_UP, ROUND_DOWN, ROUND_to_NEAREST, and ROUND_to_INFINITY for an MS Visual Studio project.
The representation of floating-point numbers should follow the IEEE 754 standard, which is why /fp:strict is selected. However, the code is running in an x64 environment.
Through careful selection of rounding mode, I want to cause -0.000000 to be equal to -0.000001, for example.
Cheers
Commodore
I am making some computations and saving the results (tuples). After each operation, I query the saved data to know whether I have already seen the value. (-0.000000, -0.202319) would be equal to (-0.000001, -0.202319) when rounding to nearest. How can I do this with Visual Studio?
In general, == and != for floating-point are not a 'safe' method for doing floating-point comparison except in the specific case of 'binary representation equality' even using 'IEEE-754 compliant' code generation. This is why clang/LLVM for example has the -Wfloat-equal warning.
Be sure to read What Every Computer Scientist Should Know About Floating-Point Arithmetic. I'd also recommend reading the many great Bruce Dawson blog posts on the topic of floating-point.
Instead, you should explicitly use an 'epsilon' comparison:
#include <cmath>   // for fabsf

constexpr float c_EPSILON = 0.000001f;

if (fabsf(a - b) <= c_EPSILON)
{
    // A & B are equal within the epsilon value.
}
In general, you can't assume that SSE/SSE2-based floating-point math (required for x64) will match legacy x87-based floating-point math, and in many cases you can't even assume AMD and Intel will always agree even with /fp:strict.
For example, IEEE-754 doesn't specify what happens with fast reciprocal operations such as RCP or RSQRT. AMD and Intel give different answers in these cases in the lower bits.
With all that said, you are intended to use _controlfp or _controlfp_s rather than _control87 to control rounding mode behavior for all platforms. _control87 really only works for 32-bit (x86) platforms when using /arch:IA32.
Keep in mind that changing the floating-point control word and then calling code outside of your control is often problematic. Code generally assumes the default of "no exceptions, round to nearest", and deviating from that can result in untested/undefined behavior. The only safe pattern is to change the control word, run your own code, then change it back to the default. See this old DirectX article.
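As a rough illustration of that pattern (a sketch only, using the documented _controlfp_s interface from <float.h>; Windows-specific, and the _RC_UP choice is just an example):

#include <float.h>
#include <cstdio>

int main() {
    unsigned int original = 0;
    _controlfp_s(&original, 0, 0);           // mask 0: just read the current control word

    unsigned int unused = 0;
    _controlfp_s(&unused, _RC_UP, _MCW_RC);  // change only the rounding-control field

    // ... your own, self-contained computations here ...

    _controlfp_s(&unused, original & _MCW_RC, _MCW_RC);  // restore the previous rounding mode
    std::puts("rounding mode restored");
}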
That’s not how it works. Choosing the rounding mode affects the last bit of the result, that’s it. Comparisons are not affected. And comparisons between 0.000001 and 0.234567 are most definitely not affected.
What you want cannot be achieved with rounding modes. Feel free to write a function that returns true if two numbers are close together.
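If it helps, here is what such a function might look like for the tuples in the question (a sketch only; the 1e-5 tolerance is an illustrative choice, not something rounding modes or the standard give you):

#include <cmath>
#include <utility>

bool nearly_equal(double a, double b, double tol = 1e-5) {
    return std::fabs(a - b) <= tol;
}

bool tuples_match(const std::pair<double, double>& p,
                  const std::pair<double, double>& q) {
    return nearly_equal(p.first, q.first) && nearly_equal(p.second, q.second);
}

// With this tolerance, (-0.000000, -0.202319) and (-0.000001, -0.202319) compare as equal.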
I'm in the process of converting a program from Scilab code to C++. One loop in particular is producing a slightly different result than the original Scilab code (it's a long piece of code so I'm not going to include it in the question but I'll try my best to summarise the issue below).
The problem is, each step of the loop uses calculations from the previous step. Additionally, the difference between calculations only becomes apparent around the 100,000th iteration (out of approximately 300,000).
Note: I'm comparing the output of my C++ program with the output of Scilab 5.5.2 using the "format(25);" command, meaning I'm comparing 25 significant digits. I'd also like to point out that I understand precision cannot be guaranteed after a certain number of bits, but please read the sections below before commenting. So far, all calculations have been identical up to 25 digits between the two languages.
In attempts to get to the bottom of this issue, so far I've tried:
Examining the data type being used:
I've managed to confirm that Scilab is using IEEE 754 doubles (according to the language documentation). Also, according to Wikipedia, C++ isn't required to use IEEE 754 for doubles, but from what I can tell, everywhere I use a double in C++ it has perfectly matched Scilab's results.
Examining the use of transcendental functions:
I've also read from What Every Computer Scientist Should Know About Floating-Point Arithmetic that IEEE does not require transcendental functions to be exactly rounded. With that in mind, I've compared the results of these functions (sin(), cos(), exp()) in both languages and again, the results appear to be the same (up to 25 digits).
The use of other functions and predefined values:
I repeated the above steps for the use of sqrt() and pow(), as well as the value of pi (I'm using M_PI in C++ and %pi in Scilab). Again, the results were the same.
Lastly, I've rewritten the loop (very carefully) in order to ensure that the code is identical between the two languages.
Note: Interestingly, I noticed that for all the above calculations, the two languages' results match each other to more digits than they match the true result of the calculations (outside of floating point arithmetic). For example:
Value of sin(x) using Wolfram Alpha = 0.123456789.....
Value of sin(x) using Scilab & C++ = 0.12345yyyyy.....
So even once the value computed by Scilab or C++ starts to differ from the actual result (from Wolfram), each language's result still matches the other's. This leads me to believe that most of the values are being calculated in the same way between the two languages, even though they're not required to be by IEEE 754.
My original thinking was that one of the first three points above is implemented differently between the two languages. But from what I can tell, everything seems to produce identical results.
Is it possible that even though all the inputs to these loops are identical, the results can be different? Possibly because a very small error (past what I can see with 25 digits) is occurring that accumulates over time? If so, how can I go about fixing this issue?
No, using the same number format does not guarantee equivalent answers from functions in different languages.
Functions, such as sin(x), can be implemented in different ways, using the same language (as well as different languages). The sin(x) function is an excellent example. Many implementations will use a look-up table or look-up table with interpolation. This has speed advantages. However, some implementations may use a Taylor Series to evaluate the function. Some implementations may use polynomials to come up with a close approximation.
Having the same numeric format is one hurdle to solve between languages. Function implementation is another.
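To make the point concrete, here is a deliberately naive truncated Taylor series for sin (a sketch only, not how any real libm is written): different truncation points, summation orders, or polynomial choices all land on slightly different last bits, which is exactly the kind of divergence you can see between two environments.

#include <cmath>
#include <cstdio>

// sin(x) = x - x^3/3! + x^5/5! - ..., truncated after 'terms' terms
double sin_taylor(double x, int terms) {
    double term = x;
    double sum = x;
    for (int n = 1; n < terms; ++n) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum += term;
    }
    return sum;
}

int main() {
    double x = 0.5;
    std::printf("library sin : %.17g\n", std::sin(x));
    std::printf("taylor, 5   : %.17g\n", sin_taylor(x, 5));
    std::printf("taylor, 10  : %.17g\n", sin_taylor(x, 10));
}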
Remember, you need to consider the platform as well. A program that uses an 80-bit floating point processor will have different results than a program that uses a 64-bit floating point software implementation.
Some architectures provide the capability of using extended precision floating point registers (e.g. 80 bits internally, versus 64-bit values in RAM). So, it's possible to get slightly different results for the same calculation, depending on how the computations are structured, and the optimization level used to compile the code.
Yes, it's possible to get different results. It's possible even if you are using exactly the same source code in the same programming language for the same platform. Sometimes it's enough to have a different compiler switch; for example, -ffast-math leads the compiler to optimize your code for speed rather than accuracy, and, if your computational problem is not well-conditioned to begin with, the result may be significantly different.
For example, suppose you have this code:
x_8th = x*x*x*x*x*x*x*x;
One way to compute this is to perform 7 multiplications. This would be the default behavior for most compilers. However, you may want to speed this up by specifying the compiler option -ffast-math, and the resulting code would have only 3 multiplications:
temp1 = x*x; temp2 = temp1*temp1; x_8th = temp2*temp2;
The result would be slightly different because finite precision arithmetic is not associative, but sufficiently close for most applications and much faster. However, if your computation is not well-conditioned, that small error can quickly get amplified into a large one.
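A tiny self-contained demonstration of the non-associativity point (the addition case below is a classic one; the x_8th regrouping above can show the same effect for some inputs):

#include <cstdio>

int main() {
    double a = (0.1 + 0.2) + 0.3;   // typically 0.6000000000000001 with IEEE-754 doubles
    double b = 0.1 + (0.2 + 0.3);   // typically 0.6 exactly
    std::printf("a = %.17g\n", a);
    std::printf("b = %.17g\n", b);
    std::printf("a == b: %d\n", a == b);
}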
Note that it is possible that Scilab and C++ are not using the exact same instruction sequence, or that one uses the FPU and the other uses SSE, so there may not be a way to get them to be exactly the same.
As commented by IInspectable, if your compiler has _control87() or something similar, you can use it to change the precision and/or rounding settings. You could try combinations of this to see if it has any effect, but again, even if you manage to get the settings identical for Scilab and C++, differences in the actual instruction sequences may be the issue.
http://msdn.microsoft.com/en-us/library/e9b52ceh.aspx
If SSE is used, I'm not sure what can be adjusted, as I don't think SSE has an 80-bit precision mode.
In the case of using FPU in 32 bit mode, and if your compiler doesn't have something like _control87, you could use assembly code. If inline assembly is not allowed, you would need to call an assembly function. This example is from an old test program:
static short fcw; /* 16 bit floating point control word */
/* ... */
/* set precision control to extended precision */
__asm {
    fnstcw fcw
    or     fcw, 0300h
    fldcw  fcw
}
I am porting some program from Matlab to C++ for efficiency. It is important for the output of both programs to be exactly the same (**).
I am facing different results for this operation:
std::sin(0.497418836818383950) = 0.477158760259608410 (C++)
sin(0.497418836818383950) = 0.47715876025960846000 (Matlab)
N[Sin[0.497418836818383950], 20] = 0.477158760259608433 (Mathematica)
So, as far as I know, both C++ and Matlab are using IEEE 754 double arithmetic. I think I have read somewhere that IEEE 754 allows different results in the last bit. Using Mathematica to decide, it seems like C++ is closer to the true result. How can I force Matlab to compute the sin with precision down to the last bit, so that the results are the same?
In my program this behaviour leads to big errors because the numerical differential equation solver keeps amplifying this last-bit error. However, I am not sure that the C++ ported version is correct. I am guessing that even if IEEE 754 allows the last bit to be different, it somehow guarantees that this error does not grow when the result is used in more IEEE 754 double operations (because otherwise, two different programs, each correct according to the IEEE 754 standard, could produce completely different outputs). So the other question is: am I right about this?
I would like to get an answer to both bolded questions. Edit: The first question is being quite controversial, but it is the less important one; can someone comment on the second?
Note: This is not an error in the printing, just in case you want to check, this is how I obtained these results:
http://i.imgur.com/cy5ToYy.png
Note (**): What I mean by this is that the final output, which is the result of some calculations shown as real numbers with 4 decimal places, needs to be exactly the same. The error I talk about in the question keeps growing (because of more operations, each of which differs between Matlab and C++), so the final differences are huge. (If you are curious enough to see how the difference starts growing, here is the full output [link soon], but this has nothing to do with the question.)
Firstly, if your numerical method depends on the accuracy of sin to the last bit, then you probably need to use an arbitrary precision library, such as MPFR.
The IEEE754 2008 standard doesn't require that the functions be correctly rounded (it does "recommend" it though). Some C libms do provide correctly rounded trigonometric functions: I believe that the glibc libm does (typically used on most linux distributions), as does CRlibm. Most other modern libms will provide trig functions that are within 1 ulp (i.e. one of the two floating point values either side of the true value), often termed faithfully rounded, which is much quicker to compute.
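If you want to see how far apart two computed values actually are, counting ulps with std::nextafter is a handy check (a sketch; the two constants below are just the question's printed C++ and Matlab values, which round to adjacent doubles):

#include <cmath>
#include <cstdio>

// Count how many representable doubles lie between a and b (capped at 'limit').
int ulp_distance(double a, double b, int limit = 16) {
    int steps = 0;
    while (a != b && steps < limit) {
        a = std::nextafter(a, b);   // step a one representable value toward b
        ++steps;
    }
    return steps;
}

int main() {
    double cpp_result    = 0.477158760259608410;
    double matlab_result = 0.47715876025960846;
    std::printf("ulp distance: %d\n", ulp_distance(cpp_result, matlab_result));   // 1: the literals round to adjacent doubles
}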
None of those values you printed could actually arise as IEEE 64bit floating point values (even if rounded): the 3 nearest (printed to full precision) are:
0.477158760259608 405451814405751065351068973541259765625
0.477158760259608 46096296563700889237225055694580078125
0.477158760259608 516474116868266719393432140350341796875
The possible values you could want are:

1. The exact sin of the decimal .497418836818383950, which is

   0.477158760259608 433132061388630377105954125778369485736356219...

   (this appears to be what Mathematica gives).

2. The exact sin of the 64-bit float nearest .497418836818383950:

   0.477158760259608 430531153841011107415427334794384396325832953...

In both cases, the first of the three doubles listed above is the nearest (though only barely in case 1).
The sine of the double constant you wrote is about 0x1.e89c4e59427b173a8753edbcb95p-2, whose nearest double is 0x1.e89c4e59427b1p-2. To 20 decimal places, the two closest doubles are 0.47715876025960840545 and 0.47715876025960846096.
Perhaps Matlab is displaying a truncated value? (EDIT: I now see that the fourth-last digit is a 6, not a 0. Matlab is giving you a result that's still faithfully rounded, but it's the farther of the two closest doubles to the desired result, and it's still printing out the wrong number.)
I should also point out that Mathematica is probably trying to solve a different problem---compute the sine of the decimal number 0.497418836818383950 to 20 decimal places. You should not expect this to match either the C++ code's result or Matlab's result.
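One practical way to check which double each environment actually produced (a sketch; on the Matlab side, num2hex should give you the comparable bit pattern) is to print enough digits to round-trip, plus the hex-float form:

#include <cmath>
#include <cstdio>

int main() {
    double x = 0.497418836818383950;
    double s = std::sin(x);
    std::printf("17 digits : %.17g\n", s);   // enough digits to uniquely identify the double
    std::printf("hex float : %a\n", s);      // exact bit-level representation
}

Comparing those two forms between the C++ build and Matlab tells you whether the results really differ by one ulp or whether one side is merely displaying the value differently.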
Here is my code:
#include <cstdio>

int a = 0x451998a0;
float b = *((float *)&a);   // reinterpret the bits of a as a float
printf("convert to float: %f, %.10lf\n", b, b);
On Windows the output is:
convert to float: 2457.539063, 2457.5390625000
On Linux the output is:
convert to float: 2457.539062, 2457.5390625000
Is there any way to make sure the output is the same?
The behavior you're seeing is just a consequence of the fact that Windows' printf() function is implemented differently from Linux's printf() function. Most likely the difference is in how each implementation rounds halfway cases: the stored value here is exactly 2457.5390625 (as your %.10lf output shows), so rounding it to six decimal places is a tie, and the two C runtimes break that tie differently.
How printf() works under the hood in either system is an implementation detail; thus the system is not likely to provide such fine-grained control on how printf() displays the floating point values.
There are two ways that may work to keep them the same:
Use more precision during calculation than while displaying it. For example, some scientific and graphing calculators use double precision for all internal calculations, but display the results with only float precision.
Use a cross-platform printf() library. Such libraries would most likely have the same behavior on all platforms, as the calculations required to determine what digits to display are usually platform-agnostic.
However, this really isn't as big of a problem as you think it is. The difference between the outputs is 0.000001, a relative difference of roughly 0.00000004% from either of the two values. The display error is really quite negligible.
Consider this: the distance between Los Angeles and New York is 2464 miles, which is of the same order of magnitude as the numbers in your display outputs. A difference of 0.000001 miles is 1.61 millimeters. We of course don't measure distances between cities with anywhere near that kind of precision. :-)
If you use the same printf() implementation, there's a good chance they'll show the same output. Depending on what you're up to, it may be easier to use GNU GCC on both OSes, or to get printf() source code and add it to your project (you should have no trouble googling one).
BTW, have you actually checked what that hex number encodes? Should it round up or down? The trailing 625 may itself be rounded, so you shouldn't assume it should round up to ...63.
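Checking what the bits encode is easy enough (a sketch; memcpy is used instead of the pointer cast to keep the type punning well defined):

#include <cstdio>
#include <cstring>

int main() {
    unsigned int bits = 0x451998a0u;
    float f;
    std::memcpy(&f, &bits, sizeof f);
    std::printf("%.10f\n", f);        // prints 2457.5390625000, matching the question's output, so the value ends in ...0625 exactly
    std::printf("%a\n", (double)f);   // the hex-float form shows the exact value with no decimal rounding
}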
The obvious answer is to use less precision in your output. In general, if there's any calculation involved, you can't even be sure that the actual floating point values are identical, and how printf and ostream round is implementation defined even when the floating point values are equal.

In general, C++ doesn't guarantee that two implementations produce the same results. In this particular case, if it's important, you can do the rounding by hand before doing the conversion, but you'll still have occasional problems because the actual floating point values will be different. This may, in fact, occur even with different levels of optimization with the same compiler. So anything you try (other than writing the entire program in assembler) is bound to be a losing battle in the end.
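For the "rounding by hand" route, a sketch of the idea (assuming only a few decimal places matter in the final output):

#include <cmath>
#include <cstdio>

int main() {
    double v = 2457.5390625;                              // the value from the question
    double rounded = std::round(v * 10000.0) / 10000.0;   // round to 4 decimal places yourself
    std::printf("%.4f\n", rounded);                       // the platform's tie-breaking no longer matters
}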
I am checking to make sure a float is not zero. It is impossible for the float to become negative. So is it faster to do this float != 0.0f or this float > 0.0f?
Thanks.
Edit: Yes, I know this is micro-optimisation. But this is going to be called every time through my game loop, and I would like to know anyway.
There is not likely to be a detectable difference in performance.
Consider, for entertainment purposes only:
Only two floating point values compare equal to 0.0f: positive zero and negative zero, and they differ only in one bit. So circuitry/software emulation that tests whether the 31 non-sign bits are clear will do it.
The comparison > 0.0f is slightly more complicated, since negative numbers and 0 result in false, positive numbers result in true, but NaNs (of both signs) also result in false, so it's slightly more than just checking the sign bit.
Depending on the floating point mode, either operation could cause a super-precise result in a floating point register to be rounded to 32 bits before comparison, so the score's even there.
If there was a difference at all, I'd sort of expect != to be faster, but I wouldn't really expect there to be a difference and I wouldn't be very surprised to be wrong on some particular implementation.
I assume that your proof that the value cannot be negative is not subject to floating point errors. For example, calculations along the lines of 1/2.0 - 1/3.0 - 1/6.0 or 0.4 - 0.2 - 0.2 can result in either positive or negative values if the errors happen to accumulate rather than cancel, so presumably nothing like that is going on. About the only real use of a floating-point test for equality with 0 is to test whether you have assigned a literal 0 to it, or the result of some other calculation guaranteed to have result 0 in float, but that can be tricky.
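For what it's worth, a quick check of the two example expressions (a sketch; the exact outcomes depend on the platform's double arithmetic, though common IEEE-754 setups behave as the comments suggest):

#include <cstdio>

int main() {
    double a = 1/2.0 - 1/3.0 - 1/6.0;
    double b = 0.4 - 0.2 - 0.2;
    std::printf("1/2 - 1/3 - 1/6 = %.17g\n", a);   // typically a tiny nonzero value around 1e-17, not 0
    std::printf("0.4 - 0.2 - 0.2 = %.17g\n", b);   // typically exactly 0 -- here the errors happen to cancel
}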
It is not possible to give a clear cut answer without knowing your platform and compiler. The C standard does not define how floats are implemented.
On some platforms, yes, on other platforms, no.
If in doubt, measure.
As far as I know, f != 0.0f will sometimes return true when you think it should be false.
To check whether a float is non-zero, you should do std::fabs(f) > EPSILON, where EPSILON is the error you can tolerate.
Performance shouldn't be a big issue in this comparison.
This is almost certainly the sort of micro-optimization you shouldn't do until you have quantitative data showing that it's a problem. If you can prove it's a problem, you should figure out how to make your compiler show the machine instructions it's generating, then take that info and go to the data book for the processor you are using, and look up the number of clock cycles required for alternative implementations of the same logic. Then you should measure again to make sure you are seeing the benefits, if any.
If you don't have any data showing that it's a performance problem, stick with the implementation that most clearly and simply presents the logic of what you are trying to do.