I am wondering why I observe different results when using pow on x86 and x64. In our application we control the floating-point rounding mode, which has worked fine on both 32-bit Windows and Linux.
#include <cmath>
#include <cstdio>
#include <cfloat>
void checkEqual(double expected, double observed) {
    if (expected == observed) {
        printf("Expected %.23f, got %.23f\n", expected, observed);
    }
    else {
        printf("ERROR: Expected %.23f, got %.23f\n", expected, observed);
    }
}

int main(int argc, char **argv) {
    unsigned ret, tmp;
    _controlfp_s(&ret, 0, 0);
    _controlfp_s(&tmp, RC_DOWN, MCW_RC);
    checkEqual(2048.0, pow(2.0, 11.0));
    checkEqual(0.125, pow(2.0, -3.0));
    return 0;
}
Compiling and running with Visual Studio 2015 (2012 gives the same result) produces the following output:
x86:
Expected 2048.00000000000000000000000, got 2048.00000000000000000000000
Expected 0.12500000000000000000000, got 0.12500000000000000000000
x64:
ERROR: Expected 2048.00000000000000000000000, got 2047.99999999999977262632456
ERROR: Expected 0.12500000000000000000000, got 0.12499999999999998612221
Can anyone explain the difference? I know that floating-point calculations are inherently inexact, but for these specific values I would expect the function to produce the same result regardless of the rounding mode.
I investigated this some more and found that the real issue is not how pow is implemented but rather that x86 and x64 hardware are different as Hans Passant suggested.
There are many possible implementations of pow. For instance, pow(x, y) can be computed as exp(y*ln(x)). As you can imagine, exp(11.0*ln(2.0)) can suffer from internal rounding errors even though x and y are exactly representable.
However, it's quite common to call pow with integer arguments, so some libraries have optimized paths for those cases: pow(x, 2n) is pow(x, n) squared, and pow(x, 2n+1) is x*pow(x, 2n).
It seems your x86 and x64 implementations differ in this respect. That can happen: IEEE 754 only requires correctly rounded results for +, -, *, / and sqrt.
Related
I encountered a weird result in VC++ 2010:
enum dow : unsigned long long { mon = 0x800022223333ULL, tue };

int _tmain(int argc, _TCHAR* argv[])
{
    dow a = mon;
    unsigned long long b = 0x800022223333ULL;
    printf("%d\n", sizeof(a));
    printf("%llx\n", a); // is a 32-bit or 64-bit here?
    printf("%llx\n", (unsigned long long)a);
    printf("%llx\n", b);
    return 0;
}
I got some unexpected result:
8
1ff1d3f622223333
800022223333
800022223333
The 1ff1d3f622223333 is incorrect. After inspecting the generated assembly, I found that the compiler was passing a 32-bit 0x22223333 to printf for a, which was then interpreted as 64 bits by the %llx format specifier; hence the garbage 1ff1d3f6. Why is this so?
EDIT
I forgot to say that it was compiled as a 32-bit exe, in both Release and Debug configurations.
This appears to be a bug in that version of Visual Studio. The following code:
#include <cstdio>
enum dow : unsigned long long { mon = 0x800022223333ULL, tue };

int main()
{
    dow a = mon;
    unsigned long long b = 0x800022223333ULL;
    printf("%d\n", sizeof(a));
    printf("%llx\n", a); // is a 32-bit or 64-bit here?
    printf("%llx\n", (unsigned long long)a);
    printf("%llx\n", b);
    return 0;
}
produces a similar result in VS2010:
8
22223333
800022223333
800022223333
However, it looks like this has been fixed in later releases; the same code run in VS2015 Express gives:
8
800022223333
800022223333
800022223333
Since our copy of VS2010 has all patches installed, it looks like this was never fixed in that version. My suggestion, therefore (if you really need such large enumerator values), is to upgrade.
I have an application where I absolutely must use the long double data type, due to catastrophic truncation error when doing the math in double precision. My testing procedures are going crazy because on Windows, long double in Visual Studio is just an alias for double, while on Linux and OS X, long double is a real long double with a nominal precision of 1e-19.
On MinGW (the Windows port of GCC) I am confused. MinGW claims that LDBL_EPSILON is about 1e-19, but googling suggests that MinGW uses the Microsoft C runtime, which doesn't support real long doubles. Can anyone shed any light here?
EDIT: The crux of the problem is this: on MinGW, if I call the math function log(long double x), is this just an alias for log(double x)? Either way, how could I write my own test to detect this behavior?
The following code:
#include <iostream>
#include <cmath>
int main(void)
{
    long double y = 2.0L;
    std::cout << sizeof(y) << std::endl;
    long double q = sqrt(y);
    std::cout << q << std::endl;
    return 0;
}
produced the output 16 and 1.41421; so far so good.
Running it through the preprocessor (-E option) shows that an internal sqrt() overload, distinct from the double one, is called:
using ::sqrt;
inline constexpr float sqrt(float __x)
{ return __builtin_sqrtf(__x); }
inline constexpr long double sqrt(long double __x)
{ return __builtin_sqrtl(__x); }
The same holds for log(), sin(), you name it.
Thus, I believe MinGW supports the long double format both in arithmetic and in math functions, and that this support is built in, not libquadmath-based.
Just tried with MinGW (the MinGW distro from nuwen, gcc/g++ 4.9.1, 64-bit).
The following program:
#include <iostream>
int main(void)
{
    double x = 1.0;
    long double y = 2.0L;
    std::cout << sizeof(x) << std::endl;
    std::cout << sizeof(y) << std::endl;
}
produced
8
16
I would guess that long double is supported and distinct from standard double, so your computations might produce the desired result.
I've heard there are problems printing long doubles on Windows because MinGW links against Microsoft's old C runtime. You might have to use casts or roll your own output routines.
Here is sample code that is valid and compiles with gcc but not with the VS compiler:
#include <cmath>
int main()
{
    float x = 1233.23;
    x = round(x * 10) / 10;
    return 0;
}
But for some reason, when I compile this in Visual Studio, I get an error:
C3861: 'round': identifier not found
I even include <cmath>, as someone suggested here: http://www.daniweb.com/software-development/cpp/threads/270269/boss_loken.cpp147-error-c3861-round-identifier-not-found
Is this function in gcc only?
First of all, <cmath> is not guaranteed to bring round into the global namespace, so your code could fail even with an up-to-date, standards-compliant C++ implementation. To be sure, use std::round (or #include <math.h>).
Note that your C++ compiler must support C++11 for std::round (from <cmath>), and a C compiler must support C99 for round (from <math.h>). If your version of MSVC doesn't work after the fix I suggested, it may simply be that that particular version predates C++11, or is not standards compliant.
You can also use the Boost library:
#include <boost/math/special_functions/round.hpp>
const double a = boost::math::round(3.45); // = 3.0
const int b = boost::math::iround(3.45); // = 3
Visual Studio 2010 doesn't support C99's round, but it does provide ceil and floor, so you may want to add your own function, such as:
long long int round(float a)
{
    long long int result;
    // Note: rounds halfway cases up, which differs from C99 round
    // (round away from zero) for negative inputs.
    if (a - floor(a) >= 0.5)
        result = floor(a) + 1;
    else
        result = floor(a);
    return result;
}
Neither std::round nor round from <math.h> is supported in VS 2010, in my experience.
I would use floor twice to get the correctly rounded value:
long long int round(float a)
{
    long long int result = (long long int)floor(a);
    return result + (long long int)floor((a - result) * 2);
}
I use vsprintf to write the content to a file. The output format is:
"tt2:%f, tt2:%x", tt2, *((int *)&tt2)
On Linux (gcc 4.4.5, -O2 -ffloat-store), the file contains:
tt2:30759.257812, tt2:46f04e84
On Windows (VS2005 SP1, /O2, /fp:precise), the file contains:
tt2:30759.257813, tt2:46f04e84
Why are they different?
==================================
I have found the reason in my case.
On Windows I used ofstream (the C++ library) to write the file, while on Linux I used write (the C library). When I use ofstream on Linux as well, the output is the same.
Thanks, everyone!
Floating-point numbers are stored in the computer in binary. When printing them in decimal, there can be multiple correct representations. In your case both are correct, since both convert back to the original binary floating-point value. Look at the output of this program, which I compiled with GCC:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main()
{
    float a = 30759.257812f;
    float b = 30759.257813f;
    uint32_t ua, ub;
    memcpy(&ua, &a, sizeof ua); /* well-defined way to inspect the bits */
    memcpy(&ub, &b, sizeof ub);
    printf("%x\n%x\n", ua, ub);
}
Output:
46f04e84
46f04e84
Therefore, an implementation of printf and friends may choose to display any of the two decimal floating-point numbers.
I want to look at the code inside the math library's sqrt() function. How is that possible? I am using Dev-C++.
This stuff gets compiled into the toolchain runtime, but since GCC and its Windows port MinGW (which is what your Dev-C++ IDE invokes) are open-source, you can just take a look at the source.
Here it is for latest MinGW GCC; both versions appear to defer basically all of the work to the processor (which is not a great surprise, seeing as x86 — by way of the x87 part of the instruction set — supports square root calculations natively).
long double values
#include <math.h>
#include <errno.h>
extern long double __QNANL;
long double
sqrtl (long double x)
{
  if (x < 0.0L)
    {
      errno = EDOM;
      return __QNANL;
    }
  else
    {
      long double res;
      asm ("fsqrt" : "=t" (res) : "0" (x));
      return res;
    }
}
float values
#include <math.h>
#include <errno.h>
extern float __QNANF;
float
sqrtf (float x)
{
  if (x < 0.0F)
    {
      errno = EDOM;
      return __QNANF;
    }
  else
    {
      float res;
      asm ("fsqrt" : "=t" (res) : "0" (x));
      return res;
    }
}
Square roots are calculated by the floating-point unit of the processor, so there is not much C++ to learn there...
EDIT:
x86 instructions
http://en.wikipedia.org/wiki/X86_instruction_listings
http://en.wikipedia.org/wiki/X87
FSQRT - Square root
Even back in the day: en.wikipedia.org/wiki/8087
If there's no source code for your sqrt(), you can always disassemble it. Inspecting the code would be one type of checking.
You can also write a test for sqrt(). That would be the other type of checking.