C++: atof() has wrong behaviour

I am using a library for loading Wavefront .obj files into my OpenGL application (tinyobjloader). I noticed that there is an error when loading objects. When I load an object with a coordinate of e.g. 0.9999999, it is set to 0. By debugging I found out that the following method produces this behaviour:
static inline float parseFloat(const char*& token)
{
    token += strspn(token, " \t");
    float f = (float)atof(token);
    token += strcspn(token, " \t\r");
    return f;
}
So atof() somehow returns an int, not a float. I read that some compilers don't issue a warning when atof() is used without including "stdlib.h", and that the result is then treated as an integer.
The curious thing is that even if I include "stdlib.h" the error remains. I can't figure out what causes this behaviour.
Any idea?

The standard says about atof:
Except for the behaviour on error, it is equivalent to
strtod(nptr,(char**)NULL)
so the fact that yours returns 0 has nothing to do with a float not being able to represent the value or anything similar.
If you used strtod instead (which you probably should when stringstreams are not an option, just to be able to report errors), you would likely notice that it stops parsing at the '.'.
This is a strong indication that you are using a locale that expects ',' instead of '.' as the decimal separator. Depending on how your application works with locales, you might want to run it with a properly set environment variable (e.g. LC_NUMERIC=C) or call setlocale(LC_NUMERIC, "C"); yourself before any parsing.
In any case you should analyze which parts of your application use locale-dependent things, and what for, so as not to collide with them. Another possible route is to require locale-dependent input everywhere, so that everyone has to give you the numbers with ',' as the decimal separator.
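A minimal sketch of that fix (the comma-locale name below is only an example and may not exist on every system; the point is the effect of LC_NUMERIC on atof/strtod):
#include <clocale>
#include <cstdlib>
#include <iostream>

int main()
{
    // Example locale that uses ',' as decimal separator; the name is an assumption and may be absent.
    std::setlocale(LC_NUMERIC, "de_DE.UTF-8");
    std::cout << std::atof("0.9999999") << '\n';   // parsing stops at '.', likely prints 0

    // Force '.' as the decimal separator before any parsing.
    std::setlocale(LC_NUMERIC, "C");
    std::cout << std::atof("0.9999999") << '\n';   // prints 1 (0.9999999 shown with default precision)
}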

You can see the documentation of atof here. Some floating-point values cannot be represented exactly in 32 bits, and hence you are getting an error and the value returned is zero.
// Try these
float f = 0.9999999;
cout << atof("0.9999999") << " " << f << endl; // output is 1 1
So what you are seeing is a valid behavior.
You may want to try strtod()
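A small sketch of strtod with its end pointer, which makes a silently truncated parse (such as the locale problem described in the other answer) visible instead of just returning 0:
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    const char* text = "0.9999999";
    char* end = nullptr;
    double d = std::strtod(text, &end);
    // If end == text, nothing was parsed; if it points into the middle,
    // parsing stopped early (for example at the '.').
    std::printf("value = %.7f, consumed %td of %zu characters\n",
                d, end - text, std::strlen(text));
}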

Related

Viewing double bit pattern in Visual Studio C++ Debugger

I'm working with IEEE-754 doubles, and I'd like to verify that the bit patterns match between different platforms. For this reason I would like to see the bit pattern of a double in the Visual Studio C++ Debugger.
I've tried format specifiers, but they don't seem to allow me to format a double as anything which would allow me to see the bit pattern.
One way I finally found was to use Memory View and enter the address of the variable (&x) in the address field. This allows me to set for instance 8-bit integer hex display, which gives me what I need. But is there any other more convenient way of formatting a double this way in the debugger?
To view the exact binary floating-point value you can print it as hexadecimal with %a/%A or std::hexfloat instead of examining its bit pattern:
printf("Hexadecimal: %a %A\n", 1.5, 1.5);
std::cout << std::hexfloat << 1.5 << '\n';
However, if you really need to view the actual bit pattern, then you just need to reinterpret the type of the underlying memory, e.g. auto bits = *reinterpret_cast<uint64_t*>(&doubleValue). You don't need to open the Memory view to achieve this; a simple cast works in the Watch window. So to get the bit pattern of a double or a float use *(__int64*)&doubleValue,x and *(int*)&floatValue,x respectively. This is technically a strict-aliasing violation, but you don't need to care about that in the MSVC debugger.
Note that __int64 is an MSVC-specific built-in type, so you might want to use long long instead; typedefs and macros like uint64_t won't work in the Watch window.
Alternatively you can access the bytes separately by casting to char* and printing them as an array with (char*)&doubleValue, 8, (char*)&floatValue, 4 or (char*)&floatingPoint, [sizeof floatingPoint]. Accessing through char* doesn't violate strict aliasing, but the output may be less readable.
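Outside the debugger, a small sketch that prints both forms from code (memcpy sidesteps the aliasing question entirely; the variable names are just examples):
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    double d = 1.5;
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);                  // copy the object representation
    std::printf("hexfloat: %a\n", d);                     // exact value, e.g. 0x1.8p+0
    std::printf("raw bits: 0x%016llx\n", (unsigned long long)bits);
}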

Why do I get platform-specific result for std::exp? [duplicate]

This question already has answers here:
Is floating point math broken?
Math precision requirements of C and C++ standard
I have a program that was giving slightly different results under Android and Windows. As I validate the output data against a binary file containing the expected results, the difference, even if very small (a rounding issue), is annoying and I must find a way to fix it.
Here is a sample program:
#include <iostream>
#include <iomanip>
#include <bitset>
#include <cmath>
int main( int argc, char* argv[] )
{
    // this value was identified as producing different results when used as a parameter to std::exp
    unsigned char val[] = {158, 141, 250, 206, 70, 125, 31, 192};
    double var = *((double*)val);
    std::cout << std::setprecision(30);
    std::cout << "var is " << var << std::endl;
    double exp_var = std::exp(var);
    std::cout << "std::exp(var) is " << exp_var << std::endl;
}
Under Windows, compiled with Visual 2015, I get the output:
var is -7.87234042553191493141184764681
std::exp(var) is 0.00038114128472300899284561093161
Under Android/armv7, compiled with g++ NDK r11b, I get the output:
var is -7.87234042553191493141184764681
std::exp(var) is 0.000381141284723008938635502307335
So the results differ starting at about the 1e-20 place:
PC: 0.00038114128472300899284561093161
Android: 0.000381141284723008938635502307335
Note that my program does a lot of math operations and I only noticed std::exp producing different results for the same input, and only for some specific input values (I did not investigate whether those values share some property); for most of them, the results are identical.
Is this behaviour somewhat "expected"? Is there no guarantee of getting the same result in some situations?
Is there some compiler flag that could fix that?
Or do I need to round my results so that I end up with the same values on both platforms? Then what would be a good strategy for rounding? Because rounding arbitrarily at 1e-20 would lose too much information if the input var is very small.
Edit: I do not consider my question a duplicate of Is floating point math broken?. I get exactly the same value on both platforms; only std::exp produces different results for some specific values.
The standard does not define how the exp function (or any other math library function1) should be implemented, thus each library implementation may use a different computing method.
For instance, the Android C library (bionic) uses an approximation of exp(r) by a special rational function on the interval [0,0.34658] and scales back the result.
Probably the Microsoft library is using a different computing method (I cannot find info about it), thus producing different results.
Also, the libraries could take a dynamic loading strategy (i.e. load a .dll containing the actual implementation) in order to leverage hardware-specific features, making the result even less predictable, even when using the same compiler.
In order to get the same implementation in both (all) platforms, you could use your own implementation of the exp function, thus not relying on the different implementations of the different libraries.
Take into account that the processors may also take different rounding approaches, which would likewise yield different results.
1 There are some exceptions to this, for instance the sqrt function, std::fma, some rounding functions, and the basic arithmetic operations.
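If the goal is to validate against reference data rather than to force bit-identical results, one common strategy is to allow a difference of a few ULPs instead of rounding at a fixed decimal place. A minimal sketch, assuming finite doubles of the same sign (the two exp results quoted in the question appear to differ by exactly one ULP):
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Distance in units in the last place between two finite doubles of the same sign.
std::int64_t ulp_distance(double a, double b)
{
    std::int64_t ia, ib;
    std::memcpy(&ia, &a, sizeof ia);
    std::memcpy(&ib, &b, sizeof ib);
    return std::llabs(ia - ib);
}

// Example policy: accept the computed value if it is within, say, 4 ULPs of the reference.
bool close_enough(double computed, double reference)
{
    return ulp_distance(computed, reference) <= 4;
}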

Understanding floating point variables and operators in c++ (Also a possible book error)

I am working through a beginning C++ class and my book (Starting Out with C++ Early Objects, 7th edition) has a very poor example of how to check the value of a floating point variable.
The book example in question(filename pr4-04.cpp):
// This program demonstrates how to safely test a floating-point number
// to see if it is, for all practical purposes, equal to some value.
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
    double result = .666667 * 6.0;
    // 2/3 of 6 should be 4 and, if you print result, 4 is displayed.
    cout << "result = " << result << endl;
    // However, internally result is NOT precisely equal to 4.
    // So test to see if it is "close" to 4.
    if (abs(result - 4.0 < .0001))
        cout << "result DOES equal 4!" << endl;
    else
        cout << "result DOES NOT equal 4!" << endl;
    return 0;
}
And I use g++ in Ubuntu to compile my code like this:
g++ pr4-04.cpp -o pr4-04 && ./pr4-04
And I get this error:
error: call of overloaded ‘abs(bool)’ is ambiguous
I am able to fix this by changing abs() to fabs(), but this is still super confusing! Why is the book giving us things which won't compile, or is this just me? Why does the cout of 'result' give 4 instead of 4.000002? Why does this value seem to change when it is used in the if{} statement?
I get that we can't just use == to check for equivalence, but why do I need to use the absolute value? I get the same answer whether or not I use it. So what is the point?
Not to mention, this seems like a very poor way to check for floating point equivalence. Is there a better way to do this? This topic seems awfully important.
I found this topic here on stackoverflow, but their solution:
fabs(f1 - f2) < precision-requirement
fabs(f1 - f2) < max(fabs(f1), fabs(f2)) * percentage-precision-requirement
Doesn't make much sense to me in the context of my 4 chapters worth of C++ experience. I would greatly appreciate some help. Our book has given me a whopping 6 sentences of text to explain all of this.
Edit: As suggested by some I tried to find an errata page, but after 30mins of searching the textbook, internet, and my course website I was only able to find this downloadable zip file, which required a login -_-
I also copied the code perfectly. That was not MY typo, I copied it directly from a CD with the code on it. It is also typed that way in the book.
if (abs(result - 4.0 < .0001))
The parentheses are wrong; you probably mean: if (abs(result - 4.0) < .0001).
As to why it did not compile, the standard states in §26.8p8 that
In addition to the double versions of the math functions in <cmath>, C++ adds float and long double overloaded versions of these functions, with the same semantics.
The expression (result-4.0 < .0001) yields a bool, and there is no overload of abs that takes a bool argument, but there are multiple versions of abs for which the argument is implicitly convertible from bool. The compiler does not find one of the conversion sequences better than the rest and bails out with the ambiguity error.
The problem is clearly the line
if (abs(result - 4.0 < .0001))
which should be written as
if (abs(result - 4.0) < .0001)
I would assume that this is a simple typo. Report the error to the author of the book!
BTW, the original code does compile on my system without any problem, giving the expected result! That is, even if the author tested the code he may not have noticed that it is problematic!
Also, answering the question on why abs() is needed: some decimal numbers are rounded to a floating-point value which is slightly smaller than the expected result, while others are rounded to a value which is slightly bigger. In which direction the values are rounded (if at all: some decimal numbers can be represented exactly using binary floating point) is hard to predict. Thus, the result may be slightly bigger or slightly smaller than expected, and the difference correspondingly positive or negative.
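As for a better general-purpose check, here is a sketch combining the two formulas quoted in the question (an absolute tolerance for values near zero plus a relative one elsewhere). The default tolerances are arbitrary examples, and the book's test corresponds roughly to nearly_equal(result, 4.0, 1e-4, 0.0):
#include <algorithm>
#include <cmath>

// True if a and b are equal within an absolute or a relative tolerance.
bool nearly_equal(double a, double b, double abs_tol = 1e-12, double rel_tol = 1e-9)
{
    double diff = std::fabs(a - b);
    if (diff <= abs_tol)                                   // handles values near zero
        return true;
    return diff <= rel_tol * std::max(std::fabs(a), std::fabs(b));
}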

Incorrect results from C++ math library's trigonometry functions

I'm currently working on a personal project that I've been doing for nearly a year now. I have been trying to port it over to a Windows environment, which has succeeded. Because I am trying to get a Windows version out to people soon, I decided to continue developing in Windows while I add new features and fix bugs that have existed for months. While recently attempting to add functionality which relied heavily on trigonometry, I found that all 3 trigonometric functions, oddly enough, returned the same value (1072693887) regardless of the parameter I passed. As you can imagine, this is leading to some rather strange bugs in the system.
I have math.h included, and to my knowledge no other files that would contain this function. (Perhaps there's a debugger command to find where a symbol is defined? I couldn't find any such thing, but perhaps I missed something.) I've tried asking elsewhere and searching around on Google, but to no avail...
Has anyone else heard of this problem before, or know how to fix it?
EDIT : This answer is not relevant. See comments.
This is probably due to numerical instability.
When you pass such a large value into sin(), cos(), or any of the periodic trig functions, you have to remember that there's an implicit modulo by 2*pi.
If you are using float, then the uncertainty of 1072693887 is way more than 2*pi. Therefore, whatever result you get is garbage.
We'll need to see some code to be able to see exactly what's going on though.
EDIT : Here's an illustration:
sin(1072693886) = 0.6783204666
sin(1072693887) = -0.2517863119
sin(1072693888) = -0.9504019164
But if the datatype is float, then the uncertainty of 1072693887 is +/- ~64...
1072693887 is 0x3FF0027F in hexadecimal, which represents roughly 1.875 in IEEE single-precision floating point. Are you sure your problem isn't just a representation one, i.e. that you are casting or viewing the result as an integer?
All I know is that GDB is telling me the result of it is 1072693887, that it's occurring with all 3 of my trig functions (and that the arc versions of all three of them just return -1072693887) regardless of what parameter I pass.
Might be a GDB issue. What happens if you just manually print the values to the console?
The math library is fine.
You realize that the functions expect radians as input, right?
E.g. :
double param = 90.0;
double rads = param * M_PI/180;
std::cout << std::fixed << "Angle : " << param << " sin : " << sin (rads) << " cos " << cos(rads);
Output :
Angle : 90.000000 sin : 1.000000 cos 0.000000

Writing numbers to a file with more precision - C++

I wrote some parameters (all of type double) to a file for use in performing some complex computations. I write the parameters to the files like so:
refStatsOut << "SomeParam:" << value_of_type_double << endl;
where refStatsOut is an ofstream object. There are four such parameters, each of type double. What I see written to the file is different from its actual value (there is a loss of precision). As an example, if value_of_type_double had the value -28.07270379934792, then what I see written in the file is -28.0727.
Also, once these stats have been computed and written I run different programs that use these statistics. The files are read and the values are initially stored as std::strings and then converted to double via atof functions. This results in the values that I have shown above and ruins the computations further down.
My question is this:
1. Is there a way to increase the resolution with which one can write values (of type double and the like) to a file so as to NOT lose any precision?
2. Could this also be a problem of std::string to double conversion with atof? If so, what other function could I use to solve this?
P.S: Please let me know in case some of the details in this question are not clear. I will try to update them and provide more details.
You can use the setprecision function.
ofstream your_file;
you can use your_file.precision(X);
The main difference between precision() and setprecision() is that precision(X) returns the precision that was in effect before the call, while the setprecision manipulator does not give you the old value back. Therefore, you can save and restore it like this:
streamsize old_precision = your_file.precision(X);
// do what ever you want
//restore precision
your_file.precision(old_precision);
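A sketch of a round trip that keeps all the digits: write with max_digits10 (or std::hexfloat) and read back with std::stod / strtod instead of atof. The file name and the "SomeParam:" parsing below are just illustrative assumptions based on the question:
#include <fstream>
#include <iomanip>
#include <iostream>
#include <limits>
#include <string>

int main()
{
    double value = -28.07270379934792;

    std::ofstream out("params.txt");
    out << std::setprecision(std::numeric_limits<double>::max_digits10)
        << "SomeParam:" << value << '\n';                  // 17 significant digits round-trip
    out.close();

    std::ifstream in("params.txt");
    std::string line;
    std::getline(in, line);
    double read_back = std::stod(line.substr(line.find(':') + 1));
    std::cout << (read_back == value) << '\n';             // prints 1
}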
A double occupies the same 64 bits as a 64-bit integer, so if you want a cheap way of writing it out exactly, you can do something like this (note: this assumes that your compiler uses a 64-bit long):
double value = 32985.932235;
long *saveme = (long*)&value;
Just beware of the caveat that the saved value may not remain the same if loaded back on a different architecture.
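If a binary format is acceptable, a sketch that writes the raw bytes directly and avoids both the pointer cast and the assumption about long (the file name is an example; as noted above, the bytes only round-trip between machines with the same representation and endianness):
#include <cstdio>

int main()
{
    double value = 32985.932235;

    // Write the raw 8 bytes of the double (error handling omitted for brevity).
    std::FILE* f = std::fopen("param.bin", "wb");
    std::fwrite(&value, sizeof value, 1, f);
    std::fclose(f);

    // Read them back into a double on the same architecture.
    double restored = 0.0;
    f = std::fopen("param.bin", "rb");
    std::fread(&restored, sizeof restored, 1, f);
    std::fclose(f);

    std::printf("%d\n", value == restored);                // prints 1
}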