Problems discovering glibc version in C++

My application is distributed with two binaries. One which is linked to an old glibc (for example, 2.17) and another binary that targets a newer glibc (for example, 2.34). I also distribute a launcher binary (linked against glibc 2.17) which interrogates the user's libc version and then runs the correct binary.
My code to establish the user's libc looks something like the following:
std::string gnu(gnu_get_libc_version());
double g_ver = std::stod(gnu);
if (g_ver >= 2.34)
{
    return GLIBC_MAIN;
}
else
{
    return GLIBC_COMPAT;
}
This works perfectly for the most part; however, some of my users report that despite having a new glibc, the old glibc binary is actually run. I have investigated this and discovered that double g_ver is equal to 2, not 2.34 as it should be. That is, the decimal part is missing. gnu_get_libc_version() always has the correct value, so it must be a problem when converting the string to a double.
I have also tried boost::lexical_cast, but it has the same effect.
std::string gnu(gnu_get_libc_version());
//double g_ver = std::stod(gnu);
double g_ver = boost::lexical_cast<double>(gnu);
if (g_ver >= 2.34)
{
    return GLIBC_MAIN;
}
else
{
    return GLIBC_COMPAT;
}
Needless to say, I am unable to reproduce this behaviour on any of my computers even when running the exact same distribution / version of Linux as affected users.
Does anybody know why boost::lexical_cast or std::stod sometimes misses the decimal part of the version number? Is there an alternative approach to this?
UPDATE
Upon further testing, this problem is introduced when using a different locale. I set my locale to fr_FR.UTF-8 on a test machine and was able to reproduce the problem. The output of gnu_get_libc_version() is still correct, but std::stod is unable to parse past the decimal point of the version.
Does anybody know how to correct this problem?
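For reference, here is a minimal reproduction of the behaviour described above, assuming the fr_FR.UTF-8 locale is installed on the machine:

#include <clocale>
#include <cstdio>
#include <string>

int main()
{
    // In fr_FR.UTF-8 the decimal separator is ',' rather than '.',
    // and std::stod (like strtod) honours the current C locale.
    std::setlocale(LC_ALL, "fr_FR.UTF-8");
    double v = std::stod("2.34");   // parsing stops at the '.'
    std::printf("%g\n", v);         // prints 2
}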

The fundamental problem is that the glibc version is a string and not a decimal number. So for a "proper" solution you need to parse it manually and implement your own logic to decide which version is bigger or smaller.
However, as a quick and dirty hack, try inserting the line
setlocale(LC_NUMERIC, "C");
before the std::stod call (std::stod is specified to behave like strtod, which honours the numeric locale). That will set the numeric locale back to the default "C" locale, where the decimal separator is '.'. If you're doing something that needs correct locales later in the program, you need to set it back again. Depending on how your program initialized locales earlier, something like
setlocale(LC_NUMERIC, "");
should reset it back to what the environment says the locale should be.
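Put together, a minimal sketch of the quick-and-dirty fix, saving and restoring whatever numeric locale was active (parse_glibc_version is a hypothetical helper name):

#include <clocale>
#include <string>

double parse_glibc_version(const std::string& gnu)
{
    // Remember the current numeric locale so it can be restored afterwards.
    std::string saved = std::setlocale(LC_NUMERIC, nullptr);
    std::setlocale(LC_NUMERIC, "C");   // '.' is the decimal separator in "C"
    double ver = std::stod(gnu);
    std::setlocale(LC_NUMERIC, saved.c_str());
    return ver;
}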

Could this be the locale of the user?
The decimal separator is not '.' in all locales, and std::stod uses the current locale.

As you are depending on glibc anyway, you can use strverscmp.
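For example, a sketch assuming _GNU_SOURCE is in effect (strverscmp is a GNU extension declared in <string.h>; GLIBC_MAIN and GLIBC_COMPAT stand in for the question's constants):

#include <string.h>             // strverscmp (GNU extension, needs _GNU_SOURCE)
#include <gnu/libc-version.h>   // gnu_get_libc_version

enum { GLIBC_COMPAT, GLIBC_MAIN };  // placeholders for the question's values

int pick_binary()
{
    // strverscmp orders version strings the way humans expect,
    // so "2.34" compares greater than "2.9".
    if (strverscmp(gnu_get_libc_version(), "2.34") >= 0)
        return GLIBC_MAIN;
    return GLIBC_COMPAT;
}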

gnu_get_libc_version() may return more than just X.X; it may be 2.0.1, for example. strtod won't parse past the second dot, so the result would equate to 2.0 in this case.
You need to verify the actual string value returned from gnu_get_libc_version on the versions you currently do not have access to.
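Building on that, a locale-independent sketch that parses the major and minor components as integers and avoids floating point entirely (glibc_at_least is a hypothetical helper name):

#include <cstdio>
#include <gnu/libc-version.h>

bool glibc_at_least(int want_major, int want_minor)
{
    int major = 0, minor = 0;
    // %d conversions are not affected by the decimal-separator locale issue,
    // and any ".z" patch component is simply ignored.
    if (std::sscanf(gnu_get_libc_version(), "%d.%d", &major, &minor) != 2)
        return false;   // unexpected format: prefer the compat binary
    return major > want_major || (major == want_major && minor >= want_minor);
}

With this, the launcher check becomes glibc_at_least(2, 34) ? GLIBC_MAIN : GLIBC_COMPAT.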

Related

sprintf formatting problem for doubles with high precision

So I was recently upgrading an old C++ project that was built using the Visual Studio 2012 - Windows XP (v110_xp) platform toolset. In the code of this project, there are some very precise double calculations requiring up to 20 digits of precision. These doubles were then saved to a string and printed using the printf APIs. Here is an example of something that would happen in this project:
double testVal = 123.456789;
// do some calculations on testVal
char str[100] = { 0 };
sprintf(str, "%.20le", testVal);
After this operation str = "1.23456789000...000e+02", which is what is expected.
However, once I updated the project to be compatible with Visual Studio 2019, using the Visual Studio 2019 (v142) platform toolset with C++17, the above-mentioned code produces different output for str.
After the call to sprintf to format the value to a string, str = "1.23456789000...556e+02". This problem isn't localized to this one value; there are even more egregious cases. For example, one of the starting values, "2234332.434322", after the sprintf formatting gets changed to "2.23433324343219995499e+07".
From all the documentation I've read on the "l" length modifier, it should be the correct character for converting doubles to strings. This behavior feels like a textbook float-to-double conversion problem, though.
I tried setting the project's floating-point model build argument to precise, strict, and then fast to see if any of these options would help, but none of them has any effect on the problem.
Does anyone know why this is happening?
Use the brand new Ryu (https://github.com/ulfjack/ryu) or Grisu-Exact (https://github.com/jk-jeon/Grisu-Exact) instead which are much faster than sprintf and guaranteed to be roundtrip-correct (and more), or the good old Double-Conversion (https://github.com/google/double-conversion) which is slower than the other two but has the same guarantees, still much faster than sprintf, and is battle-tested.
(Disclaimer: I'm the author of Grisu-Exact.)
I'm not sure if you really need to print out exactly 20 decimal digits, because I personally have had rare occasions where the number of digits mattered. If the sole purpose of having 20 digits is just not to lose any precision, then the above-mentioned libraries will definitely give you better (and shorter) results. If the number of digits must be precisely 20 for some reason, then, well, Ryu still provides such a feature (it's called Ryu-printf), which again has the round-trip guarantee and is much faster than sprintf.
EDIT
To elaborate more on the last sentence, note that in general it is impossible to have the roundtrip guarantee if the number of digits is fixed, because, well, if that fixed number is too small, say, 3, then there is no way to distinguish 0.123 and 0.1234. However, 20 is big enough so that the best approximation of the true value (which is what Ryu-printf produces) is always guaranteed to be roundtrip-correct.
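To illustrate the round-trip point with nothing but the standard library, here is a minimal sketch using the questioner's "%.20le" format (21 significant digits, comfortably above the 17 that max_digits10 guarantees are sufficient for a double):

#include <cassert>
#include <cstdio>
#include <cstdlib>

int main()
{
    double original = 2234332.434322;
    char buf[64];
    std::snprintf(buf, sizeof buf, "%.20le", original);
    double parsed = std::strtod(buf, nullptr);
    assert(parsed == original);   // 21 digits always round-trip exactly
    std::puts(buf);               // "2.23433324343219995499e+07"
}

The printed string matches the "changed" value from the question, which suggests the newer toolset is simply printing the correctly rounded digits of the stored double rather than padding with zeros; those digits still round-trip to the same value.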

Float comparison gives wrong result and precision changes

I have version numbers 1.1, 1.2, and 1.3, and I need to check whether a version is less than or greater than 1.2, but while debugging I get a different answer than expected.
float versionNumber = versionInfo.toFloat();
static float const VERSION_NUMBER(1.2);
if (abs(versionNumber - VERSION_NUMBER) <= 0.001)
{
    // do operation
}
versionNumber comes out as 1.10000005. I thought about changing the check from 0.001 to 0.0000005, but that may not be a correct fix.
Please suggest the best method.
Version numbers are inherently integers. You already have a class; this class should have an integer value for the major version, an integer value for the minor version, and comparison operators (that's what you are trying to do).
Other schemes even have a third patch integer, a string for alpha/RC, and so on. Implement this properly in a class that uses proper semantics (e.g. a method named is_one_minor_version_away that properly tests whether there is only one minor version change).
Also, what happens if it's 2.9 against 3.0 in your case?
Why don't you create a class that holds the major version and minor version of the passed versionInfo and add the comparison logic there? The idea would be not to parse this as a float, but to split it on the dot and store each part as an int.
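A minimal sketch of that idea, assuming the version string always has the major.minor form:

#include <sstream>
#include <string>

struct Version
{
    int major_ = 0;
    int minor_ = 0;

    explicit Version(const std::string& s)
    {
        char dot = 0;
        std::istringstream(s) >> major_ >> dot >> minor_;  // "1.2" -> 1, 2
    }

    bool operator<(const Version& rhs) const
    {
        return major_ < rhs.major_
            || (major_ == rhs.major_ && minor_ < rhs.minor_);
    }
};

// Usage: if (Version("1.1") < Version("1.2")) { /* do operation */ }

The trailing underscores simply sidestep the major()/minor() macros that some platform headers define.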

How to check if a value is 1.#INF0000?

I have, in short, this code:
vec3 contribX1 = Sample(O, D, 0);
if (std::isinf(contribX1.x)) {
    // ..do something..
}
According to my debugging, the Sample method sometimes returns an infinite value, and I have to solve that. But before doing so, I need the tools to debug properly. So I have been looking around and found std::isinf(), which should return a bool. Unfortunately, it seems I never enter that if condition, even though right afterwards I can inspect contribX1.x and it actually is 1.#INF0000. What am I doing wrong?
EDIT: The compiler is cl.exe; I am using Visual Studio 2013.
You can use std::isfinite to test whether the value is a valid, finite value (i.e. neither infinite nor NaN):
if (!std::isfinite(contribX1.x)){
should work for you. I think the issue here is that there are various bit patterns used to represent positive and negative infinity, along with NaN; in your situation, I think using this test should be fine.
I don't know your platform, but for Windows this related question is what helped me: std::isfinite on MSVC
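For what it's worth, a small sketch showing both checks against an explicitly constructed infinity (note that under MSVC's /fp:fast floating-point model such checks can be optimized away, which may be what you are seeing):

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    float x = std::numeric_limits<float>::infinity();  // shown as 1.#INF0000 in the VS2013 debugger
    std::printf("isinf:    %d\n", std::isinf(x));      // 1
    std::printf("isfinite: %d\n", std::isfinite(x));   // 0 (also 0 for NaN)
}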

dtoa vs sprintf vs Grisu3 algorithm

What is the best way to render double precision numbers as strings in C++?
I ran across the article Here be dragons: advances in problems you didn’t even know you had which discusses printing floating point numbers.
I have been using sprintf. I don't understand why I would need to modify the code.
If you are happy with sprintf_s, you shouldn't change. However, if you need to format your output in a way that is not supported by your library, you might need to reimplement a specialized version of sprintf (with any of the known algorithms).
For example JavaScript has very precise requirements on how its numbers must be printed (see section 9.8.1 of the specification). The correct output can't be accomplished by simply calling sprintf. Indeed, Grisu has been developed to implement correct number-printing for a JavaScript compiler.
Grisu is also faster than sprintf, but unless floating-point printing is a bottleneck in your application this should not be a reason to switch to a different library.
Ahah!
The problem outlined in the article you linked is that for some numbers, the computer displays something that is theoretically correct but not what we, humans, would have used.
For example, as the article says, 1.2999999... = 1.3, so if your result is 1.3, it's (quite) correct for the computer to display it as 1.2999999... But that's not what you would have expected...
Now, why does the computer do that? The reason is that the computer computes in base 2 (binary) while we usually compute in base 10 (decimal). The results are the same (thank God!) but the internal storage and the representation are not.
Some numbers look nice when displayed in base 10, like 1.3 for example, but others don't, for example 1/3 = 0.333333333... It's the same in base 2: some numbers "look" nice in base 2 (usually ones composed of fractions of 2) and others don't. When the computer stores a number internally, it may not be able to store it "exactly" and so stores the closest possible representation, even if the number looked "finite" in decimal. So yes, in this case, it "drifts" a little bit. If you do that again and again, you may lose precision. But there is no other way (unless you use special math libs able to store fractions).
The problem arises when the computer tries to give you back, in base 10, the number you gave it. Then the computer may give you 1.2999999 instead of the 1.3 you were expecting.
That's also the reason why you should never compare floats with ==, <, or > directly, but instead use the special functions islessgreater(a, b), isgreater(a, b), etc.
So the actual function you use (sprintf) is fine and as exact as it can be; it gives you correct values. You just have to know that, when dealing with floats, 1.2999999 at maximum precision is OK if you were expecting 1.3.
Now, if you want to "pretty print" those numbers to have the best "human" representation (base 10), you may want to use a special library, like your Grisu3, which will try to undo the drift that may have happened and align the number to the closest base-10 representation.
Now, the library cannot use a crystal ball to find which numbers drifted and which didn't, so it may happen that you really meant 1.2999999 at maximum precision as stored in the computer and the lib will "convert" it to 1.3... But that's no worse and no less precise than displaying 1.2999999 instead of 1.3.
If you need good readability, such a lib will be useful. If not, it's just a waste of time.
Hope this helps!
The best way to do this in any reasonable language is:
Use your language's runtime library. Don't ever roll your own. Even if you have the knowledge and curiosity to write it, you don't want to test it and you don't want to maintain it.
If you notice any misbehavior from the runtime library conversion, file a bug.
If these conversions are a measurable bottleneck for your program, don't try to make them faster. Instead, find a way to avoid doing them at all. Instead of storing numbers as strings, just store the floating-point data (after possibly controlling for endianness). If you need a string representation, use a hexadecimal floating-point format instead.
I don't mean to discourage you, or anyone. These are actually fascinating functions to work on, but they are also shockingly complex, and trying to design good test coverage for any non-naive implementation is even more involved. Don't get started unless you're prepared to spend months thinking about the problem.
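Regarding the hexadecimal floating-point suggestion above, a minimal sketch: the "%a" conversion (C99/C++11) prints the value's bits exactly, and strtod parses the result back losslessly, so no decimal rounding logic is needed at all.

#include <cstdio>
#include <cstdlib>

int main()
{
    double d = 0.1;
    char buf[32];
    std::snprintf(buf, sizeof buf, "%a", d);   // e.g. "0x1.999999999999ap-4"
    double back = std::strtod(buf, nullptr);   // hex floats parse exactly
    std::printf("%s round-trips: %d\n", buf, back == d);   // prints 1
}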
You might want to use something like Grisu (or a faster method) because it gives you the shortest decimal representation with a round-trip guarantee, unlike sprintf, which only takes a fixed precision. The good news is that C++20 includes std::format, which gives you this by default. For example:
printf("%.*g", std::numeric_limits<double>::max_digits10, 0.3);
prints 0.29999999999999999 while
puts(fmt::format("{}", 0.3).c_str());
prints 0.3 (godbolt).
In the meantime you can use the {fmt} library, which std::format is based on. {fmt} also provides the print function that makes this even easier and more efficient (godbolt):
fmt::print("{}", 0.3);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
In C++, why aren't you using iostreams? You should probably be using cout for the console and ostringstream for string-oriented output (unless you have a very specific need to use a printf-family method).
You shouldn't worry about formatting performance unless actual profiling shows that CPU is the bottleneck (compared to say I/O).
#include <sstream>

void outputdouble(std::ostringstream& oss, double d)
{
    oss.precision(5);   // five significant digits
    oss << d;
}
http://www.cplusplus.com/reference/iostream/ostringstream/

How to detect an overflow in C++?

I just wonder if there is some convenient way to detect whether overflow happens to any variable of any default data type used in a C++ program during runtime. By convenient, I mean without writing code to check each variable against the range of its data type every time its value changes. Or, if that is impossible to achieve, how would you do it?
For example,
float f1 = FLT_MAX + 1;
cout << f1 << endl;
doesn't give any error or warning, either when compiling with "gcc -W -Wall" or when running.
Thanks and regards!
Consider using Boost's numeric conversion library, which gives you negative_overflow and positive_overflow exceptions (examples).
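A minimal sketch of what that looks like (boost::numeric_cast throws boost::numeric::positive_overflow when the target type cannot represent the value):

#include <boost/numeric/conversion/cast.hpp>
#include <iostream>

int main()
{
    try
    {
        long long big = 1LL << 40;                  // too large for a 32-bit int
        int small = boost::numeric_cast<int>(big);  // throws positive_overflow
        std::cout << small << '\n';
    }
    catch (const boost::numeric::positive_overflow& e)
    {
        std::cout << e.what() << '\n';
    }
}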
Your example doesn't actually overflow in the default floating-point environment on an IEEE-754 compliant system.
On such a system, where float is 32-bit binary floating point, FLT_MAX is 0x1.fffffep127 in C99 hexadecimal floating-point notation. Written out as an integer in hex, it looks like this:
0xffffff00000000000000000000000000
Adding one (without rounding, as though the values were arbitrary precision integers), gives:
0xffffff00000000000000000000000001
But in the default floating-point environment on an IEEE-754 compliant system, any value between
0xfffffe80000000000000000000000000
and
0xffffff80000000000000000000000000
(which includes the value you have specified) is rounded to FLT_MAX. No overflow occurs.
Compounding the matter, your expression (FLT_MAX + 1) is likely to be evaluated at compile time, not runtime, since it has no side effects visible to your program.
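A quick sketch confirming this on an IEEE-754 system:

#include <cfloat>
#include <iostream>

int main()
{
    float f1 = FLT_MAX + 1;                 // rounds straight back to FLT_MAX
    std::cout << (f1 == FLT_MAX) << '\n';   // prints 1: no overflow occurred
}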
In situations where I need to detect overflow, I use SafeInt<T>. It's a cross-platform solution which throws an exception in overflow situations. Note that it works on integer types; your FLT_MAX example is floating point, where (as explained above) this kind of overflow doesn't occur.
SafeInt<int> i = INT_MAX;
i += 1; // throws
It is available on codeplex
http://www.codeplex.com/SafeInt/
Back in the old days when I was developing C++ (199x) we used a tool called Purify. Back then it was a tool that instrumented the object code and logged everything 'bad' during a test run.
I did a quick google and I'm not quite sure if it still exists.
As far as I know, several open-source tools exist nowadays that do more or less the same.
Check out Electric Fence and Valgrind.
Clang provides -fsanitize=signed-integer-overflow and -fsanitize=unsigned-integer-overflow.
http://clang.llvm.org/docs/UsersManual.html#controlling-code-generation
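A minimal sketch of the signed-integer sanitizer in action (overflow.cpp is a hypothetical file name; compile with clang++ -fsanitize=signed-integer-overflow overflow.cpp):

#include <climits>
#include <cstdio>

int main()
{
    int x = INT_MAX;
    x += 1;                  // UBSan prints a runtime error for this signed overflow
    std::printf("%d\n", x);
}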