Float comparison gives wrong result and precision changes - c++

I have version numbers 1.1, 1.2, and 1.3, and I need to check whether a version is less than or greater than 1.2, but while debugging I get a different answer than expected:
float versionNumber = versionInfo.toFloat();
static float const VERSION_NUMBER(1.2);
if(std::fabs(versionNumber - VERSION_NUMBER) <= 0.001f)
{
// do operation
}
versionNumber comes out as 1.10000005. I thought about changing the check from 0.001 to 0.0000005, but that may not be a correct fix.
Please suggest the best method.

Version numbers are inherently integers. You already have a class; it should have an integer value for the major version, an integer for the minor version, and comparison operators (which is what you are trying to write).
Other schemes even have a third patch integer, a string for alpha/RC, and so on. Implement this properly in a class that uses proper semantics (i.e. a method named is_one_minor_version_away where you properly test whether there is only one minor version change).
Also, what happens if it's 2.9 against 3.0 in your case?

Why don't you create a class that holds the major and minor version of the passed versionInfo and add the checking logic there? The idea would be not to parse this as a float, but to split it at the dot and take each part as an int.
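A minimal sketch of the class both answers describe, assuming the version string always has the form "major.minor" (the names Version and parseVersion are illustrative, not taken from the original code):
#include <string>
#include <tuple>

// Holds the parts as integers and compares them field by field,
// so 2.9 < 3.0 and 1.10 > 1.2 behave as expected.
struct Version
{
    int majorVersion;
    int minorVersion;

    Version(int majorPart = 0, int minorPart = 0)
        : majorVersion(majorPart), minorVersion(minorPart) {}

    bool operator<(const Version& other) const
    {
        return std::tie(majorVersion, minorVersion)
             < std::tie(other.majorVersion, other.minorVersion);
    }
    bool operator==(const Version& other) const
    {
        return majorVersion == other.majorVersion && minorVersion == other.minorVersion;
    }
};

// Parse "1.2" into Version(1, 2) by splitting at the dot instead of calling toFloat().
inline Version parseVersion(const std::string& text)
{
    const std::string::size_type dot = text.find('.');
    const int majorPart = std::stoi(text.substr(0, dot));
    const int minorPart = (dot == std::string::npos) ? 0 : std::stoi(text.substr(dot + 1));
    return Version(majorPart, minorPart);
}
With that in place the original check becomes if (parseVersion(versionInfo) < Version(1, 2)), and the tolerance question disappears entirely.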

Related

Problems discovering glibc version in C++

My application is distributed with two binaries. One which is linked to an old glibc (for example, 2.17) and another binary that targets a newer glibc (for example, 2.34). I also distribute a launcher binary (linked against glibc 2.17) which interrogates the user's libc version and then runs the correct binary.
My code to establish the user's libc looks something like the following:
std::string gnu(gnu_get_libc_version());
double g_ver = std::stod(gnu);
if(g_ver >= 2.34)
{
return GLIBC_MAIN;
}
else
{
return GLIBC_COMPAT;
}
This works perfectly for the most part; however, some of my users report that despite having a new glibc, the old glibc binary is actually run. I have investigated this and discovered that double g_ver is equal to 2, not 2.34 as it should be. That is, the decimal part is missing. gnu_get_libc_version() always has the correct value, so it must be a problem when converting the string to a double.
I have also tried boost::lexical_cast but this has the same effect.
std::string gnu(gnu_get_libc_version());
//double g_ver = std::stod(gnu);
double glibc = boost::lexical_cast<double>(gnu);
if(glibc >= 2.34)
{
return GLIBC_MAIN;
}
else
{
return GLIBC_COMPAT;
}
Needless to say, I am unable to reproduce this behaviour on any of my computers even when running the exact same distribution / version of Linux as affected users.
Does anybody know why boost::lexical_cast or std::stod sometimes misses the decimal part of the version number? Is there an alternative approach to this?
UPDATE
Upon further testing, this problem turns out to be introduced by a different locale. I set my locale to fr_FR.UTF-8 on a test machine and was able to reproduce the problem. The output of gnu_get_libc_version() is still correct, but std::stod is unable to parse the part of the version after the decimal point.
Does anybody know how to correct this problem?
The fundamental problem is that the glibc version is a string and not a decimal number. So for a "proper" solution you need to parse it manually and implement your own logic to decide which version is bigger or smaller.
However, as a quick and dirty hack, try inserting the line
setlocale(LC_NUMERIC, "C");
before the std::stod call (std::stod uses strtod internally). That will set the numeric locale back to the default "C" locale, where the decimal separator is '.'. If you're doing something that needs correct locales later in the program, you need to set it back again. Depending on how your program initialized locales earlier, something like
setlocale(LC_NUMERIC, "");
should reset it back to what the environment says the locale should be.
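For the "proper" route mentioned above, a locale-independent sketch that parses the string with integer conversions only (the helper name libc_at_least is made up here; GLIBC_MAIN and GLIBC_COMPAT are the constants from the question):
#include <cstdio>
#include <gnu/libc-version.h>

// Split "2.34" (or "2.0.1"; extra components are ignored) into integer
// fields and compare them numerically; no locale or floating point involved.
static bool libc_at_least(int wantMajor, int wantMinor)
{
    int haveMajor = 0, haveMinor = 0;
    if (std::sscanf(gnu_get_libc_version(), "%d.%d", &haveMajor, &haveMinor) < 1)
        return false; // could not parse the version string at all
    if (haveMajor != wantMajor)
        return haveMajor > wantMajor;
    return haveMinor >= wantMinor;
}

// In the launcher:
//     return libc_at_least(2, 34) ? GLIBC_MAIN : GLIBC_COMPAT;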
Could this be down to the locale of the user?
The decimal separator is not '.' in all locales, and doesn't stod use the current locale?
As you are depending on glibc anyway, you can use strverscmp.
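A sketch of what that could look like; strverscmp is a GNU extension declared in string.h when _GNU_SOURCE is defined:
#ifndef _GNU_SOURCE
#define _GNU_SOURCE // for strverscmp
#endif
#include <string.h>
#include <gnu/libc-version.h>

// strverscmp compares runs of digits numerically, so "2.34" sorts after "2.9".
static bool glibc_is_at_least(const char* wanted)
{
    return strverscmp(gnu_get_libc_version(), wanted) >= 0;
}

// Example: glibc_is_at_least("2.34") selects the newer binary.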
gnu_get_libc_version() may return more than just X.X; it may be 2.0.1, for example.
However, strtod won't consume anything past the second dot (the rest isn't a valid number), so it would equate to 2.0 in this case.
You need to verify the actual string value returned from gnu_get_libc_version on the versions you currently don't have access to.

How to print glm vector with manually specified precion?

I know about glm::to_string, but I cannot find a way to set the precision of the string conversion. A few issues come up because of this. If I want to serialize glm structures, I won't be able to store the full value as text. More recently I ran into an issue where a bug involved a value on the order of -10^-18, but glm insisted on showing it as -0.000, requiring me to manually print out the individual values or create my own printing function to actually see the true value (and I'd really rather not have to do that).
Does glm provide a means to set the output precision of the glm::vecN to std::string conversion, or to use std::setprecision?
Note: the answers on this question do not present a way to set the precision of glm::vecN using glm's built-in facilities.
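glm does not appear to expose a precision setting for glm::to_string, so the usual workaround is the small helper the question mentions. A sketch for vec3, using plain iostream precision (the name vec_to_string is made up here):
#include <glm/glm.hpp>
#include <iomanip>
#include <sstream>
#include <string>

// Format a glm::vec3 with an explicit precision; scientific notation keeps
// tiny values such as -1e-18 visible instead of collapsing them to -0.000.
inline std::string vec_to_string(const glm::vec3& v, int precision = 9)
{
    std::ostringstream out;
    out << std::scientific << std::setprecision(precision)
        << "vec3(" << v.x << ", " << v.y << ", " << v.z << ")";
    return out.str();
}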

Convert half to float in OpenCL

I apologize if this is trivial, but I've been unable to find an answer by google.
As per the OpenCL standard (since 1.0), the half type is supported for storage reasons.
It seems to me however, that without the cl_khr_fp16 extension, it's impossible to use this for anything?
What I would like to do is to save my values as half, but perform all calculations in float.
I tried using convert_half(), but that's not supported without the cl_khr_fp16.
I tried just writing (float) before the half for a C-style conversion; that didn't work either.
So my question is, how do I utilize half for storage?
I need to be able to both read and write half values.
Use vload_halfN and vstore_halfN. The half values stored will be converted to/from floatN.
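A kernel sketch of that approach in OpenCL C (the kernel and argument names are illustrative): the buffer is declared as half* purely for storage, and no cl_khr_fp16 pragma is needed for these built-ins.
// All arithmetic happens in float; vload_half/vstore_half do the conversions.
__kernel void scale(__global half* data, const float factor)
{
    size_t i = get_global_id(0);

    float value = vload_half(i, data); // half -> float on load
    value *= factor;                   // compute in float
    vstore_half(value, i, data);       // float -> half on store
}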
As far as I know, the type half is only supported on the GPU, but you can convert it to and from a float fairly simply, as long as you know a bit about bitwise manipulation.
Have a look at the following link for a good explanation on how to do so.
ftp://ftp.fox-toolkit.org/pub/fasthalffloatconversion.pdf
Since it wasn't mentioned in any of the other answers I thought I'd add: You can also use half float in OpenCL images and the read_imagef and write_imagef functions will do the conversion to/from float for you (cl_khr_fp16 extension not required). That extension is only for having variables in (and doing math in) half.
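A sketch of the image route, assuming the images were created host-side with the CL_HALF_FLOAT channel data type (kernel and argument names are illustrative):
__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_NEAREST;

// read_imagef converts the stored half texels to float4, and write_imagef
// converts the float4 back to half on store; no cl_khr_fp16 required.
__kernel void copy_scaled(__read_only image2d_t src,
                          __write_only image2d_t dst,
                          const float factor)
{
    int2 pos = (int2)((int)get_global_id(0), (int)get_global_id(1));
    float4 texel = read_imagef(src, smp, pos);
    write_imagef(dst, pos, texel * factor);
}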

dtoa vs sprintf vs Grisu3 algorithm

What is the best way to render double precision numbers as strings in C++?
I ran across the article Here be dragons: advances in problems you didn’t even know you had which discusses printing floating point numbers.
I have been using sprintf. I don't understand why I would need to modify my code.
If you are happy with sprintf_s you shouldn't change. However if you need to format your output in a way that is not supported by your library, you might need to reimplement a specialized version of sprintf (with any of the known algorithms).
For example JavaScript has very precise requirements on how its numbers must be printed (see section 9.8.1 of the specification). The correct output can't be accomplished by simply calling sprintf. Indeed, Grisu has been developed to implement correct number-printing for a JavaScript compiler.
Grisu is also faster than sprintf, but unless floating-point printing is a bottleneck in your application this should not be a reason to switch to a different library.
Aha!
The problem outlined in the article you cite is that for some numbers, the computer displays something that is theoretically correct but not what we humans would have used.
For example, as the article says, 1.2999999... = 1.3, so if your result is 1.3, it's (quite) correct for the computer to display it as 1.299999999... But that's not what you would have expected.
Now, why does the computer do that? The reason is that the computer computes in base 2 (binary) while we usually compute in base 10 (decimal). The results are the same (thank goodness!), but the internal storage and the representation are not.
Some numbers look nice when displayed in base 10, like 1.3 for example, but others don't, for example 1/3 = 0.333333333... It's the same in base 2: some numbers "look" nice in base 2 (usually those built from halves, quarters, and other powers of 1/2) and others don't. When the computer stores a number internally, it may not be able to store it exactly, so it stores the closest possible representation, even if the number looked "finite" in decimal. So yes, in this case it "drifts" a little bit. If you do that again and again, you may lose precision. But there is no other way (unless you use special math libraries able to store fractions).
The problem arises when the computer tries to give you back, in base 10, the number you gave it. Then the computer may give you 1.299999 instead of the 1.3 you were expecting.
That's also the reason why you should never compare floats with ==, <, or >, but instead use special functions such as islessgreater(a, b), isgreater(a, b), etc.
So the function you use (sprintf) is fine and as exact as it can be; it gives you correct values. You just have to know that, when dealing with floats, 1.2999999 at maximum precision is OK if you were expecting 1.3.
Now, if you want to "pretty print" those numbers with the best "human" (base 10) representation, you may want to use a special library, like your Grisu3, which will try to undo the drift that may have happened and align the number to the closest base 10 representation.
Now, the library cannot use a crystal ball to find out which numbers have drifted and which haven't, so it may happen that you really meant 1.2999999 at maximum precision as stored in the computer, and the library will "convert" it to 1.3... But it's no worse nor less precise than displaying 1.29999 instead of 1.3.
If you need good readability, such a library will be useful. If not, it's just a waste of time.
Hope this helps!
The best way to do this in any reasonable language is:
1. Use your language's runtime library. Don't ever roll your own. Even if you have the knowledge and curiosity to write it, you don't want to test it and you don't want to maintain it.
2. If you notice any misbehavior from the runtime library conversion, file a bug.
3. If these conversions are a measurable bottleneck for your program, don't try to make them faster. Instead, find a way to avoid doing them at all. Instead of storing numbers as strings, just store the floating-point data (after possibly controlling for endianness). If you need a string representation, use a hexadecimal floating-point format instead.
I don't mean to discourage you, or anyone. These are actually fascinating functions to work on, but they are also shockingly complex, and trying to design good test coverage for any non-naive implementation is even more involved. Don't get started unless you're prepared to spend months thinking about the problem.
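To illustrate the hexadecimal floating-point suggestion above, the %a printf format (std::hexfloat for iostreams) writes the stored bits exactly and reads them back losslessly:
#include <cstdio>

int main()
{
    double d = 0.3;
    std::printf("%a\n", d);    // exact hexadecimal-significand form, e.g. 0x1.3333333333333p-2
    std::printf("%.17g\n", d); // 17 significant digits always round-trip for IEEE double
    return 0;
}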
You might want to use something like Grisu (or a faster method) because it gives you the shortest decimal representation with a round-trip guarantee, unlike sprintf, which only takes a fixed precision. The good news is that C++20 includes std::format, which gives you this by default. For example:
printf("%.*g", std::numeric_limits<double>::max_digits10, 0.3);
prints 0.29999999999999999 while
puts(fmt::format("{}", 0.3).c_str());
prints 0.3 (godbolt).
In the meantime you can use the {fmt} library that std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient (godbolt):
fmt::print("{}", 0.3);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
In C++ why aren't you using iostreams? You should probably be using cout for the console and ostringstream for string-oriented output (unless you have a very specific need to use a printf family method).
You shouldn't worry about formatting performance unless actual profiling shows that CPU is the bottleneck (compared to say I/O).
void outputdouble( ostringstream & oss, double d )
{
oss.precision( 5 );
oss << d;
}
http://www.cplusplus.com/reference/iostream/ostringstream/

Float or Double Special Value

I have double (or float) variables that might be "empty", as in holding no valid value. How can I represent this condition with the built-in types float and double?
One option would be a wrapper that holds a float and a boolean, but that can't work, as my libraries have containers that store doubles and not objects that behave as doubles. Another would be using NaN (std::numeric_limits). But I see no way to check whether a variable is NaN.
How can I solve the problem of needing a "special" float value to mean something other than the number?
We have done that by using NaN:
double d = std::numeric_limits<double>::signaling_NaN();
bool isNaN = (d != d);
A NaN value compared for equality against itself yields false. That's the way you test for NaN, but this seems to be valid only if std::numeric_limits<double>::is_iec559 is true (if so, the type conforms to IEEE 754 as well).
In C99 there is a macro called isnan for this in math.h, which checks a floating point number for a NaN value too.
In Visual C++, there is a non-standard _isnan(double) function that you can import through float.h.
In C, there is a isnan(double) function that you can import through math.h.
In C++, there is a isnan(double) function that you can import through cmath.
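Putting the portable C++ spelling next to the self-comparison trick from the earlier answer (a minimal sketch, assuming C++11's std::isnan from cmath):
#include <cmath>
#include <limits>

int main()
{
    double d = std::numeric_limits<double>::quiet_NaN();

    bool viaStd  = std::isnan(d); // portable C++11 check
    bool viaSelf = (d != d);      // self-comparison; true only for NaN

    return (viaStd && viaSelf) ? 0 : 1;
}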
As others have pointed out, using NaNs can be a lot of hassle. They are a special case that has to be dealt with like NULL pointers. The difference is that a NaN will not usually cause core dumps and application failures, but they are extremely hard to track down. If you decide to use NaNs, use them as little as possible. Overuse of NaNs is an offensive coding practice.
It's not a built-in type, but I generally use boost::optional for this kind of thing. If you absolutely can't use that, perhaps a pointer would do the trick -- if the pointer is NULL, then you know the result doesn't contain a valid value.
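A quick sketch of the boost::optional route (the function name value_or_default is made up here):
#include <boost/optional.hpp>

// boost::optional<double> carries "no value" explicitly instead of
// hijacking a bit pattern such as NaN.
double value_or_default(const boost::optional<double>& maybe, double fallback)
{
    if (maybe)          // true only once a value has been assigned
        return *maybe;  // dereference to get the stored double
    return fallback;
}

// boost::optional<double> reading;              // starts out empty
// reading = 42.0;                               // now holds a value
// double d = value_or_default(reading, 0.0);    // -> 42.0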
One option would be a wrapper that has a float and a boolean, but that can't work, as my libraries have containers that store doubles and not objects that behave as doubles.
That's a shame. In C++ it's trivial to create a templated class that auto-converts to the actual double (reference) attribute. (Or a reference to any other type, for that matter.) You just use the cast operator in a templated class, e.g. operator TYPE & () { return value; }. You can then use a HasValue<double> anywhere you'd normally use a double.
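A sketch of that wrapper; HasValue is the name used above, while the details (the valid flag, the accessor name) are illustrative:
template <typename TYPE>
class HasValue
{
public:
    HasValue() : value(), valid(false) {}
    HasValue(const TYPE& v) : value(v), valid(true) {}

    // The cast operators let a HasValue<double> stand in for a double.
    operator TYPE& ()             { return value; }
    operator const TYPE& () const { return value; }

    bool hasValue() const { return valid; }

private:
    TYPE value;
    bool valid;
};

// HasValue<double> d;   // d.hasValue() == false
// d = 3.14;             // copy-assigned from HasValue(3.14); now valid
// double x = d;         // implicit conversion back to double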
Another would be using NaN (std::numeric_limits). But I see no way to check for a variable being NaN.
As litb and James Schek also remarked, C99 provides us with isnan().
But be careful with that! NaN values make math and logic really interesting! You'd think a number couldn't be both NOT >= foo and NOT <= foo, but with NaN, it can.
There's a reason I keep a WARN-IF-NAN(X) macro in my toolbox. I've had some interesting problems arise in the past.