Weird conditional statement (same result) - c++

Going through some code, I found this :
#ifdef trunc
#  undef trunc
#endif

inline float trunc(float x)
{
    return (x < 0.0f) ? float(int(x)) : float(int(x));
}

inline double trunc(double x)
{
    return (x < 0.0f) ? double(int(x)) : double(int(x));
}

inline long double trunc(long double x)
{
    return (x < 0.0f) ? (long double)(int(x)) : (long double)(int(x));
}
#endif // _WIN32
Of course, the ?: operator always returns one and the same value in each case, regardless of its conditional expression.
On the other hand, I guess the author had his reasons to write these functions this way; I can't find one, though. Any idea? Is this just an error (a typo)?
[EDIT] Reply from the author:
Good point - this is just overzealous cut-and-paste from the definition of round(). The following should be just fine (other than the limitation on the range of int):
inline float trunc(float x)
{
    return float(int(x));
}

inline double trunc(double x)
{
    return double(int(x));
}

inline long double trunc(long double x)
{
    return (long double)(int(x));
}

This code looks wrong.
My guess is that they meant something more like this:
inline float trunc(float x)
{
    return (x < 0.0f) ? -float(int(-x)) : float(int(x));
}
But even that's dubious. I believe int(x) always performs truncation, so even then the two branches of ?: should yield the same result.
In case rounding mode does matter to the typecast (and after a moment's thought, I'm not sure it does), you may really want to use a function like modf, modff or modfl to break the number into integer and fractional portions, and discard the fractional portion.
For example:
inline float trunc(float x)
{
    float int_part;
    modff(x, &int_part);   // modff is the float variant; plain C modf expects a double*
    return int_part;
}
Edit: One other observation. The original code will fail for values that do not fit in an int. Yet another strike against it.

The code returns the same output for both conditions, so the conditional is redundant. Moreover, the float(int(x)) wrapper changes nothing but the type: int(x) already truncates the number to an integer, and converting it back to float or double only restores the return type, not any fractional part.

PVS-Studio complaining about float comparison

I scanned my code with the PVS-Studio analyzer and I am confused about why this error appears and how to fix it.
V550 An odd precise comparison: * dest == value. It's probably better to use a comparison with defined precision: fabs(A - B) < Epsilon.
bool PipelineCache::SetShadowRegister(float* dest, uint32_t register_name) {
  float value = register_file_->values[register_name].f32;
  if (*dest == value) {
    return false;
  }
  *dest = value;
  return true;
}
I am guessing I should change the code to something like this:
bool PipelineCache::SetShadowRegister(float* dest, float epsilon, uint32_t register_name) {
  float value = register_file_->values[register_name].f32;
  return fabs(*dest - value) < epsilon;
}
For whoever's wondering, we're talking about this code.
I'll try to explain what the PVS-Studio developers were trying to achieve with this message. Citing their reference about V550:
Consider this sample:
double a = 0.5;
if (a == 0.5)                 // OK
    x++;

double b = sin(M_PI / 6.0);
if (b == 0.5)                 // ERROR
    x++;
The first comparison 'a == 0.5' is true. The second comparison 'b == 0.5' may be both true and false. The result of the 'b == 0.5' expression depends upon the processor, compiler's version and settings being used. For instance, the 'b' variable's value was 0.49999999999999994 when we used the Visual C++ 2010 compiler.
What they are trying to say is that comparing floating point numbers is tricky. If you just assign your floating point number, store it, and move it around memory to later compare it with itself in this function, feel free to dismiss this error message.
If you are looking to perform some bit-representation check (which I honestly think you are doing), see below.
If you are performing some massive calculations on floating point numbers, and you are a game developer, calculating the coordinates of the enemy battlecruisers - this warning is one of your best friends.
Anyway, let's return to your case. As usually happens with PVS-Studio, they did not see the exact error, but they pointed you in the right direction. You actually want to compare two float values, but you are doing it wrong. The thing is, if both float numbers you are comparing contain NaN (even in the same bit representation), you'll get *dest != value, and your code will not work the way you want.
In this scenario, you are better off reinterpreting the memory behind the float* as uint32_t (or whatever integer type has the same size as float on your target) and comparing those instead.
For example, in your particular case, register_file_->values[register_name] is of type xe::gpu::RegisterFile::RegisterValue, which already supports uint32_t representation.
As a side effect, this will draw the warning away :)
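For illustration, here is a minimal sketch of such a bit-level comparison (the helper name bits_equal is made up for this example; std::memcpy keeps the type punning well defined):
#include <cstdint>
#include <cstring>

// Compares two floats by bit pattern rather than by value, so two NaNs
// with the same representation compare equal (unlike operator==).
inline bool bits_equal(float a, float b) {
    std::uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);
    std::memcpy(&ub, &b, sizeof ub);
    return ua == ub;
}
SetShadowRegister could then test bits_equal(*dest, value) instead of *dest == value.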

Can I hint the optimizer by giving the range of an integer?

I am using an int type to store a value. By the semantics of the program, the value always varies in a very small range (0 - 36), and int (not a char) is used only because of CPU efficiency.
It seems like many special arithmetical optimizations can be performed on such a small range of integers. Many function calls on those integers might be optimized into a small set of "magical" operations, and some functions may even be optimized into table look-ups.
So, is it possible to tell the compiler that this int is always in that small range, and is it possible for the compiler to do those optimizations?
Yes, it is possible. For example, for gcc you can use __builtin_unreachable to tell the compiler about impossible conditions, like so:
if (value < 0 || value > 36) __builtin_unreachable();
We can wrap the condition above in a macro:
#define assume(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
And use it like so:
assume(x >= 0 && x <= 10);
As you can see, gcc performs optimizations based on this information:
#define assume(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)

int func(int x) {
    assume(x >= 0 && x <= 10);
    if (x > 11) {
        return 2;
    }
    else {
        return 17;
    }
}
Produces:
func(int):
        mov     eax, 17
        ret
One downside, however, is that if your code ever breaks such an assumption, you get undefined behavior.
The compiler doesn't notify you when this happens, even in debug builds. To debug/test/catch bugs in assumptions more easily, you can use a hybrid assume/assert macro (credits to @David Z), like this one:
#if defined(NDEBUG)
#define assume(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
#else
#include <cassert>
#define assume(cond) assert(cond)
#endif
In debug builds (with NDEBUG not defined), it works like an ordinary assert, printing an error message and aborting the program; in release builds it makes use of the assumption, producing optimized code.
Note, however, that it is not a substitute for regular assert - cond remains in release builds, so you should not do something like assume(VeryExpensiveComputation()).
There is standard support for this. What you should do is include stdint.h (cstdint in C++) and then use the type uint_fast8_t.
This tells the compiler that you only need numbers in the range 0 - 255, but that it is free to use a larger type if that gives faster code. If uint_fast8_t happens to be an 8-bit type on your platform, the compiler also knows the value can never exceed 255 and can optimize accordingly.
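A minimal sketch of what that looks like (the function and variable names here are made up for illustration):
#include <cstdint>

// uint_fast8_t holds at least the range 0 - 255; the library picks whatever
// width is fastest on the target (often a full register width).
std::uint_fast8_t next_slot(std::uint_fast8_t value) {
    return (value + 1) % 37;   // program logic keeps the value in the range 0 - 36
}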
The current answer is good for the case when you know for sure what the range is, but if you still want correct behavior when the value is out of the expected range, then it won't work.
For that case, I found this technique can work:
if (x == c)  // assume c is a constant
{
    foo(x);
}
else
{
    foo(x);
}
The idea is a code-data tradeoff: you're moving 1 bit of data (whether x == c) into control logic.
This hints to the optimizer that x is in fact a known constant c, encouraging it to inline and optimize the first invocation of foo separately from the rest, possibly quite heavily.
Make sure to actually factor the code into a single subroutine foo, though -- don't duplicate the code.
Example:
For this technique to work you need to be a little lucky -- there are cases where the compiler decides not to evaluate things statically, and they're kind of arbitrary. But when it works, it works well:
#include <math.h>
#include <stdio.h>

unsigned foo(unsigned x)
{
    return x * (x + 1);
}

unsigned bar(unsigned x) { return foo(x + 1) + foo(2 * x); }

int main()
{
    unsigned x;
    scanf("%u", &x);
    unsigned r;
    if (x == 1)
    {
        r = bar(bar(x));
    }
    else if (x == 0)
    {
        r = bar(bar(x));
    }
    else
    {
        r = bar(x + 1);
    }
    printf("%#x\n", r);
}
Just use -O3 and notice the pre-evaluated constants 0x20 and 0x30e in the assembler output.
I am just pitching in to say that if you want a solution that is more standard C++, you can use the [[noreturn]] attribute to write your own unreachable.
So I'll re-purpose deniss' excellent example to demonstrate:
namespace detail {
    [[noreturn]] void unreachable() {}
}

#define assume(cond) do { if (!(cond)) detail::unreachable(); } while (0)

int func(int x) {
    assume(x >= 0 && x <= 10);
    if (x > 11) {
        return 2;
    }
    else {
        return 17;
    }
}
Which, as you can see, results in nearly identical code:
detail::unreachable():
        rep ret
func(int):
        movl    $17, %eax
        ret
The downside, of course, is that you get a warning that a [[noreturn]] function does, indeed, return.

const vs #define (strange behavior)

I used to replace const with #define, but in the example below it prints false.
#include <iostream>
#define x 3e+38
using namespace std;

int main() {
    float p = x;
    if (p == x)
        cout << "true" << endl;
    else
        cout << "false" << endl;
    return 0;
}
But if I replace
#define x 3e+38
with
const float x = 3e+38;
it works perfectly. The question is why? (I know there are several topics discussing #define vs const, but I really didn't get this one; kindly enlighten me.)
In C++, floating-point literals are double precision by default. In the first example the number 3e+38 is first converted to float in the variable initialization, and then that float is converted back to double for the comparison. The conversions are not necessarily exact, so the numbers may differ. In the second example the numbers stay float all the time. To fix it you can change p to double, write
#define x 3e+38f
(which defines a float literal), or change the comparison to
if (p == static_cast<float>(x))
which performs the same conversion as the variable initialization, and does then the comparison in single precision.
Also, as commented, comparing floating point numbers with == is not usually a good idea, as rounding errors yield unexpected results; e.g., (x*y)*z might be different from x*(y*z).
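A quick way to see the mismatch (a small sketch added for illustration, not part of the original answer) is to print both values at full precision:
#include <iostream>
#include <iomanip>

int main() {
    float  p = 3e+38;    // the double literal gets narrowed to float here
    double d = 3e+38;    // stays double
    std::cout << std::setprecision(20) << p << "\n" << d << "\n";
    // The two printed values differ, which is why p == 3e+38 compares false.
}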
The literal 3e+38 has type double, since floating-point literals are double by default.
The assignment
float p = x;
causes 3e+38 to lose precision, and hence its exact value, when stored in p.
That's why the comparison
if (p == x)
results in false: p holds a different value than the double 3e+38.

native isnan check in C++

I stumbled upon this code to check for NaN:
/**
 * isnan(val) returns true if val is nan.
 * We cannot rely on std::isnan or x!=x, because GCC may wrongly optimize it
 * away when compiling with -ffast-math (default in RASR).
 * This function basically does 3 things:
 *   - ignore the sign (first bit is dropped with <<1)
 *   - interpret val as an unsigned integer (union)
 *   - compares val to the nan-bitmask (ones in the exponent, non-zero significand)
 **/
template<typename T>
inline bool isnan(T val) {
    if (sizeof(val) == 4) {
        union { f32 f; u32 x; } u = { (f32)val };
        return (u.x << 1) > 0xff000000u;
    } else if (sizeof(val) == 8) {
        union { f64 f; u64 x; } u = { (f64)val };
        return (u.x << 1) > 0x7ff0000000000000u;
    } else {
        std::cerr << "isnan is not implemented for sizeof(datatype)=="
                  << sizeof(val) << std::endl;
    }
}
This looks architecture dependent, right? However, I'm not sure about endianness, because no matter whether the machine is little or big endian, the float and the integer are probably stored in the same byte order.
Also, I wonder whether something like
volatile T x = val;
return std::isnan(x);
would have worked.
This was used with GCC 4.6 in the past.
Also, I wonder whether something like std::isnan((volatile)x) would have worked.
isnan takes its argument by value so the volatile qualifier would have been discarded. In other words, no, this doesn’t work.
The code you’ve posted relies on a specific floating point representation (IEEE). It also exhibits undefined behaviour since it relies on the union hack to retrieve the underlying float representation.
On a note about code review, the function is badly written even if we ignore the potential problems of the previous paragraph (which are justifiable): why does the function use runtime checks rather than compile-time checks and compile-time error handling? It would have been better and easier just to offer two overloads.
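For illustration, a minimal sketch of that overload-based approach (my own sketch, not code from the project in question; it still assumes IEEE 754, but uses std::memcpy instead of a union so the type punning is well defined in C++):
#include <cstdint>
#include <cstring>

inline bool isnan_bits(float val) {
    std::uint32_t x;
    std::memcpy(&x, &val, sizeof x);
    return (x << 1) > 0xff000000u;            // exponent all ones, non-zero mantissa
}

inline bool isnan_bits(double val) {
    std::uint64_t x;
    std::memcpy(&x, &val, sizeof x);
    return (x << 1) > 0xffe0000000000000u;    // infinity's bit pattern shifted left by one
}
These integer comparisons are also not affected by -ffast-math, which is the concern the original comment raises.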

Is there a standard sign function (signum, sgn) in C/C++?

I want a function that returns -1 for negative numbers and +1 for positive numbers.
http://en.wikipedia.org/wiki/Sign_function
It's easy enough to write my own, but it seems like something that ought to be in a standard library somewhere.
Edit: Specifically, I was looking for a function working on floats.
The type-safe C++ version:
template <typename T> int sgn(T val) {
    return (T(0) < val) - (val < T(0));
}
Benefits:
Actually implements signum (-1, 0, or 1). Implementations here using copysign only return -1 or 1, which is not signum. Also, some implementations here are returning a float (or T) rather than an int, which seems wasteful.
Works for ints, floats, doubles, unsigned shorts, or any custom types constructible from integer 0 and orderable.
Fast! copysign is slow, especially if you need to promote and then narrow again. This is branchless and optimizes excellently.
Standards-compliant! The bitshift hack is neat, but only works for some bit representations, and doesn't work when you have an unsigned type. It could be provided as a manual specialization when appropriate.
Accurate! Simple comparisons with zero can maintain the machine's internal high-precision representation (e.g. 80 bit on x87), and avoid a premature round to zero.
Caveats:
It's a template so it might take longer to compile in some circumstances.
Apparently some people think use of a new, somewhat esoteric, and very slow standard library function that doesn't even really implement signum is more understandable.
The < 0 part of the check triggers GCC's -Wtype-limits warning when instantiated for an unsigned type. You can avoid this by using some overloads:
template <typename T> inline constexpr
int signum(T x, std::false_type is_signed) {
    return T(0) < x;
}

template <typename T> inline constexpr
int signum(T x, std::true_type is_signed) {
    return (T(0) < x) - (x < T(0));
}

template <typename T> inline constexpr
int signum(T x) {
    return signum(x, std::is_signed<T>());
}
(Which is a good example of the first caveat.)
I don't know of a standard function for it. Here's an interesting way to write it though:
(x > 0) - (x < 0)
Here's a more readable way to do it:
if (x > 0) return 1;
if (x < 0) return -1;
return 0;
If you like the ternary operator you can do this:
(x > 0) ? 1 : ((x < 0) ? -1 : 0)
There is a C99 math library function called copysign(), which takes the sign from one argument and the absolute value from the other:
result = copysign(1.0, value);   // double
result = copysignf(1.0, value);  // float
result = copysignl(1.0, value);  // long double
will give you a result of +/- 1.0, depending on the sign of value. Note that floating point zeroes are signed: (+0) will yield +1, and (-0) will yield -1.
It seems that most of the answers missed the original question.
Is there a standard sign function (signum, sgn) in C/C++?
Not in the standard library; however, there is copysign, which can be used almost the same way via copysign(1.0, arg), and there is a true sign function in Boost, which might as well be part of the standard.
#include <boost/math/special_functions/sign.hpp>
//Returns 1 if x > 0, -1 if x < 0, and 0 if x is zero.
template <class T>
inline int sign (const T& z);
Apparently, the answer to the original poster's question is no. There is no standard C++ sgn function.
Is there a standard sign function (signum, sgn) in C/C++?
Yes, depending on definition.
C99 and later has the signbit() macro in <math.h>
int signbit(real-floating x);
The signbit macro returns a nonzero value if and only if the sign of its argument value is negative. C11 §7.12.3.6
Yet OP wants something a little different.
I want a function that returns -1 for negative numbers and +1 for positive numbers. ... a function working on floats.
#define signbit_p1_or_n1(x)  ((signbit(x)) ? -1 : 1)
Deeper:
OP's question is not specific in the following cases: x = 0.0, -0.0, +NaN, -NaN.
A classic signum() returns +1 on x>0, -1 on x<0 and 0 on x==0.
Many answers have already covered that, but do not address x = -0.0, +NaN, -NaN. Many are geared for an integer point-of-view that usually lacks Not-a-Numbers (NaN) and -0.0.
Typical answers function like signnum_typical() below: on -0.0, +NaN, -NaN, they return 0, 0, 0.
int signnum_typical(double x) {
    if (x > 0.0) return 1;
    if (x < 0.0) return -1;
    return 0;
}
Instead, I propose this functionality: On -0.0, +NaN, -NaN, it returns -0.0, +NaN, -NaN.
double signnum_c(double x) {
    if (x > 0.0) return 1.0;
    if (x < 0.0) return -1.0;
    return x;
}
Faster than the above solutions, including the highest rated one:
(x < 0) ? -1 : (x > 0)
There's a way to do it without branching, but it's not very pretty.
sign = -(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1));
http://graphics.stanford.edu/~seander/bithacks.html
Lots of other interesting, overly-clever stuff on that page, too...
If all you want is to test the sign, use signbit (returns true if its argument has a negative sign).
Not sure why you would particularly want -1 or +1 returned; copysign is more convenient for that, but it sounds like it will return +1 for negative zero on some platforms with only partial support for negative zero, where signbit presumably would return true.
In general, there is no standard signum function in C/C++, and the lack of such a fundamental function tells you a lot about these languages.
Apart from that, I believe both majority viewpoints about the right approach to define such a function are in a way correct, and the "controversy" about it is actually a non-argument once you take into account two important caveats:
A signum function should always return the type of its operand, similarly to an abs() function, because signum is usually used for multiplication with an absolute value after the latter has been processed somehow. Therefore, the major use case of signum is not comparisons but arithmetic, and the latter shouldn't involve any expensive integer-to/from-floating-point conversions.
Floating point types do not feature a single exact zero value: +0.0 can be interpreted as "infinitesimally above zero", and -0.0 as "infinitesimally below zero". That's the reason why comparisons involving zero must internally check against both values, and an expression like x == 0.0 can be dangerous.
Regarding C, I think the best way forward with integral types is indeed to use the (x > 0) - (x < 0) expression, as it should be translated in a branch-free fashion and requires only three basic operations. It's best to define inline functions that enforce a return type matching the argument type, and to add a C11 _Generic macro to map these functions to a common name.
With floating point values, I think inline functions based on C99 copysignf(1.0f, x), copysign(1.0, x), and copysignl(1.0l, x) are the way to go, simply because they're also highly likely to be branch-free, and additionally do not require casting the result from integer back into a floating point value. You should probably comment prominently that your floating point implementations of signum will not return zero because of the peculiarities of floating point zero values, processing time considerations, and also because it is often very useful in floating point arithmetic to receive the correct -1/+1 sign, even for zero values.
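For illustration, a rough C++ rendering of that recommendation (my own sketch; the answer itself talks about C inline functions dispatched with _Generic, so enable_if overloads stand in for that here):
#include <cmath>
#include <type_traits>

// Integral version: branch-free, returns the operand's own type.
// (For unsigned types it collapses to 0 or 1, and may trigger the
// -Wtype-limits warning discussed earlier.)
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
T sgn(T x) {
    return static_cast<T>((T(0) < x) - (x < T(0)));
}

// Floating-point version: returns +1 or -1 (never 0), reflects the sign of zero,
// and stays entirely in floating point via copysign.
template <typename T,
          typename std::enable_if<std::is_floating_point<T>::value, int>::type = 0>
T sgn(T x) {
    return std::copysign(T(1), x);
}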
My copy of C in a Nutshell reveals the existence of a standard function called copysign which might be useful. It looks as if copysign(1.0, -2.0) would return -1.0 and copysign(1.0, 2.0) would return +1.0.
Pretty close huh?
The question is old, but there is now this kind of desired function, built here as a wrapper using not, left shift and decrement. You can use a wrapper function based on signbit from C99 in order to get the exact desired behavior (see the code further below).
Returns whether the sign of x is negative.
This can also be applied to infinities, NaNs and zeroes (if zero is unsigned, it is considered positive).
#include <math.h>

int signValue(float a) {
    return ((!signbit(a)) << 1) - 1;
}
NB: I use the not operator ("!") because the return value of signbit is not specified to be 1 (even though the examples let us think it would always be this way), only a non-zero value for a negative number:
Return value
A non-zero value (true) if the sign of x is negative; and zero (false) otherwise.
Then I multiply by two with a left shift ("<< 1"), which gives 2 for a positive number and 0 for a negative one, and finally decrement by 1 to obtain 1 and -1 for positive and negative numbers respectively, as requested by the OP.
The accepted answer with the overloads above does indeed not trigger -Wtype-limits. But it does trigger unused-parameter warnings (for the is_signed parameter). To avoid these, the second parameter should be left unnamed, like so:
template <typename T> inline constexpr
int signum(T x, std::false_type) {
    return T(0) < x;
}

template <typename T> inline constexpr
int signum(T x, std::true_type) {
    return (T(0) < x) - (x < T(0));
}

template <typename T> inline constexpr
int signum(T x) {
    return signum(x, std::is_signed<T>());
}
For C++11 and higher, an alternative could be:
template <typename T>
typename std::enable_if<std::is_unsigned<T>::value, int>::type
inline constexpr signum(T const x) {
    return T(0) < x;
}

template <typename T>
typename std::enable_if<std::is_signed<T>::value, int>::type
inline constexpr signum(T const x) {
    return (T(0) < x) - (x < T(0));
}
For me it does not trigger any warnings on GCC 5.3.1.
No, it doesn't exist in C++ the way it does in MATLAB. I use a macro in my programs for this.
#define sign(a) ( ( (a) < 0 ) ? -1 : ( (a) > 0 ) )
Bit off-topic, but I use this:
template<typename T>
constexpr int sgn(const T &a, const T &b) noexcept {
    return (a > b) - (a < b);
}

template<typename T>
constexpr int sgn(const T &a) noexcept {
    return sgn(a, T(0));
}
and I found the first function, the one with two arguments, to be much more useful than the "standard" sgn(), because it is most often used in code like this:
int comp(unsigned a, unsigned b) {
    return sgn( int(a) - int(b) );
}
vs.
int comp(unsigned a, unsigned b) {
    return sgn(a, b);
}
In the second version there is no cast for unsigned types and no additional minus.
In fact, I have this piece of code using sgn():
template <class T>
int comp(const T &a, const T &b) {
    log__("all");
    if (a < b)
        return -1;
    if (a > b)
        return +1;
    return 0;
}

inline int comp(int const a, int const b) {
    log__("int");
    return a - b;
}

inline int comp(long int const a, long int const b) {
    log__("long");
    return sgn(a, b);
}
You can use the boost::math::sign() function from boost/math/special_functions/sign.hpp if Boost is available.
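A short usage sketch (assuming Boost.Math is available; this just demonstrates the call, it is not from the original answer):
#include <boost/math/special_functions/sign.hpp>
#include <iostream>

int main() {
    std::cout << boost::math::sign(-2.5) << ' '   // -1
              << boost::math::sign(0.0)  << ' '   //  0
              << boost::math::sign(7)    << '\n'; //  1
}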
Here's a branching-friendly implementation:
inline int signum(const double x) {
    if (x == 0) return 0;
    return (1 - (static_cast<int>((*reinterpret_cast<const uint64_t*>(&x)) >> 63) << 1));
}
Unless zeros make up half of your numbers, the branch predictor will choose one of the branches as the most common. Both branches only involve simple operations.
Alternatively, on some compilers and CPU architectures a completely branchless version may be faster:
inline int signum(const double x) {
    return (x != 0) *
           (1 - (static_cast<int>((*reinterpret_cast<const uint64_t*>(&x)) >> 63) << 1));
}
This works for IEEE 754 double-precision binary floating-point format: binary64.
While the integer solution in the accepted answer is quite elegant it bothered me that it wouldn't be able to return NAN for double types, so I modified it slightly.
template <typename T> double sgn(T val) {
    return double((T(0) < val) - (val < T(0))) / (val == val);
}
Note that returning a floating point NAN as opposed to a hard-coded NAN causes the sign bit to be set in some implementations, so the output for val = -NAN and val = NAN is going to be identical no matter what (if you prefer a "nan" output over "-nan", you can put an abs(val) before the return).
int sign(float n)
{
    union { float f; std::uint32_t i; } u { n };
    return 1 - ((u.i >> 31) << 1);
}
This function assumes:
binary32 representation of floating point numbers
a compiler that makes an exception to the strict aliasing rule when using a named union
double signof(double a) { return (a == 0) ? 0 : (a<0 ? -1 : 1); }
Why use ternary operators and if-else when you can simply do this:
#define sgn(x) ((x) == 0 ? 0 : (x)/abs(x))