Somewhere in my code, I have a preprocessor definition:
#define ZOOM_FACTOR 1
In another place I have
#ifdef ZOOM_FACTOR
#if (ZOOM_FACTOR == 1)
#define FONT_SIZE 8
#else
#define FONT_SIZE 12
#endif
#else
#define FONT_SIZE 8
#endif
The problem is that when I change the ZOOM_FACTOR value to a floating-point value, for example 1.5, I get compile error C1017: invalid integer constant expression.
Does anyone know why I am getting this error, and is there any way to compare an integer and a floating-point number within a preprocessor directive?
The error is because the language does not permit it.
As per the C++ standard, [cpp.cond]/1:
The expression that controls conditional inclusion shall be an integral constant expression.
Instead of defining ZOOM_FACTOR as the floating-point value 1.5, why not define it as a scaled integer multiple of that value? For example, multiply it by a constant such as 2 and then make your comparisons against the scaled value.
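A minimal sketch of that scaling idea (the name ZOOM_FACTOR_X2 and the scale factor 2 are just illustrative, not from the original code):
#define ZOOM_FACTOR_X2 3          // 1.0 -> 2, 1.5 -> 3, 2.0 -> 4: the preprocessor only ever sees integers
#if (ZOOM_FACTOR_X2 == 2)         // i.e. zoom factor 1.0
#define FONT_SIZE 8
#else
#define FONT_SIZE 12
#endif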
Related
I've been going through an old source project, trying to make it compile and run (it's an old game that was uploaded to GitHub). I think a lot of the code was written with C-style syntax in mind (a lot of typedef struct {...} and the like), and I've noticed that they define certain macros in the following style:
#define MyMacroOne (1<<0) //This equals 1
#define MyMacroTwo (1<<1) //This equals 2, etc.
So my question now is this: is there any reason why macros would be defined this way? For example, 0x01 and 0x02 are the numerical results of the above. Or is it that the compiler will not read MyMacroOne as 0x01, but rather as a "shift object" with the value (1<<0)?
EDIT: Thanks for all of your inputs!
It makes it more intuitive and less error-prone to define bit values this way, especially for multi-bit fields. For example, compare
#define POWER_ON (1u << 0)
#define LIGHT_ON (1u << 1)
#define MOTOR_ON (1u << 2)
#define SPEED_STOP (0u << 3)
#define SPEED_SLOW (1u << 3)
#define SPEED_FAST (2u << 3)
#define SPEED_FULL (3u << 3)
#define LOCK_ON (1u << 5)
and
#define POWER_ON 0x01
#define LIGHT_ON 0x02
#define MOTOR_ON 0x04
#define SPEED_STOP 0x00
#define SPEED_SLOW 0x08
#define SPEED_FAST 0x10
#define SPEED_FULL 0x18
#define LOCK_ON 0x20
It is convenient for humans. For example:
#define PIN0 (1u<<0)
#define PIN5 (1u<<5)
#define PIN0MASK (~(1u<<0))
#define PIN5MASK (~(1u<<5))
and it is easy to see whether a bit is at the correct position. It does not make the code slower, as the values are computed at compile time.
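A typical use of such masks might look like this (the register name GPIO_OUT is made up purely for illustration):
extern volatile unsigned int GPIO_OUT;   // hypothetical output register

void pins_example(void)
{
    GPIO_OUT |= PIN5;        // set pin 5
    GPIO_OUT &= PIN0MASK;    // clear pin 0, leave the other pins untouched
}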
You can always use constant integer shift expressions as a way to express (multiples of) powers of two, i.e. Multiple * 2^N == Multiple << N (with some caveats for when you hit the guaranteed size limits of the integer types and UB sets in*), and pretty much rely on the compiler folding them.
An integer expression made of integer constants is defined as an integer constant expression. These can be used to specify array sizes, case labels and the like, so every compiler has to be able to fold them into a single constant, and it'd be silly not to utilize this ability even where it isn't strictly required.
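For instance, those constant-expression contexts look like this (a tiny illustration; the function name classify is made up):
#define MyMacroTwo (1<<1)            // from the question above

void classify(int flags)
{
    char buffer[MyMacroTwo * 8];     // array size: requires an integer constant expression
    (void)buffer;
    switch (flags) {
    case MyMacroTwo:                 // case label: also an integer constant expression
        break;
    default:
        break;
    }
}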
*E.g.: you can do 1U<<15, but at 16 you should switch to at least 1L<<16, because int/unsigned int is only required to have at least 16 bits, and left-shifting an integer by its width, or into the position where its sign bit sits, is undefined (6.5.7p4):
The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. If E1 has a signed type and nonnegative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
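To illustrate the footnote (the widths mentioned are only the minimums the standard guarantees; real platforms usually have wider types):
#define BIT15 (1U  << 15)   // OK even if unsigned int is only 16 bits wide
#define BIT16 (1UL << 16)   // switch to unsigned long, which has at least 32 bits
#define BIT31 (1UL << 31)   // still fine for a 32-bit unsigned long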
Macros are just replacement text: everywhere the macro appears, it is replaced by the replacement text. This is convenient especially when you want to give a name to a constant that would otherwise be prone to mistakes.
To illustrate how this (1<<0) syntax is more practical, consider this example from the code-base of Git 2.25 (Q1 2020), which moves the definition of a set of bitmask constants from octal literals to (1U<<count) notation.
See commit 8679577 (17 Oct 2019) by Hariom Verma (harry-hov).
(Merged by Junio C Hamano -- gitster -- in commit 8f40d89, 10 Nov 2019)
builtin/blame.c: constants into bit shift format
Signed-off-by: Hariom Verma
We are looking at bitfield constants, and elsewhere in the Git source code, such cases are handled via bit shift operators rather than octal numbers, which also makes it easier to spot holes in the range.
If, say, 1<<5 was missing:
it is easier to spot it between 1<<4 and 1<<6
than it is to spot a missing 040 between a 020 and a 0100.
So instead of:
#define OUTPUT_ANNOTATE_COMPAT 001
#define OUTPUT_LONG_OBJECT_NAME 002
#define OUTPUT_RAW_TIMESTAMP 004
#define OUTPUT_PORCELAIN 010
You get:
#define OUTPUT_ANNOTATE_COMPAT (1U<<0)
#define OUTPUT_LONG_OBJECT_NAME (1U<<1)
#define OUTPUT_RAW_TIMESTAMP (1U<<2)
#define OUTPUT_PORCELAIN (1U<<3)
I would like to know if there is a particular reason to define the macro UINT_MAX as (2147483647 * 2U + 1U) and not directly as its true value (4294967295U) in the <climits> header file.
Thank you all.
As far as the compiled code is concerned, there would be no difference, because the compiler would evaluate both constant expressions to produce the same value at compile time.
Defining UINT_MAX in terms of INT_MAX lets you reuse a constant that you have already defined:
#define UINT_MAX (INT_MAX * 2U + 1U)
In fact, this is very much what clang's header does, reusing an internal constant __INT_MAX__ for both INT_MAX and UINT_MAX:
#define INT_MAX __INT_MAX__
#define UINT_MAX (__INT_MAX__ *2U +1U)
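As a sanity check, the two spellings can be compared at compile time. This is only a sketch: it assumes an int that is 32 bits wide with unsigned int having one more value bit, which is the common case but not something the standard guarantees.
#include <climits>

static_assert(INT_MAX * 2U + 1U == 4294967295U, "assumes a 32-bit int");
static_assert(UINT_MAX == INT_MAX * 2U + 1U, "UINT_MAX spelled via INT_MAX");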
The C standard, which C++ relies on for these matters as well, as far as I know, has the following section:
When a value of integer type is converted to a real floating type, if the value being converted can be represented exactly in the new type, it is unchanged. If the value being converted is in the range of values that can be represented but cannot be represented exactly, the result is either the nearest higher or nearest lower representable value, chosen in an implementation-defined manner. If the value being converted is outside the range of values that can be represented, the behavior is undefined.
Is there any way I can check for the last case? It seems to me that this last undefined behaviour is unavoidable. If I have an integral value i and naively check something like
i <= FLT_MAX
I will (apart from other problems related to precision) already trigger it because the comparison first converts i to a float (in this case or to any other floating type in general), so if it is out of range, we get undefined behaviour.
Or is there some guarantee about the relative sizes of integral and floating types that would imply something like "float can always represent all values of int (not necessarily exactly of course)" or at least "long double can always hold everything" so that we could do comparisons in that type? I couldn't find anything like that, though.
This is mainly a theoretical exercise, so I'm not interested in answers along the lines of "on most architectures these conversions always work". Let's try to find a way to detect this kind of overflow without assuming anything beyond the C(++) standard! :)
Detect overflow when converting integral to floating types
FLT_MAX and DBL_MAX are at least 1E+37 per the C spec, so all integers whose |value| fits in 122 bits or fewer will convert to a float without overflow on all compliant platforms. The same holds for double.
To solve this in the general case for integers of 128/256/etc. bits, both FLT_MAX and some_big_integer_MAX need to be reduced.
Perhaps by taking the log of both (bit_count() is TBD user code):
if(bit_count(unsigned_big_integer_MAX) > logbf(FLT_MAX)) problem();
Or, if the integer type has no padding bits:
if(sizeof(unsigned_big_integer_MAX)*CHAR_BIT > logbf(FLT_MAX)) problem();
Note: working with an FP function like logbf() may produce an edge case where the exact integer math leads to an incorrect compare.
Macro magic can use obtuse tests like the following, which takes advantage of the fact that BIGINT_MAX is certainly a power of 2 minus 1, and that dividing FLT_MAX by a power of 2 is certainly exact (unless FLT_RADIX == 10).
This preprocessor code will complain if conversion from a big integer type to float would be inexact for some big integer values.
#define POW2_61 0x2000000000000000u
#if BIGINT_MAX/POW2_61 > POW2_61
// BIGINT is at least a 122 bit integer
#define BIGINT_MAX_PLUS1_div_POW2_61 ((BIGINT_MAX/2 + 1)/(POW2_61/2))
#if BIGINT_MAX_PLUS1_div_POW2_61 > POW2_61
#warning TBD code for an integer wider than 183 bits
#else
_Static_assert(BIGINT_MAX_PLUS1_div_POW2_61 <= FLT_MAX/POW2_61,
"bigint too big for float");
#endif
#endif
[Edit 2]
Is there any way I can check for the last case?
This code will complain if conversion from a big integer type to float will be inexact for a select big integer.
Of course the test needs to occur before the conversion is attempted.
Given various rounding modes, or a rare FLT_RADIX == 10, the best that can readily be had is a test that aims a bit low. When it reports true, the conversion will work. Yet a very small range of big integers that report false on the test below do convert OK.
Below is a more refined idea that I need to mull over for a bit, yet I hope it provides some coding idea for the test OP is looking for.
#include <float.h>
#include <stdbool.h>
#include <stdint.h>

#define POW2_60 0x1000000000000000u
#define POW2_62 0x4000000000000000u
#define MAX_FLT_MIN 1e37
#define MAX_FLT_MIN_LOG2 (122 /* 122.911.. */)

bool intmax_to_float_OK(intmax_t x) {
#if INTMAX_MAX/POW2_60 < POW2_62
  (void) x;
  return true; // All big integer values work
#elif INTMAX_MAX/POW2_60/POW2_60 < POW2_62
  return x/POW2_60 < (FLT_MAX/POW2_60);
#elif INTMAX_MAX/POW2_60/POW2_60/POW2_60 < POW2_62
  return x/POW2_60/POW2_60 < (FLT_MAX/POW2_60/POW2_60);
#else
#error TBD code
#endif
}
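A possible call site, assuming the function above is in scope (the value here is arbitrary):
void usage_example(void) {
    intmax_t big = 123456789;
    if (intmax_to_float_OK(big)) {
        float f = (float) big;   // the conversion is known not to overflow
        (void) f;
    }
}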
Here's a C++ template function that returns the largest positive integer that fits into both of the given types.
#include <climits>
#include <cmath>
#include <limits>
#include <type_traits>

template<typename float_type, typename int_type>
int_type max_convertible()
{
    // Number of value bits in int_type (the sign bit does not count).
    static const int int_bits =
        sizeof(int_type) * CHAR_BIT - (std::is_signed<int_type>::value ? 1 : 0);
    if ((int)std::ceil(std::log2(std::numeric_limits<float_type>::max())) > int_bits)
        return std::numeric_limits<int_type>::max();
    return (int_type) std::numeric_limits<float_type>::max();
}
If the number you're converting is larger than the return value of this function, it can't be converted. Unfortunately I'm having trouble finding a combination of types to test it with; it's very hard to find an integer type that won't fit into the smallest floating-point type.
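Illustrative use, assuming the template above is in scope (the value is arbitrary):
#include <cstdint>

void convert_example()
{
    std::int64_t v = 1234567890123LL;
    if (v <= max_convertible<float, std::int64_t>()) {
        float f = static_cast<float>(v);   // within range, so the conversion does not overflow
        (void)f;
    }
}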
I'm writing a GLSL shader using a #if preprocessor directive, but I'm always getting the error incorrect preprocessor directive.
Here's my code below (just the relevant part):
#define thre 20
float s = get_sample_data(sampling_pos);
#if s > thre
vec4 val = texture(transfer_texture, vec2(s, s));
#endif
Preprocessing is one of the compilation steps and it occurs before runtime. It simply transforms the source based on the # lines it finds. It doesn't have any notion of variables, which are runtime concepts: at that point, variables have no values, and the preprocessor doesn't even know they exist.
Knowing that, it follows that you can't use a variable's value in a preprocessor directive.
You can compare a #defined value to a literal constant:
#define thre 12
#if thre > 15
float x = 1.;
#else
float x = -1.;
#endif
In GLSL you can still use a conditional structure, but it is just a 'regular' if:
if(s>thre){
// do something
}else{
// do something else
}
The question is about modeling infinity in C++ for the double data type. I need it in a header file, so I cannot use functions like std::numeric_limits.
Is there a defined constant that represents the largest value?
Floating-point numbers (such as doubles) can actually hold positive and negative infinity. The constant INFINITY should be in your math.h header.
Went standard diving and found the text:
4 The macro INFINITY expands to a constant expression of type float
representing positive or unsigned infinity, if available; else to a
positive constant of type float that overflows at translation time.
In Section 7.12 Mathematics <math.h>
Then of course you have the helper macro isinf to test for infinity (which is also in math.h).
7.12.3.3 The isinf macro
int isinf(real-floating x);
Description: The isinf macro determines whether its argument value is an infinity (positive or negative). First, an argument represented in a format wider than its semantic type is converted to its semantic type. Then determination is based on the type of the argument.
Returns: The isinf macro returns a nonzero value if and only if its argument has an infinite value.
numeric_limits functions are all constexpr so they work just fine as compile time constants (assuming you're using the current version of C++). So std::numeric_limits<double>::infinity() ought to work in any context.
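For instance, a header-only constant might look like this (a minimal sketch; the name kInfinity is made up):
#include <limits>

constexpr double kInfinity = std::numeric_limits<double>::infinity();
static_assert(kInfinity > std::numeric_limits<double>::max(),
              "infinity compares greater than every finite double");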
Even if you're using an older version, this will still work anywhere that you don't require a compile time constant. It's not clear from your question if your use really needs a compile time constant or not; just being in a header doesn't necessarily require it.
If you are using an older version, and you really do need a compile time constant, the macro INFINITY in cmath should work for you. It's actually the float value for infinity, but it can be converted to a double.
Not sure why you can't use std::numeric_limits in a header file. But there is also this carried over from ANSI C:
#include <cfloat>
DBL_MAX
Maybe in your C++ environment you have float.h, see http://www.gamedev.net/topic/392211-max-value-for-double-/ (DBL_MAX)
I thought the answer was "42.0" ;)
This article might be of interest:
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
Or this:
http://www.cplusplus.com/reference/clibrary/cfloat/
Maximum finite representable floating-point number (the standard only requires each of these to be at least 1E+37):
FLT_MAX
DBL_MAX
LDBL_MAX
From Wikipedia:
0x 7ff0 0000 0000 0000 = Infinity
0x fff0 0000 0000 0000 = −Infinity
DBL_MAX can be used. This is found in float.h as follows
#define DBL_MAX 1.7976931348623158e+308 /* max value */
#include <cmath>
...
double d = INFINITY;
You can find INFINITY defined in <cmath> (math.h):
A constant expression of type float representing positive or unsigned infinity, if available; else a positive constant of type float that overflows at translation time.
Wouldn't this work?
const double infinity = 1.0/0.0;