What is the number 0ui64? - c++

I see the
#define NUM_MAX_VOLUME 0ui64
in other people's code
What is the number 0ui64? It seems it is not a hex number though.

I am surprised that there are many answers, but none points to the official and authoritative documentation, which in my opinion should be noted. So here is the MSDN documentation:
unsigned-suffix: one of
u U
and
64-bit integer-suffix:
i64 LL ll
So, it is indeed not a hex number; it is a macro defined to a zero expressed as an unsigned 64-bit integer literal. Note that 0Ui64, 0ULL, 0ull, etc., would all be the same, too.
This is useful when you want the sign and size of a constant fixed, so that it cannot cause unexpected conversions or undefined behavior.
This is neither standard C++, nor C, but a Microsoft compiler feature. Try to avoid it.
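If you want the same effect in standard C++ (or C), the UINT64_C macro from <cstdint> does the job; a minimal sketch, where the define name simply mirrors the question:
#include <cstdint>

// Standard alternative to the MSVC-only 0ui64: UINT64_C builds a
// 64-bit unsigned constant (it typically expands to 0ULL here).
#define NUM_MAX_VOLUME UINT64_C(0)

static_assert(NUM_MAX_VOLUME == 0, "an unsigned 64-bit zero");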
Since your question is tagged as Qt, the recommendation is to use quint64 and Q_UINT64_C instead which will work cross-platform. So, you would write something like this:
#define NUM_MAX_VOLUME Q_UINT64_C(0)

"ui64" means unsigned 64-bit integer. It is a non-standard suffix in some cases.
"0ui64" is just 0, and i guess the reason to write like this is for compatibility.

It's basically used in an expression where the size of the operand (the constant here) matters. Take shifting, for example:
auto i = 1 << 36;
On a machine where int is 32 bits long, this leads to undefined behaviour, since the 1 here is taken as an int and you're trying to shift it beyond the width of the resulting type, int. What you want is a 64-bit integral type, say unsigned long long; then you'd do
auto i = 1ULL << 36;
This isn't UB since the resulting type would also be an unsigned long long due to the operand (which is now an unsigned long long too).
Another example is type deduction of the C++11's auto keyword. Try this:
for (auto i = 0; i < v.size(); ++i)
With warnings enabled, GCC barks:
warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
However, changing this to
for (auto i = 0u; i < v.size(); ++i)
makes the warning disappear, since the literal 0u leads the compiler to deduce the type of i as unsigned int and not simply int.
In your case, you have the suffix ui64, which isn't standard C++, so it must be an implementation-specific extension that denotes an unsigned 64-bit integer.

0xFull, for example, is also a valid C++ constant. It is 0xF with the standard suffix ull, i.e. an unsigned long long, a.k.a. ui64 in the Microsoft compiler's transcription.
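A quick compile-time check of that reading, in standard C++ (nothing MSVC-specific here):
#include <type_traits>

// 0xFull is the hex digit F (15) with the suffix "ull", not four hex digits:
static_assert(0xFull == 15u, "the value is 0xF");
static_assert(std::is_same<decltype(0xFull), unsigned long long>::value,
              "the suffix ull yields unsigned long long");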

Related

how does the short(vector.size()) command conversion work in C++?

I don't know any way to get the size of a vector other than the .size() member function, and it works very well, but it returns a value of type long long unsigned int, and while in many cases that is fine, I'm sure my program will never have a vector so big that it needs all that range; a short int is more than enough.
I know that for today's computers those few unused bytes are irrelevant, but I don't like to leave these "loose ends" even if they are small, and while programming I came across some details that bothered me.
Look at these examples:
for(short int X = 0 ; X < Vector.size() ; X++){
}
compiling this, I receive this warning:
warning: comparison of integer expressions of different signedness: 'short int' and 'std::vector<unsigned char>::size_type' {aka 'long long unsigned int'} [-Wsign-compare]|
This is because the .size() return type is different from the short int I'm comparing against: "X" is a short int, and Vector.size() returns a long long unsigned int, so the warning was expected. So if I do this:
for(size_t X = 0 ; X < Vector.size() ; X++){
}
the problem is gone, but by doing this I'm creating a long long unsigned int (size_t here) and comparing it against another long long unsigned int, so my computer allocates two long long unsigned int variables. What can I do to work with a simple short int instead? I don't need anything more than that; long long unsigned int is overkill. So I did this:
for(short int X = 0 ; X < short(Vector.size()) ; X++){
}
but... how is this working? short int X = 0 allocates a short int, nothing new, but what about short(Vector.size())? Is the computer allocating a long long unsigned int and then converting it to a short int? Or is the compiler "changing" the return of the .size() function, making it naturally return a short int and, in this case, never allocating a long long unsigned int? I know the compilers are responsible for optimizing the code too. Is there any "problem" or "detail" with this method, given that I rarely see anyone using it? What exactly is this short() doing in terms of memory allocation, and where can I read more about it?
(thanks to everyone who responded)
Forget for a moment that this involves a for loop; that's important for the underlying code, but it's a distraction from what's going on with the conversion.
short X = Vector.size();
That line calls Vector.size(), which returns a value of type std::size_t. std::size_t is an unsigned type, large enough to hold the size of any object. So it could be unsigned long, or it could be unsigned long long. In any event, it's definitely not short. So the compiler has to convert that value to short, and that's what it does.
Most compilers these days don't trust you to understand what this actually does, so they warn you. (Yes, I'm rather opinionated about compilers that nag; that doesn't change the analysis here). So if you want to see that warning (i.e., you don't turn it off), you'll see it. If you want to write code that doesn't generate that warning, then you have to change the code to say "yes, I know, and I really mean it". You do that with a cast:
short X = short(Vector.size());
The cast tells the compiler to call Vector.size() and convert the resulting value to short. The code then assigns the result of that conversion to X. So, more briefly, in this case it tells the compiler that you want it to do exactly what it would have done without the cast. The difference is that because you wrote a cast, the compiler won't warn you that you might not know what you're doing.
Some folks prefer to write that cast with a static_cast:
short X = static_cast<short>(Vector.size());
That does the same thing: it tells the compiler to do the conversion to short and, again, the compiler won't warn you that you did it.
In the original for loop, a different conversion occurs:
X < Vector.size()
That bit of code calls Vector.size(), which still returns an unsigned type. In order to compare that value with X, the two sides of the < have to have the same type, and the rules for this kind of expression require that X gets promoted to std::size_t, i.e., that the value of X gets treated as an unsigned type. That's okay as long as the value isn't negative. If it's negative, the conversion to the unsigned type is okay, but it will produce results that probably aren't what was intended. Since we know that X is not negative here, the code works perfectly well.
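Here is a small self-contained sketch of the negative case just described (the vector and values are made up for illustration; compile with warnings enabled to also see the sign-compare nag):
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(100);
    int X = -1;
    // X converts to the unsigned std::size_t and wraps to a huge value,
    // so this prints "false" even though -1 < 100 mathematically:
    std::cout << std::boolalpha << (X < v.size()) << '\n';
}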
But we're still in the territory of compiler nags: since X is signed, the compiler warns you that promoting it to an unsigned type might do something that you don't expect. Again, you know that that won't happen, but the compiler doesn't trust you. So you have to insist that you know what you're doing, and again, you do that with a cast:
X < short(Vector.size())
Just like before, that cast converts the result of calling Vector.size() to short. Now both sides of the < are the same type, so the < operation doesn't require a conversion from a signed to an unsigned type, so the compiler has nothing to complain about. There is still a conversion, because the rules say that values of type short get promoted to int in this expression, but don't worry about that for now.
Another possibility is to use an unsigned type for that loop index:
for (unsigned short X = 0; X < Vector.size(); ++X)
But the compiler might still insist on warning you that not all values of type std::size_t can fit in an unsigned short. So, again, you might need a cast. Or change the type of the index to match what the compiler thinks you need:
for (std::size_t X = 0; X < Vector.size(); ++X)
If I were to go this route, I would use unsigned int and if the compiler insisted on telling me that I don't know what I'm doing I'd yell at the compiler (which usually isn't helpful) and then I'd turn off that warning. There's really no point in using short here, because the loop index will always be converted to int (or unsigned int) wherever it's used. It will probably be in a register, so there is no space actually saved by storing it as a short.
Even better, as recommended in other answers, is to use a range-based for loop, which avoids managing that index:
for (auto& value: Vector) ...
In all cases, X has automatic storage duration, and the result of Vector.size() does not outlive the full expression where it is created.
I don't need anything more than this, long long unsigned int is overkill
Typically, automatic duration variables are "allocated" either on the stack, or as registers. In either case, there is no performance benefit to decreasing the allocation size, and there can be a performance penalty in narrowing and then widening values.
In the very common case where you are using X solely to index into Vector, you should strongly consider using a different kind of for:
for (auto & value : Vector) {
// replace Vector[X] with value in your loop body
}

Is there a portable literal suffix for int64_t and similar types?

I was trying to understand std::variant:
#include <cstdint>
#include <variant>
std::variant<int64_t, double> v;
I wanted to assign an int64_t variant: v = 5L;
That compiles on x86_64, because there int64_t is long. But it does not compile on ARM, where int64_t is long long: overload resolution now has two equally good choices, int64_t and double, to convert my number to, so it declines. With variant<int64_t, string> I wouldn't even have noticed the conversion, because then only one is available and the compiler would have accepted it.
There is a similar issue with v = 5LL;: now ARM / 32-bit is fine, but x86_64 is not anymore.
I can get this compiling on both platforms, but it is (sometimes) a type conversion with potential side effects I am not able to foresee: v = int64_t(5LL);. And without the LL I wouldn't even be able to express values outside 32-bit int.
The INT64_C macro seems to be the most portable and safest way to express this: v = INT64_C(5);
But this is not nice to read and write anymore.
Is there a similar literal suffix like L/LL for int64_t that is portable?
No, there are no standard literals for fixed width integer aliases.
One potential workaround would be to use std::variant<long long, double> v;. Although long long theoretically isn't guaranteed to be exactly 64 bits (it may be wider, but not narrower), it is 64 bits on practically every system that supports long long today. The benefit is that long long of course has a standard literal. The potential drawback is that the size situation may theoretically change in future.
A more general solution is to give up on using a literal suffix, and instead use a cast: v = static_cast<int64_t>(5);.
Another solution is to create a user defined literal as shown in this answer linked in comments:
constexpr std::int64_t operator "" _int64(unsigned long long v)
{ return static_cast<std::int64_t>(v); }
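With such a literal in scope, the assignment from the question becomes unambiguous on both platforms. A minimal usage sketch (C++17 for std::variant), repeating the operator so the snippet is self-contained:
#include <cstdint>
#include <variant>

constexpr std::int64_t operator "" _int64(unsigned long long v)
{ return static_cast<std::int64_t>(v); }

int main() {
    std::variant<std::int64_t, double> v;
    v = 5_int64;  // exactly one viable alternative now: int64_t
    return std::holds_alternative<std::int64_t>(v) ? 0 : 1;  // returns 0
}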
There is a proposal to add literals such as this to the standard library: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1280r2
On a related note, there is a proposal to add literals for std::size_t and std::ptrdiff_t. That proposal suggests core language literals instead of a standard library literals: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0330r3

Is `-1` correct for using as maximum value of an unsigned integer?

Is there any C++ standard paragraph which says that using -1 for this is a portable and correct way, or is the only correct way to use the predefined values?
I had a conversation with my colleague about what is better: using -1 for the maximum unsigned integer number, or using a value from limits.h or std::numeric_limits.
I told my colleague that using the predefined maximum values from limits.h or std::numeric_limits is the portable and clean way of doing this; however, my colleague objected that -1 is just as portable as the numeric limits, and has one more advantage:
unsigned short i = -1; // unsigned short max
can easily be changed to any other type, like
unsigned long i = -1; // unsigned long max
whereas the predefined value from the limits.h header file or std::numeric_limits must also be rewritten along with the type on the left.
Regarding conversions of integers, C 2011 [draft N1570] 6.3.1.3 2 says
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
Thus, converting -1 to an unsigned integer type necessarily produces the maximum value of that type.
There may be issues with using -1 in various contexts where it is not immediately converted to the desired type. If it is immediately converted to the desired unsigned integer type, as by assignment or explicit conversion, then the result is clear. However, if it is a part of an expression, its type is int, and it behaves like an int until converted. In contrast, UINT_MAX has the type unsigned int, so it behaves like an unsigned int.
As chux points out in a comment, USHRT_MAX effectively has a type of int, so even the named limits are not fully safe from type issues.
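A compile-time sketch of that promotion subtlety, assuming the common case where short is narrower than int:
// An unsigned short operand is promoted to (signed) int before comparison,
// so this compares USHRT_MAX with -1 as two ints (assumes short < int):
static_assert(static_cast<unsigned short>(-1) != -1,
              "promoted to int: USHRT_MAX != -1");
// An unsigned int operand is not promoted; instead -1 converts to UINT_MAX:
static_assert(static_cast<unsigned int>(-1) == -1u,
              "converted to unsigned: UINT_MAX == UINT_MAX");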
Not using the standard way, or not clearly showing the intent, is often a bad idea that we pay for later.
I would suggest:
auto i = std::numeric_limits<unsigned int>::max();
or, as @jamesdin suggested, a certainly better one, closer to C habits:
unsigned int i = std::numeric_limits<decltype(i)>::max();
Your colleague's argument is not admissible. Changing int -> long int, as below:
auto i = std::numeric_limits<unsigned long int>::max();
does not require extra work compared to the -1 solution (thanks to the use of auto).
The '-1' solution does not directly reflect our intent, hence it possibly has harmful consequences. Consider this code snippet:
using index_t = unsigned int;
... now in another file (or far away from the previous line) ...
const index_t max_index = -1;
First, we do not understand why max_index is -1.
Worse, if someone wants to improve the code and defines
using index_t = ptrdiff_t;
then the statement max_index = -1 is not the max anymore and you get buggy code. Again, this cannot happen with something like:
const index_t max_index = std::numeric_limits<index_t>::max();
CAVEAT: nevertheless there is a caveat when using std::numeric_limits. It has nothing to do with integers, but is related to floating point numbers.
std::cout << "\ndouble lowest: "
<< std::numeric_limits<double>::lowest()
<< "\ndouble min : "
<< std::numeric_limits<double>::min() << '\n';
prints:
double lowest: -1.79769e+308
double min : 2.22507e-308 <-- maybe you expected -1.79769e+308 here!
min returns the smallest finite value of the given type
lowest returns the lowest finite value of the given type
This is always worth remembering, as it can be a source of bugs if we do not pay attention and use min instead of lowest.
Is -1 correct for using as maximum value of an unsigned integer?
Yes, it is functionally correct when used as a direct assignment/initialization, yet it often looks questionable, as @Ron noted.
Constants from limits.h or std::numeric_limits convey more code understanding, yet need maintenance should the type of i change.
[Note] The OP later dropped the C tag.
To add an alternative to assigning a maximum value (available in C11) that helps reduce code maintenance:
Use the loved/hated _Generic
#define info_max(X) _Generic((X), \
long double: LDBL_MAX, \
double: DBL_MAX, \
float: FLT_MAX, \
unsigned long long: ULLONG_MAX, \
long long: LLONG_MAX, \
unsigned long: ULONG_MAX, \
long: LONG_MAX, \
unsigned: UINT_MAX, \
int: INT_MAX, \
unsigned short: USHRT_MAX, \
short: SHRT_MAX, \
unsigned char: UCHAR_MAX, \
signed char: SCHAR_MAX, \
char: CHAR_MAX, \
_Bool: 1, \
default: 1/0 \
)
int main() {
    ...
    some_basic_type i = info_max(i);
    ...
}
The above macro info_max() has limitations concerning types like size_t, intmax_t, etc. that are not enumerated in the above list. There are more complex macros that can cope with that; the idea here is illustrative.
The technical side has been covered by other answers; and while you focus on technical correctness in your question, pointing out the cleanness aspect again is important, because imo that’s the much more important point.
The major reason why it is a bad idea to use that particular trickery is: the code is ambiguous. It is unclear whether someone used the unsigned trickery intentionally or made a mistake and actually wanted to initialize a signed variable to -1. Should your colleague suggest that a comment could clear that up after you present this argument, tell him to stop being silly. :)
I'm actually slightly baffled that someone would even consider this trick in earnest. There's an unambiguous, intuitive and idiomatic way to set a value to its max in C: the _MAX macros. And there's an additional, equally unambiguous, intuitive and idiomatic way in C++ that provides some more type safety: numeric_limits. That -1 trick is a classic case of trying to be clever.
The C++ standard says this about signed to unsigned conversions ([conv.integral]/2):
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n, where n is the number of bits used to represent the unsigned type). [ Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
So yes, converting -1 to an n-bit unsigned integer will always give you 2^n - 1, regardless of which signed integer type the -1 started as.
Whether or not unsigned x = -1; is more or less readable than unsigned x = UINT_MAX; though is another discussion (there's definitely the chance that it'll raise some eyebrows, maybe even your own when you look at your own code later;).
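A quick sanity check of that rule, using the optional (but practically universal) fixed-width aliases from <cstdint>:
#include <cstdint>

// Converting -1 to an n-bit unsigned type yields 2^n - 1:
static_assert(static_cast<std::uint16_t>(-1) == 0xFFFF, "2^16 - 1");
static_assert(static_cast<std::uint32_t>(-1) == 0xFFFFFFFF, "2^32 - 1");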

Is it safe to compare an unsigned int with a std::string::size_type

I am going through the book "Accelerated C++" by Andrew Koenig and Barbara E. Moo and I have some questions about the main example in chapter 2. The code can be summarized as below, and compiles without warnings/errors with g++:
#include <string>
using std::string;
int main()
{
    const string greeting = "Hello, world!";
    // OK
    const int pad = 1;
    // KO
    // int pad = 1;
    // OK
    // unsigned int pad = 1;
    const string::size_type cols = greeting.size() + 2 + pad * 2;
    string::size_type c = 0;
    if (c == 1 + pad)
    {;}
    return 0;
}
However, if I replace const int pad = 1; by int pad = 1;, the g++ compiler will return a warning:
warning: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (c == 1 + pad)
If I replace const int pad = 1; by unsigned int pad = 1;, the g++ compiler will not return a warning.
I understand why g++ returns the warning, but I am not sure about the three points below:
Is it safe to use an unsigned int in order to compare with a std::string::size_type? The compiler does not return a warning in that case but I am not sure if it is safe.
Why is the compiler not giving a warning with the original code const int pad = 1;? Is the compiler automatically converting the variable pad to an unsigned int?
I could also replace const int pad = 1; by string::size_type pad = 1;, but the meaning of the variable pad is not really linked to a string size in my opinion. Still, would this be the best approach in that case to avoid having different types in the comparison?
From the compiler's point of view:
It is unsafe to compare signed and unsigned variables (non-constants).
It is safe to compare 2 unsigned variables of different sizes.
It is safe to compare an unsigned variable with a signed constant if the compiler can check that the constant is in the valid (non-negative) range (e.g. for a 16-bit signed integer constant, the range [0..32767]).
So the answers to your questions:
Yes, it is safe to compare unsigned int and std::string::size_type.
There is no warning because the compiler can perform the safety check (while compiling :)).
There is no problem using different unsigned types in a comparison. Use unsigned int.
Comparing signed and unsigned values is "dangerous" in the sense that you may not get what you expect when the signed value is negative: it may well behave as a very large unsigned value, so a > b gives true when a = -1 and b = 100. (The use of const int works because the compiler knows the value isn't changing and thus can say "well, this value is always 1, so it works fine here".)
As long as the value you want to compare fits in an unsigned int (on typical machines, a little over 4 billion), you're fine.
If you are using std::string with the default allocator (which is likely), then size_type is actually size_t.
[support.types]/6 defines that size_t is
an implementation-defined unsigned integer type that is large enough to contain the size
in bytes of any object.
So it's not technically guaranteed to be an unsigned int, but I believe it is defined this way in most cases.
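For the default-allocator case, the claim can even be checked at compile time; a sketch:
#include <cstddef>
#include <string>
#include <type_traits>

// std::string uses std::allocator<char>, whose size_type is std::size_t:
static_assert(std::is_same<std::string::size_type, std::size_t>::value,
              "size_type is size_t for the default allocator");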
Now regarding your second question: if you use const int something = 2, the compiler sees that this integer is a) never negative and b) never changes, so it's always safe to compare this variable with size_t. In some cases the compiler may optimize the variable out completely and simply replace all its occurrences with 2.
I would say that it is better to use size_type wherever you refer to the size of something, since it is more descriptive.
What the compiler warns about is the comparison of unsigned and signed integer types, because the signed integer can be negative and the result is then counterintuitive: the signed value is converted to unsigned before the comparison, which means a negative number will compare greater than a positive one.
Is it safe to use an unsigned int in order to compare with a std::string::size_type? The compiler does not return a warning in that case but I am not sure if it is safe.
Yes: they are both unsigned, and then the semantics are what's expected. If their ranges differ, the narrower type is converted to the wider one.
Why is the compiler not giving a warning with the original code const int pad = 1;? Is the compiler automatically converting the variable pad to an unsigned int?
This is because of how the compiler is constructed: it parses, and to some extent optimizes, the code before warnings are issued. The important point is that at the point this warning is being considered, the compiler knows that the signed integer is 1, and that is safe to compare with an unsigned integer.
I could also replace const int pad = 1; by string::size_type pad = 1;, but the meaning of the variable pad is not really linked to a string size in my opinion. Still, would this be the best approach in that case to avoid having different types in the comparison?
If you don't want it to be constant, the best solution would probably be to make it at least an unsigned integer type. However, you should be aware that there is no guaranteed relation between the normal integer types and the size types: for example, unsigned int may be narrower than, wider than, or equal to size_t and size_type (and the latter two may also differ from each other). As an aside, C++20 offers a way out entirely, shown in the sketch below.
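If C++20 is available, the std::cmp_* helpers in <utility> compare mixed signed/unsigned operands by mathematical value, which sidesteps the conversion trap; a small sketch:
#include <iostream>
#include <utility>
#include <vector>

int main() {
    std::vector<int> v(10);
    int i = -1;
    // A plain i < v.size() would convert i to an unsigned type and wrap;
    // std::cmp_less compares the mathematical values instead:
    std::cout << std::boolalpha << std::cmp_less(i, v.size()) << '\n';  // true
}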

Is it safe to use -1 to set all bits to true?

I've seen this pattern used a lot in C & C++.
unsigned int flags = -1; // all bits are true
Is this a good portable way to accomplish this? Or is using 0xffffffff or ~0 better?
I recommend you do it exactly as you have shown, since it is the most straightforward way. Initializing to -1 will always work, independent of the actual sign representation, while ~ will sometimes have surprising behavior because you have to have the right operand type; only then do you get the highest value of an unsigned type.
For an example of a possible surprise, consider this one:
unsigned long a = ~0u;
It won't necessarily store a pattern with all bits 1 into a. But it will first create a pattern with all bits 1 in an unsigned int, and then assign it to a. What happens when unsigned long has more bits is that not all of those are 1.
And consider this one, which will fail on a non-two's complement representation:
unsigned int a = ~0; // Should have done ~0u !
The reason is that ~0 has to invert all the bits of 0. Inverting them yields -1 on a two's complement machine (which is the value we need!), but does not yield -1 on another representation. On a one's complement machine it yields zero, so the above will initialize a to zero.
The thing you should understand is that it's all about values - not bits. The variable is initialized with a value. If in the initializer you modify the bits of the variable used for initialization, the value will be generated according to those bits. The value you need, to initialize a to the highest possible value, is -1 or UINT_MAX. The second will depend on the type of a - you will need to use ULONG_MAX for an unsigned long. However, the first will not depend on its type, and it's a nice way of getting the highest value.
We are not talking about whether -1 has all bits one (it doesn't always). And we're not talking about whether ~0 has all bits one (it has, of course).
But what we are talking about is what the result of the initialized flags variable is. And for it, only -1 will work with every type and machine.
unsigned int flags = -1; is portable.
unsigned int flags = ~0; isn't portable because it relies on a two's-complement representation.
unsigned int flags = 0xffffffff; isn't portable because it assumes 32-bit ints.
If you want to set all bits in a way guaranteed by the C standard, use the first one.
Frankly, I think all fff's is more readable. As to the comment that it's an antipattern: if you really care that all the bits are set/cleared, I would argue that you are probably in a situation where you care about the size of the variable anyway, which would call for something like boost::uint16_t, etc.
A way which avoids the problems mentioned is to simply do:
unsigned int flags = 0;
flags = ~flags;
Portable and to the point.
I am not sure using an unsigned int for flags is a good idea in the first place in C++. What about bitset and the like?
std::numeric_limits<unsigned int>::max() is better, because 0xffffffff assumes that unsigned int is a 32-bit integer.
unsigned int flags = -1; // all bits are true
"Is this a good[,] portable way to accomplish this?"
Portable? Yes.
Good? Debatable, as evidenced by all the confusion shown on this thread. Being clear enough that your fellow programmers can understand the code without confusion should be one of the dimensions we measure for good code.
Also, this method is prone to compiler warnings. To suppress the warning without crippling your compiler, you'd need an explicit cast. For example,
unsigned int flags = static_cast<unsigned int>(-1);
The explicit cast requires that you pay attention to the target type. If you're paying attention to the target type, then you'll naturally avoid the pitfalls of the other approaches.
My advice would be to pay attention to the target type and make sure there are no implicit conversions. For example:
unsigned int flags1 = UINT_MAX;
unsigned int flags2 = ~static_cast<unsigned int>(0);
unsigned long flags3 = ULONG_MAX;
unsigned long flags4 = ~static_cast<unsigned long>(0);
All of which are correct and more obvious to your fellow programmers.
And with C++11: We can use auto to make any of these even simpler:
auto flags1 = UINT_MAX;
auto flags2 = ~static_cast<unsigned int>(0);
auto flags3 = ULONG_MAX;
auto flags4 = ~static_cast<unsigned long>(0);
I consider correct and obvious better than simply correct.
Converting -1 into any unsigned type is guaranteed by the standard to result in all-ones. Use of ~0U is generally bad since 0 has type unsigned int and will not fill all the bits of a larger unsigned type, unless you explicitly write something like ~0ULL. On sane systems, ~0 should be identical to -1, but since the standard allows ones-complement and sign/magnitude representations, strictly speaking it's not portable.
Of course it's always okay to write out 0xffffffff if you know you need exactly 32 bits, but -1 has the advantage that it will work in any context even when you do not know the size of the type, such as macros that work on multiple types, or if the size of the type varies by implementation. If you do know the type, another safe way to get all-ones is the limit macros UINT_MAX, ULONG_MAX, ULLONG_MAX, etc.
Personally I always use -1. It always works and you don't have to think about it.
As long as you have #include <limits.h> as one of your includes, you should just use
unsigned int flags = UINT_MAX;
If you want a long's worth of bits, you could use
unsigned long flags = ULONG_MAX;
These values are guaranteed to have all the value bits of the result set to 1, regardless of how signed integers are implemented.
Yes. As mentioned in other answers, -1 is the most portable; however, it is not very semantic and triggers compiler warnings.
To solve these issues, try this simple helper:
#include <cstdint>       // for the fixed-width types used below
#include <type_traits>   // for std::is_unsigned

static const struct All1s
{
    template <typename UnsignedType>
    inline operator UnsignedType(void) const
    {
        static_assert(std::is_unsigned<UnsignedType>::value, "This is designed only for unsigned types");
        return static_cast<UnsignedType>(-1);
    }
} ALL_BITS_TRUE;
Usage:
unsigned a = ALL_BITS_TRUE;
uint8_t b = ALL_BITS_TRUE;
uint16_t c = ALL_BITS_TRUE;
uint32_t d = ALL_BITS_TRUE;
uint64_t e = ALL_BITS_TRUE;
On Intel's 64-bit processors (IA-32e, the 64-bit extension to IA-32), writing the immediate 0xFFFFFFFF to a 64-bit register actually sets all 64 bits, which may be unexpected. This is because most 64-bit instructions only support 32-bit immediates, and 32-bit immediates are sign-extended to 64 bits.
The following value cannot be encoded as a 32-bit immediate and needs mov's dedicated 64-bit-immediate form:
mov rax, 0ffffffffffffffffh
The following puts 64 1s in RAX, via sign extension of the 32-bit immediate:
mov rax, 0ffffffffh
Just for completeness, the following puts 32 1s in the lower part of RAX (aka EAX):
mov eax, 0ffffffffh
And in fact I've had programs fail when I wanted to write 0xffffffff to a 64-bit variable and got 0xffffffffffffffff instead. In C this would be:
uint64_t x;
x = UINT64_C(0xffffffff);
printf("x is %"PRIx64"\n", x);
and the result I observed (from the sign extension at the instruction level; a conforming C compiler must print ffffffff here) was:
x is ffffffffffffffff
I thought to post this as a comment to all the answers that said that 0xFFFFFFFF assumes 32 bits, but so many people answered it I figured I'd add it as a separate answer.
See litb's answer for a very clear explanation of the issues.
My disagreement is that, very strictly speaking, there are no guarantees for either case. I don't know of any architecture that does not represent an unsigned value of 'one less than two to the power of the number of bits' as all bits set, but here is what the Standard actually says (3.9.1/7 plus note 44):
The representations of integral types shall define values by use of a pure binary numeration system. [Note 44:]A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral power of 2, except perhaps for the bit with the highest position.
That leaves the possibility for one of the bits to be anything at all.
I would not do the -1 thing. It's rather non-intuitive (to me at least). Assigning signed data to an unsigned variable just seems to be a violation of the natural order of things.
In your situation, I always use 0xFFFF. (Use the right number of Fs for the variable size of course.)
[BTW, I very rarely see the -1 trick done in real-world code.]
Additionally, if you really care about the individual bits in a variable, it would be a good idea to start using the fixed-width uint8_t, uint16_t, uint32_t types.
Although the 0xFFFF (or 0xFFFFFFFF, etc.) may be easier to read, it can break portability in code which would otherwise be portable. Consider, for example, a library routine to count how many items in a data structure have certain bits set (the exact bits being specified by the caller). The routine may be totally agnostic as to what the bits represent, but still need to have an "all bits set" constant. In such a case, -1 will be vastly better than a hex constant since it will work with any bit size.
The other possibility, if a typedef value is used for the bitmask, would be to use ~(bitMaskType)0; if bitmask happens to only be a 16-bit type, that expression will only have 16 bits set (even if 'int' would otherwise be 32 bits) but since 16 bits will be all that are required, things should be fine provided that one actually uses the appropriate type in the typecast.
Incidentally, expressions of the form longvar &= ~[hex_constant] have a nasty gotcha if the hex constant is too large to fit in an int, but will fit in an unsigned int. If an int is 16 bits, then longvar &= ~0x4000; or longvar &= ~0x10000; will clear one bit of longvar, but longvar &= ~0x8000; will clear out bit 15 and all bits above that. Values which fit in int will have the complement operator applied to a type int, but the result will be sign extended to long, setting the upper bits. Values which are too big for unsigned int will have the complement operator applied to type long. Values which are between those sizes, however, will apply the complement operator to type unsigned int, which will then be converted to type long without sign extension.
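That gotcha can be reproduced on a typical platform with 32-bit int and 64-bit long long (the widths are an assumption of this sketch):
#include <cstdio>

int main() {
    unsigned long long x = -1;  // all 64 bits set
    // 0x80000000 does not fit in a 32-bit int, so it has type unsigned int;
    // ~ is applied to unsigned int, and the 32-bit result then widens with
    // NO sign extension, silently clearing the upper 32 bits as well:
    x &= ~0x80000000;
    std::printf("%llx\n", x);   // prints 7fffffff
    x = -1;
    x &= ~0x80000000ULL;        // the suffix keeps the complement 64 bits wide
    std::printf("%llx\n", x);   // prints ffffffff7fffffff
}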
As others have mentioned, -1 is the correct way to create an integer that will convert to an unsigned type with all bits set to 1. However, the most important thing in C++ is using correct types. Therefore, the correct answer to your problem (which includes the answer to the question you asked) is this:
std::bitset<32> const flags(-1);
This will always contain the exact amount of bits you need. It constructs a std::bitset with all bits set to 1 for the same reasons mentioned in other answers.
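A brief usage sketch: the bitset can be queried directly and converted back to an integer explicitly, so nothing happens implicitly:
#include <bitset>
#include <cassert>

int main() {
    std::bitset<32> const flags(-1);
    assert(flags.all());                     // every one of the 32 bits is set
    unsigned long value = flags.to_ulong();  // 0xFFFFFFFF, explicit conversion
    (void)value;
}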
It is certainly safe, as -1 will always have all available bits set, but I like ~0 better. -1 just doesn't make much sense for an unsigned int. 0xFF... is not good because it depends on the width of the type.
Practically: Yes
Theoretically: No.
-1 = 0xFFFFFFFF (or whatever size an int is on your platform) is only true with two's complement arithmetic. In practice, it will work, but there are legacy machines out there (IBM mainframes, etc.) where you've got an actual sign bit rather than a two's complement representation. Your proposed ~0 solution should work everywhere.
I say:
#include <string.h>

int x;
memset(&x, 0xFF, sizeof(int));
This will always give you the desired result.
Leveraging the fact that assigning all bits to one for an unsigned type is equivalent to taking the maximum possible value for the given type,
and extending the scope of the question to all unsigned integer types:
Assigning -1 works for any unsigned integer type (unsigned int, uint8_t, uint16_t, etc.) for both C and C++.
As an alternative, for C++, you can either:
Include <limits> and use std::numeric_limits< your_type >::max()
Write a custom templated function (This would also allow some sanity check, i.e. if the destination type is really an unsigned type)
The purpose could be to add more clarity, as assigning -1 always needs some explanatory comment.
A way to make the meaning bit more obvious and yet to avoid repeating the type:
const auto flags = static_cast<unsigned int>(-1);
An additional effort to emphasize why Adrian McCarthy's approach here might be the best solution, since C++11 at the latest, as a compromise between standard conformity, type safety/explicit clearness, and reduction of possible ambiguities:
unsigned int flagsPreCpp11 = ~static_cast<unsigned int>(0);
auto flags = ~static_cast<unsigned int>(0); // C++11 initialization
predeclaredflags = ~static_cast<decltype(predeclaredflags)>(0); // C++11 assignment to already declared variable
I'm going to explain my preference in detail below. As Johannes quite correctly mentioned, the fundamental source of irritation here is the question of value semantics vs. bit-representation semantics, and of which types we are talking about exactly (the assigned value's type vs. the type of the possible compile-time integral constant). Since there is no standard built-in mechanism to explicitly ensure that all bits are set to 1 for the OP's concrete use case of unsigned integer values (std::bitset is a common pure bit-layer container, but the question was about unsigned integers in general), it is obviously impossible to be fully independent of value semantics here. But we might be able to reduce ambiguity.
Comparison of the 'better' standard compliant approaches:
The OP's way:
unsigned int flags = -1;
PROs:
is "established" and short
is quite intuitive from the modular-arithmetic perspective of how values map to the "natural" bit representation
changing the target unsigned type, to unsigned long for instance, is possible without any further adaptations
CONs:
At least beginners might not be sure about the standard conformity ("Do I have to worry about padding bits?").
It violates type ranges (in the heavier way: signed vs. unsigned).
The code alone does not directly convey any bit semantics.
Referring to maximum values via defines:
unsigned int flags = UINT_MAX;
This circumvents the signed-to-unsigned transition issue of the -1 approach, but introduces several new problems: when in doubt, one has to look twice here again, at the latest if you want to change the target type to unsigned long for instance. And one has to be sure that the maximum value leads to all bits set to 1 by the standard (padding bit concerns again). The bit semantics are, once more, not obvious from the code alone.
Refering to maximum values more explicitly:
auto flags = std::numeric_limits<unsigned int>::max();
In my opinion, that's the better maximum-value approach, since it's macro/define free and one is explicit about the involved type. But all other concerns about the approach itself remain.
Adrian's approach (and why I think it's the preferred one, both before C++11 and since):
unsigned int flagsPreCpp11 = ~static_cast<unsigned int>(0);
auto flagsCpp11 = ~static_cast<unsigned int>(0);
PROs:
Only the simplest integral compile-time constant is used: 0. So no worries about bit representation or (implicit) casts are justified. From an intuitive point of view, I think we can all agree that the bit representation of zero is commonly clearer than that of maximum values, and not only for unsigned integrals.
No type ambiguities are involved; no further look-ups are required when in doubt.
Explicit bit semantics are involved here via the complement ~. So it's quite clear from the code what the intention was. And it's also very explicit which type, and which type range, the complement is applied to.
CONs:
If assigned to a member for instance, there's a small chance that you mismatch types with pre-C++11 code:
Declaration in class:
unsigned long m_flags;
Initialization in constructor:
m_flags(~static_cast<unsigned int>(0))
But since C++11, the use of decltype + auto is powerful enough to prevent most of these possible issues. And some of these type-mismatch scenarios (on interface boundaries, for instance) are also possible with the -1 approach.
Robust final C++11 approach for pre-declared variables:
m_flags(~static_cast<decltype(m_flags)>(0)) // member initialization case
So, with a full view of the weighting of the PROs and CONs of all the approaches here, I recommend this one as the preferred approach, since C++11 at the latest.
Update: Thanks to a hint by Andrew Henle, I removed the statement about its readability, since that might be too subjective a statement. But I still think its readability is at least no worse than that of most of the maximum-value approaches, or of the ones providing an explicit maximum value via compile-time integrals/literals, since static_cast usage is "established" too and built in, in contrast to defines/macros and even the standard library.
Yes, the representation shown is very much correct: if we did it the other way round, we would need an operator to reverse all the bits, but in this case the logic is quite straightforward if we consider the size of the integers on the machine.
For instance, on a machine where an integer is 2 bytes = 16 bits, the maximum value it can hold is 2^16 - 1 = 65535 (with 2^16 = 65536), and:
0 % 65536 = 0
-1 % 65536 = 65535, which corresponds to 1111.............1, with all the bits set to 1 (if we consider residue classes mod 65536)
Hence it is quite straightforward.
I guess not: if you consider this notion, it is perfectly fine for unsigned ints, and it actually works out.
Just check the following program fragment:
#include <cmath>
#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    unsigned int a = 2;
    // print 2^32 = 4294967296 as a double; casting it to unsigned int
    // would be undefined, since the value does not fit
    cout << pow(double(a), double(sizeof(a) * 8));
    unsigned int b = -1;
    cout << "\n" << b;
    getchar();
    return 0;
}
The answer for b is 4294967295, which is -1 % 2^32 on 4-byte integers.
Hence it is perfectly valid for unsigned integers.
In case of any discrepancies, please report them.