What does the postfix (or suffix) U mean for the following values?
0U
100U
It stands for unsigned.
When you declare a constant, you can also specify its type. Another common example is L, which stands for long (and you put it twice, as LL, to specify a long long constant, which is at least 64 bits wide).
Example: 1ULL.
It helps in avoiding explicit casts.
Integer constants in C and C++ can optionally have several suffixes:
123u - the value 123 is an unsigned int
123l - (that's a lowercase L) 123 is a signed long
123L - ditto
123uL - unsigned long
123LL - a signed long long, at least 64 bits wide (the exact width depends on the environment)
123uLL - unsigned long long
You can read more here: https://en.cppreference.com/w/cpp/language/integer_literal
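As a quick way to see this, here is a minimal sketch (assuming a C++11 compiler) that checks the literal types at compile time:

#include <type_traits>

// Each assertion verifies the type a suffix produces; all hold in C++11.
static_assert(std::is_same<decltype(123u), unsigned int>::value, "u -> unsigned int");
static_assert(std::is_same<decltype(123l), long>::value, "l -> long");
static_assert(std::is_same<decltype(123ul), unsigned long>::value, "ul -> unsigned long");
static_assert(std::is_same<decltype(123ll), long long>::value, "ll -> long long");
static_assert(std::is_same<decltype(123ull), unsigned long long>::value, "ull -> unsigned long long");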
I am looking through some C++ code and I came across this:
if( (size & 0x03L) != 0 )
    throw MalformedBundleException( "bundle size must be multiple of four" );
What does L stand for after the hexadecimal value?
How does it alter the value 0x03?
It means Long, as in, the type of the literal 0x03L is long instead of the default int. On some platforms that will mean 64 bits instead of 32 bits, but that's entirely platform-dependent (the only guarantee is that long is not shorter than int).
This suffix sets the type of the numeric literal. L stands for long; LL stands for long long type. The number does not need to be hex - it works on decimals and octals as well.
3LL // A decimal constant 3 of type long long
03L // An octal constant 3 of type long
0x3L // A hex constant 3 of type long
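For context, here is a minimal sketch of the test from the question, with the exception replaced by a hypothetical plain message (MalformedBundleException is not defined here):

#include <cstdio>

int main() {
    long size = 10;                  // not a multiple of four
    if ((size & 0x03L) != 0)         // the low two bits are nonzero
        std::printf("bundle size must be multiple of four\n");
    return 0;
}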
It is the so-called long-suffix of integer literals, and it denotes that the type of the literal is long int. The integer literal in your example is a hexadecimal integer literal of type long int.
You may also encounter a doubled LL (or ll), which denotes the type long long int.
I am transitioning from Java to C++ and have some questions about the long data type. In Java, to hold an integer greater than 2^32, you would simply write long x;. However, in C++, it seems that long is both a data type and a modifier.
There seem to be several ways to use long:
long x;
long long x;
long int x;
long long int x;
Also, it seems there are things such as:
long double x;
and so on.
What is the difference between all of these various data types, and do they all have the same purpose?
long and long int are identical. So are long long and long long int. In both cases, the int is optional.
As to the difference between the two sets, the C++ standard mandates minimum ranges for each, and that long long is at least as wide as long.
The controlling parts of the standard (C++11, but this has been around for a long time) are, for one, 3.9.1 Fundamental types, section 2 (a later section gives similar rules for the unsigned integral types):
There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.
There's also a table 9 in 7.1.6.2 Simple type specifiers, which shows the "mappings" of the specifiers to actual types (showing that the int is optional), a section of which is shown below:
Specifier(s)      Type
-------------     -------------
long long int     long long int
long long         long long int
long int          long int
long              long int
Note the distinction there between the specifier and the type. The specifier is how you tell the compiler what the type is but you can use different specifiers to end up at the same type.
Hence long on its own is neither a type nor a modifier as your question posits, it's simply a specifier for the long int type. Ditto for long long being a specifier for the long long int type.
Although the C++ standard itself doesn't specify the minimum ranges of integral types, it does cite C99, in 1.2 Normative references, as applying. Hence the minimal ranges as set out in C99 5.2.4.2.1 Sizes of integer types <limits.h> are applicable.
In terms of long double, that's actually a floating point value rather than an integer. Similarly to the integral types, it's required to have at least as much precision as a double and to provide a superset of values over that type (meaning at least those values, not necessarily more values).
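To see what your own platform uses, here is a minimal sketch (the printed sizes vary by platform and ABI):

#include <cstdio>

int main() {
    std::printf("long:        %zu bytes\n", sizeof(long));
    std::printf("long long:   %zu bytes\n", sizeof(long long));
    std::printf("long double: %zu bytes\n", sizeof(long double));
    return 0;
}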
long and long int are at least 32 bits.
long long and long long int are at least 64 bits. You must be using a C99 compiler or better (or C++11 for C++).
long doubles are a bit odd. Look them up on Wikipedia for details.
long is equivalent to long int, just as short is equivalent to short int. A long int is a signed integral type that is at least 32 bits, while a long long or long long int is a signed integral type that is at least 64 bits.
This doesn't necessarily mean that a long long is wider than a long. Many platforms / ABIs use the LP64 model - where long (and pointers) are 64 bits wide. Win64 uses the LLP64, where long is still 32 bits, and long long (and pointers) are 64 bits wide.
There's a good summary of 64-bit data models here.
long double doesn't guarantee much other than it will be at least as wide as a double.
While in Java a long is always 64 bits, in C++ this depends on computer architecture and operating system. For example, on 64-bit systems a long is 64 bits on Linux and 32 bits on Windows (this was done to keep backwards compatibility, allowing 32-bit programs to compile on 64-bit Windows without any changes). long int is a synonym for long.
Later on, long long was introduced to mean "long (64 bits) on Windows for real this time". long long int is a synonym for this.
It is considered good C++ style to avoid short, int, long etc. and instead use:
std::int8_t   // exactly 8 bits
std::int16_t  // exactly 16 bits
std::int32_t  // exactly 32 bits
std::int64_t  // exactly 64 bits
std::size_t   // can hold all possible object sizes, used for indexing
You can use these int*_t types by including the <cstdint> header. std::size_t is in <cstddef> (in C, size_t is also available from <stdlib.h>).
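A minimal sketch using these types (assuming a C++11 compiler):

#include <cstdint>
#include <cstddef>
#include <cstdio>

int main() {
    std::int64_t big = 5000000000LL;   // guaranteed to fit: at least 64 bits
    std::size_t n = 3;                 // suitable for sizes and indexing
    std::printf("%lld %zu\n", static_cast<long long>(big), n);
    return 0;
}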
This looks confusing because you are treating long as a data type by itself.
long is just shorthand for long int when you use it alone.
long is a modifier, you can use it with double also as long double.
long == long int.
On many platforms both of them take 4 bytes, but this is platform-dependent (on LP64 systems they take 8).
Historically, in early C times, when processors had 8 or 16 bit word lengths, int was identical to today's short (16 bit). In a certain sense, int is a more abstract data type than char, short, long or long long, as you cannot be sure about its bit width.
When defining int n; you could translate this as "give me the best compromise of bit width and speed on this machine for n". Maybe in the future you should expect compilers to translate int to be 64 bit. So when you want your variable to have 32 bits and not more, better use an explicit long as the data type.
[Edit: #include <stdint.h> (or <cstdint> in C++) is the proper way to ensure bit widths using the int##_t types; it has since been standardized in C99 and C++11.]
There is no difference: long long x is equivalent to long long int x, but the second makes explicit that x is an integer.
I saw in some C++ code the keyword "unsigned" in the following form:
const int HASH_MASK = unsigned(-1) >> 1;
and later:
unsigned hash = HASH_SEED;
(it is taken from the CS106B/X reader - of Stanford - by Eric S. Roberts - on the topic of "implementation of the hash code function for strings").
Can someone please tell me what that keyword means and when I would use it?
Thanks!
Take a look: https://stackoverflow.com/a/7176690/1758762
unsigned is a modifier which can apply to any integral type (char, short, int, long, etc.) but on its own it is identical to unsigned int.
It's a short version of unsigned int. Syntactically, you can use it anywhere you would use any other datatype like float or short.
Unsigned types are types that can't represent negative numbers; only zero and positive numbers. In C++, they use modular arithmetic; the modulus for an N-bit type is 2^N. It's a good idea to use unsigned rather than signed types when messing around with bit patterns (for example, when calculating hash codes), since C++ allows several different representations of negative numbers which could lead to portability issues.
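As an illustration, here is a minimal sketch of a string hash using unsigned arithmetic (a hypothetical djb2-style loop, not the code from the reader); unsigned overflow simply wraps modulo 2^N, which is well defined:

#include <cstdio>

// Hypothetical djb2-style hash: unsigned overflow wraps, which is well defined.
unsigned hashString(const char* s) {
    unsigned h = 5381;
    while (*s)
        h = h * 33 + static_cast<unsigned char>(*s++);
    return h;
}

int main() {
    std::printf("%u\n", hashString("hello"));
    return 0;
}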
unsigned can be used as a qualifier for any integer type (e.g. unsigned int or unsigned long long); or on its own as shorthand for unsigned int.
So the first converts -1 into unsigned int. Due to modular arithmetic, this gives the largest representable value. This could also be written (more clearly, in my opinion) as std::numeric_limits<unsigned>::max().
The second declares and initialises a variable of type unsigned int.
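Putting those two lines together, a minimal sketch (the printed values assume a 32-bit int):

#include <cstdio>
#include <limits>

int main() {
    const int HASH_MASK = unsigned(-1) >> 1;   // 0x7FFFFFFF with 32-bit int
    std::printf("%d\n", HASH_MASK);            // 2147483647
    std::printf("%u\n", std::numeric_limits<unsigned>::max());   // 4294967295, i.e. unsigned(-1)
    return 0;
}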
Values are signed by default, which means they can be positive or negative. The unsigned keyword is used to specify that a value is non-negative.
Signed variables use one bit to indicate the sign of the value. The unsigned keyword effectively makes this bit part of the value (thus allowing larger positive numbers to be stored).
Lastly, unsigned hash is interpreted by compilers as unsigned int hash (int being the default type in C programming).
To get a good idea what unsigned means, one has to understand signed and unsigned integers. For a full explanation of two's complement, search Wikipedia, but in a nutshell, a computer stores a negative number -x as 2^32 - x (for a 32-bit integer). In this way, -1 is stored as 2^32 - 1. This does mean that you only have 2^31 positive numbers, but that is by the by. These are known as signed integers (as they can have a positive or negative sign).
unsigned tells the compiler that you don't want two's complement and are dealing only in non-negative numbers. When -1 is typecast (as it is in the code) to an unsigned int it becomes
2^32-1 = 0b111111111...
Thus that is an easy way of getting a whole lot of 1s in binary.
Use unsigned rarely: when you need to do bit operations, or when for some reason you need non-negative integers bigger than 2^31. Otherwise, if you leave it out, C++ assumes signed integers.
C allows chars to be signed or unsigned, depending on which is more efficient for the host computer. If you want to be sure your char is unsigned, you can declare your variable to be unsigned char. You can use signed char if you want to ensure a signed interpretation.
Incidentally, C and C++ compilers treat char, signed char, and unsigned char as three distinct types, even though char is compiled into one of the other two.
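A minimal sketch demonstrating that distinction (assuming C++11 for static_assert):

#include <type_traits>

// char is a distinct type even though its representation matches
// one of the other two on any given platform.
static_assert(!std::is_same<char, signed char>::value, "distinct from signed char");
static_assert(!std::is_same<char, unsigned char>::value, "distinct from unsigned char");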
I've run across some code like this:
line += addr & 0x3fULL;
Obviously, 'U' and 'L' are not hex digits. I'm guessing that the 'ULL' at the end of that hex numeric literal means "Unsigned Long Long" - am I correct? (This sort of thing is very difficult to google.) If so, is this some sort of suffix modifier on the number?
From the gcc manual:
ISO C99 supports data types for integers that are at least 64 bits wide (...). To make an integer constant of type long long int, add the suffix LL to the integer. To make an integer constant of type unsigned long long int, add the suffix ULL to the integer.
These suffixes have also been added to C++ in C++11, and were already supported long long (pun intended) before that as compiler extensions.
Yes that's correct.
0x prefix makes it a hexadecimal literal.
ULL suffix makes it type unsigned long long.
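For illustration, here is a minimal sketch confirming the literal's type and showing a mask like the one in the question (line and addr here are hypothetical stand-ins):

#include <type_traits>
#include <cstdio>

static_assert(std::is_same<decltype(0x3fULL), unsigned long long>::value, "");

int main() {
    unsigned long long addr = 0x12345;
    unsigned long long line = 0;
    line += addr & 0x3fULL;          // keep the low 6 bits of addr
    std::printf("%llu\n", line);     // prints 5 (0x12345 & 0x3f == 0x05)
    return 0;
}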
I'm posting a new answer because I recognize that the current answers do not cite from a cross-platform source. The C++11 standard dictates that a literal with both U/u and LL/ll suffixes is a literal of type unsigned long long int [source].
U/u is the C/C++ suffix for an unsigned integer.
LL/ll is the C/C++ suffix for a long long integer, which is a new type in C++11 and is required to be at least 64 bits wide.
Notes:
The keyword int may be omitted if any modifiers are used, unsigned long long for example. So this will define one as an unsigned long long int, and any number assigned to it will be implicitly converted to unsigned long long int: unsigned long long one = 1
C++11 marked the advent of auto, which sets the variable's type to the type of the value assigned to it on declaration. For example, because 2ULL is an unsigned long long int literal, two will be defined as an unsigned long long int: auto two = 2ULL
The U/u and size suffixes are accepted in either order, so since 3LLU is also an unsigned long long int literal, three will be defined as an unsigned long long int: auto three = 3LLU
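A minimal sketch of the notes above (assuming a C++11 compiler):

#include <type_traits>

int main() {
    unsigned long long one = 1;   // the int literal 1 is implicitly converted
    auto two = 2ULL;              // deduced as unsigned long long
    auto three = 3LLU;            // same type, with the suffixes in the other order
    static_assert(std::is_same<decltype(two), unsigned long long>::value, "");
    static_assert(std::is_same<decltype(three), unsigned long long>::value, "");
    return static_cast<int>(one + two + three - 6);   // returns 0
}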
The book said that writing:
static unsigned int foo;
and later
if( foo > 0)
{
is wrong, and that it will lead to a hard-to-find bug.
Why is that?
In x86 assembly language programming there are signed arithmetic instructions and also unsigned arithmetic instructions:
JG JL <- signed instructions
JB JA <- unsigned instructions
So the compiler can just assemble that if (foo > 0) statement with unsigned instructions, can't it? Can somebody explain how this works?
Is that statement wrong? Or is there a difference between C and C++, with C++ being stricter in this case? Please explain.
Here we are comparing an unsigned variable with an immediate value. What actually happens inside the compiler in this case?
And when we compare a signed value with an unsigned value, what happens? Which instructions will the compiler choose, signed or unsigned?
--thanks in advance--
This question should not be answered at the level of assembler but at the C/C++ language level. On most architectures it is impossible to directly compare signed and unsigned numbers, and C/C++ does not facilitate such comparisons. Instead there are rules about converting one of the operands to the type of the other in order to compare them - see for example the answers to this question.
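For illustration, here is a minimal sketch of what those conversion rules do in practice (the signed operand is converted to unsigned before the comparison):

#include <cstdio>

int main() {
    int i = -1;
    unsigned u = 1;
    // i is converted to unsigned: (unsigned)-1 is the largest unsigned value,
    // so the comparison below is false.
    if (i < u)
        std::printf("i < u\n");
    else
        std::printf("i >= u (counter-intuitive!)\n");   // this branch runs
    return 0;
}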
About comparing to literals: the typical way of doing it (as you did) is not wrong, but you can do better. According to the C++ standard:
2.13.1.1 An integer literal is a sequence of digits that has no period or exponent part. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits. An octal integer literal (base eight) begins with the digit 0 and consists of a sequence of octal digits. A hexadecimal integer literal (base sixteen) begins with 0x or 0X and consists of a sequence of hexadecimal digits, which include the decimal digits and the letters a through f and A through F with decimal values ten through fifteen. [Example: the number twelve can be written 12, 014, or 0XC.]

2.13.1.2 The type of an integer literal depends on its form, value, and suffix. If it is decimal and has no suffix, it has the first of these types in which its value can be represented: int, long int; if the value cannot be represented as a long int, the behavior is undefined. If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int. If it is suffixed by u or U, its type is the first of these types in which its value can be represented: unsigned int, unsigned long int. If it is suffixed by l or L, its type is the first of these types in which its value can be represented: long int, unsigned long int. If it is suffixed by ul, lu, uL, Lu, Ul, lU, UL, or LU, its type is unsigned long int.
If you want to be sure about your literal type (and therefore the comparison type), add the described suffixes to ensure the right type of literal.
It is also worth noting that the literal 0 is actually not decimal but octal - it doesn't seem to change anything, but it is quite unexpected - or am I wrong?
To summarize: it is not wrong to write code like that, but you should remember that in certain conditions it might behave counter-intuitively (or at least counter-mathematically ;)