Use of long long int in 2D Array [duplicate] - c++

Is LL defined anywhere in the standard (hard term to come by)?
ideone accepts the code
#include <iostream>

int main()
{
    std::cout << sizeof(0LL) << std::endl;
    std::cout << sizeof(0);
}
and prints
8
4
But what does it mean?

It is specified in Paragraph 2.14.2 of the C++11 Standard:
2.14.2 Integer literals
[...]
long-long-suffix: one of
ll LL
Paragraph 2.14.2/2, and in particular Table 6, goes on specifying the meaning of the suffix for decimal, octal, and hexadecimal constants, and the types they are given.
Since 0 is an octal literal, the type of 0LL is long long int:
#include <type_traits>
int main()
{
// Won't fire
static_assert(std::is_same<decltype(0LL), long long int>::value, "Ouch!");
}

LL is the suffix for long long, which is at least 64 bits and is exactly 64 bits on mainstream C/C++ implementations. So 0LL is a long long literal with the value 0.
This is similar to L being the suffix for a long literal. On 32-bit implementations and on 64-bit Windows, long is the same size as int (32 bits), while on 64-bit Linux and macOS it is 64 bits. (On 16-bit implementations, int is usually 16 bits, so the L suffix indicates a 32-bit literal in contrast to the 16-bit default.)
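As a quick check of the sizes involved, here is a minimal sketch; the printed sizes are implementation-dependent (64-bit Linux/macOS typically prints 4, 8, 8, while 64-bit Windows typically prints 4, 4, 8):
#include <iostream>

int main()
{
    // Sizes are implementation-defined; only sizeof(int) <= sizeof(long) <= sizeof(long long)
    // and the minimum ranges are guaranteed.
    std::cout << sizeof(0)   << '\n';  // plain int literal
    std::cout << sizeof(0L)  << '\n';  // long literal
    std::cout << sizeof(0LL) << '\n';  // long long literal
}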

0LL is an integer literal. Its suffix is LL, which determines the possible set of types it may have. For a decimal constant the type will be long long int; for an octal or hexadecimal constant it will be long long int, or unsigned long long int if the value does not fit. In the case of 0LL, the literal is of type long long int.
The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.
Table 6 - Types of integer constants
Suffix      Decimal constants     Octal or hexadecimal constants
...
ll or LL    long long int         long long int
                                  unsigned long long int
...
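To illustrate the right-hand column, here is a small sketch, assuming a 64-bit long long as on mainstream platforms: a hexadecimal constant whose value does not fit in long long int falls through to unsigned long long int.
#include <type_traits>

int main()
{
    // 0xFFFFFFFFFFFFFFFF is 2^64 - 1. Assuming a 64-bit long long, the value does
    // not fit in long long int, so the hexadecimal column of Table 6 falls through
    // to unsigned long long int.
    static_assert(std::is_same<decltype(0xFFFFFFFFFFFFFFFFLL),
                               unsigned long long int>::value,
                  "hex constant too large for long long becomes unsigned");
}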

We will begin with an example:
std::cout << 2LL << std::endl;
The output is 2, but the literal 2LL has type long long rather than int: the suffix tells the compiler to treat the constant 2 as a long long. This matters when the value, or the arithmetic performed on it, needs the wider type.
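One common reason to write the suffix explicitly is to control the type in which arithmetic is performed. A minimal sketch, assuming the usual 32-bit int:
#include <iostream>

int main()
{
    // long long bad = 100000 * 100000;  // evaluated in int; overflows a 32-bit int
    //                                   // (signed overflow is undefined behaviour)
    long long good = 100000LL * 100000;  // evaluated in long long thanks to the suffix
    std::cout << good << std::endl;      // 10000000000
}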
Other suffixes are (from GeeksforGeeks):
unsigned int: character u or U at the end of integer constant.
long int: character l or L at the end of integer constant.
unsigned long int: character ul or UL at the end of integer constant.
long long int: character ll or LL at the end of integer constant.
unsigned long long int: character ull or ULL at the end of integer constant.
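A small sketch verifying that mapping with decltype; the checks hold on any conforming compiler, because the value 1 fits in the first type of every list:
#include <type_traits>

int main()
{
    static_assert(std::is_same<decltype(1U),   unsigned int>::value, "u/U");
    static_assert(std::is_same<decltype(1L),   long int>::value, "l/L");
    static_assert(std::is_same<decltype(1UL),  unsigned long int>::value, "ul/UL");
    static_assert(std::is_same<decltype(1LL),  long long int>::value, "ll/LL");
    static_assert(std::is_same<decltype(1ULL), unsigned long long int>::value, "ull/ULL");
}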

Related

Why does the base of a literal affect its type?

The decimal number 4294967295 is equal to hexadecimal 0xFFFFFFFF, so I would expect a literal to have the same type regardless of what base it is expressed in, yet
std::is_same<decltype(0xFFFFFFFF), decltype(4294967295)>::value; //evaluates false
It appears that on my compiler decltype(0xFFFFFFFF) is unsigned int, while decltype(4294967295) is signed long.
The types of hexadecimal and decimal literals are determined differently; see [lex.icon], Table 7:
The type of an integer literal is the first of the corresponding list in Table 7 in which its value can be represented.
When a decimal literal has no suffix, the types listed are, in order:
int
long int
long long int
For a hexadecimal (or octal) literal the list, in order, is:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
Why does this difference exist? Since C has the same rule, we can look at the C99 Rationale document, which says:
Unlike decimal constants, octal and hexadecimal constants too large to be ints are typed as
unsigned int if within range of that type, since it is more likely that they represent bit
patterns or masks, which are generally best treated as unsigned, rather than “real” numbers.
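A small sketch of the difference, assuming the common case of a 32-bit int and a 64-bit long (on other data models the exact types differ, but the hexadecimal literal still becomes unsigned first):
#include <iostream>
#include <type_traits>

int main()
{
    // The decimal literal skips the unsigned types and becomes long int, so it stays signed...
    std::cout << std::is_signed<decltype(4294967295)>::value << '\n';  // 1
    // ...while the hexadecimal literal may use unsigned int and stays 32 bits wide.
    std::cout << std::is_signed<decltype(0xFFFFFFFF)>::value << '\n';  // 0
}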

C++ Primer paragraph on Integer literals, need someone to clarify some points

I'm currently working through C++ Primer (5th Edition), and I'm struggling trying to figure out what the author means in this part on literals (Chapter 2, section 2.1.3):
... By default, decimal literals are signed whereas octal and hexadecimal literals can be either signed or unsigned types. A decimal literal has the smallest type of int, long, or long long (i.e., the first type in this list) in which the literal’s value fits. Octal and hexadecimal literals have the smallest type of int, unsigned int, long, unsigned long, long long, or unsigned long long in which the literal’s value fits. It is an error to use a literal that is too large to fit in the largest related type...
In the first sentence, does the author mean that decimal literals are signed according to the C++ standard, and for octal and hexadecimal literals it depends on the compiler?
The next three sentences really confuse me though, so if someone could offer an alternative explanation, it would be greatly appreciated.
When the compiler sees an integer literal, for example a decimal integer literal, it has to determine its type. A literal can appear in an expression, and the compiler needs to determine the type of the expression from the types of its operands.
So for decimal integer literals the compiler selects between the following types
int
long int
long long int
and chooses the first type that can accommodate the decimal literal.
It does not consider unsigned integer types such as unsigned int or unsigned long int, even though they could accommodate the given literal.
The situation is different when the compiler deals with octal or hexadecimal integer literals. In this case it considers the following types in the given order
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
To make the idea clearer, consider an artificial example. Suppose you have the value 127. This value can be stored in the type signed char. Now what about the value 128? It cannot be stored in an object of type signed char, because the maximum positive value of signed char is 127.
What to do? We could store 128 in an object of type unsigned char, because its maximum value is 255. However, the compiler would prefer to store it in an object of type signed short.
But if the same value were written as 0x80, the compiler would select the type unsigned char.
This is, of course, an imaginary process.
In reality a similar algorithm is used, except that the compiler only considers integer types starting from int when determining the type of a decimal literal.
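The same effect can be shown with the real algorithm. A hedged sketch, assuming a 32-bit int and a 64-bit long (on a platform with a 32-bit long the decimal literal would be long long int instead):
#include <type_traits>

int main()
{
    // 2147483648 is INT_MAX + 1 on a 32-bit int: the decimal literal skips
    // unsigned int and takes the next signed type.
    static_assert(std::is_same<decltype(2147483648), long int>::value,
                  "decimal literal stays signed");
    // The same value in hex fits in unsigned int, which is on the hex list.
    static_assert(std::is_same<decltype(0x80000000), unsigned int>::value,
                  "hex literal may become unsigned");
}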
Decimal (meaning base-10) literals are those that have no prefix. The author is saying that these are always signed.
5 // signed int (decimal)
12 // signed int (decimal)
They can also be made signed or unsigned explicitly by providing a suffix. Here's a full reference for integer literal syntax.
5 // signed int
7U // unsigned int
7UL // unsigned long
Hex (base-16) values are prefixed with 0x.
0x05 // int (hex)
Similarly octal (base-8) values are prefixed with 0.
05 // int (octal)
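A tiny sketch showing how each prefix changes the base in which the same value is written:
#include <iostream>

int main()
{
    std::cout << 10   << '\n';  // decimal: prints 10
    std::cout << 010  << '\n';  // octal: prints 8
    std::cout << 0x10 << '\n';  // hexadecimal: prints 16
}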
To append to Cory's answer:
The relevant diagram in the link states
Types allowed for integer literals
No suffix, regular decimal
int, long int, long long int (since C++11)
So the decimal number 78625723 is represented by a signed type.
No suffix, hexadecimal or octal bases
int, unsigned int, long int, unsigned long int, long long int (since C++11), unsigned long long int (since C++11)
So the 0x hex number 0x78625723 might be represented by either a signed or an unsigned type.
The place this is relevant is when you have literal values that are just a little too big to fit in a signed type, but do fit in the corresponding unsigned type. For example, on a machine with 16-bit int and 32-bit long (rare these days, but the minimum allowed by the spec), the constant literal 0xffff will be an unsigned int, while the literal 65535 (same value) will be a long.
Of course, you can force the latter to be an unsigned by using a U suffix; this part of the spec is only relevant for literals with no suffix.

what does L represent in "<any hex number>L"

I am looking through some c++ code and I came across this:
if( (size & 0x03L) != 0 )
throw MalformedBundleException( "bundle size must be multiple of four" );
what does L stand for after the hexadecimal value ?
how does it alter the value 0x03 ?
It means Long, as in, the type of the literal 0x03L is long instead of the default int. On some platforms that will mean 64 bits instead of 32 bits, but that's entirely platform-dependent (the only guarantee is that long is not shorter than int).
This suffix sets the type of the numeric literal. L stands for long; LL stands for long long type. The number does not need to be hex - it works on decimals and octals as well.
3LL // A decimal constant 3 of type long long
03L // An octal constant 3 of type long
0x3L // A hex constant 3 of type long
It is the so-called long-suffix for integer literals and denotes that the type of the literal is long int. The integer literal in your example is a hexadecimal integer literal of type long int.
You may also see LL (or ll), which denotes the type long long int.
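A short sketch showing that the suffix changes only the type and not the value; the size variable here is a hypothetical stand-in for the one in the question:
#include <cstddef>
#include <iostream>
#include <type_traits>

int main()
{
    // The suffix changes only the type, not the value.
    static_assert(0x03L == 0x03, "same value");
    static_assert(std::is_same<decltype(0x03L), long int>::value, "but typed long");

    std::size_t size = 10;  // hypothetical stand-in for the bundle size from the question
    if ((size & 0x03L) != 0)
        std::cout << "bundle size must be a multiple of four\n";
}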

On type of a literal, unsigned negative number

The C++ Primer says:
We can independently specify the signedness and the size of an integral
literal. If the suffix contains a U, then the literal has an unsigned
type, so a decimal, octal or hexadecimal literal with a U suffix has
the smallest type of unsigned int, unsigned long or unsigned long long
in which the literal's value fits
When one declares
int i = -12U;
The way I understand it is that -12 is converted to the unsigned version of itself (4294967284) and is then assigned to an int, making the result a very large positive number due to wrap-around.
This does not seem to happen. What am I missing?
std::cout << i << std::endl; // prints -12
You are assigning the unsigned int back to a signed int, so it gets converted again.
It's like you did this:
int i = (int)(unsigned int)(-12);
u effectively binds more tightly than -. You are getting -(12u).
12 has type int and the value 12.
12U has type unsigned int and the value 12.
-12U has type unsigned int and the value std::numeric_limits<unsigned int>::max() + 1 - 12.
int i = -12U; applies an implementation-defined conversion to convert -12U to type int.
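A minimal sketch of what the answers describe, assuming a 32-bit unsigned int so that -12U has the value 4294967284:
#include <iostream>

int main()
{
    unsigned int u = -12U;  // unsigned wrap-around: 4294967284 with a 32-bit unsigned int
    int i = -12U;           // converted back to a signed int; in practice this yields -12
                            // (implementation-defined before C++20, modular since C++20)
    std::cout << u << '\n'; // 4294967284
    std::cout << i << '\n'; // -12
}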