Why is the result of sizeof implementation defined? [closed] - c++

In C99, §6.5.3.4:
2 The sizeof operator yields the size (in bytes) of its operand,
which may be an expression or the parenthesized name of a type. ...
4 The value of the result is implementation-defined, and its type (an
unsigned integer type) is size_t, defined in <stddef.h> (and other
headers).
In C++14, §5.3.3:
1 The sizeof operator yields the number of bytes in the object
representation of its operand. ... The result of sizeof applied to any
other fundamental type (3.9.1) is implementation-defined.
The only guaranteed values are sizeof(char), sizeof(unsigned char), and sizeof(signed char), each of which is one.
However, "the number of bytes in the object representation" seems pretty iron-clad to me. For example, in C99 §6.2.6.1:
4 Values stored in non-bit-field objects of any other object type
consist of n × CHAR_BIT bits, where n is the size of an object
of that type, in bytes. ...
So why is it implementation-defined if it seems pretty defined?
Many of you seem to be misinterpreting my question. I never claimed that:
A) the sizes of types are defined or the same on all systems, or
B) implementation-defined means it can return "random values".
What I'm getting at here is that n * CHAR_BIT is a fixed formula. The formula itself can't change between implementations. Yes, an int may be 4 bytes or 8 bytes. I get that. But across all implementations, the value must be n * CHAR_BIT.

The result of sizeof is implementation-defined because the sizes of the various basic types are implementation-defined. The only guarantees we have on the sizes of the types in C++ are that
sizeof(char) == 1 and sizeof(char) <= sizeof(short) <= sizeof(int) <=
sizeof(long) <= sizeof(long long)
and that each type has a minimum range it must support, per C11 [Annex E (informative) Implementation limits]/1:
[...]The minimum magnitudes shown shall be replaced by implementation-defined magnitudes with the same sign.[...]
#define CHAR_BIT 8
#define CHAR_MAX UCHAR_MAX or SCHAR_MAX
#define CHAR_MIN 0 or SCHAR_MIN
#define INT_MAX +32767
#define INT_MIN -32767
#define LONG_MAX +2147483647
#define LONG_MIN -2147483647
#define LLONG_MAX +9223372036854775807
#define LLONG_MIN -9223372036854775807
#define MB_LEN_MAX 1
#define SCHAR_MAX +127
#define SCHAR_MIN -127
#define SHRT_MAX +32767
#define SHRT_MIN -32767
#define UCHAR_MAX 255
#define USHRT_MAX 65535
#define UINT_MAX 65535
#define ULONG_MAX 4294967295
#define ULLONG_MAX 18446744073709551615
So per the standard, an int has to be able to store a number that could be stored in 16 bits, but it can be bigger, and on most of today's systems it is 32 bits.
What I'm getting at here is that n * CHAR_BIT is a fixed formula. The formula itself can't change between implementations. Yes, an int may be 4 bytes or 8 bytes. I get that. But across all implementations, the value must be n * CHAR_BIT.
You are correct but n is defined per C99 §6.2.6.1 as
where n is the size of an object of that type
emphasis mine
So the formula may be fixed, but n is not fixed, and different implementations on the same system can use a different value of n.
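For illustration only, here is a small sketch (not part of the original answer) that prints both n and n * CHAR_BIT for a couple of types; the formula is the same everywhere, but the printed values depend on the implementation:
#include <climits>
#include <cstdio>

int main() {
    // n is whatever this implementation chose; n * CHAR_BIT is then the bit count.
    std::printf("sizeof(int)  = %zu bytes = %zu bits\n", sizeof(int),  sizeof(int)  * CHAR_BIT);
    std::printf("sizeof(long) = %zu bytes = %zu bits\n", sizeof(long), sizeof(long) * CHAR_BIT);
    // A common 64-bit Linux result is 4/32 and 8/64; other implementations may differ.
    return 0;
}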

The result of sizeof is not implementation defined. The standard does not say that; it says:
The value of the result is implementation-defined, [...]
That is semantically different. The result of sizeof is well defined:
[...] the size (in bytes) of its operand [...]
Both the bit width of a byte in this context and the number of bytes in non-char types are implementation-defined.

Because the sizes of basic types are defined in terms of efficiency, not in terms of an exact number of bits. An "int" must be something that the CPU can manipulate efficiently. For most modern systems, this quantity turns out to be 32 bits (or 64 bits). For older systems, it was quite often 16 bits. However, if a 35-bit CPU were to exist, an int on such a system would be 35 bits. In other words, C++ does not impose a penalty by enforcing a bit width that the CPU might not support at all.
Of course, one could argue that notions of exotic bit widths for basic types have been overtaken by history. I cannot think of any modern CPU that does not support the standard set of 8, 16, and 32 bits (feel free to disagree with this statement, but at least be so kind to give an example!), and 64 bits is also pretty common (and not a big deal to support in software if hardware support is unavailable).
Arguably the C++ language has already moved away from having variable numbers of bits for char; as far as I know, u8"..." converts to char *, but the Unicode specification demands that UTF-8 code units are 8 bits.
If a char of 8 bits is size 1, then an int of 32 bits is size 4. If a char of 16 bits is size 1, then an int of 32 bits is only size 2. Both situations are equally valid in C++, if such sizes happen to be good choices for their respective hardware.
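To make that concrete, here is a minimal sketch (the helper name bytes_for_bits is mine, not standard) that computes how many C++ bytes a given bit width occupies under the current implementation's CHAR_BIT:
#include <climits>
#include <cstddef>

// Round up: how many bytes of CHAR_BIT bits are needed to hold `bits` bits?
constexpr std::size_t bytes_for_bits(std::size_t bits) {
    return (bits + CHAR_BIT - 1) / CHAR_BIT;
}

static_assert(bytes_for_bits(CHAR_BIT) == 1, "one byte holds exactly CHAR_BIT bits");
// With CHAR_BIT == 8, bytes_for_bits(32) is 4; with a 16-bit char it would be 2.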

Padding bits are "unspecified" not "implementation-defined".
Wrong. Very, very wrong. The values of padding bytes are unspecified. The intention here is that the values of these bits may represent trap values, but not necessarily.
The standard tells you that an object occupies sizeof * CHAR_BIT bits, but it doesn't specify a size (other than for the exact-width types). The number of bytes a type occupies is implementation-defined, hence sizeof must be as well.
Implementation-defined is described as:
unspecified value where each implementation documents how the choice
is made

When you declare a new variable like this:
size_t a;
it declares an unsigned integer variable whose exact type is implementation-defined; on many systems size_t is unsigned long or unsigned long long, not necessarily unsigned short:
unsigned long a; // for example; the actual underlying type varies by implementation
On typical 32-bit computers the size of an int is 4 bytes and the size of a short int is 2 bytes.
In the C programming language, size_t is the type of the result of the sizeof operator. When you use the sizeof operator it gives you the size of its operand, which may be an expression or the parenthesized name of a type. The size of an object cannot be a negative number and it is always an integer.

Related

Does sizeof(T) * CHAR_BIT guarantee bit size?

There doesn't appear to be any library function for calculating the size of a type in bits.
Am I right to assume that this can be done in the following way?
#include <climits>
#include <cstddef>

template <typename T>
std::size_t Size_In_Bits() {
    return sizeof(T) * CHAR_BIT;
}
Will this always give back the amount of bits that can be targeted on a type?
This is guaranteed to give you the size (storage) in bits, but not the width (number of value bits). The latter could be less if the type has padding bits. For unsigned types you can measure the number of value bits directly by converting -1 to the type (to get the max possible value in the type) and counting them. For signed types, std::numeric_limits<T>::max() can be used to get the max. Or, if you know the specific type already, you can use the xxx_MAX macros from limits.h or stdint.h.
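A hedged sketch of the trick described above for unsigned types (the function name is mine): convert -1 to the type to get its maximum value, then count how many bits that maximum spans.
#include <climits>
#include <cstddef>

// Number of value bits (the width) of an unsigned integer type T.
template <typename T>
std::size_t value_bits() {
    std::size_t n = 0;
    for (T max = static_cast<T>(-1); max != 0; max >>= 1) {
        ++n;
    }
    return n;
}

// If value_bits<T>() is less than sizeof(T) * CHAR_BIT, the difference is padding bits.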
sizeof(T) * CHAR_BIT returns the number of bits the type takes up in memory.
Yet the storage size in bits may be more than the bits the integer can mathematically use (consider padding bits).
Detail: integers have value bits, sign bit (signed integers) and possible padding bits. All these bits contribute to the storage size.
unsigned char will never have padding bits.

How is 256 stored in a char variable and an unsigned char?

Up to 255, I can understand how the integers are stored in char and unsigned char.
#include <stdio.h>

int main()
{
    unsigned char a = 256;
    printf("%d\n", a);
    return 0;
}
In the code above I have an output of 0 for unsigned char as well as char.
For 256, I think this is the way the integer is stored in the code (this is just a guess):
First 256 is converted to its binary representation, which is 100000000 (9 bits in total).
Then the leftmost bit (the bit which is set) is removed, because the char datatype only has 8 bits of memory.
So it is stored in memory as 00000000, and that's why it prints 0 as output.
Is the guess correct or any other explanation is there?
Your guess is correct. Conversion to an unsigned type uses modular arithmetic: if the value is out of range (either too large, or negative) then it is reduced modulo 2N, where N is the number of bits in the target type. So, if (as is often the case) char has 8 bits, the value is reduced modulo 256, so that 256 becomes zero.
Note that there is no such rule for conversion to a signed type - out-of-range values give implementation-defined results. Also note that char is not specified to have exactly 8 bits, and can be larger on less mainstream platforms.
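A short sketch of that reduction, assuming the common case of CHAR_BIT == 8 (so unsigned char holds 0 to 255):
#include <cstdio>

int main() {
    unsigned char a = 256;  // 256 mod 256 == 0
    unsigned char b = 300;  // 300 mod 256 == 44
    unsigned char c = -1;   // -1 wraps to UCHAR_MAX, i.e. 255 here
    std::printf("%d %d %d\n", a, b, c);  // prints "0 44 255" on an 8-bit-char platform
    return 0;
}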
On your platform (as well as on any other "normal" platform) unsigned char is 8 bit wide, so it can hold numbers from 0 to 255.
Trying to assign 256 (which is an int literal) to it results in an unsigned integer overflow, which is defined by the standard to result in "wraparound". The result of u = n, where u is an unsigned integral type and n is an unsigned integer outside its range, is u = n % (max_value_of_u + 1).
This is just a convoluted way to say what you already said: the standard guarantees that in these cases the assignment is performed keeping only the bits that fit in the target variable. This norm is there since most platforms already implement this at the assembly language level (unsigned integer overflow typically results in this behavior plus some kind of overflow flag set to 1).
Notice that all this does not hold for signed integers (as plain char often is): signed integer overflow is undefined behavior.
Yes, that's correct. 8 bits can hold 0 to 255 unsigned, or -128 to 127 signed. Above that you've hit an overflow situation and bits will be lost.
Does the compiler give you warning on the above code? You might be able to increase the warning level and see something. It won't warn you if you assign a variable that can't be determined statically (before execution), but in this case it's pretty clear you're assigning something too large for the size of the variable.

size guarantee for integral/arithmetic types in C and C++

I know that the C++ standard explicitly guarantees the size of only char, signed char and unsigned char. Also it gives guarantees that, say, short is at least as big as char, int as big as short etc. But no explicit guarantees about absolute value of, say, sizeof(int). This was the info in my head and I lived happily with it. Some time ago, however, I came across a comment in SO (can't find it) that in C long is guaranteed to be at least 4 bytes, and that requirement is "inherited" by C++. Is that the case? If so, what other implicit guarantees do we have for the sizes of arithmetic types in C++? Please note that I am absolutely not interested in practical guarantees across different platforms in this question, just theoretical ones.
18.2.2 guarantees that <climits> has the same contents as the C library header <limits.h>.
The ISO C90 standard is tricky to get hold of, which is a shame considering that C++ relies on it, but the section "Numerical limits" (numbered 2.2.4.2 in a random draft I tracked down on one occasion and have lying around) gives minimum values for the INT_MAX etc. constants in <limits.h>. For example ULONG_MAX must be at least 4294967295, from which we deduce that the width of long is at least 32 bits.
There are similar restrictions in the C99 standard, but of course those aren't the ones referenced by C++03.
This does not guarantee that long is at least 4 bytes, since in C and C++ "byte" is basically defined to mean "char", and it is not guaranteed that CHAR_BIT is 8 in C or C++. CHAR_BIT == 8 is guaranteed by both POSIX and Windows.
Don't know about C++. In C you have
Annex E
(informative)
Implementation limits
[#1] The contents of the header <limits.h> are given below, in alphabetical order. The minimum magnitudes shown shall be replaced by implementation-defined magnitudes with the same sign. The values shall all be constant expressions suitable for use in #if preprocessing directives. The components are described further in 5.2.4.2.1.
#define CHAR_BIT 8
#define CHAR_MAX UCHAR_MAX or SCHAR_MAX
#define CHAR_MIN 0 or SCHAR_MIN
#define INT_MAX +32767
#define INT_MIN -32767
#define LONG_MAX +2147483647
#define LONG_MIN -2147483647
#define LLONG_MAX +9223372036854775807
#define LLONG_MIN -9223372036854775807
#define MB_LEN_MAX 1
#define SCHAR_MAX +127
#define SCHAR_MIN -127
#define SHRT_MAX +32767
#define SHRT_MIN -32767
#define UCHAR_MAX 255
#define USHRT_MAX 65535
#define UINT_MAX 65535
#define ULONG_MAX 4294967295
#define ULLONG_MAX 18446744073709551615
So char <= short <= int <= long <= long long
and
CHAR_BIT * sizeof (char) >= 8
CHAR_BIT * sizeof (short) >= 16
CHAR_BIT * sizeof (int) >= 16
CHAR_BIT * sizeof (long) >= 32
CHAR_BIT * sizeof (long long) >= 64
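Those minimums can be checked at compile time; here is a sketch (using C++11 static_assert) that should pass on every conforming implementation:
#include <climits>

static_assert(CHAR_BIT >= 8, "char has at least 8 bits");
static_assert(sizeof(short) * CHAR_BIT >= 16, "short has at least 16 bits");
static_assert(sizeof(int) * CHAR_BIT >= 16, "int has at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long has at least 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long has at least 64 bits");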
Yes, C++ type sizes are inherited from C89.
I can't find the specification right now. But it's in the Bible.
Be aware that the guaranteed ranges of these types are one smaller on the negative side than on most two's-complement machines:
signed char: -127 ... +127 guaranteed, but most two's-complement machines have -128 ... +127
Likewise for the larger types.
There are several inaccuracies in what you read. These inaccuracies were either present in the source, or maybe you remembered it all incorrectly.
Firstly, a pedantic remark about one peculiar difference between C and C++. The C language does not make any guarantees about the relative sizes of integer types (in bytes). The C language only makes guarantees about their relative ranges. It is true that the range of int is always at least as large as the range of short, and so on. However, it is formally allowed by the C standard to have sizeof(short) > sizeof(int). In such a case the extra bits in short would serve as padding bits, not used for value representation. Obviously, this is something that is merely allowed by the legal language in the standard, not something anyone is likely to encounter in practice.
In C++, on the other hand, the language specification makes guarantees about both the relative ranges and the relative sizes of the types, so in C++, in addition to the above range relationship inherited from C, it is guaranteed that sizeof(int) is greater than or equal to sizeof(short).
Secondly, the C language standard guarantees minimum range for each integer type (these guarantees are present in both C and C++). Knowing the minimum range for the given type, you can always say how many value-forming bits this type is required to have (as minimum number of bits). For example, it is true that type long is required to have at least 32 value-forming bits in order to satisfy its range requirements. If you want to recalculate that into bytes, it will depend on what you understand under the term byte. If you are talking specifically about 8-bit bytes, then indeed type long will always consist of at least four 8-bit bytes. However, that does not mean that sizeof(long) is always at least 4, since in C/C++ terminology the term byte refers to char objects. char objects are not limited to 8-bits. It is quite possible to have 32-bit char type in some implementation, meaning that sizeof(long) in C/C++ bytes can legally be 1, for example.
The C standard does not explicitly say that long has to be at least 4 bytes, but it does specify a minimum range for the different integral types, which implies a minimum size.
For example, the minimum range of an unsigned long is 0 to 4,294,967,295. You need at least 32 bits to represent every single number in that range. So yes, the standard guarantees (indirectly) that a long is at least 32 bits.
C++ inherits the data types from C, so you have to go look at the C standard. The C++ standard actually references parts of the C standard in this case.
Just be careful about the fact that some machines have chars that are more than 8 bits. For example, IIRC on the TI C5x, a long is 32 bits, but sizeof(long)==2 because chars, shorts and ints are all 16 bits with sizeof(char)==1.

C++: What is the default length of an int?

I've been searching for a while but couldn't find a definite answer to this apparently simple question: what is the default length of an int?
I know that by default, an int is signed. But is it short or long?
According to the "Fundamental data types" table found on the following page, an int is a long int by default (4 bytes).
http://www.cplusplus.com/doc/tutorial/variables/
Is it always true, or does this depend on the OS (32bit/64bit), the compiler or other things?
It depends on the compiler implementor. An int is supposed to be the best "native" length for the platform. Best native here typically refers to whichever size is most handy/efficient/fast for the targeted processor to work with. Often you can expect int to have the same size as the processor's (integer) registers.
As others have pointed out, there are certain relationships between the various integer types' sizes that the compiler must adhere to, so the implementor is not free to choose just anything. For instance, int can't be larger than long, and so on.
You often talk about programming models in relation to issues like these, e.g. a compiler can choose to make the various types different sizes depending on the chosen model.
The standard requires only:
a range of at least ±32767 (i.e., at least 16 bits)
int is no shorter than short and no longer than long. It may be equal in size to one of them, or neither.
The exact size of integer types depends on the compiler. The de facto standard is
char is 8 bits
short is 16 bits
int is 16 bits on 16-bit systems, and 32 bits on both 32- and 64-bit systems
long may be either 32 or 64 bits
It depends on the architecture, that is the microprocessor/microcontroller you're compiling the code for (x86, ARM, PIC, Z80, 8051 etc.) and on the compiler, that is how the compiler implements the fundamental/built in data types.
You are guaranteed that a short int is at least 16 bits, and that a long int is at least 32 bits, and that plain int will be no smaller than a short nor larger than a long. But the actual sizes will be decided by the compiler implementor.
The C++ Standard says it like this:
3.9.1, §2:
There are five signed integer types: "signed char", "short int", "int", "long int", and "long long int". In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment (44); the other signed integer types are provided to meet special needs.
(44) that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.
The conclusion: it depends on which architecture you're working on. Any other assumption is false.
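If you want to see what your implementation actually chose, a small sketch using std::numeric_limits reports it portably:
#include <iostream>
#include <limits>

int main() {
    std::cout << "int value bits: " << std::numeric_limits<int>::digits << '\n'
              << "int min: " << std::numeric_limits<int>::min() << '\n'
              << "int max: " << std::numeric_limits<int>::max() << '\n';
    // A 32-bit two's-complement int reports 31, -2147483648, 2147483647;
    // a 16-bit int would report 15, -32768, 32767 instead.
    return 0;
}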
§4.4 from "The C++ Programming Language" by Bjarne Stroustrup:
Like char, each integer type comes in three forms: "plain" int, signed int, and unsigned int. In addition, integers come in three sizes: short int, "plain" int, and long int. A long int can be referred to as plain long. Similarly, short is a synonym for short int, unsigned for unsigned int, and signed for signed int.
The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules (§C.6.1, §C.6.2.1). Unlike plain chars, plain ints are always signed. The signed int types are simply more explicit synonyms for their plain int counterparts.
Section 4.6 of the same book states
Sizes of C++ objects are expressed in terms of multiples of the size of a char, so by definition the size of a char is 1. The size of an object or type can be obtained using the sizeof operator (§6.2). This is what is guaranteed about sizes of fundamental types:
1 <= sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
1 <= sizeof(bool) <= sizeof(long)
sizeof(char) <= sizeof(wchar_t) <= sizeof(long)
sizeof(float) <= sizeof(double) <= sizeof(long double)
sizeof(N) <= sizeof(signed N) <= sizeof(unsigned N)
where N can be char, short int, int, or long int. In addition, it is guaranteed that a char has at least 8 bits, a short at least 16 bits, and a long at least 32 bits. A char can hold a character of the machine's character set.
This clearly indicates that sizeof(int) is implementation-defined, but int is guaranteed to be at least 16 bits (a short is at least 16 bits and sizeof(short) <= sizeof(int)).
C++03 §3.9.1/3:
"For each of the signed integer types, there exists a corresponding (but different) unsigned integer type: "unsigned char", "unsigned short int", "unsigned int", and "unsigned long int", each of which occupies the same amount of storage and has the same alignment requirements (3.9) as the corresponding signed integer type (40); that is, each signed integer type has the same object representation as its corresponding unsigned integer type."

Relation between word length, character size, integer size and byte

What is the relation between word length, character size, integer size, and byte in C++?
The standard requires that certain types have minimum sizes (short is at least 16 bits, int is at least 16 bits, etc), and that some groups of type are ordered (sizeof(int) >= sizeof(short) >= sizeof(char)).
In C++ a char must be large enough to hold any character in the implementation's basic character set.
int has the "natural size suggested by the architecture of the execution environment". Note that this means that an int does not need to be at least 32 bits in size. Implementations where int is 16 bits are common (think embedded or MS-DOS).
The following are taken from various parts of the C++98 and C99 standards:
long int has to be at least as large as int
int has to be at least as large as short
short has to be at least as large as char
Note that they could all be the same size.
Also (assuming a two's complement implementation):
long int has to be at least 32-bits
int has to be at least 16-bits
short has to be at least 16-bits
char has to be at least 8 bits
The Standard doesn't know this "word" thingy used by processors. But it says the type "int" should have the natural size for an execution environment. But even for 64-bit environments, int is usually only 32 bits. So "word" in Standard terms has pretty much no common meaning (except for the common English "word", of course).
Character size is the size of a character. Depends on what character you talk about. Character types are char, unsigned char and signed char. Also wchar_t is used to store characters that can have any size (determined by the implementation - but must use one of the integer types as its underlying type. Much like enumerations), while char/signed char or unsigned char has to have one byte. That means that one byte has as much bits as one char has. If an implementation says one object of type char has 16 bits, then a byte has 16 bits too.
Now a byte is the size that one char occupies. It's a unit, not some specific type. There is not much more to it, just that it is the unit in which you can address memory. I.e., you do not have pointer access to bit-fields, but you do have access to units starting at one byte.
"Integer size" now is pretty wide. What do you mean? All of bool, char, short, int, long and their unsinged counterparts are integers. Their range is what i would call "integer size" and it is documented in the C standard - taken over by the C++ Standard. For signed char the range is from -127 <-> 127, for short and int it is the same and is -2^15+1 <-> 2^15-1 and for long it is -2^31+1 <-> 2^31-1. Their unsigned counterparts range from 0 up to 2^8-1, 2^16-1 and 2^32-1 respectively. Those are however minimal sizes. That is, an int may not have maximal size 2^14 on any platform, because that is less than 2^15-1 of course. It follows for those values that a minimum of bits is required. For char that is 8, for short/int that is 16 and for long that is 32. Two's-complement representation for negative numbers is not required, which is why the negative value is not -128 instead of -127 for example for signed char.
Standard C++ doesn't have a datatype called word or byte. The rest are well defined as ranges. The base is a char, which has CHAR_BIT bits. The most commonly used value of CHAR_BIT is 8.
sizeof( char ) == 1 (one byte, in both C and C++)
sizeof( int ) >= sizeof( char )
word - not a C++ type; in computer architecture it usually means 2 bytes
Kind of depends on what you mean by relation. The size of numeric types is generally a multiple of the machine word size. A byte is 8 bits on virtually every platform you will meet, though the standard only requires at least 8. A character is defined in the standard as a single byte I believe (check your ARM for details).
The general rule is, don't make any assumptions about the actual size of data types. The standard specifies relationships between the types such as a "long" integer will be either the same size or larger than an "int". Individual implementations of the language will pick specific sizes for the types that are convenient for them. For example, a compiler for a 64-bit processor will choose different sizes than a compiler for a 32-bit processor.
You can use the sizeof() operator to examine the specific sizes for the compiler you are using (on the specific target architecture).
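For example, a minimal program along these lines prints the sizes your compiler picked for the target you are building for:
#include <cstdio>

int main() {
    std::printf("char: %zu  short: %zu  int: %zu  long: %zu  long long: %zu\n",
                sizeof(char), sizeof(short), sizeof(int),
                sizeof(long), sizeof(long long));
    // Typical 64-bit Linux output: "char: 1  short: 2  int: 4  long: 8  long long: 8";
    // 64-bit Windows usually reports long as 4. Both are conforming.
    return 0;
}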