I'm looking for detailed information regarding the size of basic C++ types.
I know that it depends on the architecture (16 bits, 32 bits, 64 bits) and the compiler.
But are there any standards for C++?
I'm using Visual Studio 2008 on a 32-bit architecture. Here is what I get:
char : 1 byte
short : 2 bytes
int : 4 bytes
long : 4 bytes
float : 4 bytes
double: 8 bytes
I tried to find, without much success, reliable information stating the sizes of char, short, int, long, double, float (and other types I didn't think of) under different architectures and compilers.
The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer minimum size in bits from the required range. You can infer minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte. In all but the most obscure platforms it's 8, and it can't be less than 8.
One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name). This is stated explicitly in the standard.
The C standard is a normative reference for the C++ standard, so even though it doesn't state these requirements explicitly, C++ requires the minimum ranges required by the C standard (page 22), which are the same as those from Data Type Ranges on MSDN:
signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement and sign-and-magnitude platforms)
unsigned char: 0 to 255
"plain" char: same range as signed char or unsigned char, implementation-defined
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615
A C++ (or C) implementation can define the size of a type in bytes sizeof(type) to any value, as long as
the expression sizeof(type) * CHAR_BIT evaluates to a number of bits high enough to contain required ranges, and
the ordering of type is still valid (e.g. sizeof(int) <= sizeof(long)).
Putting this all together, we are guaranteed that:
char, signed char, and unsigned char are at least 8 bits
signed short, unsigned short, signed int, and unsigned int are at least 16 bits
signed long and unsigned long are at least 32 bits
signed long long and unsigned long long are at least 64 bits
No guarantee is made about the size of float or double except that double provides at least as much precision as float.
The actual implementation-specific ranges can be found in <limits.h> header in C, or <climits> in C++ (or even better, templated std::numeric_limits in <limits> header).
For example, this is how you will find maximum range for int:
C:
#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;
C++:
#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();
For 32-bit systems, the 'de facto' standard is ILP32 — that is, int, long and pointer are all 32-bit quantities.
For 64-bit systems, the primary Unix 'de facto' standard is LP64 — long and pointer are 64-bit (but int is 32-bit). The Windows 64-bit standard is LLP64 — long long and pointer are 64-bit (but long and int are both 32-bit).
At one time, some Unix systems used an ILP64 organization.
None of these de facto standards is legislated by the C standard (ISO/IEC 9899:1999), but all are permitted by it.
And, by definition, sizeof(char) is 1, notwithstanding the test in the Perl configure script.
Note that there were machines (Crays) where CHAR_BIT was much larger than 8. That meant, IIRC, that sizeof(int) was also 1, because both char and int were 32-bit.
In practice there's no such thing as a single standard size. Often you can expect std::size_t to represent the unsigned native integer size on the current architecture, i.e. 16-bit, 32-bit or 64-bit, but that isn't always the case, as pointed out in the comments to this answer.
As far as all the other built-in types go, it really depends on the compiler. Here are two excerpts taken from the current working draft of the latest C++ standard:
There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.
For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type: unsigned char, unsigned short int, unsigned int, unsigned long int, and unsigned long long int, each of which occupies the same amount of storage and has the same alignment requirements.
If you want to, you can statically (compile-time) assert the sizeof of these fundamental types. It will alert people to think about porting your code if the sizeof assumptions change.
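For example, a minimal sketch of such a compile-time check (assuming a C++11 compiler for static_assert; the asserted sizes are just one hypothetical codebase's assumptions, not requirements):

#include <climits>

// Each assertion documents an assumption; the build fails loudly on a
// platform where the assumption does not hold.
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
static_assert(sizeof(int) == 4, "this code assumes a 32-bit int");
static_assert(sizeof(void *) == 8, "this code assumes 64-bit pointers");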
There is a standard.
C90 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long)
C99 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
Here is the C99 specification. Page 22 details the sizes of the different integral types.
Here are the int type sizes (bits) for Windows platforms:

Type        C99 Minimum   Windows 32-bit
char        8             8
short       16            16
int         16            32
long        32            32
long long   64            64
If you are concerned with portability, or you want the name of the type to reflect its size, you can look at the header <inttypes.h>, where the following typedefs are available:
int8_t
int16_t
int32_t
int64_t
int8_t is guaranteed to be 8 bits, and int16_t is guaranteed to be 16 bits, etc.
If you need fixed-size types, use types like uint32_t (unsigned 32-bit integer) defined in stdint.h. They are specified in C99.
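For illustration, a minimal sketch of the exact-width types (shown with the C++ header <cstdint>; in C you would include <stdint.h> and drop the std:: qualifiers):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t mask = 0xFFFFFFFFu; // exactly 32 bits wherever the type exists
    std::int64_t big = -1;            // exactly 64 bits
    std::cout << sizeof mask << " " << sizeof big << std::endl; // prints "4 8"
}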
Updated: C++11 brought the types from TR1 officially into the standard:
long long int
unsigned long long int
And the "sized" types from <cstdint>
int8_t
int16_t
int32_t
int64_t
(and the unsigned counterparts).
Plus you get:
int_least8_t
int_least16_t
int_least32_t
int_least64_t
Plus the unsigned counterparts.
These types represent the smallest integer types with at least the specified number of bits. Likewise there are the "fastest" integer types with at least the specified number of bits:
int_fast8_t
int_fast16_t
int_fast32_t
int_fast64_t
Plus the unsigned versions.
What "fast" means, if anything, is up to the implementation. It need not be the fastest for all purposes either.
The C++ Standard says it like this:
3.9.1, §2:
There are five signed integer types: "signed char", "short int", "int", "long int", and "long long int". In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment (44); the other signed integer types are provided to meet special needs.
(44) that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.
The conclusion: It depends on which architecture you're working on. Any other assumption is false.
Nope, there is no standard for type sizes. The standard only requires that:
sizeof(short int) <= sizeof(int) <= sizeof(long int)
The best thing you can do if you want variables of a fixed size is to use macros like this:
#ifdef SYSTEM_X
#define WORD int
#else
#define WORD long int
#endif
Then you can use WORD to define your variables. It's not that I like this, but it's the most portable way.
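For illustration, this is how the WORD macro above would be used (SYSTEM_X stands for whatever platform switch you define yourself, e.g. on the compiler command line):

WORD counter = 0; // expands to int or long int depending on SYSTEM_X
WORD table[16];   // same underlying type everywhere the macro is kept in sync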
For floating point numbers there is a standard (IEEE754): floats are 32 bit and doubles are 64. This is a hardware standard, not a C++ standard, so compilers could theoretically define float and double to some other size, but in practice I've never seen an architecture that used anything different.
We are allowed to define a synonym for the type so we can create our own "standard".
On a machine in which sizeof(int) == 4, we can define:
typedef int int32;
int32 i;
int32 j;
...
So when we transfer the code to a different machine where it is actually long int that has size 4, we can just update the single typedef.
typedef long int int32;
int32 i;
int32 j;
...
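If a C++11 compiler is available, such a typedef can also be guarded so that a porting mistake fails the build instead of corrupting data at run time (a one-line sketch):

static_assert(sizeof(int32) == 4, "int32 must be exactly 4 bytes on this platform");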
There is a standard and it is specified in the various standards documents (ISO, ANSI and whatnot).
Wikipedia has a great page explaining the various types and the max they may store:
Integer in Computer Science.
However, even with a standard C++ compiler, you can find out relatively easily using the following code snippet:
#include <iostream>
#include <limits>
int main() {
    // Change the template parameter to the various different types.
    std::cout << std::numeric_limits<int>::max() << std::endl;
}
Documentation for std::numeric_limits can be found at Rogue Wave. It includes a plethora of other members you can call to find out the various limits. This can be used with any arbitrary type that conveys size, for example std::streamsize.
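For instance, the same template works for std::streamsize (a small sketch; the typedef itself lives in <ios>):

#include <ios>      // std::streamsize
#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<std::streamsize>::max() << std::endl;
}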
John's answer contains the best description, as those sizes are guaranteed to hold no matter what platform you are on. There is another good page that goes into more detail about how many bits each type MUST contain: int types, which are defined in the standard.
I hope this helps!
When it comes to built-in types for different architectures and different compilers, just run the following code on your architecture with your compiler to see what it outputs. Below is my output on Ubuntu 13.04 (Raring Ringtail) 64-bit with g++ 4.7.3. Also note the standard quote below, which explains why the output is ordered this way:
"There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list."
#include <iostream>
int main ( int argc, char * argv[] )
{
    std::cout << "size of char: " << sizeof (char) << std::endl;
    std::cout << "size of short: " << sizeof (short) << std::endl;
    std::cout << "size of int: " << sizeof (int) << std::endl;
    std::cout << "size of long: " << sizeof (long) << std::endl;
    std::cout << "size of long long: " << sizeof (long long) << std::endl;
    std::cout << "size of float: " << sizeof (float) << std::endl;
    std::cout << "size of double: " << sizeof (double) << std::endl;
    std::cout << "size of pointer: " << sizeof (int *) << std::endl;
}
size of char: 1
size of short: 2
size of int: 4
size of long: 8
size of long long: 8
size of float: 4
size of double: 8
size of pointer: 8
1) Table N1 in article "The forgotten problems of 64-bit programs development"
2) "Data model"
You can use:
cout << "size of datatype = " << sizeof(datatype) << endl;
where datatype is int, long int, etc. You will be able to see the size of whichever datatype you type.
As mentioned, the sizes should reflect the current architecture. You could take a peek around in limits.h if you want to see how your current compiler handles things.
If you are interested in a pure C++ solution, I made use of templates and only standard C++ code to define types at compile time based on their bit size.
This makes the solution portable across compilers.
The idea behind it is very simple: create a list containing the types char, int, short, long, long long (signed and unsigned versions), then scan the list and, using the numeric_limits template, select the type with the given size.
Including this header you get 8 types: stdtype::int8, stdtype::int16, stdtype::int32, stdtype::int64, stdtype::uint8, stdtype::uint16, stdtype::uint32, stdtype::uint64.
If some type cannot be represented, it evaluates to stdtype::null_type, also declared in that header.
THE CODE BELOW IS GIVEN WITHOUT WARRANTY, PLEASE DOUBLE CHECK IT.
I'M NEW AT METAPROGRAMMING TOO, FEEL FREE TO EDIT AND CORRECT THIS CODE.
Tested with DevC++ (so a gcc version around 3.5)
#include <limits>
namespace stdtype
{
using namespace std;
/*
* THIS IS THE CLASS USED TO SEMANTICALLY SPECIFY A NULL TYPE.
* YOU CAN USE WHATEVER YOU WANT AND EVEN DRIVE A COMPILE ERROR IF IT IS
* DECLARED/USED.
*
* PLEASE NOTE that the C++ standard defines sizeof of an empty class to be at least 1 (typically exactly 1).
*/
class null_type{};
/*
* Template for creating lists of types
*
* T is type to hold
* S is the next type_list<T,S> type
*
* Example:
* Creating a list with type int and char:
* typedef type_list<int, type_list<char> > test;
* test::value //int
* test::next::value //char
*/
template <typename T, typename S> struct type_list
{
    typedef T value;
    typedef S next;
};
/*
* Declaration of template struct for selecting a type from the list
*/
template <typename list, int b, int ctl> struct select_type;
/*
* Find a type with the specified number of bits "b" in list "list"
*
*
*/
template <typename list, int b> struct find_type
{
private:
    //Handy name for the type at the head of the list
    typedef typename list::value cur_type;

    //Number of bits of the type at the head
    //CHANGE THIS (compile-time) expression TO USE A DIFFERENT WAY OF COMPUTING THE TYPE LENGTH
    enum {cur_type_bits = numeric_limits<cur_type>::digits};

public:
    //Select the type at the head if b == cur_type_bits, else
    //select_type calls find_type with list::next
    typedef typename select_type<list, b, cur_type_bits>::type type;
};
/*
* This is the specialization for empty list, return the null_type
* OVERRIDE this struct to ADD CUSTOM BEHAVIOR for the TYPE NOT FOUND case
* (i.e. searching for a type with 17 bits on common archs)
*/
template <int b> struct find_type<null_type, b>
{
    typedef null_type type;
};
/*
* Primary template for selecting the type at the head of the list if
* it matches the requested bits (b == ctl)
*
* If b == ctl the partial specialization below is selected instead, so here we have
* b != ctl. We call find_type on the next element of the list
*/
template <typename list, int b, int ctl> struct select_type
{
    typedef typename find_type<typename list::next, b>::type type;
};
/*
* This partial specialization is used to select the top type of a list;
* it is called by find_type with the list of values (consumed at each call),
* the bits requested (b) and the current (top) type's length in bits
*
* We specialize the b == ctl case
*/
template <typename list, int b> struct select_type<list, b, b>
{
    typedef typename list::value type;
};
/*
* These are the type lists; to avoid possible ambiguity (on some weird archs)
* we kept signed and unsigned separated
*/
#define UNSIGNED_TYPES type_list<unsigned char, \
type_list<unsigned short, \
type_list<unsigned int, \
type_list<unsigned long, \
type_list<unsigned long long, null_type> > > > >
#define SIGNED_TYPES type_list<signed char, \
type_list<signed short, \
type_list<signed int, \
type_list<signed long, \
type_list<signed long long, null_type> > > > >
/*
* These are the actual typedefs used in programs.
*
* Nomenclature is [u]intN where u if present means unsigned, N is the
* number of bits in the integer
*
* find_type is used simply by giving first a type_list then the number of
* bits to search for.
*
* NB. Each type in the type list must have a numeric_limits specialization,
* as it is used to compute the type length in (binary) digits.
*/
    typedef find_type<UNSIGNED_TYPES, 8>::type uint8;
    typedef find_type<UNSIGNED_TYPES, 16>::type uint16;
    typedef find_type<UNSIGNED_TYPES, 32>::type uint32;
    typedef find_type<UNSIGNED_TYPES, 64>::type uint64;
    typedef find_type<SIGNED_TYPES, 7>::type int8;
    typedef find_type<SIGNED_TYPES, 15>::type int16;
    typedef find_type<SIGNED_TYPES, 31>::type int32;
    typedef find_type<SIGNED_TYPES, 63>::type int64;
}
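A minimal usage sketch of the header above (the file name stdtype.h is hypothetical, and the printed "4 2" assumes a platform with 32-bit int and 16-bit short):

#include <iostream>
#include "stdtype.h" // the header defined above, saved under a hypothetical name

int main()
{
    stdtype::uint32 value = 42;   // unsigned type selected for 32 value bits
    stdtype::int16 small = -1234; // signed type selected for 15 digits plus sign
    std::cout << sizeof(value) << " " << sizeof(small) << std::endl; // "4 2"
}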
As others have answered, the "standards" all leave most of the details as "implementation defined" and only state that type "char" is at least CHAR_BIT bits wide, and that "char <= short <= int <= long <= long long" (float and double are pretty much consistent with the IEEE floating-point standards, and long double is typically the same as double, but may be larger on more current implementations).
Part of the reason for not having very specific and exact values is that languages like C/C++ were designed to be portable to a large number of hardware platforms, including computer systems in which the "char" word-size may be 4 bits or 7 bits, or even some value other than the "8-/16-/32-/64-bit" computers the average home computer user is exposed to. (Word-size here means how many bits wide the system normally operates on; again, it's not always 8 bits as home computer users may expect.)
If you really need an object (in the sense of a series of bits representing an integral value) of a specific number of bits, most compilers have some method of specifying that; but it's generally not portable, even between compilers made by the same company but for different platforms. Some standards and practices (especially limits.h and the like) are common enough that most compilers will have support for determining the best-fit type for a specific range of values, but not the number of bits used. (That is, if you know you need to hold values between 0 and 127, you can determine that your compiler supports an "int8" type of 8 bits which will be large enough to hold the full range desired, but not something like an "int7" type which would be an exact match for 7 bits.)
Note: Many Un*x source packages use a "./configure" script which probes the compiler's/system's capabilities and outputs a suitable Makefile and config.h. You might examine some of these scripts to see how they work and how they probe the compiler/system capabilities, and follow their lead.
I notice that all the other answers here have focused almost exclusively on integral types, while the questioner also asked about floating-points.
I don't think the C++ standard requires it, but compilers for the most common platforms these days generally follow the IEEE754 standard for their floating-point numbers. This standard specifies four types of binary floating-point (as well as some BCD formats, which I've never seen support for in C++ compilers):
Half precision (binary16) - 11-bit significand, exponent range -14 to 15
Single precision (binary32) - 24-bit significand, exponent range -126 to 127
Double precision (binary64) - 53-bit significand, exponent range -1022 to 1023
Quadruple precision (binary128) - 113-bit significand, exponent range -16382 to 16383
How does this map onto C++ types, then? Generally, float uses single precision; thus, sizeof(float) = 4. Then double uses double precision (I believe that's the source of the name double), and long double may be either double or quadruple precision (it's quadruple on my system, but on 32-bit systems it may be double). I don't know of any compilers that offer half-precision floating-point.
In summary, this is the usual:
sizeof(float) = 4
sizeof(double) = 8
sizeof(long double) = 8 or 16
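You can check what your own compiler does via std::numeric_limits; a minimal sketch (24 and 53 significand bits correspond to IEEE754 binary32 and binary64):

#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<float>::digits << std::endl;       // 24 if binary32
    std::cout << std::numeric_limits<double>::digits << std::endl;      // 53 if binary64
    std::cout << std::numeric_limits<long double>::digits << std::endl; // 64 (x87) or 113 (quad)
    std::cout << std::numeric_limits<double>::is_iec559 << std::endl;   // 1 if IEEE754-conformant
}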
unsigned char bits = sizeof(X) << 3;
where X is a char, int, long, etc., will give you the size of X in bits (the shift by 3 is a multiplication by 8, so this assumes 8-bit bytes).
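A more portable variant multiplies by CHAR_BIT from <climits> instead of hard-coding the factor of 8 (a sketch):

#include <climits>
#include <iostream>

int main()
{
    std::cout << "int is " << sizeof(int) * CHAR_BIT << " bits" << std::endl;
}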
From Alex B: The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer minimum size in bits from the required range. You can infer minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte (in all but the most obscure platforms it's 8, and it can't be less than 8).
One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name).
Minimum ranges required by the standard (page 22), which match the Data Type Ranges on MSDN, are:
signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement platforms)
unsigned char: 0 to 255
"plain" char: -127 to 127 or 0 to 255 (depends on default char signedness)
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615
A C++ (or C) implementation can define the size of a type in bytes sizeof(type) to any value, as long as
the expression sizeof(type) * CHAR_BIT evaluates to the number of bits enough to contain required ranges, and
the ordering of type is still valid (e.g. sizeof(int) <= sizeof(long)).
The actual implementation-specific ranges can be found in the <limits.h> header in C, or <climits> in C++ (or even better, templated std::numeric_limits in the <limits> header).
For example, this is how you will find maximum range for int:
C:
#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;
C++:
#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();
This is correct; however, you were also right in saying that:
char : 1 byte
short : 2 bytes
int : 4 bytes
long : 4 bytes
float : 4 bytes
double : 8 bytes
This is because 32-bit architectures are still the default and most widely used, and they have kept these standard sizes since the pre-32-bit days, when memory was less available; for backwards compatibility and standardization the sizes stayed the same. Even 64-bit systems tend to use these, with extensions/modifications.
Please reference this for more information:
http://en.cppreference.com/w/cpp/language/types
As you mentioned, it largely depends upon the compiler and the platform. For this, check the ANSI standard, http://home.att.net/~jackklein/c/inttypes.html
Here is the one for the Microsoft compiler: Data Type Ranges.
You can use variables provided by libraries such as OpenGL, Qt, etc.
For example, Qt provides qint8 (guaranteed to be 8-bit on all platforms supported by Qt), qint16, qint32, qint64, quint8, quint16, quint32, quint64, etc.
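For instance, a minimal sketch using Qt's fixed-size typedefs (assumes a Qt development environment; the typedefs come from <QtGlobal>):

#include <QtGlobal>

qint32 counter = 0; // exactly 32 bits on every platform Qt supports
quint64 total = 0;  // exactly 64 bits, unsigned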
On a typical 64-bit (LP64) machine:
int: 4
long: 8
long long: 8
void*: 8
size_t: 8
There are four integer types based on size; their typical sizes are:
short integer: 2 bytes
long integer: 4 bytes (8 bytes on LP64 systems)
long long integer: 8 bytes
integer: depends upon the compiler (16-bit, 32-bit, or 64-bit)
Note that these are common values, not guarantees; the standard only fixes the minimums described above.
Related
I'm looking for detailed information regarding the size of basic C++ types.
I know that it depends on the architecture (16 bits, 32 bits, 64 bits) and the compiler.
But are there any standards for C++?
I'm using Visual Studio 2008 on a 32-bit architecture. Here is what I get:
char : 1 byte
short : 2 bytes
int : 4 bytes
long : 4 bytes
float : 4 bytes
double: 8 bytes
I tried to find, without much success, reliable information stating the sizes of char, short, int, long, double, float (and other types I didn't think of) under different architectures and compilers.
The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer minimum size in bits from the required range. You can infer minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte. In all but the most obscure platforms it's 8, and it can't be less than 8.
One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name). This is stated explicitly in the standard.
The C standard is a normative reference for the C++ standard, so even though it doesn't state these requirements explicitly, C++ requires the minimum ranges required by the C standard (page 22), which are the same as those from Data Type Ranges on MSDN:
signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement and sign-and-magnitude platforms)
unsigned char: 0 to 255
"plain" char: same range as signed char or unsigned char, implementation-defined
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615
A C++ (or C) implementation can define the size of a type in bytes sizeof(type) to any value, as long as
the expression sizeof(type) * CHAR_BIT evaluates to a number of bits high enough to contain required ranges, and
the ordering of type is still valid (e.g. sizeof(int) <= sizeof(long)).
Putting this all together, we are guaranteed that:
char, signed char, and unsigned char are at least 8 bits
signed short, unsigned short, signed int, and unsigned int are at least 16 bits
signed long and unsigned long are at least 32 bits
signed long long and unsigned long long are at least 64 bits
No guarantee is made about the size of float or double except that double provides at least as much precision as float.
The actual implementation-specific ranges can be found in <limits.h> header in C, or <climits> in C++ (or even better, templated std::numeric_limits in <limits> header).
For example, this is how you will find maximum range for int:
C:
#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;
C++:
#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();
For 32-bit systems, the 'de facto' standard is ILP32 — that is, int, long and pointer are all 32-bit quantities.
For 64-bit systems, the primary Unix 'de facto' standard is LP64 — long and pointer are 64-bit (but int is 32-bit). The Windows 64-bit standard is LLP64 — long long and pointer are 64-bit (but long and int are both 32-bit).
At one time, some Unix systems used an ILP64 organization.
None of these de facto standards is legislated by the C standard (ISO/IEC 9899:1999), but all are permitted by it.
And, by definition, sizeof(char) is 1, notwithstanding the test in the Perl configure script.
Note that there were machines (Crays) where CHAR_BIT was much larger than 8. That meant, IIRC, that sizeof(int) was also 1, because both char and int were 32-bit.
In practice there's no such thing. Often you can expect std::size_t to represent the unsigned native integer size on current architecture. i.e. 16-bit, 32-bit or 64-bit but it isn't always the case as pointed out in the comments to this answer.
As far as all the other built-in types go, it really depends on the compiler. Here's two excerpts taken from the current working draft of the latest C++ standard:
There are five standard signed integer types : signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.
For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type: unsigned char, unsigned short int, unsigned int, unsigned long int, and unsigned long long int, each of which occupies the same amount of storage and has the same alignment requirements.
If you want to you can statically (compile-time) assert the sizeof these fundamental types. It will alert people to think about porting your code if the sizeof assumptions change.
There is standard.
C90 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long)
C99 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
Here is the C99 specifications. Page 22 details sizes of different integral types.
Here is the int type sizes (bits) for Windows platforms:
Type C99 Minimum Windows 32bit
char 8 8
short 16 16
int 16 32
long 32 32
long long 64 64
If you are concerned with portability, or you want the name of the type reflects the size, you can look at the header <inttypes.h>, where the following macros are available:
int8_t
int16_t
int32_t
int64_t
int8_t is guaranteed to be 8 bits, and int16_t is guaranteed to be 16 bits, etc.
If you need fixed size types, use types like uint32_t (unsigned integer 32 bits) defined in stdint.h. They are specified in C99.
Updated: C++11 brought the types from TR1 officially into the standard:
long long int
unsigned long long int
And the "sized" types from <cstdint>
int8_t
int16_t
int32_t
int64_t
(and the unsigned counterparts).
Plus you get:
int_least8_t
int_least16_t
int_least32_t
int_least64_t
Plus the unsigned counterparts.
These types represent the smallest integer types with at least the specified number of bits. Likewise there are the "fastest" integer types with at least the specified number of bits:
int_fast8_t
int_fast16_t
int_fast32_t
int_fast64_t
Plus the unsigned versions.
What "fast" means, if anything, is up to the implementation. It need not be the fastest for all purposes either.
The C++ Standard says it like this:
3.9.1, §2:
There are five signed integer types :
"signed char", "short int", "int",
"long int", and "long long int". In
this list, each type provides at least
as much storage as those preceding it
in the list. Plain ints have the
natural size suggested by the
architecture of the execution
environment (44); the other signed
integer types are provided to meet
special needs.
(44) that is, large enough to contain
any value in the range of INT_MIN and
INT_MAX, as defined in the header
<climits>.
The conclusion: It depends on which architecture you're working on. Any other assumption is false.
Nope, there is no standard for type sizes. Standard only requires that:
sizeof(short int) <= sizeof(int) <= sizeof(long int)
The best thing you can do if you want variables of a fixed sizes is to use macros like this:
#ifdef SYSTEM_X
#define WORD int
#else
#define WORD long int
#endif
Then you can use WORD to define your variables. It's not that I like this but it's the most portable way.
For floating point numbers there is a standard (IEEE754): floats are 32 bit and doubles are 64. This is a hardware standard, not a C++ standard, so compilers could theoretically define float and double to some other size, but in practice I've never seen an architecture that used anything different.
We are allowed to define a synonym for the type so we can create our own "standard".
On a machine in which sizeof(int) == 4, we can define:
typedef int int32;
int32 i;
int32 j;
...
So when we transfer the code to a different machine where actually the size of long int is 4, we can just redefine the single occurrence of int.
typedef long int int32;
int32 i;
int32 j;
...
There is a standard and it is specified in the various standards documents (ISO, ANSI and whatnot).
Wikipedia has a great page explaining the various types and the max they may store:
Integer in Computer Science.
However even with a standard C++ compiler you can find out relatively easily using the following code snippet:
#include <iostream>
#include <limits>
int main() {
// Change the template parameter to the various different types.
std::cout << std::numeric_limits<int>::max() << std::endl;
}
Documentation for std::numeric_limits can be found at Roguewave. It includes a plethora of other commands you can call to find out the various limits. This can be used with any arbitrary type that conveys size, for example std::streamsize.
John's answer contains the best description, as those are guaranteed to hold. No matter what platform you are on, there is another good page that goes into more detail as to how many bits each type MUST contain: int types, which are defined in the standard.
I hope this helps!
When it comes to built in types for different architectures and different compilers just run the following code on your architecture with your compiler to see what it outputs. Below shows my Ubuntu 13.04 (Raring Ringtail) 64 bit g++4.7.3 output. Also please note what was answered below which is why the output is ordered as such:
"There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list."
#include <iostream>
int main ( int argc, char * argv[] )
{
std::cout<< "size of char: " << sizeof (char) << std::endl;
std::cout<< "size of short: " << sizeof (short) << std::endl;
std::cout<< "size of int: " << sizeof (int) << std::endl;
std::cout<< "size of long: " << sizeof (long) << std::endl;
std::cout<< "size of long long: " << sizeof (long long) << std::endl;
std::cout<< "size of float: " << sizeof (float) << std::endl;
std::cout<< "size of double: " << sizeof (double) << std::endl;
std::cout<< "size of pointer: " << sizeof (int *) << std::endl;
}
size of char: 1
size of short: 2
size of int: 4
size of long: 8
size of long long: 8
size of float: 4
size of double: 8
size of pointer: 8
1) Table N1 in article "The forgotten problems of 64-bit programs development"
2) "Data model"
You can use:
cout << "size of datatype = " << sizeof(datatype) << endl;
datatype = int, long int etc.
You will be able to see the size for whichever datatype you type.
As mentioned the size should reflect the current architecture. You could take a peak around in limits.h if you want to see how your current compiler is handling things.
If you are interested in a pure C++ solution, I made use of templates and only C++ standard code to define types at compile time based on their bit size.
This make the solution portable across compilers.
The idea behind is very simple: Create a list containing types char, int, short, long, long long (signed and unsigned versions) and the scan the list and by the use of numeric_limits template select the type with given size.
Including this header you got 8 type stdtype::int8, stdtype::int16, stdtype::int32, stdtype::int64, stdtype::uint8, stdtype::uint16, stdtype::uint32, stdtype::uint64.
If some type cannot be represented it will be evaluated to stdtype::null_type also declared in that header.
THE CODE BELOW IS GIVEN WITHOUT WARRANTY, PLEASE DOUBLE CHECK IT.
I'M NEW AT METAPROGRAMMING TOO, FEEL FREE TO EDIT AND CORRECT THIS CODE.
Tested with DevC++ (so a gcc version around 3.5)
#include <limits>
namespace stdtype
{
using namespace std;
/*
* THIS IS THE CLASS USED TO SEMANTICALLY SPECIFY A NULL TYPE.
* YOU CAN USE WHATEVER YOU WANT AND EVEN DRIVE A COMPILE ERROR IF IT IS
* DECLARED/USED.
*
* PLEASE NOTE that C++ std define sizeof of an empty class to be 1.
*/
class null_type{};
/*
* Template for creating lists of types
*
* T is type to hold
* S is the next type_list<T,S> type
*
* Example:
* Creating a list with type int and char:
* typedef type_list<int, type_list<char> > test;
* test::value //int
* test::next::value //char
*/
template <typename T, typename S> struct type_list
{
typedef T value;
typedef S next;
};
/*
* Declaration of template struct for selecting a type from the list
*/
template <typename list, int b, int ctl> struct select_type;
/*
* Find a type with specified "b" bit in list "list"
*
*
*/
template <typename list, int b> struct find_type
{
private:
//Handy name for the type at the head of the list
typedef typename list::value cur_type;
//Number of bits of the type at the head
//CHANGE THIS (compile time) exp TO USE ANOTHER TYPE LEN COMPUTING
enum {cur_type_bits = numeric_limits<cur_type>::digits};
public:
//Select the type at the head if b == cur_type_bits else
//select_type call find_type with list::next
typedef typename select_type<list, b, cur_type_bits>::type type;
};
/*
* This is the specialization for empty list, return the null_type
* OVVERRIDE this struct to ADD CUSTOM BEHAVIOR for the TYPE NOT FOUND case
* (ie search for type with 17 bits on common archs)
*/
template <int b> struct find_type<null_type, b>
{
typedef null_type type;
};
/*
* Primary template for selecting the type at the head of the list if
* it matches the requested bits (b == ctl)
*
* If b == ctl the partial specified templated is evaluated so here we have
* b != ctl. We call find_type on the next element of the list
*/
template <typename list, int b, int ctl> struct select_type
{
typedef typename find_type<typename list::next, b>::type type;
};
/*
* This partial specified templated is used to select top type of a list
* it is called by find_type with the list of value (consumed at each call)
* the bits requested (b) and the current type (top type) length in bits
*
* We specialice the b == ctl case
*/
template <typename list, int b> struct select_type<list, b, b>
{
typedef typename list::value type;
};
/*
* These are the types list, to avoid possible ambiguity (some weird archs)
* we kept signed and unsigned separated
*/
#define UNSIGNED_TYPES type_list<unsigned char, \
type_list<unsigned short, \
type_list<unsigned int, \
type_list<unsigned long, \
type_list<unsigned long long, null_type> > > > >
#define SIGNED_TYPES type_list<signed char, \
type_list<signed short, \
type_list<signed int, \
type_list<signed long, \
type_list<signed long long, null_type> > > > >
/*
* These are acutally typedef used in programs.
*
* Nomenclature is [u]intN where u if present means unsigned, N is the
* number of bits in the integer
*
* find_type is used simply by giving first a type_list then the number of
* bits to search for.
*
* NB. Each type in the type list must had specified the template
* numeric_limits as it is used to compute the type len in (binary) digit.
*/
typedef find_type<UNSIGNED_TYPES, 8>::type uint8;
typedef find_type<UNSIGNED_TYPES, 16>::type uint16;
typedef find_type<UNSIGNED_TYPES, 32>::type uint32;
typedef find_type<UNSIGNED_TYPES, 64>::type uint64;
typedef find_type<SIGNED_TYPES, 7>::type int8;
typedef find_type<SIGNED_TYPES, 15>::type int16;
typedef find_type<SIGNED_TYPES, 31>::type int32;
typedef find_type<SIGNED_TYPES, 63>::type int64;
}
As others have answered, the "standards" all leave most of the details as "implementation defined" and only state that type "char" is at leat "char_bis" wide, and that "char <= short <= int <= long <= long long" (float and double are pretty much consistent with the IEEE floating point standards, and long double is typically same as double--but may be larger on more current implementations).
Part of the reasons for not having very specific and exact values is because languages like C/C++ were designed to be portable to a large number of hardware platforms--Including computer systems in which the "char" word-size may be 4-bits or 7-bits, or even some value other than the "8-/16-/32-/64-bit" computers the average home computer user is exposed to. (Word-size here meaning how many bits wide the system normally operates on--Again, it's not always 8-bits as home computer users may expect.)
If you really need a object (in the sense of a series of bits representing an integral value) of a specific number of bits, most compilers have some method of specifying that; But it's generally not portable, even between compilers made by the ame company but for different platforms. Some standards and practices (especially limits.h and the like) are common enough that most compilers will have support for determining at the best-fit type for a specific range of values, but not the number of bits used. (That is, if you know you need to hold values between 0 and 127, you can determine that your compiler supports an "int8" type of 8-bits which will be large enought to hold the full range desired, but not something like an "int7" type which would be an exact match for 7-bits.)
Note: Many Un*x source packages used "./configure" script which will probe the compiler/system's capabilities and output a suitable Makefile and config.h. You might examine some of these scripts to see how they work and how they probe the comiler/system capabilities, and follow their lead.
I notice that all the other answers here have focused almost exclusively on integral types, while the questioner also asked about floating-points.
I don't think the C++ standard requires it, but compilers for the most common platforms these days generally follow the IEEE754 standard for their floating-point numbers. This standard specifies four types of binary floating-point (as well as some BCD formats, which I've never seen support for in C++ compilers):
Half precision (binary16) - 11-bit significand, exponent range -14 to 15
Single precision (binary32) - 24-bit significand, exponent range -126 to 127
Double precision (binary64) - 53-bit significand, exponent range -1022 to 1023
Quadruple precision (binary128) - 113-bit significand, exponent range -16382 to 16383
How does this map onto C++ types, then? Generally the float uses single precision; thus, sizeof(float) = 4. Then double uses double precision (I believe that's the source of the name double), and long double may be either double or quadruple precision (it's quadruple on my system, but on 32-bit systems it may be double). I don't know of any compilers that offer half precision floating-points.
In summary, this is the usual:
sizeof(float) = 4
sizeof(double) = 8
sizeof(long double) = 8 or 16
unsigned char bits = sizeof(X) << 3;
where X is a char,int,long etc.. will give you size of X in bits.
From Alex B The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer minimum size in bits from the required range. You can infer minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte (in all but the most obscure platforms it's 8, and it can't be less than 8).
One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name).
Minimum ranges required by the standard (page 22) are:
and Data Type Ranges on MSDN:
signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement platforms)
unsigned char: 0 to 255
"plain" char: -127 to 127 or 0 to 255 (depends on default char signedness)
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615
A C++ (or C) implementation can define the size of a type in bytes sizeof(type) to any value, as long as
the expression sizeof(type) * CHAR_BIT evaluates to the number of bits enough to contain required ranges, and
the ordering of type is still valid (e.g. sizeof(int) <= sizeof(long)).
The actual implementation-specific ranges can be found in header in C, or in C++ (or even better, templated std::numeric_limits in header).
For example, this is how you will find maximum range for int:
C:
#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;
C++:
#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();
This is correct, however, you were also right in saying that:
char : 1 byte
short : 2 bytes
int : 4 bytes
long : 4 bytes
float : 4 bytes
double : 8 bytes
Because 32 bit architectures are still the default and most used, and they have kept these standard sizes since the pre-32 bit days when memory was less available, and for backwards compatibility and standardization it remained the same. Even 64 bit systems tend to use these and have extentions/modifications.
Please reference this for more information:
http://en.cppreference.com/w/cpp/language/types
As you mentioned - it largely depends upon the compiler and the platform. For this, check the ANSI standard, http://home.att.net/~jackklein/c/inttypes.html
Here is the one for the Microsoft compiler: Data Type Ranges.
You can use variables provided by libraries such as OpenGL, Qt, etc.
For example, Qt provides qint8 (guaranteed to be 8-bit on all platforms supported by Qt), qint16, qint32, qint64, quint8, quint16, quint32, quint64, etc.
On a 64-bit machine:
int: 4
long: 8
long long: 8
void*: 8
size_t: 8
There are four types of integers based on size:
short integer: 2 byte
long integer: 4 byte
long long integer: 8 byte
integer: depends upon the compiler (16 bit, 32 bit, or 64 bit)
I'm looking for detailed information regarding the size of basic C++ types.
I know that it depends on the architecture (16 bits, 32 bits, 64 bits) and the compiler.
But are there any standards for C++?
I'm using Visual Studio 2008 on a 32-bit architecture. Here is what I get:
char : 1 byte
short : 2 bytes
int : 4 bytes
long : 4 bytes
float : 4 bytes
double: 8 bytes
I tried to find, without much success, reliable information stating the sizes of char, short, int, long, double, float (and other types I didn't think of) under different architectures and compilers.
The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer minimum size in bits from the required range. You can infer minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte. In all but the most obscure platforms it's 8, and it can't be less than 8.
One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name). This is stated explicitly in the standard.
The C standard is a normative reference for the C++ standard, so even though it doesn't state these requirements explicitly, C++ requires the minimum ranges required by the C standard (page 22), which are the same as those from Data Type Ranges on MSDN:
signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement and sign-and-magnitude platforms)
unsigned char: 0 to 255
"plain" char: same range as signed char or unsigned char, implementation-defined
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615
A C++ (or C) implementation can define the size of a type in bytes sizeof(type) to any value, as long as
the expression sizeof(type) * CHAR_BIT evaluates to a number of bits high enough to contain required ranges, and
the ordering of type is still valid (e.g. sizeof(int) <= sizeof(long)).
Putting this all together, we are guaranteed that:
char, signed char, and unsigned char are at least 8 bits
signed short, unsigned short, signed int, and unsigned int are at least 16 bits
signed long and unsigned long are at least 32 bits
signed long long and unsigned long long are at least 64 bits
No guarantee is made about the size of float or double except that double provides at least as much precision as float.
The actual implementation-specific ranges can be found in <limits.h> header in C, or <climits> in C++ (or even better, templated std::numeric_limits in <limits> header).
For example, this is how you will find maximum range for int:
C:
#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;
C++:
#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();
For 32-bit systems, the 'de facto' standard is ILP32 — that is, int, long and pointer are all 32-bit quantities.
For 64-bit systems, the primary Unix 'de facto' standard is LP64 — long and pointer are 64-bit (but int is 32-bit). The Windows 64-bit standard is LLP64 — long long and pointer are 64-bit (but long and int are both 32-bit).
At one time, some Unix systems used an ILP64 organization.
None of these de facto standards is legislated by the C standard (ISO/IEC 9899:1999), but all are permitted by it.
And, by definition, sizeof(char) is 1, notwithstanding the test in the Perl configure script.
Note that there were machines (Crays) where CHAR_BIT was much larger than 8. That meant, IIRC, that sizeof(int) was also 1, because both char and int were 32-bit.
In practice there's no such thing. Often you can expect std::size_t to represent the unsigned native integer size on current architecture. i.e. 16-bit, 32-bit or 64-bit but it isn't always the case as pointed out in the comments to this answer.
As far as all the other built-in types go, it really depends on the compiler. Here's two excerpts taken from the current working draft of the latest C++ standard:
There are five standard signed integer types : signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.
For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type: unsigned char, unsigned short int, unsigned int, unsigned long int, and unsigned long long int, each of which occupies the same amount of storage and has the same alignment requirements.
If you want to you can statically (compile-time) assert the sizeof these fundamental types. It will alert people to think about porting your code if the sizeof assumptions change.
There is standard.
C90 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long)
C99 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
Here is the C99 specifications. Page 22 details sizes of different integral types.
Here is the int type sizes (bits) for Windows platforms:
Type C99 Minimum Windows 32bit
char 8 8
short 16 16
int 16 32
long 32 32
long long 64 64
If you are concerned with portability, or you want the name of the type reflects the size, you can look at the header <inttypes.h>, where the following macros are available:
int8_t
int16_t
int32_t
int64_t
int8_t is guaranteed to be 8 bits, and int16_t is guaranteed to be 16 bits, etc.
If you need fixed size types, use types like uint32_t (unsigned integer 32 bits) defined in stdint.h. They are specified in C99.
Updated: C++11 brought the types from TR1 officially into the standard:
long long int
unsigned long long int
And the "sized" types from <cstdint>
int8_t
int16_t
int32_t
int64_t
(and the unsigned counterparts).
Plus you get:
int_least8_t
int_least16_t
int_least32_t
int_least64_t
Plus the unsigned counterparts.
These types represent the smallest integer types with at least the specified number of bits. Likewise there are the "fastest" integer types with at least the specified number of bits:
int_fast8_t
int_fast16_t
int_fast32_t
int_fast64_t
Plus the unsigned versions.
What "fast" means, if anything, is up to the implementation. It need not be the fastest for all purposes either.
The C++ Standard says it like this:
3.9.1, §2:
There are five signed integer types :
"signed char", "short int", "int",
"long int", and "long long int". In
this list, each type provides at least
as much storage as those preceding it
in the list. Plain ints have the
natural size suggested by the
architecture of the execution
environment (44); the other signed
integer types are provided to meet
special needs.
(44) that is, large enough to contain
any value in the range of INT_MIN and
INT_MAX, as defined in the header
<climits>.
The conclusion: It depends on which architecture you're working on. Any other assumption is false.
Nope, there is no standard for type sizes. Standard only requires that:
sizeof(short int) <= sizeof(int) <= sizeof(long int)
The best thing you can do if you want variables of a fixed sizes is to use macros like this:
#ifdef SYSTEM_X
#define WORD int
#else
#define WORD long int
#endif
Then you can use WORD to define your variables. It's not that I like this but it's the most portable way.
For floating point numbers there is a standard (IEEE754): floats are 32 bit and doubles are 64. This is a hardware standard, not a C++ standard, so compilers could theoretically define float and double to some other size, but in practice I've never seen an architecture that used anything different.
We are allowed to define a synonym for the type so we can create our own "standard".
On a machine in which sizeof(int) == 4, we can define:
typedef int int32;
int32 i;
int32 j;
...
So when we transfer the code to a different machine where actually the size of long int is 4, we can just redefine the single occurrence of int.
typedef long int int32;
int32 i;
int32 j;
...
There is a standard and it is specified in the various standards documents (ISO, ANSI and whatnot).
Wikipedia has a great page explaining the various types and the max they may store:
Integer in Computer Science.
However even with a standard C++ compiler you can find out relatively easily using the following code snippet:
#include <iostream>
#include <limits>
int main() {
// Change the template parameter to the various different types.
std::cout << std::numeric_limits<int>::max() << std::endl;
}
Documentation for std::numeric_limits can be found at Roguewave. It includes a plethora of other commands you can call to find out the various limits. This can be used with any arbitrary type that conveys size, for example std::streamsize.
John's answer contains the best description, as those are guaranteed to hold. No matter what platform you are on, there is another good page that goes into more detail as to how many bits each type MUST contain: int types, which are defined in the standard.
I hope this helps!
When it comes to built in types for different architectures and different compilers just run the following code on your architecture with your compiler to see what it outputs. Below shows my Ubuntu 13.04 (Raring Ringtail) 64 bit g++4.7.3 output. Also please note what was answered below which is why the output is ordered as such:
"There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list."
#include <iostream>
int main ( int argc, char * argv[] )
{
std::cout<< "size of char: " << sizeof (char) << std::endl;
std::cout<< "size of short: " << sizeof (short) << std::endl;
std::cout<< "size of int: " << sizeof (int) << std::endl;
std::cout<< "size of long: " << sizeof (long) << std::endl;
std::cout<< "size of long long: " << sizeof (long long) << std::endl;
std::cout<< "size of float: " << sizeof (float) << std::endl;
std::cout<< "size of double: " << sizeof (double) << std::endl;
std::cout<< "size of pointer: " << sizeof (int *) << std::endl;
}
size of char: 1
size of short: 2
size of int: 4
size of long: 8
size of long long: 8
size of float: 4
size of double: 8
size of pointer: 8
1) Table N1 in article "The forgotten problems of 64-bit programs development"
2) "Data model"
You can use:
cout << "size of datatype = " << sizeof(datatype) << endl;
datatype = int, long int etc.
You will be able to see the size for whichever datatype you type.
As mentioned the size should reflect the current architecture. You could take a peak around in limits.h if you want to see how your current compiler is handling things.
If you are interested in a pure C++ solution, I made use of templates and only C++ standard code to define types at compile time based on their bit size.
This make the solution portable across compilers.
The idea behind is very simple: Create a list containing types char, int, short, long, long long (signed and unsigned versions) and the scan the list and by the use of numeric_limits template select the type with given size.
Including this header you got 8 type stdtype::int8, stdtype::int16, stdtype::int32, stdtype::int64, stdtype::uint8, stdtype::uint16, stdtype::uint32, stdtype::uint64.
If some type cannot be represented it will be evaluated to stdtype::null_type also declared in that header.
THE CODE BELOW IS GIVEN WITHOUT WARRANTY, PLEASE DOUBLE CHECK IT.
I'M NEW AT METAPROGRAMMING TOO, FEEL FREE TO EDIT AND CORRECT THIS CODE.
Tested with DevC++ (so a gcc version around 3.5)
#include <limits>
namespace stdtype
{
using namespace std;
/*
* THIS IS THE CLASS USED TO SEMANTICALLY SPECIFY A NULL TYPE.
* YOU CAN USE WHATEVER YOU WANT AND EVEN DRIVE A COMPILE ERROR IF IT IS
* DECLARED/USED.
*
* PLEASE NOTE that C++ std define sizeof of an empty class to be 1.
*/
class null_type{};
/*
* Template for creating lists of types
*
* T is type to hold
* S is the next type_list<T,S> type
*
* Example:
* Creating a list with type int and char:
* typedef type_list<int, type_list<char> > test;
* test::value //int
* test::next::value //char
*/
template <typename T, typename S> struct type_list
{
typedef T value;
typedef S next;
};
/*
* Declaration of template struct for selecting a type from the list
*/
template <typename list, int b, int ctl> struct select_type;
/*
* Find a type with specified "b" bit in list "list"
*
*
*/
template <typename list, int b> struct find_type
{
private:
//Handy name for the type at the head of the list
typedef typename list::value cur_type;
//Number of bits of the type at the head
//CHANGE THIS (compile time) exp TO USE ANOTHER TYPE LEN COMPUTING
enum {cur_type_bits = numeric_limits<cur_type>::digits};
public:
//Select the type at the head if b == cur_type_bits else
//select_type call find_type with list::next
typedef typename select_type<list, b, cur_type_bits>::type type;
};
/*
* This is the specialization for empty list, return the null_type
* OVVERRIDE this struct to ADD CUSTOM BEHAVIOR for the TYPE NOT FOUND case
* (ie search for type with 17 bits on common archs)
*/
template <int b> struct find_type<null_type, b>
{
typedef null_type type;
};
/*
* Primary template for selecting the type at the head of the list if
* it matches the requested bits (b == ctl)
*
* If b == ctl the partial specified templated is evaluated so here we have
* b != ctl. We call find_type on the next element of the list
*/
template <typename list, int b, int ctl> struct select_type
{
typedef typename find_type<typename list::next, b>::type type;
};
/*
* This partially specialized template is used to select the top type of a list.
* It is called by find_type with the list of values (consumed at each call),
* the bits requested (b) and the current (top) type's length in bits.
*
* We specialize the b == ctl case
*/
template <typename list, int b> struct select_type<list, b, b>
{
typedef typename list::value type;
};
/*
* These are the type lists; to avoid possible ambiguity (some weird archs)
* we keep signed and unsigned types separate.
*/
#define UNSIGNED_TYPES type_list<unsigned char, \
type_list<unsigned short, \
type_list<unsigned int, \
type_list<unsigned long, \
type_list<unsigned long long, null_type> > > > >
#define SIGNED_TYPES type_list<signed char, \
type_list<signed short, \
type_list<signed int, \
type_list<signed long, \
type_list<signed long long, null_type> > > > >
/*
* These are the actual typedefs used in programs.
*
* Nomenclature is [u]intN where u if present means unsigned, N is the
* number of bits in the integer
*
* find_type is used simply by giving first a type_list then the number of
* bits to search for.
*
* NB. Each type in the type list must have a numeric_limits specialization,
* as it is used to compute the type length in (binary) digits.
*/
typedef find_type<UNSIGNED_TYPES, 8>::type uint8;
typedef find_type<UNSIGNED_TYPES, 16>::type uint16;
typedef find_type<UNSIGNED_TYPES, 32>::type uint32;
typedef find_type<UNSIGNED_TYPES, 64>::type uint64;
typedef find_type<SIGNED_TYPES, 7>::type int8;
typedef find_type<SIGNED_TYPES, 15>::type int16;
typedef find_type<SIGNED_TYPES, 31>::type int32;
typedef find_type<SIGNED_TYPES, 63>::type int64;
}
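A quick usage sketch, assuming the namespace above has been saved as a header (the file name stdtype.hpp is a hypothetical choice); the widths in the comments are what gets selected on common desktop platforms:
#include <iostream>
#include "stdtype.hpp"  // hypothetical header name for the code above

int main()
{
    stdtype::uint32 u = 42;  // type picked from the type list at compile time
    stdtype::int16 s = -1;
    std::cout << sizeof(u) * 8 << std::endl;  // 32 on common platforms
    std::cout << sizeof(s) * 8 << std::endl;  // 16 on common platforms
}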
As others have answered, the "standards" all leave most of the details as "implementation defined" and only state that type char is at least CHAR_BIT bits wide, and that char <= short <= int <= long <= long long (float and double are pretty much consistent with the IEEE floating-point standards, and long double is typically the same as double, but may be larger on more current implementations).
Part of the reason for not having very specific and exact values is that languages like C/C++ were designed to be portable to a large number of hardware platforms, including computer systems whose "char" word size may be 4 bits or 7 bits, or even some value other than the 8-/16-/32-/64-bit sizes the average home computer user is exposed to. (Word size here means how many bits wide the system normally operates on; again, it's not always 8 bits, as home computer users may expect.)
If you really need an object (in the sense of a series of bits representing an integral value) of a specific number of bits, most compilers have some method of specifying that, but it's generally not portable, even between compilers made by the same company for different platforms. Some standards and practices (especially limits.h and the like) are common enough that most compilers will have support for determining the best-fit type for a specific range of values, but not for the number of bits used. (That is, if you know you need to hold values between 0 and 127, you can determine that your compiler supports an "int8" type of 8 bits which will be large enough to hold the full range desired, but not something like an "int7" type which would be an exact match for 7 bits.)
Note: many Un*x source packages use a ./configure script which probes the compiler's and system's capabilities and outputs a suitable Makefile and config.h. You might examine some of these scripts to see how they work and how they probe the compiler/system capabilities, and follow their lead.
I notice that all the other answers here have focused almost exclusively on integral types, while the questioner also asked about floating point.
I don't think the C++ standard requires it, but compilers for the most common platforms these days generally follow the IEEE 754 standard for their floating-point numbers. This standard specifies four binary floating-point formats (as well as some decimal formats, which I've never seen supported in C++ compilers):
Half precision (binary16) - 11-bit significand, exponent range -14 to 15
Single precision (binary32) - 24-bit significand, exponent range -126 to 127
Double precision (binary64) - 53-bit significand, exponent range -1022 to 1023
Quadruple precision (binary128) - 113-bit significand, exponent range -16382 to 16383
How does this map onto C++ types, then? Generally, float uses single precision; thus sizeof(float) == 4. Then double uses double precision (I believe that's the source of the name), and long double may be either double or quadruple precision (it's quadruple on my system, but on 32-bit systems it may be double). I don't know of any compilers that offer half-precision floating point.
In summary, this is the usual:
sizeof(float) = 4
sizeof(double) = 8
sizeof(long double) = 8 or 16
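You can check whether your implementation's float and double actually follow IEEE 754 via std::numeric_limits; a minimal sketch (the commented values are what most mainstream platforms report):
#include <iostream>
#include <limits>

int main()
{
    // is_iec559 reports conformance to IEC 559 (the ISO name for IEEE 754)
    std::cout << std::numeric_limits<float>::is_iec559 << '\n';   // 1 on most platforms
    std::cout << std::numeric_limits<double>::is_iec559 << '\n';  // 1 on most platforms
    // digits is the significand width in bits: 24 for binary32, 53 for binary64
    std::cout << std::numeric_limits<float>::digits << '\n';
    std::cout << std::numeric_limits<double>::digits << '\n';
}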
const std::size_t bits = sizeof(X) * CHAR_BIT;
where X is a char, int, long, etc., will give you the size of X in bits. (Prefer multiplying by CHAR_BIT from <climits> over shifting left by 3, which silently assumes 8-bit bytes.)
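As a runnable sketch (note this counts storage bits, including any padding bits; std::numeric_limits<T>::digits counts only value bits):
#include <climits>
#include <iostream>

int main()
{
    // Object size in bits = size in bytes times bits per byte
    std::cout << sizeof(long) * CHAR_BIT << '\n';
    std::cout << sizeof(int) * CHAR_BIT << '\n';
}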
Alex B's answer above is correct. However, you were also right in saying that:
char : 1 byte
short : 2 bytes
int : 4 bytes
long : 4 bytes
float : 4 bytes
double : 8 bytes
This is because 32-bit architectures were the default and most widely used, and they kept these standard sizes from the pre-32-bit days, when memory was less available; for backwards compatibility and standardization the sizes remained the same. Even 64-bit systems tend to use these, with extensions/modifications.
Please reference this for more information:
http://en.cppreference.com/w/cpp/language/types
As you mentioned - it largely depends upon the compiler and the platform. For this, check the ANSI standard, http://home.att.net/~jackklein/c/inttypes.html
Here is the one for the Microsoft compiler: Data Type Ranges.
You can use types provided by libraries such as OpenGL, Qt, etc.
For example, Qt provides qint8 (guaranteed to be 8-bit on all platforms supported by Qt), qint16, qint32, qint64, quint8, quint16, quint32, quint64, etc.
On a typical 64-bit Unix (LP64) machine:
int: 4
long: 8
long long: 8
void*: 8
size_t: 8
There are four integer types based on size; their typical sizes are:
short integer: 2 bytes
long integer: 4 bytes (8 on LP64 systems)
long long integer: 8 bytes
integer: depends upon the compiler (16 bits, 32 bits, or 64 bits)
Only the minimum ranges and the ordering short <= int <= long <= long long are guaranteed.
The following code has been compiled with 3 different compilers on 3 different processors and gave 2 different results:
typedef unsigned long int u32;
typedef signed long long s64;

int main()
{
    u32 Operand1, Operand2;
    s64 Result;
    Operand1 = 95;
    Operand2 = 100;
    Result = (s64)(Operand1 - Operand2);
}
Result takes one of two values:
either
-5 or 4294967291
I do understand that the operation (Operand1 - Operand2) is done as a 32-bit unsigned calculation; when cast to s64, sign extension was done in the first case but not in the second.
My question is whether the sign extension can be controlled via compiler options, or whether it is compiler-dependent, or maybe target-dependent.
Your issue is that you assume unsigned long int to be 32 bits wide and signed long long to be 64 bits wide. Neither assumption is guaranteed to hold.
We can visualize what's going on by using types that have a guaranteed (by the standard) bit width:
#include <cstdint>
#include <iostream>

int main() {
    {
        uint32_t large = 100, small = 95;
        int64_t result = (small - large);
        std::cout << "32 and 64 bits: " << result << std::endl;
    } // 4294967291
    {
        uint32_t large = 100, small = 95;
        int32_t result = (small - large);
        std::cout << "32 and 32 bits: " << result << std::endl;
    } // -5
    {
        uint64_t large = 100, small = 95;
        int64_t result = (small - large);
        std::cout << "64 and 64 bits: " << result << std::endl;
    } // -5
    return 0;
}
In each of these three cases, the expression small - large yields a result of unsigned integer type (of the corresponding width). This result is calculated using modular arithmetic.
In the first case, that unsigned result can be represented in the wider signed integer, so the conversion leaves the value unchanged.
In the other cases the result cannot be represented in the signed integer. Thus an implementation-defined conversion is performed, which usually means reinterpreting the bit pattern of the unsigned value as a signed value. Because the result is "large", the highest bits will be set, which when treated as a signed value (under two's complement) is equivalent to a "small" negative value.
To highlight the comment from Lưu Vĩnh Phúc:
Operand1-Operand2 is unsigned therefore when casting to s64 it's always zero extension. [..]
A widening conversion happens only in the first case, and since the source type is unsigned it is indeed zero extension, never sign extension.
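A minimal sketch of the two behaviors (the int32_t conversion is implementation-defined before C++20; the commented results assume two's complement):
#include <cstdint>
#include <iostream>

int main()
{
    uint32_t u = uint32_t(95) - uint32_t(100);      // modular arithmetic: 4294967291
    int64_t widened = u;                            // value-preserving widening: 4294967291
    int32_t narrowed = static_cast<int32_t>(u);     // out of range: -5 on two's complement
    std::cout << widened << ' ' << narrowed << std::endl;
}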
Quotes from the standard, emphasis mine. Regarding small - large:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n, where n is the number of bits used to represent the unsigned type). [..]
§ 4.7/2
Regarding the conversion from unsigned to signed:
If the destination type [of the integral conversion] is signed, the value is unchanged if it can be represented in the destination type; otherwise, the value is implementation-defined.
§ 4.7/3
Sign extension is platform dependent, where platform is a combination of a compiler, target hardware architecture and operating system.
Moreover, as Paul R mentioned, width of built-in types (like unsigned long) is platform-dependent too. Use types from <cstdint> to get fixed-width types. Nevertheless, they are just platform-dependent definitions, so their sign extension behavior still depends on the platform.
Here is a good almost-duplicate question about type sizes. And here is a good table about type size relations.
Type promotions, and the corresponding sign-extensions are specified by the C++ language.
What's not specified, but is platform-dependent, is the range of integer types provided. It's even Standard-compliant for char, short int, int, long int and long long int all to have the same range, provided that range satisfies the C++ Standard requirements for long long int. On such a platform, no widening or narrowing would ever happen, but signed<->unsigned conversion could still alter values.
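For instance, a conversion that alters the value without any widening or narrowing (a sketch; the wrap-around result assumes two's complement, and before C++20 the behavior is implementation-defined):
#include <iostream>

int main()
{
    unsigned int u = 4294967295u;   // UINT_MAX where unsigned int is 32 bits
    int s = static_cast<int>(u);    // out of range for int: -1 on two's complement
    std::cout << s << '\n';
}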
I've been searching for a while but couldn't find a definite answer to this apparently simple question: what is the default length of an int?
I know that by default, an int is signed. But is it short or long?
According to the "Fundamental data types" table found on the following page, an int is a long int by default (4 bytes):
http://www.cplusplus.com/doc/tutorial/variables/
Is it always true, or does this depend on the OS (32-bit/64-bit), the compiler, or other things?
It depends on the compiler implementor. An int is supposed to be the best "native" length for the platform. Best native here typically refers to whichever size is most handy/efficient/fast for the targeted processor to work with. Often you can expect int to have the same size as the processor's (integer) registers.
As others have pointed out, there are certain relationships between the various integer types' sizes that the compiler must adhere to, so the implementor is not free to choose just anything. For instance, int can't be larger than long, and so on.
You often talk about programming models in connection with issues like these; e.g. a compiler can choose to make the various types different sizes depending on the chosen model.
The standard requires only:
a range of at least ±32767 (i.e., at least 16 bits)
int is no shorter than short and no longer than long. It may be equal in size to one of them, or neither.
The exact size of integer types depends on the compiler. The de facto standard is
char is 8 bits
short is 16 bits
int is 16 bits on 16-bit systems, and 32 bits on both 32- and 64-bit systems
long may be either 32 or 64 bits
It depends on the architecture, that is, the microprocessor/microcontroller you're compiling the code for (x86, ARM, PIC, Z80, 8051, etc.), and on the compiler, that is, how the compiler implements the fundamental/built-in data types.
You are guaranteed that a short int is at least 16 bits, and that a long int is at least 32 bits, and that plain int will be no smaller than a short nor larger than a long. But the actual sizes will be decided by the compiler implementor.
The C++ Standard says it like this:
3.9.1, §2:
There are five signed integer types: "signed char", "short int", "int", "long int", and "long long int". In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment (44); the other signed integer types are provided to meet special needs.
(44) that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.
The conclusion : it depends on which architecture you're working on. Any other assumption is false.
§4.4 from "The C++ Programming Language" by Bjarne Stroustrup:
Like char, each integer type comes in three forms: "plain" int, signed int, and unsigned int. In addition, integers come in three sizes: short int, "plain" int, and long int. A long int can be referred to as plain long. Similarly, short is a synonym for short int, unsigned for unsigned int, and signed for signed int.
The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules (§C.6.1, §C.6.2.1). Unlike plain chars, plain ints are always signed. The signed int types are simply more explicit synonyms for their plain int counterparts.
Section 4.6 of the same book states:
Sizes of C++ objects are expressed in terms of multiples of the size of a char, so by definition the size of a char is 1. The size of an object or type can be obtained using the sizeof operator (§6.2). This is what is guaranteed about sizes of fundamental types:
1 <= sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
1 <= sizeof(bool) <= sizeof(long)
sizeof(char) <= sizeof(wchar_t) <= sizeof(long)
sizeof(float) <= sizeof(double) <= sizeof(long double)
sizeof(N) <= sizeof(signed N) <= sizeof(unsigned N)
where N can be char, short int, int, or long int. In addition, it is guaranteed that a char has at least 8 bits, a short at least 16 bits, and a long at least 32 bits. A char can hold a character of the machine's character set.
This clearly indicates that sizeof(int) is implementation-defined, but is guaranteed to be at least 16 bits (int is no smaller than short, which must be at least 16 bits).
C++03, §3.9.1/3:
"For each of the signed integer types, there exists a corresponding (but different) unsigned integer type: “unsigned char”, “unsigned short int”, “unsigned int”, and “unsigned long int”, each of which occupies the same amount of storage and has the same alignment requirements (3.9) as the corresponding signed integer type (40); that is, each signed integer type has the same object representation as its corresponding unsigned integer type."
Regarding sizeof(char) always being 1: note that there were machines (Crays) where CHAR_BIT was much larger than 8. That meant, IIRC, that sizeof(int) was also 1, because both char and int were 32-bit.
In practice there's no such thing. Often you can expect std::size_t to represent the unsigned native integer size on the current architecture, i.e. 16-bit, 32-bit or 64-bit, but it isn't always the case, as pointed out in the comments to this answer.
As far as all the other built-in types go, it really depends on the compiler. Here's two excerpts taken from the current working draft of the latest C++ standard:
There are five standard signed integer types : signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.
For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type: unsigned char, unsigned short int, unsigned int, unsigned long int, and unsigned long long int, each of which occupies the same amount of storage and has the same alignment requirements.
If you want to, you can statically (at compile time) assert the sizeof of these fundamental types. It will alert people to think about porting your code if the sizeof assumptions change.
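A minimal sketch (static_assert requires C++11; the particular values asserted here are illustrative assumptions, not requirements):
#include <climits>

// Document the sizes this code base relies on; compilation fails on
// platforms where the assumptions do not hold.
static_assert(CHAR_BIT == 8, "assumes 8-bit bytes");
static_assert(sizeof(int) == 4, "assumes 32-bit int");
static_assert(sizeof(void*) == 8, "assumes 64-bit pointers");

int main() {}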
There is a standard.
The C90 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long)
The C99 standard requires that
sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
Here are the C99 specifications; page 22 details the sizes of the different integral types.
Here are the int type sizes (in bits) for Windows platforms:

Type        C99 minimum    Windows 32-bit
char        8              8
short       16             16
int         16             32
long        32             32
long long   64             64
If you are concerned with portability, or you want the name of the type to reflect its size, you can look at the header <inttypes.h>, where the following types are available:
int8_t
int16_t
int32_t
int64_t
int8_t is guaranteed to be 8 bits, and int16_t is guaranteed to be 16 bits, etc.
If you need fixed-size types, use types like uint32_t (a 32-bit unsigned integer) defined in stdint.h. They are specified in C99.
Updated: C++11 brought the types from TR1 officially into the standard:
long long int
unsigned long long int
And the "sized" types from <cstdint>
int8_t
int16_t
int32_t
int64_t
(and the unsigned counterparts).
Plus you get:
int_least8_t
int_least16_t
int_least32_t
int_least64_t
Plus the unsigned counterparts.
These types represent the smallest integer types with at least the specified number of bits. Likewise there are the "fastest" integer types with at least the specified number of bits:
int_fast8_t
int_fast16_t
int_fast32_t
int_fast64_t
Plus the unsigned versions.
What "fast" means, if anything, is up to the implementation. It need not be the fastest for all purposes either.
Nope, there is no standard for type sizes. The standard only requires that:
sizeof(short int) <= sizeof(int) <= sizeof(long int)
The best thing you can do if you want variables of a fixed size is to use macros like this:
#ifdef SYSTEM_X
#define WORD int
#else
#define WORD long int
#endif
Then you can use WORD to define your variables. It's not that I like this, but it's the most portable way.
For floating-point numbers there is a standard (IEEE 754): floats are 32 bits and doubles are 64. This is a hardware standard, not a C++ standard, so compilers could theoretically define float and double to some other size, but in practice I've never seen an architecture that used anything different.
We are allowed to define a synonym for the type so we can create our own "standard".
On a machine in which sizeof(int) == 4, we can define:
typedef int int32;
int32 i;
int32 j;
...
So when we transfer the code to a different machine, where it is actually long int that has size 4, we can just redefine the single occurrence of int:
typedef long int int32;
int32 i;
int32 j;
...
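If C++11 is available, the typedef can also be guarded so that a wrong choice fails at compile time rather than corrupting data at run time (a sketch; the asserted value mirrors the assumption above):
typedef int int32;  // or: typedef long int int32; on the other machine
static_assert(sizeof(int32) == 4, "int32 must be exactly 4 bytes");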
There is a standard and it is specified in the various standards documents (ISO, ANSI and whatnot).
Wikipedia has a great page explaining the various types and the max they may store:
Integer in Computer Science.
However, even with a standard C++ compiler you can find out relatively easily, using the following code snippet:
#include <iostream>
#include <limits>
int main() {
// Change the template parameter to the various different types.
std::cout << std::numeric_limits<int>::max() << std::endl;
}
Documentation for std::numeric_limits can be found at Roguewave. It includes a plethora of other members you can query to find out the various limits. This can be used with any arbitrary type that conveys size, for example std::streamsize.
John's answer contains the best description, as those guarantees are certain to hold no matter what platform you are on. There is another good page that goes into more detail about how many bits each type must contain: int types, which are defined in the standard.
I hope this helps!