fastest/smallest signed integer type - c++

I am reading about the Fixed width integer types (cppreference) and came across the types int_fast8_t, int_fast16_t, int_fast32_t and int_least8_t, int_least16_t, int_least32_t, etc. My questions are the following:
What does it mean to say, for example, that int_fast32_t is the fastest signed integer type (with at least 32 bits)? Is the more common type unsigned int slow?
What does it mean to say, for example, that int_least32_t is the smallest signed integer type?
What are the differences between int_fast32_t, int_least32_t and unsigned int?

int_fast32_t means the fastest type of at least 32 bits for the processor. For most processors it is probably a 32-bit int. But imagine a 48-bit processor without a 32-bit add instruction: keeping everything 48 bits is faster. int_least32_t is the smallest type for the target that holds at least 32 bits. On the hypothetical 48-bit processor, there might be a 32-bit data type supported, with library support to implement it; or int_least32_t might also be 48 bits. int is usually the fastest integer type for the target, but there is no guarantee as to the number of bits you'll get.
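If you want to see what your own implementation chose, a minimal sketch like this (assuming a C++11 compiler with <cstdint>) prints the widths of the typedefs; on a common desktop target all three usually come out as 32 bits, but none of that is guaranteed:

#include <cstdint>
#include <climits>
#include <iostream>

int main()
{
    std::cout << "int           : " << sizeof(int)                * CHAR_BIT << " bits\n";
    std::cout << "int_fast32_t  : " << sizeof(std::int_fast32_t)  * CHAR_BIT << " bits\n";
    std::cout << "int_least32_t : " << sizeof(std::int_least32_t) * CHAR_BIT << " bits\n";
}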

Related

int_fast32_t and int_fast16_t are typedefs of int. How is it supposed to be used?

stdint.h defines int32_t, int_least32_t, int_fast16_t and int_fast32_t as just the int data type.
How exactly are they different and how is that useful?
I'm especially confused by the "int_fast16_t and int_fast32_t" part:
both the 16- and 32-bit variants are implemented as the default int. How is that supposed to work?
Well, the header implementation states the reason clearly:
To accommodate targets that are missing types that are exactly 8, 16, 32, or 64 bits wide, this implementation takes an approach of cascading redefinitions, redefining __int_leastN_t to successively smaller exact-width types.
Furthermore, the "Minimum-width integer types" section of that description is helpful:
The standard mandates that these have widths greater than or equal to N, and that no smaller type with the same signedness has N or more bits. For example, if a system provided only a uint32_t and uint64_t, uint_least16_t must be equivalent to uint32_t.
int16_t is an integer which takes exactly 16 bits.
int_fast16_t is a platform-dependent implementation. On a 32-bit architecture, int16_t is misaligned and inefficient, as it first has to be widened into 32-bit registers during a load. int_fast16_t is 32-bit aligned and takes more memory, but is more efficient.
int_least16_t is also a platform-dependent implementation. It is a packed implementation which is memory efficient, but less efficient in terms of performance.
Similar is the case for int32_t, int_fast32_t and int_least32_t.
Yours is probably a 32-bit architecture. That is why int_fast16_t, int32_t, int_fast32_t and int_least32_t are all defined as int: all are 32-bit aligned. int_fast16_t takes 32 bits, that is it.
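To see how this plays out on your own toolchain, here is a minimal sketch (C++11, assuming <cstdint>); the exact-width type is pinned to 16 bits, while the fast and least variants only carry a lower bound, so the widths printed are implementation choices:

#include <cstdint>
#include <climits>
#include <iostream>

// int16_t has exactly 16 bits whenever it exists at all.
static_assert(sizeof(std::int16_t) * CHAR_BIT == 16, "int16_t is exactly 16 bits wide");

int main()
{
    std::cout << "int_fast16_t  : " << sizeof(std::int_fast16_t)  * CHAR_BIT << " bits\n";
    std::cout << "int_least16_t : " << sizeof(std::int_least16_t) * CHAR_BIT << " bits\n";
}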

Is it correct to assign int64_t or int32_t to ptrdiff_t?

Whilst working on porting a C++ cross-platform (Windows & Linux) 32-bit codebase to accommodate 64-bit environments, I had the following questions:
On a 32-bit system, is it functionally correct to assign a value from an int32_t type to a ptrdiff_t type?
On a 64-bit system, is it functionally correct to assign a value from an int64_t type to a ptrdiff_t type?
Out of interest: on a 64-bit system, is it functionally correct to assign a value from an int32_t type to a ptrdiff_t type?
Context: the signed ptrdiff_t value is used in some iterator arithmetic and could possibly take on a negative value, since subtraction is used in the iterator arithmetic logic.
ptrdiff_t is in practice 32 bits on a 32-bit system, and 64 bits on a 64-bit system. It can't be less.(1) On a 16-bit system it has to be at least 17 (yes, that's not a typo) bits.
Since you ask, it's likely that someone who maintains the code is unsure about this.
For them, just static_assert the size requirements, e.g. static_assert( sizeof(ptrdiff_t) >= sizeof(int), "" ).
(1) ptrdiff_t has to be sufficient to represent the pointer difference of any two arbitrary pointers to char in a contiguous array, so it must support the largest possible array of bytes.
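To make the static_assert suggestion concrete for the assignments in the question, a minimal sketch (C++11, using <cstddef> and <cstdint>) might look like this:

#include <cstddef>
#include <cstdint>

static_assert(sizeof(std::ptrdiff_t) >= sizeof(std::int32_t),
              "ptrdiff_t must be able to hold any int32_t value");
// On a 64-bit build where int64_t values end up in a ptrdiff_t,
// the analogous check would be:
// static_assert(sizeof(std::ptrdiff_t) >= sizeof(std::int64_t), "");

int main()
{
    std::int32_t offset = -42;    // e.g. a negative iterator distance
    std::ptrdiff_t d = offset;    // fine wherever the assert above holds
    (void)d;
}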

Concerning integer types of bits in Dev-C++

I'm using the Dev-C++ and wxDev-C++ compilers whilst reading C++ Primer Plus to learn the programming language. My book's literal explanation of integer types is as follows:
A short integer is at least 16 bits wide.
An int integer is at least as big as short.
A long integer is at least 32 bits wide and at least as big as int.
A long long integer is at least 64 bits wide and at least as big as long.
Can anyone explain this to me?
OK, I have to admit that even after thinking about it for a while I still don't understand where you need clarification, but I'm still going to try to help you.
The C++ standard doesn't explicitly say how many bits short, int, long or long long use; it only gives lower limits and leaves the rest up to the implementation. The limits are as you listed them.
For example:
On my Windows machine, short is 16 bits, int and long are both 32 bits, and long long is 64 bits.
On my Linux machine, short is 16 bits, int is 32 bits, and long and long long are both 64 bits.
As you can see, the implementation of long differs.
If you want to find out how many bits a certain type uses on your platform/compiler, try this:
#include <iostream>
#include <climits>

int main()
{
    // Storage size in bytes times bits per byte gives the type's width.
    std::cout << sizeof(int) * CHAR_BIT << '\n';
    return 0;
}
You can replace int with another type if you want.
Also you could look at http://en.cppreference.com/w/cpp/types/integer for some types with fixed bit width.
I hope that helps you since I don't really know how else I could help.
Let's go through the list one by one.
A short integer is at least 16 bits wide.
This means that there can be platforms where a short is 16 bits, platforms where it is 17 bits, platforms where it is 18 bits, and so on, but no platform can give you a short that has only 15 bits or less.
So, for the sake of the argument, let's define several different platforms, which we just name with capital letters. Say platforms A to D have 16 bit short, E and F have 32 bit short, and platform G has 1024 bit short. All of those platforms are conforming so far.
An int integer is at least as big as short.
This means that you cannot make int smaller than short, but you can make it arbitrarily large. For example, platforms A to D above could all use a 16 bit int, but platforms E and F would have to make int at least 32 bits, and platform G must make it at least 1024 bits.
Let's assume that A chooses 16 bits, B, C and E choose 32 bits, D and F choose 64 bits, and G chooses again 1024 bits. All those choices fulfil the rules so far.
A long integer is at least 32 bits wide and at least as big as int.
Here we have two conditions. First, long has to be at least 32 bits wide, so even platform A cannot choose long to be only 16 bits. Second, long also cannot be shorter than int, which means e.g. platforms D and F could not have a 32 bit long because it would be shorter than their int. And for platform G, the minimum size for long is again 1024 bits.
So possible choices would be that platforms A, B and E choose 32 bit long, platforms C, D and F choose 64 bits, and G chooses 1024 bits.
A long long integer is at least 64 bits wide and at least as big as long.
Without going into detail again, now platforms A to F could all choose 64 bit or larger, and G could choose 1024 bits or larger.
Let's assume for completeness that platforms A to D choose 64 bits, E and F choose 128 bits, and G chooses 1024 bits.
Then we get the following list of platforms which all would be valid (but they are by far not all possible valid platforms):
            A    B    C    D    E    F    G
short       16   16   16   16   32   32   1024
int         16   32   32   64   32   64   1024
long        32   32   64   64   32   64   1024
long long   64   64   64   64   128  128  1024
Platform A would be a typical 16 bit platform (except that at the time 16 bit platforms were current, long long wasn't yet an official type). Platform B would be a typical 32 bit platform, but also represents many 64 bit platforms. Platform C is also a possible 64 bit platform implementation. I don't believe any of the others are used anywhere, but they would be valid implementations anyway.
As you can see, on some platforms int has the same size as short, on others the same size as long, and on yet others it may be somewhere in between. On some platforms short and long may even have the same size, in which case int must also have that size.
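If you want to verify the minimum widths on your own platform, a minimal sketch (C++11, using <climits>) can assert them at compile time; these hold on every conforming implementation, whatever the actual sizes are:

#include <climits>

static_assert(sizeof(short)     * CHAR_BIT >= 16, "short is at least 16 bits");
static_assert(sizeof(int)       * CHAR_BIT >= 16, "int is at least 16 bits");
static_assert(sizeof(long)      * CHAR_BIT >= 32, "long is at least 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");
static_assert(sizeof(short) <= sizeof(int) &&
              sizeof(int)   <= sizeof(long) &&
              sizeof(long)  <= sizeof(long long),
              "each type is at least as big as the one before it");

int main() {}   // nothing to run; all checks happen at compile time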
I'm guessing you just don't understand how integers are represented on modern hardware. Go read about how a number is represented in binary first:
Binary number
Each bit represents a 0 or 1 in the binary representation of a number. To represent negative numbers we use some special tricks, which you can read about here:
Two's complement (used for signed numbers, e.g. C's int type)
How many "bits wide" the type is just constrains the largest (and smallest negative) numbers the type can represent. For example, the short type can represent numbers -32768 to 32767 with its 16 bits, whereas a 32-bit integer can represent -2147483648 to 2147483647.

What is uint_fast32_t and why should it be used instead of the regular int and uint32_t?

So the reason for typedef'd primitive data types is to abstract the low-level representation and make it easier to comprehend (uint64_t instead of long long, which is 8 bytes).
However, there is uint_fast32_t, which has the same typedef as uint32_t. Will using the "fast" version make the program faster?
int may be as small as 16 bits on some platforms. It may not be sufficient for your application.
uint32_t is not guaranteed to exist. It's an optional typedef that the implementation must provide iff it has an unsigned integer type of exactly 32 bits. Some platforms have 9-bit bytes, for example, so they don't have a uint32_t.
uint_fast32_t states your intent clearly: it's a type of at least 32 bits which is the best from a performance point-of-view. uint_fast32_t may be in fact 64 bits long. It's up to the implementation.
There's also uint_least32_t in the mix. It designates the smallest type that's at least 32 bits long, thus it can be smaller than uint_fast32_t. It's an alternative to uint32_t if the latter isn't supported by the platform.
... there is uint_fast32_t which has the same typedef as uint32_t ...
What you are looking at is not the standard. It's a particular implementation (BlackBerry). So you can't deduce from there that uint_fast32_t is always the same as uint32_t.
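If you want to know whether the two coincide on your implementation, a minimal sketch (C++11, using <type_traits>) can check it at compile time:

#include <cstdint>
#include <iostream>
#include <type_traits>

int main()
{
    // True on implementations (like the one quoted above) that typedef both
    // names to the same underlying type; this is not guaranteed in general.
    // Note that uint32_t itself is optional, so this only compiles where it exists.
    std::cout << std::boolalpha
              << std::is_same<std::uint_fast32_t, std::uint32_t>::value << '\n';
}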
See also:
Exotic architectures the standards committees care about.
My pragmatic opinion about integer types in C and C++.
The difference lies in their exactness and availability.
The doc here says:
unsigned integer type with width of exactly 8, 16, 32 and 64 bits respectively (provided only if the implementation directly supports the type):
uint8_t
uint16_t
uint32_t
uint64_t
And
fastest unsigned integer type with width of at least 8, 16, 32 and 64 bits respectively
uint_fast8_t
uint_fast16_t
uint_fast32_t
uint_fast64_t
So the difference is pretty clear: uint32_t is a type which has exactly 32 bits, and an implementation should provide it only if it has a type with exactly 32 bits, which it can then typedef as uint32_t. This means uint32_t may or may not be available.
On the other hand, uint_fast32_t is a type which has at least 32 bits. This also means an implementation may typedef uint32_t as uint_fast32_t if it provides uint32_t. If it doesn't provide uint32_t, then uint_fast32_t can be a typedef of any type which has at least 32 bits.
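Here is a minimal sketch of how code can cope with the optional nature of uint32_t; the UINT32_MAX macro from <cstdint> is only defined when the exact-width type exists:

#include <climits>
#include <cstdint>
#include <iostream>

int main()
{
#if defined(UINT32_MAX)
    // The exact-width type exists on this implementation.
    std::uint32_t exact = 0;
    std::cout << "uint32_t available, " << sizeof(exact) * CHAR_BIT << " storage bits\n";
#else
    std::cout << "no exact 32-bit unsigned type on this implementation\n";
#endif
    // uint_fast32_t is always available; it may simply be wider than 32 bits.
    std::uint_fast32_t fast = 0;
    std::cout << "uint_fast32_t uses " << sizeof(fast) * CHAR_BIT << " storage bits\n";
}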
When you #include <inttypes.h> in your program, you get access to a bunch of different ways of representing integers.
The uint_fast*_t type simply defines the fastest type for representing a given number of bits.
Think about it this way: you define a variable of type short and use it several times in the program, which is totally valid. However, the system you're working on might work more quickly with values of type int. By defining a variable as type uint_fast*_t, the computer simply chooses the most efficient representation that it can work with.
If there is no difference between these representations, then the system chooses whichever one it wants, and uses it consistently throughout.
Note that the fast version could be larger than 32 bits. While the fast int will fit nicely in a register and be properly aligned and the like, it will use more memory. If you have large arrays of these, your program will be slower due to more cache misses and higher memory bandwidth.
I don't think modern CPUs will benefit from int_fast32_t, since generally the sign extension from 32 to 64 bits can happen during the load instruction, and the idea that there is a 'native' integer format that is faster is old-fashioned.
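A minimal sketch of the memory-footprint point; on implementations where uint_fast32_t is 64 bits wide, the second array takes twice the space of the first:

#include <cstddef>
#include <cstdint>
#include <iostream>

int main()
{
    constexpr std::size_t n = 1000000;   // one million elements
    std::cout << "array of uint32_t      : " << n * sizeof(std::uint32_t)      << " bytes\n";
    std::cout << "array of uint_fast32_t : " << n * sizeof(std::uint_fast32_t) << " bytes\n";
}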

Long Vs. Int C/C++ - What's The Point?

As I've learned recently, a long in C/C++ is the same length as an int. To put it simply, why? It seems almost pointless to even include the datatype in the language. Does it have any uses specific to it that an int doesn't have? I know we can declare a 64-bit int like so:
long long x = 0;
But why does the language choose to do it this way, rather than just making a long well...longer than an int? Other languages such as C# do this, so why not C/C++?
When writing in C or C++, every datatype's size is architecture- and compiler-specific. On one system int is 32 bits, but you can find systems where it is 16 or 64; it's not defined, so it's up to the compiler.
As for long and int, it comes from the days when the standard int was 16 bits and long was a 32-bit integer, and it indeed was longer than int.
The specific guarantees are as follows:
char is at least 8 bits (1 byte by definition, however many bits it is)
short is at least 16 bits
int is at least 16 bits
long is at least 32 bits
long long (in versions of the language that support it) is at least 64 bits
Each type in the above list is at least as wide as the previous type (but may well be the same).
Thus it makes sense to use long if you need a type that's at least 32 bits, int if you need a type that's reasonably fast and at least 16 bits.
Actually, at least in C, these lower bounds are expressed in terms of ranges, not sizes. For example, the language requires that INT_MIN <= -32767 and INT_MAX >= +32767. The 16-bit requirement follows from this and from the requirement that integers are represented in binary.
C99 adds <stdint.h> and <inttypes.h>, which define types such as uint32_t, int_least32_t, and int_fast16_t; these are typedefs, usually defined as aliases for the predefined types.
(There isn't necessarily a direct relationship between size and range. An implementation could make int 32 bits, but with a range of only, say, -2^23 .. +2^23-1, with the other 8 bits (called padding bits) not contributing to the value. It's theoretically possible (but practically highly unlikely) that int could be larger than long, as long as long has at least as wide a range as int. In practice, few modern systems use padding bits, or even representations other than two's complement, but the standard still permits such oddities. You're more likely to encounter exotic features in embedded systems.)
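A minimal sketch (using <climits>) showing the limits these range-based guarantees refer to; the values printed are the implementation's actual limits, which must meet or exceed the required minimums:

#include <climits>
#include <iostream>

int main()
{
    // INT_MIN/INT_MAX must cover at least -32767 .. +32767,
    // LONG_MIN/LONG_MAX at least -2147483647 .. +2147483647.
    std::cout << "int  : " << INT_MIN  << " .. " << INT_MAX  << '\n';
    std::cout << "long : " << LONG_MIN << " .. " << LONG_MAX << '\n';
}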
long is not the same length as an int. According to the specification, long is at least as large as int. For example, on Linux x86_64 with GCC, sizeof(long) = 8, and sizeof(int) = 4.
long is not the same size as int, it is at least the same size as int. To quote the C++03 standard (3.9.1-2):
There are four signed integer types: “signed char”, “short int”, “int”, and “long int.” In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.
My interpretation of this is "just use int, but if for some reason that doesn't fit your needs and you are lucky enough to find another integral type that's better suited, be our guest and use that one instead". One way that long might be better is if you're on an architecture where it is... longer.
I was looking for something completely unrelated, stumbled across this, and felt the need to answer. Yes, this is old, so for people who surf in later...
Frankly, I think all the answers on here are incomplete.
The size of a long is the number of bits your processor can operate on at one time. It's also called a "word". A "half-word" is a short. A "doubleword" is a long long and is twice as large as a long (it was originally implemented only by vendors and wasn't standard), and bigger still than a long long is a "quadword", which is twice the size of a long long but has no formal name (and isn't really standard).
Now, where does the int come in? Partly in the registers of your processor, and partly in your OS. Your registers define the native sizes the CPU handles, which in turn define the size of things like short and long. Processors are also designed with a data size that is the most efficient for them to operate on. That should be an int.
On today's 64-bit machines you'd assume, since a long is a word and a word on a 64-bit machine is 64 bits, that a long would be 64 bits and an int whatever the processor is designed to handle, but it might not be. Why? Your OS has chosen a data model and defined these data sizes for you (pretty much by how it's built). Ultimately, if you're on Windows (using Win64), it's 32 bits for both long and int. Solaris and Linux use different definitions (long is 64 bits). These definitions are called things like ILP64, LP64, and LLP64. Windows uses LLP64 and Solaris and Linux use LP64:
Model       ILP64   LP64   LLP64
int         64      32     32
long        64      64     32
pointer     64      64     64
long long   64      64     64
Where, e.g., ILP means int-long-pointer, and LLP means long-long-pointer
To get around this, most compilers seem to support setting the size of an integer directly with types like int32 or int64.
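A minimal sketch that prints the sizes the current build actually uses; comparing the output against the table above shows which data model (ILP64, LP64, or LLP64) your platform and compiler picked:

#include <climits>
#include <iostream>

int main()
{
    std::cout << "int       : " << sizeof(int)       * CHAR_BIT << " bits\n";
    std::cout << "long      : " << sizeof(long)      * CHAR_BIT << " bits\n";
    std::cout << "pointer   : " << sizeof(void*)     * CHAR_BIT << " bits\n";
    std::cout << "long long : " << sizeof(long long) * CHAR_BIT << " bits\n";
}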