How and where is it guaranteed that a uint8_t is 8 bits?
Digging into N3242 - "Working Draft, Standard for Programming Language C++", section 18.4.1
the <cstdint> synopsis says:
`typedef unsigned integer type uint8_t; // optional`
So, in essence, a C++-standard-conforming library is not required to define uint8_t at all.
Update: I suppose I'm really just asking: which line of the standard says that the uintN_t types are exactly N bits wide?
From C++:
18.4.1 Header <cstdint> synopsis
... The header defines all functions, types, and macros the same as 7.18 in the C standard. ...
From C:
7.20.1.1 Exact-width integer types
1 The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.
2 The typedef name uintN_t designates an unsigned integer type with width N and no padding bits. Thus, uint24_t denotes such an unsigned integer type with a width of exactly 24 bits.
3 These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names.
So, in essence, a C++-standard-conforming library is not required to define uint8_t at all.
Correct. As Nikos mentioned (+1), you just need an alternative when/if the typedef is not present/declared.
The <stdint.h> types are defined with reference to the C99 standard.
C99 draft N869 §7.17.1.1/2:
“The typedef name uintN_t designates an unsigned integer type with width N. Thus,
uint24_t denotes an unsigned integer type with a width of exactly 24 bits.”
If a type is defined by <stdint.h>, then so are its associated macros, and if the type is not defined, then neither are its associated macros, by C99 §7.18/4.
Thus, you can use the existence or not of the macro UINT8_MAX (required by C99 §7.18.2.1) to check for the presence or absence of the uint8_t type definition.
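The macro test described above can be turned into a compile-time fallback. A minimal sketch; the alias name u8 is mine, not from the thread:

```cpp
#include <cstdint>

// Per C99 §7.18/4, UINT8_MAX is defined if and only if uint8_t is,
// so the macro can select between the exact-width type and a fallback.
#ifdef UINT8_MAX
using u8 = std::uint8_t;      // exact-width type available: exactly 8 bits
#else
using u8 = unsigned char;     // fallback: smallest unsigned type, >= 8 bits
#endif

static_assert(sizeof(u8) == 1, "u8 occupies exactly one byte");
```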
uint8_t is guaranteed to be 8 bits wide. If it doesn't exist, then obviously you can't use it. But if it's there, it's 8 bits. There's no possibility where it's provided but is not 8 bits.
Related
I need to know whether intmax_t is always "the same type" as uintmax_t, apart from one being signed (two's complement) and the other unsigned.
Or putting this in formal terms, will the code below always compile in a standard-compliant compiler?
#include <cstdint>
// The important assertion:
static_assert(sizeof(std::uintmax_t) == sizeof(std::intmax_t));
// Less important assertions:
static_assert(UINTMAX_MAX == static_cast<std::uintmax_t>(INTMAX_MAX) * 2 + 1);
static_assert(-static_cast<std::intmax_t>(UINTMAX_MAX / 2) - 1 == INTMAX_MIN);
I'm particularly interested in C++17.
I know that C++20 is the first version of the standard that enforces two's complement but the size of the variable is more important to me than the representation.
Yes, uintmax_t is guaranteed to be the unsigned counterpart of intmax_t.
From the C standard (N1570 7.20.1):
When typedef names differing only in the absence or presence of the initial u are defined, they shall denote corresponding signed and unsigned types as described in 6.2.5; an implementation providing one of these corresponding types shall also provide the other.
(C++ simply refers to C for the description of C standard library headers.)
6.2.5 p6:
For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.
C++ is similar to C in this regard ([basic.fundamental]/3):
For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type [...] which occupies the same amount of storage and has the same alignment requirements as the corresponding signed integer type; that is, each signed integer type has the same object representation as its corresponding unsigned integer type. Likewise, for each of the extended signed integer types there exists a corresponding extended unsigned integer type with the same amount of storage and alignment requirements.
Which means intmax_t and uintmax_t always have the same size.
However, it is not guaranteed that the other two assertions will hold (prior to C++20/C23).
From the comments/title, it seems you're asking if they're required to be the same size; in which case, the short answer is yes ... long answer ... kind of :)
Quote from [basic.fundamental]/3 (C++17 draft N4713)
For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type: “unsigned char”, “unsigned short int”, “unsigned int”, “unsigned long int”, and “unsigned long long int”, each of which occupies the same amount of storage and has the same alignment requirements as its corresponding signed integer type.
(emphasis mine)
This guarantees that the unsigned versions take up the same size as their signed equivalents.
That being said, the standard [cstdint.syn] only states:
using intmax_t = signed integer type;
using uintmax_t = unsigned integer type;
[basic.fundamental]/2 states:
The standard and extended signed integer types are collectively called signed integer types.
and [basic.fundamental]/3 states:
The standard and extended unsigned integer types are collectively called unsigned integer types.
So, technically, a compiler does not have to implement them as the same type, since that's an implementation detail; practically speaking though, they'd be the same.
As noted by duck, the C standard does indicate that there must be corresponding versions between types with u and no u prefix. The C standard is referenced via [cstdint.syn]/2
My answer was shown to be incorrect by duck. Thanks, duck!
https://stackoverflow.com/a/75203111/2027196
My original answer for future reference:
uintmax_t and intmax_t are not guaranteed by the standard to be the same width, but there is no system with a modern C compiler for which they will be different widths. There may not even be a non-modern C compiler for which they will be different widths (asserted without evidence in the same vein as "the sky is blue" type arguments).
The best you can get is that uintmax_t and intmax_t are guaranteed by convention to be the same width. I say "best you can get" but I'd be willing to rely on this guarantee more often than I rely on a compiler perfectly implementing all of its edge-case SFINAE requirements or similar.
Put your static_assert at the top of your library (maybe in an assumptions.hpp file) and then never worry about this problem ever again.
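A sketch of the "assumptions.hpp" idea suggested above, collecting the checks in one place so a port to an exotic platform fails loudly at compile time (the file name is the answer's own suggestion):

```cpp
// assumptions.hpp: compile-time checks for platform assumptions this
// library relies on. If any fires, the port needs real attention.
#include <cstdint>

static_assert(sizeof(std::uintmax_t) == sizeof(std::intmax_t),
              "intmax_t and uintmax_t are assumed to have the same width");
```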
Are the types from <cstdint> (like e.g. int16_t, uint_fast64_t, int_least8_t) guaranteed to be typedefs for one of the built in types like short, unsigned long etc.?
Or is an implementation allowed to use types that are none of the usual built-in ones to implement the fixed width types?
No, at least not for types intN_t. These types are guaranteed to have two’s complement representation (as per C99 7.18.1.1 which C++11 and C++14 reference). And standard integer types don't have to be two's complement.
C11 also has an important change over C99 (really just a bug fix), emphasizing the point above:
7.20.1.1/3:
However, if an implementation provides integer types with
widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a
two’s complement representation, it shall define the corresponding typedef names.
These are specified by the C standard (and incorporated by reference by the C++ standard), which requires each to be a typedef for a signed integer type or unsigned integer type, as the case may be.
signed integer type is in turn defined by the core language to consist of the standard signed integer types (which are signed char, short int, int, long int and long long int) and any implementation-defined extended signed integer types.
Similarly, unsigned integer type is defined by the core language to consist of the standard unsigned integer types (which are unsigned char, unsigned short int, unsigned int, unsigned long int and unsigned long long int) and any implementation-defined extended unsigned integer types corresponding to the extended signed integer types.
In short, each of those typedefs may be one of the usual built-in types or an implementation-defined extended integer type. Most compilers do not support extended integer types, so on those compilers they must be built-in types.
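To see which built-in type a given typedef resolves to on a particular compiler, a non-portable compile-time probe can be used. This is a sketch for the current platform only; the candidate list is an assumption about typical implementations, not a guarantee:

```cpp
#include <cstdint>
#include <type_traits>

// On a compiler without extended integer types, int32_t must be an alias
// for one of the built-in types; which one is implementation-specific.
static_assert(std::is_same<std::int32_t, int>::value ||
              std::is_same<std::int32_t, long>::value ||
              std::is_same<std::int32_t, short>::value,
              "int32_t resolves to a built-in type on this platform");
```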
I have a draft version of the C99 spec in front of me and a draft of the C++14 spec as well. Since these are drafts, this information might be incorrect, but I believe that the wording is the same in the final version.
In the C++14 spec, §18.4.1 has this to say about <cstdint>:
namespace std {
typedef signed-integer-type int8_t; // optional
typedef signed-integer-type int16_t; // optional
typedef signed-integer-type int32_t; // optional
typedef signed-integer-type int64_t; // optional
[ etc. ]
}
It then says
The header defines all functions, types, and macros the same as 7.18 in the C standard.
I went to the draft C99 standard, §7.18, and saw nothing that required the types defined to actually be aliases for built-in types like int, long int, etc. It just said that if these types exist, they have to meet certain constraints about their ranges, sizes, and memory layouts.
Overall, I strongly suspect that the answer is "no," but if I'm mistaken, I'm interested to see where I misread the spec. :-)
Hope this helps!
From the C++ standard: "There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types." (I think there is a similar line about extended unsigned integer types.) Nothing is said about how you would use these extended integer types; they are obviously non-portable and implementation-defined.
However, int16_t etc. can be typedefs for extended integer types.
According to C and C++, CHAR_BIT >= 8.
But whenever CHAR_BIT > 8, uint8_t can't even be 8 bits: it would have to be larger, because CHAR_BIT is the minimum number of bits for any data type on the system.
On what kind of a system can uint8_t be legally defined to be a type other than unsigned char?
(If the answer is different for C and C++ then I'd like to know both.)
If it exists, uint8_t must always have the same width as unsigned char. However, it need not be the same type; it may be a distinct extended integer type. It also need not have the same representation as unsigned char; for instance, the bits could be interpreted in the opposite order. This is a silly example, but it makes more sense for int8_t, where signed char might be one's complement or sign-magnitude while int8_t is required to be two's complement.
One further "advantage" of using a non-char extended integer type for uint8_t even on "normal" systems is C's aliasing rules. Character types are allowed to alias anything, which prevents the compiler from heavily optimizing functions that use both character pointers and pointers to other types, unless the restrict keyword has been applied well. However, even if uint8_t has the exact same size and representation as unsigned char, if the implementation made it a distinct, non-character type, the aliasing rules would not apply to it, and the compiler could assume that objects of types uint8_t and int, for example, can never alias.
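The aliasing point can be illustrated with a sketch: because the parameter below is a character type, the compiler must assume each store through it can modify *count, and so must re-read *count on every iteration (the function name is my own):

```cpp
// Because unsigned char may alias any object, the store bytes[i] = 0 could
// legally modify *count; the compiler cannot hoist the load of *count out
// of the loop. A hypothetical non-character uint8_t would not alias int,
// and the bound could be loaded once.
void zero_fill(unsigned char *bytes, int *count) {
    for (int i = 0; i < *count; ++i)  // *count re-read each pass
        bytes[i] = 0;
}
```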
On what kind of a system can uint8_t be legally defined to be a type other than unsigned char?
In summary, uint8_t can only be legally defined on systems where CHAR_BIT is 8: it is an addressable unit with exactly 8 value bits and no padding bits.
In detail, CHAR_BIT defines the width of the smallest addressable unit, and uint8_t can't have padding bits, so it can only exist when the smallest addressable unit is exactly 8 bits wide. Provided CHAR_BIT is 8, uint8_t can be a typedef for any 8-bit unsigned integer type that has no padding bits.
Here's what the C11 standard draft (n1570.pdf) says:
5.2.4.2.1 Sizes of integer types
1 The values given below shall be replaced by constant expressions suitable for use in #if
preprocessing directives. ... Their implementation-defined values shall be equal or
greater in magnitude (absolute value) to those shown, with the same sign.
-- number of bits for smallest object that is not a bit-field (byte)
CHAR_BIT 8
Thus the smallest objects must contain exactly CHAR_BIT bits.
6.5.3.4 The sizeof and _Alignof operators
...
4 When sizeof is applied to an operand that has type char, unsigned
char, or signed char, (or a qualified version thereof) the result is
1. ...
Thus, those are (some of) the smallest addressable units. Obviously int8_t and uint8_t may also be considered smallest addressable units, provided they exist.
7.20.1.1 Exact-width integer types
1 The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.
2 The typedef name uintN_t designates an unsigned integer type with width N and no padding bits. Thus, uint24_t denotes such an unsigned integer type with a width of exactly 24 bits.
3 These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names.
The emphasis on "These types are optional" is mine. I hope this was helpful :)
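The relationship above can be asserted mechanically. A sketch, relying on UINT8_MAX tracking the presence of uint8_t: a width-8 type with no padding bits must occupy exactly one byte, so its existence forces CHAR_BIT to be 8.

```cpp
#include <climits>
#include <cstdint>

// CHAR_BIT is at least 8 on every conforming implementation.
static_assert(CHAR_BIT >= 8, "required by both C and C++");

// If uint8_t exists (signalled by its UINT8_MAX macro), the byte must be
// exactly 8 bits: 8 value bits, no padding, and sizeof >= 1 byte.
#ifdef UINT8_MAX
static_assert(CHAR_BIT == 8, "uint8_t implies 8-bit bytes");
#endif
```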
A possibility that no one has so far mentioned: if CHAR_BIT==8 and unqualified char is unsigned, which it is in some ABIs, then uint8_t could be a typedef for char instead of unsigned char. This matters at least insofar as it affects overload choice (and its evil twin, name mangling), i.e. if you were to have both foo(char) and foo(unsigned char) in scope, calling foo with an argument of type uint8_t would prefer foo(char) on such a system.
Given this C++11 program, should I expect to see a number or a letter? Or not make expectations?
#include <cstdint>
#include <iostream>
int main()
{
int8_t i = 65;
std::cout << i;
}
Does the standard specify whether this type can or will be a character type?
From § 18.4.1 [cstdint.syn] of the C++0x FDIS (N3290), int8_t is an optional typedef that is specified as follows:
namespace std {
typedef signed integer type int8_t; // optional
//...
} // namespace std
§ 3.9.1 [basic.fundamental] states:
There are five standard signed integer types: “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types.
...
Types bool, char, char16_t, char32_t, wchar_t, and the signed and unsigned integer types are collectively called integral types. A synonym for integral type is integer type.
§ 3.9.1 also states:
In any particular implementation, a plain char object can take on either the same values as a signed char or an unsigned char; which one is implementation-defined.
It is tempting to conclude that int8_t may be a typedef of char provided char objects take on signed values; however, this is not the case as char is not among the list of signed integer types (standard and possibly extended signed integer types). See also Stephan T. Lavavej's comments on std::make_unsigned and std::make_signed.
Therefore, either int8_t is a typedef of signed char or it is an extended signed integer type whose objects occupy exactly 8 bits of storage.
To answer your question, though, you should not make assumptions. Because functions of both forms x.operator<<(y) and operator<<(x,y) have been defined, § 13.5.3 [over.binary] says that we refer to § 13.3.1.2 [over.match.oper] to determine the interpretation of std::cout << i. § 13.3.1.2 in turn says that the implementation selects from the set of candidate functions according to § 13.3.2 and § 13.3.3. We then look to § 13.3.3.2 [over.ics.rank] to determine that:
The template<class traits> basic_ostream<char,traits>& operator<<(basic_ostream<char,traits>&, signed char) template would be called if int8_t is an Exact Match for signed char (i.e. a typedef of signed char).
Otherwise, the int8_t would be promoted to int and the basic_ostream<charT,traits>& operator<<(int n) member function would be called.
In the case of std::cout << u for u a uint8_t object:
The template<class traits> basic_ostream<char,traits>& operator<<(basic_ostream<char,traits>&, unsigned char) template would be called if uint8_t is an Exact Match for unsigned char.
Otherwise, since int can represent all uint8_t values, the uint8_t would be promoted to int and the basic_ostream<charT,traits>& operator<<(int n) member function would be called.
If you always want to print a character, the safest and most clear option is:
std::cout << static_cast<signed char>(i);
And if you always want to print a number:
std::cout << static_cast<int>(i);
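Both behaviors can be seen side by side in a small sketch; the render helper is my own name, and the 'A' outcome assumes the common case where int8_t is a typedef of signed char:

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// On platforms where int8_t is signed char, the character overload of <<
// is selected for the raw value; the cast forces the numeric overload.
std::string render(int8_t i) {
    std::ostringstream os;
    os << i;                    // likely a character ('A' for 65)
    os << static_cast<int>(i);  // always the number ("65")
    return os.str();
}
// render(65) yields "A65" on typical platforms.
```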
int8_t is exactly 8 bits wide (if it exists).
The only predefined integer types that can be 8 bits are char, unsigned char, and signed char. Both short and unsigned short are required to be at least 16 bits.
So int8_t must be a typedef for either signed char or plain char (the latter if plain char is signed).
If you want to print an int8_t value as an integer rather than as a character, you can explicitly convert it to int.
In principle, a C++ compiler could define an 8-bit extended integer type (perhaps called something like __int8), and make int8_t a typedef for it. The only reason I can think of to do so would be to avoid making int8_t a character type. I don't know of any C++ compilers that have actually done this.
Both int8_t and extended integer types were introduced in C99. For C, there's no particular reason to define an 8-bit extended integer type when the char types are available.
UPDATE:
I'm not entirely comfortable with this conclusion. int8_t and uint8_t were introduced in C99. In C, it doesn't particularly matter whether they're character types or not; there are no operations for which the distinction makes a real difference. (Even putc(), the lowest-level character output routine in standard C, takes the character to be printed as an int argument.) int8_t and uint8_t, if they're defined, will almost certainly be defined as character types, but character types are just small integer types.
C++ provides specific overloaded versions of operator<< for char, signed char, and unsigned char, so that std::cout << 'A' and std::cout << 65 produce very different output. Later, C++ adopted int8_t and uint8_t, but in such a way that, as in C, they're almost certainly character types. For most operations, this doesn't matter any more than it does in C, but for std::cout << ... it does make a difference, since this:
uint8_t x = 65;
std::cout << x;
will probably print the letter A rather than the number 65.
If you want consistent behavior, add a cast:
uint8_t x = 65;
std::cout << int(x); // or static_cast<int>(x) if you prefer
I think the root of the problem is that there's something missing from the language: very narrow integer types that are not character types.
As for the intent, I could speculate that the committee members either didn't think about the issue, or decided it wasn't worth addressing. One could argue (and I would) that the benefits of adding the [u]int*_t types to the standard outweighs the inconvenience of their rather odd behavior with std::cout << ....
I'll answer your questions in reverse order.
Does the standard specify whether this type can or will be a character type?
Short answer: int8_t is signed char on the most popular platforms (GCC/Intel/Clang on Linux and Visual Studio on Windows) but might be something else on others.
The long answer follows.
Section 18.4.1 of the C++11 Standard provides the synopsis of <cstdint> which includes the following
typedef signed integer type int8_t; //optional
Later in the same section, paragraph 2, it says
The header [<cstdint>] defines all functions, types, and macros the same as 7.18 in the C standard.
where C standard means C99 as per 1.1/2:
C++ is a general purpose programming language based on the C programming language as described in ISO/IEC 9899:1999 Programming languages — C (hereinafter referred to as the C standard).
Hence, the definition of int8_t is to be found in Section 7.18 of the C99 standard. More precisely, C99's Section 7.18.1.1 says
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.
In addition, C99's Section 6.2.5/4 says
There are five standard signed integer types, designated as signed char, short int, int, long int, and long long int. (These and other types may be designated in several additional ways, as described in 6.7.2.) There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types.
Finally, C99's Section 5.2.4.2.1 imposes minimum sizes for standard signed integer types. Excluding signed char, all others are at least 16 bits long.
Therefore, int8_t is either signed char or an 8-bit extended (non-standard) signed integer type.
Both glibc (the GNU C library) and the Visual Studio C library define int8_t as signed char. Intel and Clang, at least on Linux, also use glibc, and hence the same applies to them. Therefore, on the most popular platforms int8_t is signed char.
Given this C++11 program, should I expect to see a number or a letter? Or not make expectations?
Short answer: On the most popular platforms (GCC/Intel/Clang on Linux and Visual Studio on Windows) you will certainly see the letter 'A'. On other platforms you might see 65, though. (Thanks to DyP for pointing this out to me.)
In the sequel, all references are to the C++11 standard (current draft, N3485).
Section 27.4.1 provides the synopsis of <iostream>, in particular, it states the declaration of cout:
extern ostream cout;
Now, ostream is a typedef for a template specialization of basic_ostream as per Section 27.7.1:
template <class charT, class traits = char_traits<charT> >
class basic_ostream;
typedef basic_ostream<char> ostream;
Section 27.7.3.6.4 provides the following declaration:
template<class traits>
basic_ostream<char,traits>& operator<<(basic_ostream<char,traits>& out, signed char c);
If int8_t is signed char then it's this overload that's going to be called. The same section also specifies that the effect of this call is printing the character (not the number).
Now, let's consider the case where int8_t is an extended signed integer type. Obviously, the standard doesn't specify overloads of operator<<() for non-standard types, but thanks to promotions and conversions one of the provided overloads might accept the call. Indeed, int is at least 16 bits long and can represent all the values of int8_t. Then 4.5/1 gives that int8_t can be promoted to int. On the other hand, 4.7/1 and 4.7/2 give that int8_t can be converted to signed char. Finally, 13.3.3.1.1 yields that promotion is favored over conversion during overload resolution. Therefore, the following overload (declared in 27.7.3.1)
basic_ostream& basic_ostream::operator<<(int n);
will be called. This means that, this code
int8_t i = 65;
std::cout << i;
will print 65.
Update:
1. Corrected the post following DyP's comment.
2. Added the following comments on the possibility of int8_t be a typedef for char.
As said, the C99 standard (Section 6.2.5/4, quoted above) defines 5 standard signed integer types (char is not one of them) and allows implementations to add their own, which are referred to as extended signed integer types. The C++ standard reinforces that definition in Section 3.9.1/2:
There are five standard signed integer types : “signed char”, “short int”, “int”, “long int”, and “long long int” [...] There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types.
Later, in the same section, paragraph 7 says:
Types bool, char, char16_t, char32_t, wchar_t, and the signed and unsigned integer types are collectively called integral types. A synonym for integral type is integer type.
Therefore, char is an integer type but char is neither a signed integer type nor an unsigned integer type and Section 18.4.1 (quoted above) says that int8_t, when present, is a typedef for a signed integer type.
What might be confusing is that, depending on the implementation, char can take the same values as a signed char. In particular, char might have a sign but it's still not a signed char. This is explicitly said in Section 3.9.1/1:
[...] Plain char, signed char, and unsigned char are three distinct types. [...] In any particular implementation, a plain char object can take on either the same values as a signed char or an unsigned char; which one is implementation-defined.
This also implies that char is not a signed integer type as defined by 3.9.1/2.
3. I admit that my interpretation and, specifically, the sentence "char is neither a signed integer type nor an unsigned integer type" is a bit controversial.
To strengthen my case, I would like to add that Stephan T. Lavavej said the very same thing here and Johannes Schaub - litb also used the same sentence in a comment on this post.
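The distinctness of char can be verified with type traits; a small sketch:

```cpp
#include <type_traits>

// char is a distinct type from both signed char and unsigned char, even
// though its value range matches one of them. This is why int8_t cannot
// be plain char under the "signed integer type" wording.
static_assert(!std::is_same<char, signed char>::value,
              "char is distinct from signed char");
static_assert(!std::is_same<char, unsigned char>::value,
              "char is distinct from unsigned char");
```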
The working draft copy I have, N3376, specifies in [cstdint.syn] § 18.4.1 that these types are typedefs for signed and unsigned integer types:
namespace std {
typedef signed integer type int8_t; // optional
typedef signed integer type int16_t; // optional
typedef signed integer type int32_t; // optional
typedef signed integer type int64_t; // optional
typedef signed integer type int_fast8_t;
typedef signed integer type int_fast16_t;
typedef signed integer type int_fast32_t;
typedef signed integer type int_fast64_t;
typedef signed integer type int_least8_t;
typedef signed integer type int_least16_t;
typedef signed integer type int_least32_t;
typedef signed integer type int_least64_t;
typedef signed integer type intmax_t;
typedef signed integer type intptr_t; // optional
typedef unsigned integer type uint8_t; // optional
typedef unsigned integer type uint16_t; // optional
typedef unsigned integer type uint32_t; // optional
typedef unsigned integer type uint64_t; // optional
typedef unsigned integer type uint_fast8_t;
typedef unsigned integer type uint_fast16_t;
typedef unsigned integer type uint_fast32_t;
typedef unsigned integer type uint_fast64_t;
typedef unsigned integer type uint_least8_t;
typedef unsigned integer type uint_least16_t;
typedef unsigned integer type uint_least32_t;
typedef unsigned integer type uint_least64_t;
typedef unsigned integer type uintmax_t;
typedef unsigned integer type uintptr_t; // optional
} // namespace std
Since the only requirement made is that it must be 8 bits wide, a typedef to a character type is acceptable.
char, signed char, and unsigned char are three distinct types, and a char is not always 8 bits. On most platforms they are all 8-bit integers, but std::ostream only defines character overloads of << for them, which print the value as a character (analogous to printf("%c", ...)).
I am porting some code from C to C++. During the conversion I encountered:
uint128_t does not name a type
My compiler: gcc version 5.2.1
My operating system: Ubuntu 15.1
This compiled fine as C, and I thought it would be resolved by including stdint.h, but it was not. So far I have not tried anything else, since there doesn't seem to be a lot of information on this error (example). uint128_t is used throughout this entire program and is essential for the build, therefore I cannot remove it, and I'm not sure about using a different integer type.
Below is an example of where and how it is used.
union {
uint16_t u16;
uint32_t u32;
uint128_t u128;
} value;
Would it be okay to define a uint128_t or should I look at my compiler?
GCC has builtin support for the types __int128, unsigned __int128, __int128_t and __uint128_t (the last two are undocumented). Use them to define your own types:
typedef __int128 int128_t;
typedef unsigned __int128 uint128_t;
Alternatively, you can use __mode__(TI):
typedef int int128_t __attribute__((mode(TI)));
typedef unsigned int uint128_t __attribute__((mode(TI)));
Quoting the documentation:
TImode
“Tetra Integer” (?) mode represents a sixteen-byte integer.
Sixteen bytes = 16 * CHAR_BIT >= 128 bits.
I thought this would be resolved by including stdint.h but it has not.
Well, it may not.
First, check the C++ header <cstdint>. From C++14, chapter § 18.4.1:
namespace std {.....
typedef unsigned integer type uint8_t; // optional
typedef unsigned integer type uint16_t; // optional
typedef unsigned integer type uint32_t; // optional
typedef unsigned integer type uint64_t; // optional
.....
and,
The header defines all functions, types, and macros the same as 7.18 in the C standard. [..]
Then, quoting the C11 standard, chapter §7.20.1.1 (emphasis mine):
The typedef name uintN_t designates an unsigned integer type with width N and no
padding bits. Thus, uint24_t denotes such an unsigned integer type with a width of
exactly 24 bits.
These types are optional. However, if an implementation provides integer types with
widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a
two’s complement representation, it shall define the corresponding typedef names.
So, here we notice two things.
An implementation is not mandated to provide support for the fixed-width ints.
The standard only names widths up to 64; a wider type such as a 128-bit integer is, once again, not mandated by the standard. You need to check the documentation of the environment in use.
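If you want to probe for 128-bit support rather than assume it, GCC and Clang predefine __SIZEOF_INT128__ on targets where __int128 is available. A sketch; the struct fallback is only a placeholder, not a drop-in arithmetic replacement:

```cpp
// Detect the GCC/Clang 128-bit extension at compile time.
#if defined(__SIZEOF_INT128__)
typedef unsigned __int128 u128;   // native 128-bit arithmetic available
#else
// Placeholder: two 64-bit halves; real arithmetic would need a library
// such as Boost.Multiprecision.
typedef struct { unsigned long long hi, lo; } u128;
#endif

static_assert(sizeof(u128) == 16, "128 bits of storage either way");
```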
As pointed out by the other answer, the C++ standard does not require a 128-bit integer type to be available, nor for it to be typedef'd as uint128_t even if it is present. If your compiler/architecture does not support 128-bit integers and you need them, you could use Boost.Multiprecision to emulate them:
http://www.boost.org/doc/libs/1_58_0/libs/multiprecision/doc/html/boost_multiprecision/tut/ints/cpp_int.html
I think the Boost library will automatically use the native type if available.