I am new to C++ and was wondering if there is a difference between uint32_t and tUInt32?
Is it just syntactic sugar?
Or is it just a result of using namespace std;?
I know what they represent (see: https://www.badprog.com/c-type-what-are-uint8-t-uint16-t-uint32-t-and-uint64-t ). I am simply confused why two different ways to represent them exist and which to use.
Although I have searched SO with uint32_t and tUInt32, I hope this is not a duplicate.
Thank you for your time.
tUInt32 doesn't seem to be standard. I found a reference to it in the Symbian OS docs, which define it as a typedef of unsigned long int. That is not guaranteed to be the same as uint32_t: uint32_t is guaranteed to be exactly 32 bits, whereas unsigned long int is not in general (Symbian may guarantee it is 32 bits, but I can't find a reference for that).
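For illustration, here is a minimal sketch of the difference, assuming tUInt32 really is defined the way the Symbian docs suggest (the typedef below is an assumption made for the example, not something taken from a real SDK header):

#include <climits>
#include <cstdint>

typedef unsigned long int tUInt32;   // assumed definition, following the Symbian docs

static_assert(sizeof(std::uint32_t) * CHAR_BIT == 32,
              "uint32_t is exactly 32 bits wherever it exists");
// The equivalent check for tUInt32 can fail, e.g. on LP64 Linux, where
// unsigned long int is 64 bits:
// static_assert(sizeof(tUInt32) * CHAR_BIT == 32, "not guaranteed by the standard");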
Related
The answers around the web (one, two) only tell parts of the story, and omit some details that I would love your help in clarifying:
int, by the C++ standard, must be >= 32 bits.
Different compilers (implementations) can make int occupy a different number of bits by default. So Compiler A could make int occupy 50 bits (for example) when I declare an integer.
Also, different compilers may / may not use bit padding for optimization
__int32 (Microsoft-specific) and int32_t are here to solve the problem and:
Force the compiler to allocate exactly 32 bits.
Never use bit padding before / after this data type.
Which of my guesses are correct? Sorry for re-asking an old question, but I am actually confused by all of these allocation details.
Thanks in advance for your time.
Which of my guesses are correct?
int, by the C++ standard, must be >= 32 bits.
Not correct. int must be >= 16 bits. long must be >= 32 bits.
Different compilers (implementations) can make int occupy a different number of bits
Correct.
... by default.
I don't know of a compiler with configurable int sizes - it usually depends directly on target architecture - but I suppose that would be a possibility.
Also, different compilers may / may not use bit padding
They may. They aren't required to.
__int32 (Microsoft-specific) and int32_t are here to solve the problem and:
Force the compiler to allocate exactly 32 bits.
Never use bit padding before / after this data type.
Correct. More specifically, std::int32_t is an alias for one of the fundamental types that has exactly 32 bits and no padding. If no such integer type is provided by the compiler, then the std::int32_t alias will not be provided either.
Microsoft documentation promises that __int32 exists and that it is another name for int, and that it has 32 non-padding bits. Elsewhere, Microsoft documents that int32_t is also an alias of int. As such, there is no difference other than __int32 not being a standard name.
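To tie this together, here is a minimal sketch of how these guarantees can be checked at compile time; only the two >= relations and the exact width of std::int32_t are promised by the standard, everything else depends on the platform:

#include <climits>
#include <cstdint>

static_assert(sizeof(int)  * CHAR_BIT >= 16, "int must provide at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long must provide at least 32 bits");
// Where std::int32_t exists at all, it is exactly 32 bits with no padding bits:
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32, "int32_t is exactly 32 bits");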
So I am porting one of my programs to a new gaming console. The problem is that the SDK used to compile my C++ application doesn't support __int16, BUT it does have int16_t.
Would it be 'safe' to use int16_t in place of __int16?
Also, if I'm not mistaken, could I just use unsigned short int for a 16-bit integer rather than using int16_t or __int16?
They will be the same.
People used to define their own fixed width types before the standard ones came out. Just use a typedef - that's what they are for.
int16_t and __int16 should both be signed 16-bit integers. Substituting one for the other should be fine. unsigned short int is completely different. It is unsigned rather than signed and it isn't guaranteed to be 16 bits. It could end up being a different size.
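If the existing code is full of __int16, one way to keep it portable is a small compatibility alias. This is just a sketch, and the alias name my_int16 is invented here (on Microsoft compilers the predefined macro _MSC_VER is set and __int16 is available):

#include <climits>
#include <cstdint>

#if defined(_MSC_VER)
typedef __int16 my_int16;        // Microsoft compilers: __int16 is another name for short
#else
typedef std::int16_t my_int16;   // everywhere else, use the standard fixed-width alias
#endif

static_assert(sizeof(my_int16) * CHAR_BIT == 16, "expecting a 16-bit type");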
I've looked at some answers that use short in C#, but I'm not sure if they really answer my question here. Is short in C++ another name for int? I know you can write short int, which seems to be able to handle a lot of values, but I'm still starting out, so obviously if it's short it can't hold that many values. But in this code snippet here:
short lives,aliensKilled;
it doesn't use int short, it just uses short. So I guess my question is, can I just use short as a replacement for int if I'm not going under -32,768 or over 32,767?
Also, is it okay to just replace short with int, and it won't really mess with anything as long as I change the appropriate things? (Btw lives and aliensKilled are both variable names.)
In C++ (and C), short, short int, and int short are different names for the same type. This type is guaranteed to have a range of at least -32,767..+32,767. (No, that's not a typo.)
On most modern systems, short is 16 bits and int is 32 bits. You can replace int with short without ill effects as long as you don't exceed the range of a short. On most modern systems, exceeding the range of a short will usually result in the values wrapping around—this behavior is not guaranteed by the standard and you should not rely on it, especially now that common C++ compilers will prune code paths that contain signed integer overflow.
However, in most situations, there is little benefit to replacing int with short. I would only replace int with short if I had at least thousands of them. Even then, the benefit is not guaranteed: using short can reduce the memory used and the bandwidth required, but it can also increase the number of CPU cycles needed, because a short is always "promoted" to int when you do arithmetic on it.
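A small illustration of that promotion rule, using nothing beyond standard <type_traits>: the result of arithmetic on two shorts is already an int.

#include <type_traits>

short a = 1, b = 2;
// Both operands are promoted before the addition, so the expression a + b has type int:
static_assert(std::is_same<decltype(a + b), int>::value, "short + short yields int");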
short int, int short and short are all synonymous in C and C++.
These work like int, but the range is smaller: typically (though not always) 16 bits. As long as none of the code relies on the number "wrapping around" due to it being 16 bits (that is, no calculation goes above the highest value (SHRT_MAX) or below the lowest value (SHRT_MIN)), using a larger type (int, long) will work just fine.
C++ (and C# and Objective-C and other direct descendants of C) have a quirky way of naming and specifying the primitive integral types.
As specified by C++, short and int are simple-type-specifiers, which can be mixed and matched along with the keywords long, signed, and unsigned in any of a page-full of combinations.
The general pattern for the single type short int is [signed] short [int], which is to say the signed and int keywords are optional.
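For illustration, all of those spellings can be verified to name one and the same type (a minimal sketch using <type_traits>):

#include <type_traits>

static_assert(std::is_same<short, short int>::value,        "same type");
static_assert(std::is_same<short, int short>::value,        "same type");
static_assert(std::is_same<short, signed short>::value,     "same type");
static_assert(std::is_same<short, signed short int>::value, "same type");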
Note that even if int and short are the same size on a particular platform, they are still different types. int has at least the same range as short so it's numerically a drop-in replacement, but you can't use an int * or int & where a short * or short & is required. Besides that C++ provides all kinds of machinery for working with types… for a large program written around short, converting to int may take some work.
Note also that there is no advantage to declaring something short unless you really have a reason to save a few bytes. It is poor style and leads to overflow errors, and can even reduce performance, as CPUs today aren't optimized for 16-bit operations. And as Dietrich notes, according to the crazy way C arithmetic semantics are specified, the short is promoted to int before any operation is performed, and then, if the result is assigned back to a short, it is converted back again. This dance usually has no effect but can still lead to compiler warnings and worse.
In any case, the best practice is to typedef your own types for whatever jobs you need done. Always use int by default, and leverage int16_t, uint32_t, etc. from <stdint.h> (<cstdint> since C++11) instead of relying on platform-dependent short and long.
Yes, short is equivalent to short int, and it is at least 16 bits wide, but if you stay within its range you can replace int with short without any problem.
Yes, you can use it. short = short int. On typical platforms, the signed range is -32768 to 32767 and the unsigned range is 0 to 65535 (the standard itself only guarantees -32767 to 32767 and 0 to 65535).
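If you would rather check the actual limits on your own platform than memorize the numbers, here is a quick sketch using <climits>:

#include <climits>
#include <iostream>

int main() {
    std::cout << "short range: " << SHRT_MIN << " .. " << SHRT_MAX << '\n';
    std::cout << "unsigned short max: " << USHRT_MAX << '\n';
}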
short must be at least 16 bits wide; on most platforms it is exactly two bytes. On machines where int is also two bytes, short and int have the same range, i.e. -32767 to +32767. On most newer platforms, int is 4 bytes, catering to a much larger range of values.
I recommend going for explicit fixed-width types such as int16_t instead of short and int32_t instead of int to avoid any confusion.
Also notice the following code:

#include <iostream>
int main() {
    short a = 32767;
    a++;              // the result no longer fits in a short
    std::cout << a;
}

On most platforms it will print -32768, so when you go past the limit the value "wraps around". Don't rely on this, though: converting the out-of-range result back into a short is implementation-defined before C++20, and only since C++20 is that wrap-around actually required.
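Following the recommendation above, here is a minimal sketch that uses the explicit fixed-width aliases from <cstdint> instead of short and int (the variable names are simply the ones from the question):

#include <cstdint>
#include <iostream>

int main() {
    std::int16_t lives = 3;          // exactly 16 bits wherever int16_t exists
    std::int32_t aliensKilled = 0;   // exactly 32 bits
    std::cout << lives << " " << aliensKilled << '\n';
}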
What is the difference between a signed short and a normal short in C++? Is the range different?
short is signed by default, so there is no difference.
The names signed short int, signed short, short int and short are synonyms and mean the same type in C++.
Integer types are signed by default in C++, which IMO brings the existence of the signed keyword into question. Technically it is redundant (except for char, where plain char, signed char and unsigned char are three distinct types); maybe it contributes some clarity, but hardly anyone uses it in production. Everyone is pretty much aware integers are signed by default. I honestly can't remember the last time I've seen signed in production code.
As for floats and doubles - they cannot be unsigned at all, they are always signed.
In this regard C++ syntax is a little redundant, at least IMO. There are a number of different ways to say the same thing, e.g. signed short int, signed short, short int and short, and what you get may still be platform or even compiler dependent.
Frameworks like Qt for example declare their own conventions which are shorter and informative, like for example:
quint8, quint16, quint32 and quint64 are all unsigned integers, with the number signifying the size in bits; by the same logic:
qint8, qint16, qint32, qint64 are signed integers with the respective bit width.
uint is, at least for me, much preferable to either unsigned or unsigned int; by the same logic you also have ushort, which is preferable to unsigned short int. There is also uchar to complete the short-hand family.
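These names are not magic: such short-hand aliases could be built on top of <cstdint> roughly as sketched below. Qt's real headers define them in their own way; the using declarations here are only an illustration, not Qt's actual code.

#include <cstdint>

using quint8  = std::uint8_t;    // unsigned, 8 bits
using quint16 = std::uint16_t;   // unsigned, 16 bits
using qint32  = std::int32_t;    // signed, 32 bits
using uint    = unsigned int;    // shorter spelling of unsigned int
using ushort  = unsigned short;  // shorter spelling of unsigned short int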
I want to define a byte type in my C++ program, basically an unsigned char. What is the most idiomatic way to go about doing this?
I want to define a byte type to abstract away the different representations and make it possible to create typesafe arrays of this new byte (8 bit) type that is backed by an unsigned char, for a bit manipulation library I am working on for a very specific use case of a program I am creating. I want it to be very explicit that this is an 8-bit byte specific to the domain of my program and that it is not subject to the varying implementations based on platform or compiler.
char, unsigned char, or signed char are all one byte; std::uint8_t (from <cstdint>) is an 8-bit byte (a signed variant exists too). This last one only exists on systems that do have 8-bit bytes. There is also std::uint_least8_t (from the same header), which has at least 8 bits, and std::uint_fast8_t, which has at least 8 bits and is supposed to be the most efficient one.
The most idiomatic way is to just use signed char or unsigned char. You can use typedef if you want to call it byte or if you need it to be strongly typed you could use BOOST_STRONG_TYPEDEF.
If you need it to be exactly 8 bits, you can use uint8_t from <cstdint> but it is not guaranteed to exist on all platforms.
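Here is a minimal sketch of that typedef approach, with a compile-time check that the platform really has 8-bit bytes (the alias name byte8 is invented for this example):

#include <climits>

static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
typedef unsigned char byte8;   // exactly 8 bits here, thanks to the check above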
To be honest, this is one of the most irritating "features" in C++ for me.
Yes, you can use std::uint8_t or unsigned char; on most systems the former is a typedef of the latter.
But... this is not type safe, as a typedef does not create a new type. And the committee refused to add a "strong typedef" to the standard.
Consider:

#include <cstdint>
void foo(std::uint8_t)  { }   // fine on its own
void foo(unsigned char) { }   // error: redefines foo when uint8_t is a typedef of unsigned char
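One common workaround, since there is no strong typedef in the standard, is an enum class with an explicit underlying type; that does create a distinct type, so the overloads no longer collide. A sketch only, not the only option:

#include <cstdint>

enum class Byte : std::uint8_t {};   // a genuinely distinct type

void foo(Byte) { }            // now a separate overload...
void foo(unsigned char) { }   // ...no clash with this one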
I am currently using the uint8_t approach. The way I see it is, if a platform does not have an 8-bit type (in which case my code will not function on that platform), then I don't want it to be running anyway, because I would end up with unexpected behaviour due to the fact that I am processing data with the assumption that it is 8 bits when in fact it is not. So I don't see why you should use unsigned char, assume it is 8 bits, and then perform all your calculations based on that assumption. It's just asking for trouble in my opinion.