Is `short` the same as `int` in C++?

I've looked at some answers that use short in C#, but I'm not sure they really answer my question here. Is short in C++ another name for int? I know you can write short int, which seems to be able to handle a lot of values, but I'm still starting out, so I'm guessing that if it's called short it can't hold that many values. But in this code snippet here:
short lives, aliensKilled;
it doesn't use int short, it just uses short. So I guess my question is, can I just use short as a replacement for int if I'm not going under -32,768 or over 32,767?
Also, is it okay to just replace short with int, and it won't really mess with anything as long as I change the appropriate things? (Btw lives and aliensKilled are both variable names.)

In C++ (and C), short, short int, and int short are different names for the same type. This type is guaranteed to have a range of at least -32,767..+32,767. (No, that's not a typo.)
On most modern systems, short is 16 bits and int is 32 bits. You can replace int with short without ill effects as long as you don't exceed the range of a short. On most systems, exceeding that range makes the value wrap around, but this behavior is not guaranteed by the standard and you should not rely on it, especially now that common C++ compilers optimize on the assumption that signed integer overflow never happens.
However, in most situations there is little benefit to replacing int with short; I would only do it if I had at least thousands of them. There's not always a win: using short can reduce the memory used and the bandwidth required, but it can also cost extra CPU cycles, because a short is always "promoted" to int when you do arithmetic on it.
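To make that promotion concrete, here is a minimal sketch (reusing the question's variable names purely for illustration): the addition below is performed in int, and the result is converted back to short on assignment.
#include <iostream>

int main() {
    short lives = 3, aliensKilled = 120;  // each typically stored in 16 bits
    // Both operands are promoted to int, the sum is computed as an int,
    // and the result is converted back to short when it is assigned.
    short total = lives + aliensKilled;
    std::cout << total << '\n';           // prints 123
}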

short int, int short and short are all synonymous in C and C++.
These work like int, but the range is smaller (typically, but not always, 16 bits). As long as none of the code relies on the value "wrapping around" because it only has 16 bits (that is, no calculation goes above the highest value, SHRT_MAX, or below the lowest value, SHRT_MIN), using a larger type (int, long) will work just fine.
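The actual limits are available as the SHRT_MIN/SHRT_MAX and INT_MIN/INT_MAX macros from <climits>; a minimal program to print them might look like this:
#include <climits>
#include <iostream>

int main() {
    // The standard only guarantees at least -32767..32767 for short;
    // these macros show what the current implementation actually provides.
    std::cout << "short range: " << SHRT_MIN << " .. " << SHRT_MAX << '\n';
    std::cout << "int range:   " << INT_MIN << " .. " << INT_MAX << '\n';
}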

C++ (and C# and Objective-C and other direct descendants of C) have a quirky way of naming and specifying the primitive integral types.
As specified by C++, short and int are simple-type-specifiers, which can be mixed and matched along with the keywords long, signed, and unsigned in any of a page-full of combinations.
The general pattern for the single type short int is [signed] short [int], which is to say the signed and int keywords are optional.
Note that even if int and short are the same size on a particular platform, they are still different types. int has at least the same range as short, so it's numerically a drop-in replacement, but you can't use an int * or int & where a short * or short & is required. Besides that, C++ provides all kinds of machinery for working with types (overloading, templates, and so on), so for a large program written around short, converting to int may take some work.
Note also that there is no advantage to declaring something short unless you really have a reason to save a few bytes. It is poor style and leads to overflow errors, and can even reduce performance as CPUs today aren't optimized for 16-bit operations. And as Dietrich notes, according to the crazy way C arithmetic semantics are specified, the short is upcast to int before any operation is performed and then if the result is assigned back to a short, it's cast back again. This dance usually has no effect but can still lead to compiler warnings and worse.
In any case, the best practice is to typedef your own types for whatever jobs you need done. Use int by default, and reach for int16_t, uint32_t, etc. from <stdint.h> (<cstdint> since C++11) instead of relying on the platform-dependent sizes of short and long.
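As a rough illustration of both points (the fixed-width aliases, and the fact that short and int stay distinct types even if their sizes coincide), a sketch along these lines compiles because the two overloads take genuinely different types:
#include <cstdint>
#include <iostream>

// short and int are distinct types, so these are two separate overloads,
// even on a platform where they happen to have the same size.
void describe(short) { std::cout << "called with short\n"; }
void describe(int)   { std::cout << "called with int\n"; }

int main() {
    std::int16_t small = 123;   // exactly 16 bits; typically an alias for short
    std::uint32_t big = 456u;   // exactly 32 bits, unsigned
    describe(small);            // on most platforms resolves to the short overload
    describe(static_cast<int>(big));
}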

Yes, short is equivalent to short int. It is at least 16 bits wide, and as long as you stay within its range you can replace int with short without any problem.

Yes, you can use it. short = short int. On a typical 16-bit short, the signed range is -32768 to 32767 and the unsigned range is 0 to 65535.

short is at least 16 bits wide; on most machines it is exactly two bytes. On machines where int is also two bytes, short and int have the same range (at minimum -32767 to +32767). On most newer platforms int is 4 bytes, catering to a much larger range of values.
I recommend using explicit fixed-width types such as int16_t in place of short and int32_t in place of int to avoid any confusion.
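For example, a quick illustrative program to see the sizes and limits on your own platform:
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    std::cout << "short:   " << sizeof(short) << " bytes, max "
              << std::numeric_limits<short>::max() << '\n';
    std::cout << "int:     " << sizeof(int) << " bytes, max "
              << std::numeric_limits<int>::max() << '\n';
    std::cout << "int16_t: " << sizeof(std::int16_t) << " bytes, max "
              << std::numeric_limits<std::int16_t>::max() << '\n';
    std::cout << "int32_t: " << sizeof(std::int32_t) << " bytes, max "
              << std::numeric_limits<std::int32_t>::max() << '\n';
}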

Also notice that for the following code:
#include <iostream>
int main() {
    short a = 32767;  // the largest value of a 16-bit short
    a++;              // one past the maximum
    std::cout << a;   // usually prints -32768
}
It will typically print -32768. So, if you go over its limit, the count "wraps around" to the other end of the range. Strictly speaking, the result of converting the out-of-range value back into a short is implementation-defined (the wrap-around is only guaranteed since C++20), but it is what practically all two's-complement machines do.

Related

C++ Does converting long, short and all ints to uint32_t, int32_t and so forth help at all?

I run a gaming server that is coded in C++, with a bit of ASM and C in there as well. I saw that someone had updated the same server I run, and among all the updates was the fact that every int, unsigned, short and everything else had been changed to int32_t, uint32_t, uint64_t and the rest.
Is there any benefit in changing all of them to the types above? Let's say I changed all int to int32_t and all unsigned int to uint32_t, and of course everything else that's possible to change.
I was trying to read up and understand whether there are any benefits, but I simply didn't grasp the real meaning of them. So yeah, the question is: is there any benefit in doing what I just described?
The compiler I use is Orwell Dev-C++.
The normal types, like int and unsigned int, have variable size depending on which platform you run on. int32_t and uint32_t, however, are guaranteed to be 32 bits on any platform that has a 32-bit integral type; on platforms that don't, they don't exist. The size of an int may vary; usually it is 32 or 64 bits. The same rules apply to the other types, e.g. int64_t is always 64 bits.
Knowing the size of the data types is needed, for instance, in network programming, because network packets are sent between platforms with different default integral sizes, while the size of the data stored in the packets, like addresses and port numbers, is always the same. An IPv4 address is always 32 bits long, and one should use a data type which is guaranteed to be that size to store it.
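For instance, a sketch of a packet header laid out with fixed-width types (the struct and field names are invented for illustration); every platform agrees on how many bits each field occupies:
#include <cstdint>

// Hypothetical wire format: the fixed-width types pin down the field widths,
// which plain int and long would not.
struct PacketHeader {
    std::uint32_t sourceAddress;       // IPv4 address, always 32 bits
    std::uint32_t destinationAddress;  // IPv4 address, always 32 bits
    std::uint16_t sourcePort;          // port number, always 16 bits
    std::uint16_t destinationPort;     // port number, always 16 bits
};

int main() {
    static_assert(sizeof(PacketHeader) >= 12, "the fields alone occupy 12 bytes");
    PacketHeader header{};
    (void)header;
}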
In most programs, where you just store numbers, you should use the normal types. The size of these is the most efficient on the current platform.
uint32_t and int32_t add a little bit more control to the handling of your data types by having a fixed size and signedness - IMHO the more predictability you have in your program, the better. You can also see from the type name whether it is unsigned or not, which may not always be self-evident.
With well-defined sizes you are also better protected when writing portable code.
Actually, the default int and long are so vague and compiler/platform dependent that I for one try to avoid them. With int and long:
If you need to know the size of your struct, you need to know how many bytes an int and a long take, per architecture, per operating system, per compiler. It's a waste of precious brain.
If you need to explain to somebody the probability of having a collision in that hash table, would you say "it depends on the size of int, which depends on your architecture", or would you rather say, 1.0e-5? In other words, int and long "undefine" the properties of your program.
If you use int, and the compiler swears that that's 64 bits, the chances of it optimizing it to 32 or 16 are minimal. So you end up using much more memory if all you wanted was space for representing a short symbol... not that I know of many alphabets with 2^32 characters.
With more informative types, like uint32_t, or uint_least32_t, your compiler can perhaps make better assumptions and use 64 bits if it believes that that would bring better performance. I don't know that for sure. But you as human have better chances of understanding the range of values in which your program works well.
Additionally, all binary protocols likely need to specify the size and endianness of integers.
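A hypothetical helper along these lines shows why both the width and the byte order have to be pinned down when writing a binary protocol (the function name is made up for illustration):
#include <cstdint>

// Write a 32-bit value into a buffer in big-endian (network) byte order,
// so the on-the-wire layout is identical regardless of the host's endianness.
void put_u32_be(unsigned char* out, std::uint32_t value) {
    out[0] = static_cast<unsigned char>(value >> 24);
    out[1] = static_cast<unsigned char>(value >> 16);
    out[2] = static_cast<unsigned char>(value >> 8);
    out[3] = static_cast<unsigned char>(value);
}

int main() {
    unsigned char buffer[4];
    put_u32_be(buffer, 0x01020304u);  // buffer now holds 0x01 0x02 0x03 0x04
}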
I have no problem with int32_t in some contexts, such as struct fields, but the practice of writing all your int as int32_t, and unsigned as uint32_t (for portability?), is much overused. In practical terms, you don't get any real portability benefit from this, and there are some significant downsides.
Most likely the code you are writing will only ever be compiled on a machine where int is 32 bits. If you are going to port it to a machine with 16-bit ints, then you generally have much bigger problems than a blanket typedef will solve.
If you do move to a 16-bit machine, you are probably going to want to change a great number of the local int32_t variables back to int anyway, since most of them are counts and indices (a 16-bit machine can't have more than 32K of most things); people just wrote them all that way out of habit, and if you leave them as int32_t the code will be huge and slow.
int being 64 bits in the future sometime? Won't happen, for a number of reasons, most of them just pragmatic things.
On some weird machines int could be, e.g., 24 bits. Very unusual, but again, you have bigger problems than a typedef will solve, and likely the first thing you want to do when porting to such a beast is to change all the int32_t which are just counting things back to int.
So much for it not solving portability problems. What downside is there?
Certain library functions have int * parameters, e.g. frexp. You need to supply an address of an int variable, or it won't port, and int32_t may or may not be int even if int is 32 bits (see below). I've seen people write frexp( val, (int*)&myvar); just so they can write int32_t myvar; instead of int myvar;, which is terrible and can create a bug undetected by the compiler if int32_t is ever a different size from int.
Likewise printf( "%d", intvar); requires an int, not some typedef which happens to be the same size as int but might really be long, and gcc/clang issue warnings about this if int32_t is long.
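If you do keep int32_t variables, the format macros in <cinttypes> stay correct whether int32_t happens to be int or long; a minimal sketch:
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main() {
    std::int32_t frames = 1200;
    // "%d" is only correct when int32_t is int; PRId32 expands to the right
    // conversion specifier for int32_t on the current platform.
    std::printf("frames = %" PRId32 "\n", frames);
}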
On many platforms int32_t is long (a 32-bit long) even though int is also 32 bits. This is unfortunate (and I think it can be traced back to Microsoft needing to thunk things across 16/32/64 bits and deciding that long should be 32 bits forever).
The uncertainty of whether int32_t is int or long can cause issues with C++ overloads when porting code across different platforms.

C++: Strange math results

The following calculation results in "1" for me.
unsigned long iEndFrame = ((4294966336 + 1920 - 1) / 1920) + 1;
Does anybody see why? I thought that unsigned long could handle this.
Thank you very much for the help.
The values on the right of your calculation have type unsigned int or int.
4294966336 + 1920 = 4294968256
assuming sizeof(unsigned int) == 4, this overflows, leaving you with 960
(960-1)/1920 = 0
(due to truncation in integer division). This leaves you with
0 + 1 = 1
If you want to perform the calculation using a larger type (and assuming that sizeof(unsigned long) > sizeof(unsigned int)), you need a cast inside the calculation
unsigned long iEndFrame = (((unsigned long)4294966336 + 1920 - 1) / 1920) + 1;
or, as noted by Slava, set one of the literal values to have unsigned long type using the UL suffix
unsigned long iEndFrame = ((4294966336UL + 1920 - 1) / 1920) + 1;
"I thought that unsigned long could handle this." - I'm not sure what you mean by this. There's nothing in the expression that you evaluate that would suggest that it should use unsigned long. The fact that you eventually initialize an unsigned long variable with it has no effect on the process of evaluation. The initializing expression is handled completely independently from that.
Now, in C++ language an unsuffixed decimal integral literal always has signed type. The compiler is not allowed to choose unsigned types for any of the literals used in your expression. The compiler is required to use either int, long int or long long int (the last one - in C++11), depending on the value of the literal. The compiler is allowed to use implementation-dependent extended integral types, but only if they are signed. And if all signed types are too small, the behavior of your program is undefined.
If we assume that we are working with typical real-life platforms with integer types having the "traditional" 16, 32 or 64 bits in width, then there are only two possibilities here
If on your platform all signed integer types are too small to represent 4294966336, then your program has undefined behavior. End of story.
If at least one signed integer type is large enough for 4294966336, there should be no evaluation overflow and your expression should evaluate to 2236963.
So, the only real language-level explanation for that 1 result is that you are running into undefined behavior, because all signed types are too small to represent the literals you used in your expression.
If on your platform some signed integer type is actually sufficiently large to represent 4294966336 (i.e. some type has at least 64 bits), then the result can only be explained by the fact that your compiler is broken.
P.S. Note that the possibility of an unsigned type being chosen for an unsuffixed decimal literal only existed in the old C language - C89/90. That would provide a third explanation for the result you obtained. But, again, that explanation would only apply to C89/90 compilers, not to C++ compilers or C99 compilers. And your question is tagged [C++].
I meant to leave this as a comment, but do not have enough reputation. I wanted to share this anyway, because I think that there are two components to tmighty's question and one of them might not be answered quite correctly, from what I understand.
First, you need to be explicit about the data type you are using, i.e. use a suffix such as UL or explicit type conversion. Other answers have made this clear enough, I think.
Second, you need to then pick a data type that is large enough.
You have already stated "I thought that unsigned long could handle this" and other answers seem to confirm this, but from what I know - and I double-checked just now to make sure - unsigned long may not be large enough. Its maximum is only guaranteed to be at least 4294967295, which would be too small for the intermediate sum in your calculation. Specifically, I have just checked under Windows using VC++, and both 32-bit and 64-bit builds define ULONG_MAX as 4294967295.
What you may need to do instead is use the "ULL" suffix and use unsigned long long, since its maximum seems to be at least 18446744073709551615. Even if there are platforms that define unsigned long to be large enough for your purpose, I think you might want to use a data type that is actually guaranteed to be large enough.
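Putting the two points together, a small illustrative program that forces the whole calculation into a 64-bit unsigned type produces the intended value:
#include <iostream>

int main() {
    // The ULL suffix makes the first operand unsigned long long, so every
    // intermediate result is computed in a type at least 64 bits wide.
    unsigned long long iEndFrame = ((4294966336ULL + 1920 - 1) / 1920) + 1;
    std::cout << iEndFrame << '\n';  // prints 2236963
}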

Should I use "unsigned" every time i know I'm processing unsigned values?

Often values are known to be positive. For example, a TCP/UDP sequence number is always a positive value. Both int and unsigned int are big enough to store even the biggest sequence number, so I can use either of these types. There are many other examples where values are known to be positive.
Are there any reasons to use unsigned type when capacity of regular signed type is enough (and often more than enough)?
Personally I tend to use regular types because:
int is probably a little bit more readable than uint or unsigned int
I don't need to include extra headers for UINT etc.
I will avoid extra casts somewhere further in the program
Reasons to use an unsigned type that I can imagine:
help the compiler generate better code?
help another programmer understand that the variable is unsigned
avoid possible bugs (for example, when an int is assigned to a UINT the compiler will likely generate a compile-time error, and we should check that the value we assign is not negative)
One reason is that comparing signed and unsigned numbers can lead to surprising results. In C and (I think) C++, comparing signed and unsigned numbers causes the signed number to be interpreted as unsigned. If the signed value happens to be negative, reading it as unsigned will give a LARGER value than any unsigned number, which is not what you want. See this question for an example in Objective-C, which uses the same conversion rules as C.
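A minimal example of the surprise (most compilers will warn about the mixed comparison, but it compiles and runs):
#include <iostream>

int main() {
    int balance = -1;
    unsigned int limit = 100;
    // balance is converted to unsigned int, i.e. a huge positive value,
    // so the comparison is false, which is probably not what was intended.
    std::cout << std::boolalpha << (balance < limit) << '\n';  // prints false
}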

What is the difference between signed and normal short

What is the difference between signed and a normal short in C++? Is the range different?
short is signed by default, so there is no difference.
The names signed short int, signed short, short int and short are synonyms and mean the same type in C++.
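This can be checked directly: the spellings are not merely compatible, they name exactly the same type, so the following assertions all hold:
#include <type_traits>

// Every spelling below names exactly the same type as plain short.
static_assert(std::is_same<short, short int>::value, "same type");
static_assert(std::is_same<short, signed short>::value, "same type");
static_assert(std::is_same<short, signed short int>::value, "same type");

int main() {}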
Integers are signed by default in C++, which IMO brings the existence of the signed keyword into question. Technically it is redundant (except with char, where signed char is a distinct type from plain char); maybe it contributes some clarity, but hardly anyone uses it in production. Everyone is pretty much aware that integers are signed by default. I honestly can't remember the last time I saw signed in production code.
As for floats and doubles - they cannot be unsigned at all, they are always signed.
In this regard C++ syntax is a little redundant, at least IMO. There are a number of different ways to say the same thing, e.g. signed short int, signed short, short int and short, and what you actually get may still be platform or even compiler dependent.
Frameworks like Qt declare their own conventions, which are shorter and more informative, for example:
quint8, quint16, quint32, quint64 are all unsigned integers, with the number signifying the size in bits; by the same logic,
qint8, qint16, qint32, qint64 are signed integers with the respective bit widths.
uint is, at least for me, much preferable to either unsigned or unsigned int, and by the same logic you also have ushort, which is preferable to unsigned short int. There is also uchar to complete the short-hand family.
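For what it's worth, such shorthand aliases are easy to build on top of <cstdint> yourself; the names below are made up for illustration and are not Qt's actual definitions:
#include <cstdint>

// Hypothetical project-local aliases in the same spirit as Qt's qintN/quintN.
using u8  = std::uint8_t;
using u16 = std::uint16_t;
using u32 = std::uint32_t;
using u64 = std::uint64_t;
using i32 = std::int32_t;

int main() {
    u32 aliensKilled = 0;  // unsigned and exactly 32 bits wide
    ++aliensKilled;
    (void)aliensKilled;
}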

Explanation for why int should be preferred over long/short in C++?

I read somewhere that the int data type gives better performance (compared to long and short) regardless of the OS, as its size gets adapted to the word size of the platform, whereas long and short occupy 4 and 2 bytes, which may or may not match the word size of the OS.
Could anyone give a good explanation of this?
From the standard:
3.9.1, §2: There are five signed integer types: "signed char", "short int", "int", "long int", and "long long int". In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment (44); the other signed integer types are provided to meet special needs.
So you can say char <= short <= int <= long <= long long.
But you cannot tell that a short is 2 bytes and a long 4.
Now to your question: most compilers align int to the register size of their target platform, which makes alignment easier and access on some platforms faster. But that does not mean that you should prefer int.
Take the data type according to your needs. Do not optimize without performance measure.
int is traditionally the most "natural" integral type for the machine on which the program is to run. What is meant by "most natural" is not too clear, but I would expect that it would not be slower than other types. More to the point, perhaps, is that there is an almost universal tradition for using int in preference to other types when there is no strong reason for doing otherwise. Using other integral types will cause an experienced C++ programmer, on reading the code, to ask why.
short only optimizes storage size; calculations always widen to int, if applicable (i.e. unless short is already the same size as int)
not sure that int should be preferred to long; the obvious case being when int's capacity doesn't suffice
You already mention native wordsize, so I'll leave that
Eskimos reportedly use forty or more different words for snow. When you only want to communicate that it's snow, then the word "snow" suffices. Source code is not about instructing the compiler: it's about communicating between humans, even if the communication may only be between your current and somewhat later self…
Cheers & hth.
int does not give better performance than the other types. Really, on most modern platforms, all of the integer types will perform similarly, excepting long long. If you want the "fastest" integer available on your platform, C++ (before C++11) does not give you a way to ask for that.
On the other hand, if you're willing to use things defined by C99, you can use one of the "fast" integer types defined there (int_fast32_t and friends from <stdint.h>).
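Since C++11 the same aliases are also available from <cstdint>; a minimal sketch of using one (its actual width is whatever the implementation considers fastest with at least 32 bits):
#include <cstdint>
#include <iostream>

int main() {
    // int_fast32_t: the implementation's pick for the fastest signed type
    // with at least 32 bits. Its size can differ between platforms.
    std::int_fast32_t total = 0;
    for (int i = 1; i <= 1000; ++i) {
        total += i;
    }
    std::cout << total << " (sizeof = " << sizeof(total) << ")\n";  // 500500
}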
Also, on modern machines, memory hierarchy is more important than CPU calculations in most cases. Using smaller integer types lets you fit more integers into CPU cache, which will increase performance in pretty much all cases.
Finally, I would recommend not using int as a default data type. Usually, I see people reach for int when they really want an unsigned integer instead. The conversion from signed to unsigned can lead to subtle integer overflow bugs, which can lead to security vulnerabilities.
Don't choose the data type because of an intrinsic "speed" -- choose the right data type to solve the problem you're looking to solve.