Too many types in declaration in C++

If unsigned/signed long int a; is possible,
why is unsigned/signed long float/double a; not possible?
Why do I get a "too many types in declaration" error for the latter but not for the former?

There are three floating point types: float, double and long double. None of these have unsigned equivalents, so putting signed or unsigned in front of them is not valid. There is no such type as long float.

You are getting that message because long double exists, but unsigned long double does not. On its own, unsigned is interpreted as unsigned int, so the latter declaration names two types: unsigned (int) and long double. There is no long float in C++ either.

That is because the first (long int) is a valid built-in type, while the second isn't.
The fundamental data types that C++ supports include:
char
unsigned char
signed char
int
unsigned int
signed int
short int
unsigned short int
signed short int
long int
signed long int
unsigned long int
float
double
long double
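A minimal compilable sketch of the difference (illustrative only; the commented-out lines are the ones a compiler rejects with a "too many types in declaration"-style diagnostic):
#include <iostream>

int main() {
    unsigned long int a = 0;          // OK: unsigned and long both modify int
    signed long int b = 0;            // OK: same type family, spelled with signed
    long double c = 0.0L;             // OK: long double is a distinct floating-point type
    // unsigned long double d = 0.0L; // error: unsigned cannot be applied to a floating-point type
    // long float e = 0.0f;           // error: there is no long float type
    std::cout << a << ' ' << b << ' ' << c << '\n';
    return 0;
}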

Related

unsigned long long VS unsigned long long int

I want to know the main difference between unsigned long long and unsigned long long int. Can they be used interchangeably?
For calculations involving huge decimal numbers like 9223372036854775807, which one is preferred?
Thanks.
Both of the following types are semantically equivalent: an unsigned integer of at least 64 bits that is at least as large as unsigned long int:
unsigned long long
unsigned long long int
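A quick way to check this at compile time (a small sketch; std::is_same_v needs C++17, otherwise use std::is_same<...>::value):
#include <limits>
#include <type_traits>

static_assert(std::is_same_v<unsigned long long, unsigned long long int>,
              "the trailing int is optional; both spellings name the same type");
static_assert(std::numeric_limits<unsigned long long>::digits >= 64,
              "at least 64 value bits are guaranteed");
int main() {}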

What types can "unsigned" be used with? When is "signed" needed?

int
short
long
long long
unsigned int / unsigned
unsigned short
unsigned long
unsigned long long
char
bool
float
double
I just never get the full list. Are these all, or are there more, like:
unsigned char
unsigned bool
unsigned float
unsigned double
or any other?
I have a tomorrow and I want to be clear with the basics.
I just never get the full list. Are these all[?] ...
Don't bother providing links, I have a textbook for that. Just
answer my question: yes or no? This is really frustrating. Nothing has
been explicitly mentioned anywhere.
No.
Integer and character types (e.g., int, short, char, wchar_t, etc.) support signedness modifiers (signed/unsigned) and can therefore all be unsigned.
Floating point types (e.g., float, double, long double) do not support signedness modifiers and therefore cannot be unsigned or explicitly signed, for that matter.
A few examples of valid type names:
char
unsigned char
int
signed int
unsigned short
unsigned long long
A few examples of invalid type names:
signed double
unsigned double
unsigned float
signed unsigned int
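The same rules as a compilable sketch (C++17; the commented-out declarations would be rejected):
#include <type_traits>

static_assert(std::is_same_v<int, signed int>, "int is signed by default");

int main() {
    unsigned char uc = 0;            // OK: character and integer types take signedness modifiers
    unsigned long long ull = 0;      // OK
    // unsigned bool b = false;      // error: bool takes no signedness modifier
    // unsigned float f = 0.0f;      // error: neither do floating-point types
    // signed unsigned int x = 0;    // error: contradictory modifiers
    (void)uc; (void)ull;
    return 0;
}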

What is the difference between unsigned short int and unsigned int or unsigned short?

According to http://en.wikipedia.org/wiki/C_data_types you can use the unsigned short type or the unsigned short int type. But what is the difference between them? I know what unsigned short is and I know what unsigned int is, but what does unsigned short int mean? Is it a short or is it an int?
unsigned short and unsigned short int refer to exactly the same datatype and are interchangeable.
"but what does unsigned short int mean? Is it short or is it int?"
There's no difference. It's an unsigned short; the int keyword is optional in short and long declarations. The unsigned keyword may be applied to any of these type declarations and simply makes them unsigned.
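A one-line compile-time check of this (a sketch, C++17):
#include <type_traits>

static_assert(std::is_same_v<unsigned short, unsigned short int>,
              "unsigned short and unsigned short int are the same type");
static_assert(std::is_same_v<unsigned long, unsigned long int>,
              "the same holds for long");
int main() {}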

What are the rules for typing integer constants when explicit suffixes are not given?

For example, if I were to say:
#define UINT_DEF 500u
Then such a definition would have the type unsigned int. However, what is the default rule for when such suffixes are not given? As in
#define SOME_DEF 500
being placed in the type int. That is, at compile-time, if no suffix is given, are the constants slotted into the lowest data type in which they fit?
Would, for instance,
#define SOME_DEF_2 100
acquire the data type char, since it fits?
I asked a previous question on a similar topic and got some good responses, but little was said about the case where no suffix is given. It was said that if a suffix is given but the value does not fit in that type, then the constant gets promoted, but little else. I imagine something similar happens here: a default type (perhaps the smallest available) is given to the constant, and when the value does not fit into that default type, a promotion occurs.
And finally, do arithmetic promotion rules still apply as normal for macros? That is, would
#define TEST_DEF 5000000/50
#define TEST_DEF_2 5000000/50.0
respectively evaluate to 100,000 with a type of long int and 100,000.00 of type float (assuming 5,000,000 is a long and 50 is an int/char, whatever).
Or in the case:
#define TEST_MACRO(x) (16*x)
Since 16 is a constant of type int most likely, would TEST_MACRO(70000) promote the whole thing to long?
#define SOME_DEF 500
500 has type int. The type of an unsuffixed decimal integer constant is the first of the corresponding list in which its value can be represented: int, long, long long.
Then:
#define TEST_DEF 5000000/50
#define TEST_DEF_2 5000000/50.0
Assuming 5000000 is of type int in your system then:
5000000/50 is of type int
5000000/50.0 is of type double
Of course the fact that it is a macro does not change anything, as macros are just relatively simple textual substitutions.
Finally, assuming 70000 is of type int then:
16 * 70000 is also of type int
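These conclusions can be checked at compile time. A C++17 sketch using decltype on the question's macros (the rules for these particular constants are the same in C and C++; several of the asserts assume an int of at least 32 bits, as the question does):
#include <type_traits>

#define UINT_DEF 500u
#define SOME_DEF 500
#define TEST_DEF 5000000/50
#define TEST_DEF_2 5000000/50.0
#define TEST_MACRO(x) (16*x)

static_assert(std::is_same_v<decltype(UINT_DEF), unsigned int>, "500u is unsigned int");
static_assert(std::is_same_v<decltype(SOME_DEF), int>, "500 is int, never char");
static_assert(std::is_same_v<decltype(TEST_DEF), int>, "int / int stays int");
static_assert(std::is_same_v<decltype(TEST_DEF_2), double>, "50.0 makes the division double");
static_assert(std::is_same_v<decltype(TEST_MACRO(70000)), int>, "16*70000 fits in a 32-bit int");
int main() {}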
Per the 2011 online draft of the C standard:
6.4.4.1 Integer constants
...
5 The type of an integer constant is the first of the corresponding list in which its value can be represented.
Suffix             Decimal Constant          Octal or Hexadecimal Constant
---------------------------------------------------------------------------
None               int                       int
                   long int                  unsigned int
                   long long int             long int
                                             unsigned long int
                                             long long int
                                             unsigned long long int
---------------------------------------------------------------------------
u or U             unsigned int              unsigned int
                   unsigned long int         unsigned long int
                   unsigned long long int    unsigned long long int
---------------------------------------------------------------------------
l or L             long int                  long int
                   long long int             unsigned long int
                                             long long int
                                             unsigned long long int
---------------------------------------------------------------------------
Both u or U        unsigned long int         unsigned long int
and l or L         unsigned long long int    unsigned long long int
---------------------------------------------------------------------------
ll or LL           long long int             long long int
                                             unsigned long long int
---------------------------------------------------------------------------
Both u or U        unsigned long long int    unsigned long long int
and ll or LL
So, if you have a decimal integer constant without a suffix, its type will be the smallest of int, long int, or long long int that can represent that value.
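For example (a hedged illustration in C++, whose rule for unsuffixed decimal constants matches C's here; which branch holds depends on the platform's int and long widths):
#include <type_traits>

// 3000000000 does not fit in a 32-bit int, so it becomes long int where long
// is 64 bits, or long long int where long is only 32 bits; it never becomes unsigned.
static_assert(std::is_same_v<decltype(3000000000), int>            // only if int is wider than 32 bits
           || std::is_same_v<decltype(3000000000), long int>
           || std::is_same_v<decltype(3000000000), long long int>,
              "an unsuffixed decimal constant takes the first signed type that fits");
int main() {}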
Not so elegant, but possible, is to cast:
#include <inttypes.h> /* For uint16_t */
#define MYFLOAT ((float) 1)
#define MYUNSIGNED16BITINT ((uint16_t) 42.)
#define MYVOIDPOINTER ((void *) 0)

Is `int` by default `signed long int` in C++?

Is int by default signed long int in C++?
Is it platform and/or compiler dependent? If so, how?
[EDIT]
Are any of the following guaranteed to be duplicates?
signed short int
signed int
signed long int
signed long long int
unsigned short int
unsigned int
unsigned long int
unsigned long long int
Plain int is signed; whether or not it's the same size as long int is platform-dependent.
What's guaranteed is that
sizeof (int) <= sizeof (long)
and int is big enough to hold at least all values from -32767 to 32767.
What the standard says (section [basic.fundamental]):
There are five standard signed integer types : signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list. There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.
All of the integer types are distinct, i.e. you can safely overload functions for all of them without getting any conflict. However, some of them use the same number of bits for their representation. Even when they use the same number of bits, signed and unsigned types always have different ranges. Except for char, using any integer type without signed is equivalent to using it with signed, i.e. signed int and int are equivalent. char is a distinct type from both signed char and unsigned char, but char has the same representation and range as one of them. You can use std::numeric_limits<char>::is_signed to find out which one it matches.
On to the more interesting aspects. The following conditions are all true:
7 <= std::numeric_limits<signed char>::digits
sizeof(char) == 1
sizeof(char) == sizeof(signed char)
sizeof(char) == sizeof(unsigned char)
15 <= std::numeric_limits<short>::digits
sizeof(char) <= sizeof(short)
sizeof(short) <= sizeof(int)
31 <= std::numeric_limits<long>::digits
sizeof(int) <= sizeof(long)
63 <= std::numeric_limits<long long>::digits
sizeof(long) <= sizeof(long long)
sizeof(X) == sizeof(signed X)
sizeof(signed X) == sizeof(unsigned X)
(where "X" is one of char, short, int, long, and long long).
This means that all integer types can have the same size, as long as that size holds at least 64 bits (and apparently the Cray X-MP was such a beast). On contemporary machines typically sizeof(int) == sizeof(long), but there are machines where sizeof(int) == sizeof(short). Whether long is 32 or 64 bits depends on the actual architecture; both kinds are currently around.
Plain int is equivalent to signed int. That much is standard. Anything past that is not guaranteed; int and long are different types, even if your particular compiler makes them the same size. The only guarantee you have is that a long is at least as big as an int.
The long and short modifiers are not exactly like signed and unsigned. The latter two can be put on any integer type, but if you leave them off, then signed is the default for each integer type (except char). So int and signed int are the same type.
For long and short, if you leave them off, neither is chosen, but the resulting type is different. long int, short int and int are all different types, with short int <= int <= long int.
The int after long, short, signed and unsigned is optional: signed int and signed are the same type.
In C++ int is signed int by default, so there is no problem with that. However, int and long int are different types in C++, so they are not the same from the point of view of the language. The implementation of int and long int is platform/compiler specific; they are both integral types and might have the same size. The only limitation the C++ standard imposes is that sizeof(long int) >= sizeof(int).
signed and int are both the same as signed int by default.
Neither is the same type as signed short int or signed long int.
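A compile-time summary of these guarantees (a sketch, C++17):
#include <limits>
#include <type_traits>

static_assert(std::is_same_v<int, signed int>, "plain int is signed int");
static_assert(!std::is_same_v<int, long int>, "int and long are distinct types, even when equally sized");
static_assert(sizeof(int) <= sizeof(long) && sizeof(long) <= sizeof(long long), "size ordering");
static_assert(std::numeric_limits<int>::digits >= 15, "int holds at least -32767..32767");
// Whether plain char behaves like signed char or unsigned char is implementation-defined:
constexpr bool char_is_signed = std::numeric_limits<char>::is_signed;
int main() { return char_is_signed ? 0 : 1; }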