What is the difference between unsigned int a=2 and int a=2U?
Also,
the sizeof(a) operator gives the same value for int a=2 and int a=2L. Why? Shouldn't the size be doubled?
UPDATE:
Thanks all for the answers.
here is the summary:
long int and int are the types with which the variables are declared; 2 and 2L are the values with which they are initialised.
The size of a variable is determined by its declared type, not by its initialiser, so both will have the same size.
In C++, all variables are declared with a type. C++ forces¹ you to specify the type explicitly, but doesn't force you to initialize the variable at all.
long int a = 2;
long int b = 2L;
long int c;
This code makes 3 variables of the same type long int.
int a = 2;
int b = 2L;
int c;
This code makes 3 variables of the same type int.
The idea of a type is roughly "the set of all values the variable can take". It doesn't (and cannot) depend on the initial value of the variable, whether that's 2 or 2L or anything else.
So, if you have two variables of different types but the same value:
int a = 2L;
long int b = 2;
The difference between them is what they can do further in the code. For example:
a += 2147483647; // most likely, overflow
b += 2147483647; // probably calculates correctly
The type of the variable won't change from the point it's defined onwards.
Another example:
int x = 2.5;
Here the type of x is int, and it's initialized to 2. Even though the initializer has a different type, C++ regards the declared type of x as "more important".
¹ BTW, C++ has support for "type inference"; you can use it if you want the type of the initializer to be important:
auto a = 2L; // "a" has type "long int"
auto b = 2; // "b" has type "int"
What is the difference between "long int a=2" and "int a=2L"?
The former defines a variable a as having type long int initialised from the value 2, the latter defines it as having type int initialised from the value 2L. The initialiser is implicitly converted to the type of the variable, and does not affect the type of the variable.
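A quick way to see this is to compare the sizes; a minimal sketch (the printed values are platform-dependent, e.g. 8 and 4 on a typical 64-bit Linux system):

#include <cstdio>

int main()
{
    long int a = 2;  // type long int, initialiser 2
    int b = 2L;      // type int, initialiser 2L (converted to int)
    printf("%zu %zu\n", sizeof(a), sizeof(b)); // e.g. "8 4"
    return 0;
}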
Or what is the difference between long char c='a' and char c=L'a'?
The former defines a variable c as having type long char initialised from the value 'a'; the latter defines it as having type char initialised from the value L'a'. Since the type long char doesn't exist, the former is an error. The type of L'a' is called wchar_t, not long char, and in the latter case it is again converted to the type of the variable.
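A small sketch illustrating the latter case (the size of wchar_t is implementation-defined, commonly 4 on Linux and 2 on Windows):

#include <cstdio>

int main()
{
    wchar_t w = L'a'; // the wide-character literal keeps its own type
    char c = L'a';    // here L'a' is converted to char
    printf("%zu %zu\n", sizeof(w), sizeof(c)); // e.g. "4 1" on Linux
    return 0;
}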
Or what is the difference between unsigned int a=2 and int a=2U?
The former defines a variable a as having type unsigned int initialised from the value 2, the latter defines it as having type int initialised from the value 2U. Yet again, the initialiser does not affect the type of the variable.
Also,
the sizeof(a) operator gives the same value for int a=2 and int a=2L. Why? Shouldn't the size be doubled?
Since they both define a as type int, sizeof(a) should give sizeof(int) for both.
#include <stdio.h>

int main()
{
    int a1 = 2;       // a1 = 2
    int a2 = 2L;      // a2 = 2
    int a3 = 2.5673;  // a3 = 2  (fractional part discarded)
    int a4 = 'A';     // a4 = 65 (the character code of 'A')
    printf("%d %d %d %d\n", a1, a2, a3, a4); // prints: 2 2 2 65
    return 0;
}
Here, even though the initialisers of a3 and a4 are a double and a char respectively, the values are converted to int because a3 and a4 are declared as int. In the same way, the value of a2 is converted to int even though it was written as 2L.
The variable's size doesn't depend on its value but on its declared type. int a will always be an integer, no matter what its value is.
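To confirm this directly, a minimal sketch (the printed size is platform-dependent, commonly 4, but all three values are identical):

#include <stdio.h>

int main()
{
    int a = 2;
    int b = 2L;
    /* all three print the same value, e.g. 4 */
    printf("%zu %zu %zu\n", sizeof(a), sizeof(b), sizeof(int));
    return 0;
}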
Related
I am now beginning to use the auto keyword in C++11. One thing I found not so smart is illustrated in the following code:
unsigned int a = 7;
unsigned int b = 23;
auto c = a - b;
std::cout << c << std::endl;
As you can see, the type of the variable c is unsigned int. But my intention is that the difference of two unsigned ints should be an int, so I expect the variable c to equal -16. How can I use auto more wisely so that it infers the type of c as int? Thanks.
Both a and b have type unsigned int. Consequently, the type of the expression a - b is deduced as unsigned int, and c has type unsigned int. So auto here works exactly as it is supposed to.
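To see what actually happens, a minimal sketch (assuming a 32-bit unsigned int):

#include <iostream>

int main()
{
    unsigned int a = 7;
    unsigned int b = 23;
    auto c = a - b;              // deduced as unsigned int
    std::cout << c << std::endl; // prints 4294967280: -16 mod 2^32
    return 0;
}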
If you want to change a type from unsigned int to int you might use static_cast:
auto c = static_cast<int>(a - b);
Or explicitly specify the type for c:
int c = a - b;
You misunderstand what the unsigned int type is.
unsigned int is a mod-2^k integer for some k (usually 32). Subtracting two such mod-2^k integers is well defined, and the result is not a signed integer.
If you want a type that models a bounded set of integers from -2^k to 2^k-1 (with k usually equal to 31), use int instead of unsigned int. If you want them to be positive, simply make them positive.
Despite its name, unsigned int is not an int that has no sign and is thus positive. Instead, it is a very specific integral type that happens to have no notion of sign.
If you don't need mod-2^k math for some unknown, implementation-defined k, and are not desperate for every single bit of magnitude to be packed into a value, don't use unsigned int.
What you appear to want is something like
positive<int> a = 7;
positive<int> b = 23;
auto c = a-b; // c is of type `int`, because the difference of two positive values may be negative
With a few syntax changes and a lot of work that might be possible, but it isn't what unsigned means.
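A minimal sketch of such a wrapper (positive is hypothetical, not a standard facility; a production version would need much more care):

#include <cassert>
#include <iostream>

// Hypothetical wrapper: stores a positive value of T, but arithmetic
// decays to the plain signed type, so differences may be negative.
template <typename T>
struct positive {
    T value;
    positive(T v) : value(v) { assert(v > 0); }
    operator T() const { return value; } // decays to plain T in expressions
};

int main()
{
    positive<int> a = 7;
    positive<int> b = 23;
    auto c = a - b;              // both operands convert to int, so c is int
    std::cout << c << std::endl; // -16
    return 0;
}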
So just because you don't know how basic expressions work, the C++11 auto keyword is dumb? How does that even make any sense to you?
In the expression auto c = a - b;, the auto c = has nothing whatsoever to do with the type used in the sub-expression a - b.
Ever since the very first C language draft, the type used by an expression is determined by the operands of that expression. This is true for everything from pre-standard K&R C to C++17.
Now what you need to do if you want negative numbers is, not too surprisingly, to use signed types. Change the operands to (signed) int, or cast them to that type before invoking the - operator.
Declaring the result as a signed type without changing the type of the operands of - is not a good idea, because then you force a conversion from unsigned to signed, which isn't necessarily well-defined behavior.
Now, if the operands have different types, or are of small integer types, they will get implicitly promoted as per the usual arithmetic conversions. That does not apply in this specific case, since both operands are of the same type and not of a small integer type.
But... this is why the auto keyword is dumb and dangerous. Consider:
unsigned short a = 7;
unsigned short b = 23;
auto c = a - b;
Since both operands were unsigned, the programmer intended to use unsigned arithmetic. But here both operands are implicitly promoted to int, with an unintended change of signedness. The auto keyword pretends that nothing happened, whereas unsigned int c = a - b; would at least increase the chance of a compiler diagnostic, or of a warning from an external static analysis tool. auto also accidentally smooths over what could otherwise have been caught as a change-of-signedness bug.
In addition, with auto we would end up with the wrong, unintended type for the declared variable.
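A minimal sketch of that promotion trap (assuming the common case where int is wider than unsigned short):

#include <iostream>
#include <type_traits>

int main()
{
    unsigned short a = 7;
    unsigned short b = 23;
    auto c = a - b; // both operands promoted to int, so c is deduced as int
    static_assert(std::is_same<decltype(c), int>::value, "promoted to int");
    std::cout << c << std::endl; // -16, despite the unsigned operands
    return 0;
}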
I'd like to know in what order types are derived when using auto in C++. For example, if I have
auto x = 12.5;
Will that result in a float or a double? Is there any reason it chooses one over the other in terms of speed, efficiency, or size? And in what order are the types derived? Does it try int, then double, then string, or is it not that simple?
Thanks
While C++ allows initialization of differently typed variables with the same kind of literal, every literal in C++ has one specific type. Therefore the type deduction for auto variables does not need to be special for initialization with literals: it just takes the type of the right-hand side (the single, unambiguous type of the literal, in your case) and applies it to the variable.
Examples for literals and their different types:
12.5 //double
12.5f //float
13 //int
13u //unsigned int
13l //long
13ull //unsigned long long
"foo" //char const [4]
'f' //char
So what about float f = 12.5;? Very simple: here the float f is initialized with a literal of type double, and an implicit conversion takes place. 12.5 by itself is never a float; it is always a double.
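One way to verify the literal types from the list above is a small C++11 sketch using decltype (which yields exactly the literal's type):

#include <type_traits>

int main()
{
    static_assert(std::is_same<decltype(12.5),  double>::value,       "");
    static_assert(std::is_same<decltype(12.5f), float>::value,        "");
    static_assert(std::is_same<decltype(13),    int>::value,          "");
    static_assert(std::is_same<decltype(13u),   unsigned int>::value, "");
    static_assert(std::is_same<decltype(13l),   long>::value,         "");
    return 0; // compiles only if every literal has the listed type
}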
An exception where the type of the auto variable does not have the type of the literal is when array-to-pointer decay takes place, which is the case for all string literals:
auto c = "bar"; //c has type char const*, while "bar" has type char const[4]
But this again is not special for literals but holds for all kinds of arrays:
int iarr[5] = {};
auto x = iarr; //x has type int*
How does C/C++ deal with it if you pass an int as a parameter into a method that takes a byte (a char)? Does the int get truncated? Or something else?
For example:
void method1()
{
    int i = /* some int */;
    method2(i);
}

void method2(byte b)
{
    // Do something
}
How does the int get "cast" to a byte (a char)? Does it get truncated?
If byte stands for char type, the behavior will depend on whether char is signed or unsigned on your platform.
If char is unsigned, the original int value is reduced to the unsigned char range modulo UCHAR_MAX+1. Values in [0, UCHAR_MAX] range are preserved. C language specification describes this process as
... the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
If char type is signed, then values within [SCHAR_MIN, SCHAR_MAX] range are preserved, while any values outside this range are converted in some implementation-defined way. (C language additionally explicitly allows an implementation-defined signal to be raised in such situations.) I.e. there's no universal answer. Consult your platform's documentation. Or, better, write code that does not rely on any specific conversion behavior.
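For the unsigned case, a minimal sketch (assuming the usual 8-bit char, so UCHAR_MAX+1 is 256):

#include <stdio.h>

int main(void)
{
    int i = 1040;
    unsigned char b = i; /* reduced modulo UCHAR_MAX+1 */
    printf("%d\n", b);   /* prints 16, since 1040 = 4*256 + 16 */
    return 0;
}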
Just truncated as a bit pattern (byte is in general unsigned char; however, you have to check).
int i = -1;
becomes
byte b = 255;  // when byte is unsigned char
byte b = -1;   // when byte is signed char

i = 0;    b = 0;
i = 1024; b = 0;   // 1024 = 4*256, low byte is 0
i = 1040; b = 16;  // 1040 = 4*256 + 16, low byte is 16
Quoting the C++ 2003 standard:
Clause 5.2.2 paragraph 4: When a function is called, each parameter (8.3.5) shall be initialized (8.5, 12.8, 12.1) with its corresponding
argument.
So, b is initialized with i. What does that mean?
8.5/14 the initial value of the object being initialized is the (possibly converted) value of the initializer
expression. Standard conversions (clause 4) will be used, if necessary, to convert the initializer
expression to the … destination type; no user-defined conversions are considered
Oh, i is converted, using the standard conversions. What does that mean? Among many other standard conversions are these:
4.7/2 If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
4.7/3 If the destination type is signed, the value is unchanged if it can be represented in the destination type (and
bit-field width); otherwise, the value is implementation-defined.
Oh, so if char is unsigned, the value is truncated to the number of bits in a char (or computed modulo UCHAR_MAX+1, whichever way you want to think about it.)
And if char is signed, then the value is unchanged, if it fits; implementation-defined otherwise.
In practice, on the computers and compilers you care about, the value is always truncated to fit in 8 bits, regardless of whether chars are signed or unsigned.
You don't say what a byte is, but if you pass a parameter that is convertible to the parameter type, the value will be converted.
If the types have different value ranges there is a risk that the value is outside the range of the parameter type, and then it will not work. If it is within the range, it will be safe.
Here's an example:
1) Code:
#include <stdio.h>

void
method1 (unsigned char b)
{
    int a = 10;
    printf ("a=%d, b=%d...\n", a, b);
}

void
method2 (unsigned char * b)
{
    int a = 10;
    printf ("a=%d, b=%d...\n", a, *b);
}

int
main (int argc, char *argv[])
{
    int i = 3;
    method1 (i);
    method2 (i);
    return 0;
}
2) Compile (with warning):
$ gcc -o x -Wall -pedantic x.c
x.c: In function `main':
x.c:22: warning: passing arg 1 of `method2' makes pointer from integer without a cast
3) Execute (with crash):
$ ./x
a=10, b=3...
Segmentation fault (core dumped)
Hope that helps, both with your original question and with related issues.
There are two cases to worry about:
// Your input "int i" gets truncated
void method2(byte b)
{
...
// Your "method2()" stack gets overwritten
void method2(byte * b)
{
...
It will be cast to a byte, the same as if you had cast it explicitly with (byte)i.
Your sample code above might be a different case though, unless you have a forward declaration for method2 that is not shown. Because method2 is not yet declared at the time it is called, the compiler doesn't know the type of its first parameter. In C, functions should be declared (or defined) before they are called. What happens in this case is that the compiler assumes (as an implicit declaration) that method2's first parameter is an int and method2 receives an int. Officially that results in undefined behaviour, but on most architectures, both int and byte would be passed in the same size register anyway and it will happen to work.
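A sketch of the well-defined version with a prototype (assuming byte is meant as unsigned char, which the question doesn't actually say):

#include <stdio.h>

typedef unsigned char byte; /* assumption: byte means unsigned char */

void method2(byte b);       /* prototype: the compiler now knows the
                               parameter type and converts the argument */

void method1(void)
{
    int i = 1040;
    method2(i);             /* i converted to byte: 1040 mod 256 == 16 */
}

void method2(byte b)
{
    printf("b=%d\n", b);    /* prints b=16 */
}

int main(void)
{
    method1();
    return 0;
}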
What is Type Conversion and what is Type Casting?
When should I use each of them?
Detail: Sorry if this is an obvious question; I'm new to C++, coming from a ruby background and being used to to_s and to_i and the like.
Conversion is when a value is, um, converted to a different type. The result is a value of the target type, and there are rules for what output value results from what input (of the source type).
For example:
int i = 3;
unsigned int j;
j = i; // the value of "i" is converted to "unsigned int".
The result is the unsigned int value that is equal to i modulo UINT_MAX+1, and this rule is part of the language. So, in this case the value (in English) is still "3", but it's an unsigned int value of 3, which is subtly different from a signed int value of 3.
Note that conversion happened automatically, we just used a signed int value in a position where an unsigned int value is required, and the language defines what that means without us actually saying that we're converting. That's called an "implicit conversion".
"Casting" is an explicit conversion.
For example:
unsigned int k = (unsigned int)i;
long l = long(i);
unsigned int m = static_cast<unsigned int>(i);
are all casts. Specifically, according to 5.4/2 of the standard, k uses a cast-expression, and according to 5.2.3/1, l uses an equivalent thing (except that I've used a different type). m uses a "type conversion operator" (static_cast), but other parts of the standard refer to those as "casts" too.
User-defined types can define "conversion functions" which provide specific rules for converting your type to another type, and single-arg constructors are used in conversions too:
struct Foo {
int a;
Foo(int b) : a(b) {} // single-arg constructor
Foo(int b, int c) : a(b+c) {} // two-arg constructor
operator float () { return float(a); } // conversion function
};
Foo f(3,4); // two-arg constructor
f = static_cast<Foo>(4); // conversion: single-arg constructor is called
float g = f; // conversion: conversion function is called
Classic casting (something like (Bar)foo in C, done in C++ via reinterpret_cast<>) is when the actual memory contents of a variable are assumed to be a variable of a different type. Type conversion (e.g. Boost's lexical_cast<> or other user-defined functions that convert types) is when some logic is performed to actually convert a variable from one type to another, such as an integer to a string, where some code runs to logically form a string out of a given integer.
There are also static and dynamic casts, which are used with inheritance, for instance to navigate between a base type and a derived type (dynamic_cast<> performs a run-time-checked downcast, static_cast<> an unchecked one). Static casting also allows you to perform the typical "implicit" type conversion that occurs when you do something like:
float f = 3.14;
int i = f; //float converted to int by dropping the fraction
which can be rewritten as:
float f = 3.14;
int i = static_cast<int>(f); //same thing
In C++, any expression has a type. When you use an expression of one type (say type S) in a context where a value of another type (say type D) is required, the compiler tries to convert the expression from type S to type D. If no such implicit conversion exists, this results in an error. The term "type cast" is not standard, but it means the same as conversion.
E.G.
void f(int x){}
char c;
f(c); //c is converted from char to int.
The conversions are ranked and you can google for promotions vs. conversions for more details.
There are 5 explicit cast operators in C++: static_cast, const_cast, reinterpret_cast, and dynamic_cast, plus the C-style cast.
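A quick sketch of how each one is spelled (Base and Derived are made up for illustration; dynamic_cast requires a polymorphic type):

#include <iostream>

struct Base { virtual ~Base() {} }; // polymorphic, so dynamic_cast works
struct Derived : Base {};

int main()
{
    double d = 3.14;
    int a = (int)d;              // C-style cast
    int b = static_cast<int>(d); // static_cast

    const int ci = 42;
    int* p = const_cast<int*>(&ci);        // const_cast strips const
                                           // (writing through p would be
                                           // undefined behaviour)
    char* cp = reinterpret_cast<char*>(p); // reinterpret the same bytes
                                           // as a different pointer type

    Base* base = new Derived;
    Derived* der = dynamic_cast<Derived*>(base); // run-time checked downcast
    std::cout << a << ' ' << b << ' ' << (der != 0) << std::endl;
    delete base;
    (void)cp;
    return 0;
}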
Type conversion is when you actually convert a value from one type to another, for example a string into an integer and vice versa. Type casting is when the actual content of the memory isn't changed, but the compiler interprets it in a different way.
Type casting indicates you are treating a block of memory differently.
int i = 10;
int* ip = &i;
char* cp = reinterpret_cast<char*>(ip);
if ( *cp == 10 ) // Here, you are treating memory that was declared
{                // as int to be char. (The test succeeds on little-endian
}                // machines, where the low-order byte is stored first.)
Type conversion indicates that you are converting a value from one type to another.
char c = 'A';
int i = c; // This converts a char to an int.
// Memory used for c is independent of memory
// used for i.
I cannot initialize a non-const reference to type T1 from a convertible type T2. However, I can with a const reference.
long l;
const long long &const_ref = l; // fine
long long &ref = l; // error: invalid initialization of reference of
// type 'long long int&' from expression of type
// 'long int'
Most problems I encountered were related to rvalues, which cannot be bound to a non-const reference. That is not the case here -- can someone explain? Thanks.
An integral conversion results in an rvalue: long can be converted to long long, and the result then gets bound to a const reference. Just as if you had done:
typedef long long type;
const type& x = type(l); // temporary!
Conversely, an rvalue, as you know, cannot be bound to a non-const reference. (After all, there is no actual long long object to refer to.)
long long is not necessarily the same size as long and may even use an entirely different internal representation. Therefore you cannot bind a non-const reference to long to an object of type long long, or the other way around. The Standard forbids it, and your compiler is correct not to allow it.
You can wonder the same way about the following code snippet:
long a = 0;
long long b = 0;
a = b; // works!
long *pa = 0;
long long *pb = pa;
The last initialization won't work. Just because a type is convertible to another one doesn't mean a type that compounds one of them is convertible to a third type that compounds the other one. Likewise, for the following case:
struct A { long i; };
struct B { long long i; };
A a;
B b = a; // fail!
In this case A and B compound the types long and long long respectively, much like long& and long long& compound those types. However, they aren't convertible into each other just because of that fact; other rules apply.
If the reference is to const, a temporary object is created that has the correct type, and the reference is then bound to that object.
I'm not a standards lawyer, but I think this is because long long is wider than long. A const reference is permitted because you won't be changing the value of l. A regular reference might lead to an assignment that's too big for l, so the compiler won't allow it.
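For reference, the relative widths are easy to check with a minimal sketch (the printed sizes are platform-dependent, e.g. "8 8" on 64-bit Linux and "4 8" on 32-bit systems and 64-bit Windows):

#include <iostream>

int main()
{
    std::cout << sizeof(long) << ' ' << sizeof(long long) << std::endl;
    return 0;
}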
Let's assume that it's possible:
long long &ref = l;
It means that later in the code you could change the value referenced by ref to a value that is bigger than the long type can hold but fine for a long long. Practically, it means you would overwrite extra bytes of memory that may be used by a different variable, with unpredictable results.