How to make C++11 auto smarter?

I am now beginning to use the auto keyword in C++11. One case where I found it not so smart is illustrated in the following code:
unsigned int a = 7;
unsigned int b = 23;
auto c = a - b;
std::cout << c << std::endl;
As you can see, the type of variable c is unsigned int. But my intention is that the difference of two unsigned int values should be an int, so I expect c to equal -16. How can I use auto more wisely so that it infers the type of c as int? Thanks.

Both a and b have type unsigned int. Consequently, the type of the expression a - b is deduced as unsigned int, and c has type unsigned int. So auto here works exactly as it is supposed to.
If you want to change the type from unsigned int to int, you can use static_cast:
auto c = static_cast<int>(a - b);
Or explicitly specify the type for c:
int c = a - b;
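Putting both fixes together in a complete program (a minimal sketch; on a typical two's-complement platform with 32-bit int, both lines print -16):
#include <iostream>

int main() {
    unsigned int a = 7;
    unsigned int b = 23;
    auto c = static_cast<int>(a - b); // c deduced as int
    int d = a - b;                    // implicit conversion to int
    std::cout << c << " " << d << std::endl; // prints "-16 -16"
}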

You misunderstand what the unsigned int type is.
unsigned int is a mod-2^k integer for some k (usually 32). Subtracting two such mod-2^k integers is well defined, and the result is not a signed integer.
If you want a type that models a bounded set of integers from -2^k to 2^k-1 (with k usually equal to 31), use int instead of unsigned int. If you want them to be positive, simply make them positive.
Despite its name, unsigned int is not an int that has no sign and is thus positive. Instead, it is a very specific integral type that happens to have no notion of sign.
If you don't need mod-2^k math for some unknown implementation-defined k, and are not desperate for every single bit of magnitude to be packed into a value, don't use unsigned int.
What you appear to want is something like
positive<int> a = 7;
positive<int> b = 23;
auto c = a-b; // c is of type `int`, because the difference of two positive values may be negative
With a few syntax changes and a lot of work that might be possible, but it isn't what unsigned means.
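For illustration, a minimal sketch of such a wrapper (positive is a hypothetical type, not part of any standard library; a real version would also enforce its invariant):
#include <iostream>

template <typename T>
class positive {
    T value;
public:
    positive(T v) : value(v) {} // a real version would check v > 0
    friend T operator-(positive lhs, positive rhs) {
        return lhs.value - rhs.value; // difference may be negative, so plain T
    }
};

int main() {
    positive<int> a = 7;
    positive<int> b = 23;
    auto c = a - b;         // c deduced as int
    std::cout << c << '\n'; // prints -16
}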

So just because you don't know how basic expressions work, the C++11 auto keyword is dumb? How does that even make any sense to you?
In the expression auto c = a - b;, the auto c = has nothing whatsoever to do with the type used in the sub-expression a - b.
Ever since the very first C language draft, the type used by an expression is determined by the operands of that expression. This is true for everything from pre-standard K&R C to C++17.
Now what you need to do if you want negative numbers is, not too surprisingly, to use signed types. Change the operands to (signed) int or cast them to that type before invoking the - operator.
Declaring the result as a signed type without changing the type of the operands of - is not a good idea, because then you force a conversion from unsigned to signed, and when the value does not fit, the result is implementation-defined rather than portable.
Now, if the operands have different types, or are of small integer types, they will get implicitly promoted as per the usual arithmetic conversions. That does not apply in this specific case, since both operands are of the same type and not of a small integer type.
But... this is why the auto keyword is dumb and dangerous. Consider:
unsigned short a = 7;
unsigned short b = 23;
auto c = a - b;
Since both operands were unsigned, the programmer intended to use unsigned arithmetic. But here both operands are implicitly promoted to int, with an unintended change of signedness. The auto keyword pretends that nothing happened, whereas unsigned int c = a - b; would increase the chance of a compiler diagnostic, or at least of warnings from external static analysis tools. It also accidentally smooths over what could otherwise have been a change-of-signedness bug.
In addition, with auto we would end up with the wrong, unintended type for the declared variable.
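To make the pitfall concrete, a sketch assuming the common case of 16-bit unsigned short and 32-bit int:
#include <iostream>

int main() {
    unsigned short a = 7;
    unsigned short b = 23;
    auto c = a - b;         // both operands promoted to int
    std::cout << c << '\n'; // prints -16: c is int, not unsigned
    unsigned int d = a - b; // the int result converted back to unsigned
    std::cout << d << '\n'; // prints 4294967280 with 32-bit unsigned int
}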

Related

MISRA-C rule 10.5

I have the following static analysis report saying that the code is not compliant with MISRA-C rule 10.5:
Results of ~ and << operations on operands of underlying types unsigned char and unsigned short should immediately be cast to the operand's underlying type
The code is :
unsigned val = ~0U;
From what I understand, the idea is to get the maximum value of an unsigned int, which can also be achieved by unsigned val = UINT_MAX; from <limits.h>.
However, I would like to know if there is something wrong with this code.
To me, 0U is of type unsigned int and has a value of 0; then operator ~ is applied, which gives a result of type unsigned int with all bits set to 1; finally that is copied to an unsigned int.
I would say that no integer promotion happens here before operator ~, because we already have an unsigned int.
Moreover, the tool gives the same report somewhere else in the code:
uint16_t val2 = (uint16_t)(~0U); /* cast is explicit so what is wrong here ? */
I am not sure whether or not this code needs a fix for both "offending" lines.
The concept underlying type was used in the older MISRAs; it is nowadays deprecated and replaced with essential type categories. The definition of underlying type is the type that each of the operands in an expression would have if not for implicit type promotion.
In the expression ~0U, the type of the integer constant 0U is unsigned int. It is not a small integer type, so no integer promotion takes place upon ~. The underlying type is the same as the actual C language type, unsigned int.
unsigned val = ... assigns an unsigned int to an unsigned int. No implicit conversion takes place through the assignment. The underlying type remains unsigned int.
Thus if your tool gives a warning about no cast to underlying type on the line unsigned val = ~0U;, your tool is simply broken.
From what I understand, the idea is to get the maximum value of an unsigned int
Very likely.
However, I would like to know if there is something wrong with this code.
No, nothing is wrong with your code. However, other MISRA rules require you to replace the default "primitive types" with types from stdint.h (or equivalent on pre-C99 compilers). So in case this is a 32 bit system, the MISRA compliant version of your code is uint32_t val.
uint16_t val2 = (uint16_t)(~0U);
This is MISRA compliant code. The underlying type of the assignment expression is uint16_t. Casting to uint16_t is correct and proper practice. Again, your tool is broken.
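For reference, a minimal pair showing when the old rule does and does not call for a cast (a sketch; demo is just an illustrative function name):
#include <stdint.h>

void demo(void) {
    uint8_t small = 0xAAu;
    /* ~small promotes the uint8_t operand to int, so older MISRA wants
       an immediate cast back to the underlying (small) type: */
    uint8_t fixed = (uint8_t)(~small);
    /* ~0U already has type unsigned int; no promotion occurs, so no
       cast is required: */
    unsigned val = ~0U;
    (void)fixed; (void)val;
}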

Is masking before unsigned left shift in C/C++ too paranoid?

This question is motivated by me implementing cryptographic algorithms (e.g. SHA-1) in C/C++, writing portable platform-agnostic code, and thoroughly avoiding undefined behavior.
Suppose that a standardized crypto algorithm asks you to implement this:
b = (a << 31) & 0xFFFFFFFF
where a and b are unsigned 32-bit integers. Notice that in the result, we discard any bits above the least significant 32 bits.
As a first naive approximation, we might assume that int is 32 bits wide on most platforms, so we would write:
unsigned int a = (...);
unsigned int b = a << 31;
We know this code won't work everywhere because int is 16 bits wide on some systems, 64 bits on others, and possibly even 36 bits. But using stdint.h, we can improve this code with the uint32_t type:
uint32_t a = (...);
uint32_t b = a << 31;
So we are done, right? That's what I thought for years. ... Not quite. Suppose that on a certain platform, we have:
// stdint.h
typedef unsigned short uint32_t;
The rule for performing arithmetic operations in C/C++ is that if the type (such as short) is narrower than int, then it gets widened to int if all values can fit, or unsigned int otherwise.
Let's say that the compiler defines short as 32 bits (signed) and int as 48 bits (signed). Then these lines of code:
uint32_t a = (...);
uint32_t b = a << 31;
will effectively mean:
unsigned short a = (...);
unsigned short b = (unsigned short)((int)a << 31);
Note that a is promoted to int because all of ushort (i.e. uint32) fits into int (i.e. int48).
But now we have a problem: shifting non-zero bits left into the sign bit of a signed integer type is undefined behavior. This problem happened because our uint32 was promoted to int48 - instead of being promoted to uint48 (where left-shifting would be okay).
Here are my questions:
Is my reasoning correct, and is this a legitimate problem in theory?
Is this problem safe to ignore because on every platform the next integer type is double the width?
Is it a good idea to correctly defend against this pathological situation by pre-masking the input like this?: b = (a & 1) << 31;. (This will necessarily be correct on every platform. But this could make a speed-critical crypto algorithm slower than necessary.)
Clarifications/edits:
I'll accept answers for C or C++ or both. I want to know the answer for at least one of the languages.
The pre-masking logic may hurt bit rotation. For example, GCC will compile b = (a << 31) | (a >> 1); to a 32-bit bit-rotation instruction in assembly language. But if we pre-mask the left shift, it is possible that the new logic is not translated into bit rotation, which means now 4 operations are performed instead of 1.
Speaking to the C side of the problem,
Is my reasoning correct, and is this a legitimate problem in theory?
It is a problem that I had not considered before, but I agree with your analysis. C defines the behavior of the << operator in terms of the type of the promoted left operand, and it is conceivable that the integer promotions result in that being (signed) int when the original type of that operand is uint32_t. I don't expect to see that in practice on any modern machine, but I'm all for programming to the actual standard as opposed to my personal expectations.
Is this problem safe to ignore because on every platform the next integer type is double the width?
C does not require such a relationship between integer types, though it is ubiquitous in practice. If you are determined to rely only on the standard, however -- that is, if you are taking pains to write strictly conforming code -- then you cannot rely on such a relationship.
Is it a good idea to correctly defend against this pathological situation by pre-masking the input like this?: b = (a & 1) << 31;. (This will necessarily be correct on every platform. But this could make a speed-critical crypto algorithm slower than necessary.)
The type unsigned long is guaranteed to have at least 32 value bits, and it is not subject to promotion to any other type under the integer promotions. On many common platforms it has exactly the same representation as uint32_t, and may even be the same type. Thus, I would be inclined to write the expression like this:
uint32_t a = (...);
uint32_t b = (unsigned long) a << 31;
Or if you need a only as an intermediate value in the computation of b, then declare it as an unsigned long to begin with.
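A complete sketch of that suggestion (shift31 is an illustrative name; the standard guarantees that unsigned long has at least 32 value bits and is never promoted):
#include <stdint.h>

uint32_t shift31(uint32_t a) {
    /* The shift happens in unsigned long, which cannot be promoted,
       so it is well defined however uint32_t is defined. */
    return (uint32_t)((unsigned long)a << 31);
}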
Q1: Masking before the shift does prevent the undefined behavior the OP is concerned about.
Q2: "... because on every platform the next integer type is double the width?" --> no. The "next" integer type could be less than 2x or even the same size.
The following is well defined for all compliant C compilers that have uint32_t.
uint32_t a = (...);
uint32_t b = (a & 1) << 31;
Q3: uint32_t a; uint32_t b = (a & 1) << 31; is not expected to generate code that performs a mask - the mask is needed only in the source, not in the executable. If a mask instruction does occur, get a better compiler, should speed be an issue.
As suggested, better to emphasize the unsigned-ness with these shifts.
uint32_t b = (a & 1U) << 31;
@John Bollinger's good answer details well how to handle OP's specific problem.
The general problem is how to form a number that is of at least n bits, of a certain signedness, and not subject to surprising integer promotions - the core of OP's dilemma. The below fulfills this by invoking an unsigned operation that does not change the value - effectively a no-op other than for type concerns. The result will be at least the width of unsigned or uint32_t. Casting, in general, may narrow the type; casting needs to be avoided unless narrowing is certain not to occur. An optimizing compiler will not create unnecessary code.
uint32_t a = (...);
uint32_t b = (a + 0u) << 31;
uint32_t b = (a * 1u) << 31; /* alternative with the same effect */
Taking a clue from this question about possible UB in uint32 * uint32 arithmetic, the following simple approach should work in C and C++:
uint32_t a = (...);
uint32_t b = (uint32_t)((a + 0u) << 31);
The integer constant 0u has type unsigned int. The usual arithmetic conversions give the addition a + 0u the type uint32_t or unsigned int, whichever is wider. Because that type has rank of int or higher, no further promotion occurs, and the shift is applied with a left operand of type uint32_t or unsigned int.
The final cast back to uint32_t will just suppress potential warnings about a narrowing conversion (say if int is 64 bits).
A decent C compiler should be able to see that adding zero is a no-op, which is less onerous than seeing that a pre-mask has no effect after an unsigned shift.
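As a usage sketch, the same trick applied to the rotation mentioned in the question (rotl32 is an illustrative name; mainstream compilers typically still recognize the pattern as a single rotate instruction, though that is not guaranteed):
#include <stdint.h>

/* Rotate a 32-bit value left by n, for n in 1..31. */
static inline uint32_t rotl32(uint32_t x, unsigned n) {
    return (uint32_t)(((x + 0u) << n) | ((x + 0u) >> (32 - n)));
}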
To avoid unwanted promotion, you may select a sufficiently wide type with an alias, as
#include <cstdint>
#include <type_traits>

using my_uint_at_least32 =
    std::conditional_t<(sizeof(std::uint32_t) < sizeof(unsigned)),
                       unsigned,
                       std::uint32_t>;
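Usage might then look like this (a sketch; the conversion guarantees the shift happens in a type that cannot be promoted):
std::uint32_t a = (...);
std::uint32_t b = static_cast<std::uint32_t>(
    static_cast<my_uint_at_least32>(a) << 31);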
For this segment of code:
uint32_t a = (...);
uint32_t b = a << 31;
To promote a to an unsigned type instead of a signed type, use:
uint32_t b = a << 31u;
When both sides of << operator is an unsigned type, then this line in 6.3.1.8 (C standard draft n1570) applies:
Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
The problem you are describing is caused by your use of 31, which has type signed int, so another line in 6.3.1.8
Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
forces a to be promoted to a signed type.
Update:
This answer is not correct, because of 6.3.1.1(2) (emphasis mine):
...
If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. 58) All other types are unchanged by the integer promotions.
and footnote 58 (emphasis mine):
58) The integer promotions are applied only: as part of the usual arithmetic conversions, to certain argument expressions, to the operands of the unary +, -, and ~ operators, and to both operands of the shift operators, as specified by their respective subclauses.
Since only the integer promotions happen here, and not the usual arithmetic conversions, using 31u does not guarantee that a is converted to unsigned int, as the quote above shows.

Threshold an absolute value

I have the following function:
char f1( int a, unsigned b ) { return abs(a) <= b; }
For execution speed, I want to rewrite it as follows:
char f2( int a, unsigned b ) { return (unsigned)(a+b) <= 2*b; } // redundant cast
Or alternatively with this signature that could have subtle implications even for non-negative b:
char f3( int a, int b ) { return (unsigned)(a+b) <= 2*b; }
Both of these alternatives work under a simple test on one platform, but I need them to be portable. Assuming non-negative b and no risk of overflow, is this a valid optimization for typical hardware and C compilers? Is it also valid for C++?
Note: As C++ on gcc 4.8 x86_64 with -O3, f1() uses 6 machine instructions and f2() uses 4. The instructions for f3() are identical to those for f2(). Also of interest: if b is given as a literal, both functions compile to 3 instructions that directly map to the operations specified in f2().
Starting with the original code with signature
char f2( int a, unsigned b );
this contains the expression
a + b
Since one of these operands has a signed and the other a (corresponding) unsigned integer type (thus they have the same "integer conversion rank"), then - following the "Usual arithmetic conversions" (§ 6.3.1.8) - the operand with the signed integer type is converted to the unsigned type of the other operand.
Conversion to an unsigned integer type is well defined, even if the value in question cannot be represented by the new type:
[..] if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type. 60
§ 6.3.1.3/2
Footnote 60 just says that the described arithmetic works with the mathematical value, not the typed one.
Now, with the updated code
char f2_updated( int a, int b ); // called f3 in the question
things would look different. But since b is assumed to be non-negative, and assuming that INT_MAX <= UINT_MAX, you can convert b to unsigned without fearing that it will have a different mathematical value afterwards. Thus you could write
char f2_updated( int a, int b ) {
    return f2(a, (unsigned)b); // cast unnecessary but to make it clear
}
Looking again at f2, the expression 2*b further limits the allowed range of b to be no larger than UINT_MAX/2 (otherwise the mathematical result would be wrong).
So as long as you stay within these bounds, everything is fine.
Note: Unsigned types do not overflow, they "wrap" according to modular arithmetic.
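A quick illustration of the wrap-around (the value shown assumes a 32-bit unsigned int):
unsigned int u = 0u;
u = u - 1u; /* wraps modulo 2^32: u is now UINT_MAX, i.e. 4294967295 */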
Quotes from N1570 (a C11 working draft)
A final remark:
IMO the only really reasonable way to write this function is
#include <stdbool.h>
#include <assert.h>
#include <limits.h> /* for UINT_MAX */

bool abs_bounded(int value, unsigned bound) {
    assert(bound <= (UINT_MAX / 2));
    /* NOTE: Casting to unsigned makes the implicit conversion that
       otherwise would happen explicit. */
    return ((unsigned)value + bound) <= (2 * bound);
}
Using a signed type for the bound does not make much sense, because the absolute value of a number cannot be less than a negative number; abs_bounded(value, something_negative) would always be false. If there's the possibility of a negative bound, then I'd catch this outside of this function (otherwise it does "too much"), like:
int some_bound;
// ...
if ((some_bound >= 0) && abs_bounded(my_value, some_bound)) {
    // yeeeha
}
As OP wants fast and portable code (and b is positive), it first makes sense to code safely:
#include <stdbool.h>

/* return abs(a) <= b; */
inline bool f1_safe(int a, unsigned b) {
    return (a >= 0 && a <= b) || (a < 0 && 0u - a <= b);
}
This works for all a,b (assuming UINT_MAX > INT_MAX). Next, compare alternatives using an optimized compile (let the compiler do what it does best).
The following slight variation on OP's code will work in C/C++ but risks portability issues unless "Assuming non-negative b and no risk of overflow" can be certain on all target machines.
bool f2(int a, unsigned b) { return a+b <= b*2; }
In the end, the OP's goal of fast and portable code may be met by code that works optimally on the selected platform but not on others - such is micro-optimization.
To determine if the 2 expressions are equivalent for your purpose, you must study the domain of definition:
abs(a) <= b is defined for all values of int a and unsigned b, with just one special case for a = INT_MIN. On 2's complement architectures, abs(INT_MIN) is not defined but most likely evaluates to INT_MIN, which, converted to unsigned as required for the <= with an unsigned value, yields the correct value.
(unsigned)(a+b) <= 2*b may produce a different result for b > UINT_MAX/2. For example, it will evaluate to false for a = 1 and b = UINT_MAX/2+1. There might be more cases where your alternate formula gives an incorrect result.
EDIT: OK, the question was edited... and b is now an int.
Note that a+b invokes undefined behavior in case of overflow, and the same goes for 2*b. So you make the assumption that neither a+b nor 2*b overflows. Furthermore, if b is negative, your little trick does not work.
If a is in the range -INT_MAX/2..INT_MAX/2 and b in the range 0..INT_MAX/2, it seems to function as expected. The behavior is identical in C and C++.
Whether it is an optimization depends completely on the compiler, command line options, hardware capabilities, surrounding code, inlining, etc. You already address this part and tell us that you shave one or two instructions... Just remember that this kind of micro-optimization is not absolute. Even counting instructions does not necessarily help find the best performance. Did you perform some benchmarks to measure if this optimization is worthwhile? Is the difference even measurable?
Micro-optimizing such a piece of code is self-defeating: it makes the code less readable and potentially incorrect. b might not be negative in the current version, but if the next maintainer changes that, he/she might not see the potential implications.
Yes, this is portable to compliant platforms. The conversion from signed to unsigned is well defined:
Conversion between signed integer and unsigned integer
int to unsigned int conversion
Signed to unsigned conversion in C - is it always safe?
The description in the C spec is a bit contrived:
if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
The C++ spec addresses the same conversion in a more sensible way:
In a two's complement representation, this conversion is conceptual
and there is no change in the bit pattern
In the question, f2() and f3() achieve the same results in a slightly different way.
In f2() the presence of the unsigned operand causes a conversion of the signed operand, as required here for C++. The unsigned addition may or may not then wrap around past zero, which is also well defined [citation needed].
In f3() the addition occurs in signed representation with no trickiness, and then the result is (explicitly) converted to unsigned. So this is slightly simpler than f2() (and also clearer).
In both cases, you end up with the same unsigned representation of the sum, which can then be compared (as unsigned) to 2*b. The trick of treating a signed value as an unsigned type allows you to check a two-sided range with only a single comparison. Note also that this is a bit more flexible than using the abs() function, since the trick doesn't require that the range be centered around zero.
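For instance, a hedged sketch of that single-comparison idea generalized to an arbitrary closed range (in_range is an illustrative name; it assumes hi >= lo and that the range width fits in unsigned):
#include <stdbool.h>

/* true iff lo <= x <= hi, using a single comparison */
static inline bool in_range(int x, int lo, int hi) {
    return (unsigned)x - (unsigned)lo <= (unsigned)hi - (unsigned)lo;
}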
Commentary on the "usual arithmetic conversions"
I think this question demonstrated that using unsigned types is generally a bad idea. Look at the confusion it caused here.
It can be tempting to use unsigned for documentation purposes (or to take advantage of the shifted value range), but due to the conversion rules, this may tend to be a mistake. In my opinion, the "usual arithmetic conversions" are not sensible if you assume that arithmetic is more likely to involve negative values than to overflow signed values.
I asked this followup question to clarify the point: mixed-sign integer math depends on variable size. One new thing that I have learned is that mixed-sign operations are not generally portable because the conversion type will depend on the size relative to that of int.
In summary: Using type declarations or casts to perform unsigned operations is a low-level coding style that should be approached with the requisite caution.

Does a cast change the variable in C++, or only tell the compiler it's OK?

If I have this code:
int A;
unsigned int B;
if (A==B) foo();
the compiler will complain about mixed types in comparison. If I cast A like this:
if ((unsigned int) A==B) foo();
does this instruct the compiler to insert code to convert A from int to unsigned int? Or does it just tell the compiler: don't worry about it, ignore the type mismatch?
UPDATE: If this is unsafe (as pointed out below), how should I handle this comparison? (Wouldn't assigning the contents of an int to an unsigned int for later comparison also be unsafe?)
UPDATE: Wow are there some different answers (from people with thousands of posts). I've accepted what seems like the best, but anyone reading this question should read ALL answers carefully.
When casting, at least at the conceptual level, the compiler will create a temporary variable of the type specified in the cast expression.
You may test that this expression:
(unsigned int) A = B; // This time assignment is intended
will generate an error pointing to the modification of a temporary (const) variable.
Of course the compiler is free to optimize away any temporary variables created through a cast. Nevertheless, a valid method to build a temporary must exist.
The cast implies a conversion, if necessary. But this is problematic for negative values: they are mapped to positive values of the unsigned type. Thus you have to make sure a negative value never compares equal to any (positive) unsigned value:
int A;
unsigned int B;
...
if ( (A >= 0) && (static_cast<unsigned int>(A) == B) )
    foo();
This works because the unsigned variant of an integer type is guaranteed to hold all positive values (including 0) of the corresponding signed type.
Notice the usage of a static_cast instead of the "classic" C-style cast.
With plain types, in C and C++, == is always done with both operands converted to the same type. In OP's code, A is converted to unsigned first.
If I cast ... does this instruct the compiler to insert code to convert A from int to unsigned int?
Yes, but that conversion code would have been generated anyway. Without the cast, the compiler is simply warning that it is going to do something that the programmer may not have intended.
Or (If I cast ) does it just tell the compiler don't worry about, ignore the type mismatch?
The type mismatch is not ignored. By supplying the cast, there is no type mismatch to warn about.
How should I handle this comparison?
Ensure A is not negative, then convert to unsigned with a cast.
int A;
unsigned int B;
// if (A==B) foo();
if (A >= 0 && (unsigned)A == B) foo();
Every non-negative int can be converted to an unsigned with no value change.
The range of nonnegative values of a signed integer type is a subrange of the
corresponding unsigned integer type C11dr §6.2.5 9
So you question is just about a signed/unsigned comparison.
C++ standard says in clause 5 Expressions [expr] § 10:
Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows: ...
Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type shall be converted to the type of the operand with unsigned integer type.
and in 4.7 Integral conversions [conv.integral] §2
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
That means that on a common system using two's complement for negative numbers and 32 bits for an int or unsigned int, (unsigned int) -1 will end up as 4294967295.
It may or may not be what you want; the compiler just warns you that it will consider them equal.
If it is not what you want, first test whether the signed value is negative. If it is, say that they are not equal and skip the equality comparison.
It depends on the type of cast and what you are casting. In your particular case nothing is going to happen, but in other cases actual conversion code will be generated. The simplest example:
void foo(double d) {}
...
int x = 42;
foo(static_cast<double>(x));
In this example there would be code generated.

Type Conversion/Casting Confusion in C++

What is Type Conversion and what is Type Casting?
When should I use each of them?
Detail: Sorry if this is an obvious question; I'm new to C++, coming from a ruby background and being used to to_s and to_i and the like.
Conversion is when a value is, um, converted to a different type. The result is a value of the target type, and there are rules for what output value results from what input (of the source type).
For example:
int i = 3;
unsigned int j;
j = i; // the value of "i" is converted to "unsigned int".
The result is the unsigned int value that is equal to i modulo UINT_MAX+1, and this rule is part of the language. So, in this case the value (in English) is still "3", but it's an unsigned int value of 3, which is subtly different from a signed int value of 3.
Note that conversion happened automatically, we just used a signed int value in a position where an unsigned int value is required, and the language defines what that means without us actually saying that we're converting. That's called an "implicit conversion".
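The modulo rule only becomes visible with a negative source value; for example (assuming a 32-bit unsigned int):
int i = -3;
unsigned int j = i; // j == (UINT_MAX + 1) - 3, i.e. 4294967293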
"Casting" is an explicit conversion.
For example:
unsigned int k = (unsigned int)i;
long l = long(i);
unsigned int m = static_cast<unsigned int>(i);
are all casts. Specifically, according to 5.4/2 of the standard, k uses a cast-expression, and according to 5.2.3/1, l uses an equivalent thing (except that I've used a different type). m uses a "type conversion operator" (static_cast), but other parts of the standard refer to those as "casts" too.
User-defined types can define "conversion functions" which provide specific rules for converting your type to another type, and single-arg constructors are used in conversions too:
struct Foo {
    int a;
    Foo(int b) : a(b) {}            // single-arg constructor
    Foo(int b, int c) : a(b + c) {} // two-arg constructor
    operator float() { return float(a); } // conversion function
};
Foo f(3,4); // two-arg constructor
f = static_cast<Foo>(4); // conversion: single-arg constructor is called
float g = f; // conversion: conversion function is called
Classic casting (something like (Bar)foo in C, spelled reinterpret_cast<> in C++) is when the actual memory contents of a variable are assumed to be a variable of a different type. Type conversion (i.e. Boost's lexical_cast<> or other user-defined functions which convert types) is when some logic is performed to actually convert a variable from one type to another, like an integer to a string, where some code runs to logically form a string out of the given integer.
There is also static and dynamic casting, which are used in inheritance, for instance, to force usage of a parent's member functions on a child's type (dynamic_cast<>), or vice-versa (static_cast<>). Static casting also allows you to perform the typical "implicit" type conversion that occurs when you do something like:
float f = 3.14;
int i = f; //float converted to int by dropping the fraction
which can be rewritten as:
float f = 3.14;
int i = static_cast<int>(f); //same thing
In C++, any expression has a type. When you use an expression of one type (say type S) in a context where a value of another type is required (say type D), the compiler tries to convert the expression from type S to type D. If no such implicit conversion exists, this results in an error. The term "type cast" is not standard, but it means the same as conversion.
E.G.
void f(int x){}
char c;
f(c); //c is converted from char to int.
The conversions are ranked and you can google for promotions vs. conversions for more details.
There are 5 explicit cast operators in C++: static_cast, const_cast, reinterpret_cast, and dynamic_cast, plus the C-style cast.
Type conversion is when you actually convert a value from one type to another, for example a string into an integer and vice versa. Type casting is when the actual content of the memory isn't changed, but the compiler interprets it in a different way.
Type casting indicates you are treating a block of memory differently.
int i = 10;
int* ip = &i;
char* cp = reinterpret_cast<char*>(ip);
if ( *cp == 10 ) // Here, you are treating memory that was declared as int
{                // as if it were char. (On a little-endian system the first
}                // byte of i holds the value 10, so this test is true there.)
Type conversion indicates that you are converting a value from one type to another.
char c = 'A';
int i = c; // This converts a char to an int.
           // Memory used for c is independent of the
           // memory used for i.